content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Suspended Sediment Transport Equation (Cohesive)
Suspended Sediment Transport Equation (Cohesive)¶
The transport equation for cohesive sediment uses the same 2D form of advection and diffusion equation as that used for non-cohesive sediment:
\[(\bar{c} H)_t + \nabla_h \cdot (\bar{c} H ({\bf u}_\alpha + \bar{{\bf u} }_2)) =\nabla_h \cdot (k H (\nabla_h \bar{c})) + P - D \label{ad}\]
where \(\bar{c}\) is the non-dimensional depth-averaged sediment concentration normalized by sediment density. \(H(\bf{u}_\alpha + \bar{\bf{u}}_2) =M\) represents the flow rate per unit width defined
in Shi et al. (2012), in which \(H=h+\eta\) is the total water depth. The roller-induced extra undertow can be taken into account as an option (see Wave Breaking, roller and undertow, and Physics
(dispersion, breaking, friction)). \(k\) is the horizontal sediment diffusion coefficient used for cohesive sediment and is usually defined by users (such as in the DHI model). Some researchers
showed that the diffusion coefficient is a function of current flux, such as in Kimiaghalam et al. (2019) who connected the diffusion coefficient to river discharges.
In the advection-diffusion equation, \(P\) and \(D\) represent, respectively, the erosion rate and deposition rate for cohesive sediment. There are two sets of formulas to calculate the erosion rate.
For hard bed, Partheniades’ (1965) formula is used:
\[P = E \left(\frac{\tau_b}{\tau_{cr}} -1 \right)\]
For soft bed, Parchure and Mehta’s (1985) formula is applied:
\[P = E e^{\alpha \sqrt{\tau_b-\tau_{cr}}}\]
where \(E\) is the erodibility specified by users, \(\tau_b\) is the bed shear stress, and \(\tau_{cr}\) is the critical shear stress for erosion. The bed shear stress can be calculated using Soulsby
et al. (1993):
\[\tau_b = \rho_w \left(\frac{ 0.4}{1+\ln (k_s/30 h)} \right)^2 U_c^2\]
which is the same as for the non-cohesive sediment, and the critical bed shear stress is usually specified by users. For soft bed, \(\alpha\) is a so-called alpha-coefficient specified by users.
The erosion rate, \(P\), has the dimension of velocity (m/s) considering the convection-diffusion equation for non-dimensional sediment concentration.
The deposition rate \(D\) can be calculated using the formula of Krone (1962):
where \(w_s\) is the settling velocity which can be evaluated using a number of formulas from different sources and usually based on laboratory experiments. It should be related to processes of
flocs, aggregate dimensions, drag, local concentration, salinity and other environmental factors. Users can define their own formulas by modifying the sediment module. Here, we provide a general
formulation that can describe the evolution in conditions of flocculation (In Kombiadou and Krestenitis, 2014):
\[w_s = \frac{a \bar{c}^n}{(\bar{c}^2 + b^2)^m}\]
The coefficients have a large range, differing in various estuarine and riverine areas. \(a=0.01-0.23, b=1.3-25.0, n=0.4-2.8\) and \(m=1.0-2.8\). The default values in the model are \(a=0.1; b=2.0; n
=0.5; m=1.5.\) For \(\bar{c}=0.1 g/l\), for example, \(w_s = 3.9E^{-3} m/s\).
In (75), \(c_b\) is the near-bed concentration calculated by
\[c_b = \beta \bar{c}\]
in which \(\beta\) is the parameter. By default, we use \(\beta = 1\). It can also be specified by
\[\beta = 1+\frac{P_e}{1.25+4.75 P_d^{2.5}}\]
where \(P_e\) is the Peclet number:
\[P_e = \frac{6 w_s}{\kappa u_{*c}}\]
in which \(\kappa\) is von Karman constant and \(u_{*c}\) is the friction velocity which can be calculated by van Rijn (1984):
\[u_{*c} = \frac{\kappa}{-1 + \log (30 H / k_s)} U_c\]
\(P_d\) is the probability of deposition defined by
\[P_d = 1- \left( \frac{\tau_b}{\tau_{cd}} \right)\]
where \(\tau_{cd}\) is the critical shear stress for deposition defined by users.
Summary of Input Parameters
1. \(k\): diffusion coefficient, k_coh (default 10E-6). Different from the non-cohesive sediment transport, this parameter needs to be specified by users.
2. \(\tau_{cr}\): critical shear stress for erosion, Tau_cr_coh (default 0.001)
3. \(\tau_{cd}\): critical shear stress for deposition, Tau_crd_coh (default 0.001)
4. \(a,b,m\) and \(n\): Empirical parameters used to calculate settling velocity, default values are a_coh = 0.1, b_coh=2.0, n_coh=0.5, and m_coh=1.5
5. \(E\): erodibility parameter, default E_coh=0.0001
6. \(\alpha\): alpha-coefficient used to calculate the erosion rate for soft bed, default alpha_coh = 1.0
An example of model setup can be found in /simple_cases/single_vessel_cohesive/. See Single vessel + cohesive sediment for documentation.
Kimiaghalam, N., Goharrokhi,M., Clark, S. P., 2016, Estimating cohesive sediment erosion and deposition rates in wide rivers, Canadian Journal of Civil Engineering, 43(2): 164-172 doi.org/10.1139/
Krone, R. B., 1962, Flume Studies of the Transport of Sediment in Estuarine Shoaling Processes. Final Report to San Francisco District U. S. Army Corps of Engineers, Washington D.C. website for Krone
Parchure, T. M. and A. J. Mehta, 1985, Erosion of soft cohesive sediment deposits, Journal of Hydraulic Engineering – ASCE 111 no. 10: 1308–1326 doi/10.1061
Partheniades, E. 1965, Erosion and deposition of cohesive soils, Journal of the hydraulics division. Proceedings of the ASCE 91 no. HY1: 105–139 website for Partheniades 1965
Parchure, T. M. and A. J. Mehta, 1985, Erosion of soft cohesive sediment deposits, Journal of Hydraulic Engineering – ASCE 111 no. 10: 1308–1326 doi:10.1061
Shi, F., J.T. Kirby, J.C. Harris, J.D. Geiman, and S.T. Grilli, 2012, A high-order adaptive time-stepping TVD solver for Boussinesq modeling of breaking waves and coastal inundation. Ocean Modelling,
43-44: 36-51. DOI: 10.1016/j.ocemod.2011.12.004
Soulsby R. L., Hamm L., Klopman, G., Myrhaug, D., Simons R.R., Thomas, G. P., 1993, Wave-current interaction within and outside the bottom boundary layer, Coastal Engineering, Volume 21, Issues 1–3,
December 1993, Pages 41-69, doi:10.1016/0378-3839(93)90045-A
van Rijn, L.C., 1984, Sediment Pick‐Up Functions, Journal of Hydraulic Engineering Vol. 110, Issue 10 (October 1984) doi:10.1061/(ASCE)0733-9429(1984)110:10(1494) | {"url":"https://fengyanshi.github.io/build/html/sed_equation_cohesive.html","timestamp":"2024-11-08T12:57:03Z","content_type":"text/html","content_length":"21137","record_id":"<urn:uuid:f1979a92-1f33-4ad0-a20f-928bb2423b2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00802.warc.gz"} |
HP Forums
This is a proposal for a new feature for the WP34S.
I wonder, why I never heard about the idea of a LastXY register in RPN Calculators.
Obviously the more than fourty year old LastX register works perfectly for correcting a misexecuted monadic function like X^2. But for dyadic functions the user has to think about the correct inverse
function of that he just has executed, and he has to enter it explicitely by an extra keystroke and some extra keystrokes for recalling the LastX register twice. Nobody would have difficulties to
find the inverse function of x + - / etc., but still there is more comfort without having to think about it. Even more important is, this procedure does not always produce the exact Y value as it was
before the operation. Think of multiplying after having done a division or worse.
If you would give the user a dedicated LastXY register, accessible by typing RCL U, if you like to call it Undo Register, which restores both, X and Y register, like they were just before the last
operation, the user can forget about which operation he just executed, has the full precision, and can easily type the new dyadic function.
Even better in my opinion could be the implementation, that RCL L restores automatically either X or XY register depending on the last executed function, which the WP34S knows, was either a monadic
or a dyadic function.
Of course, this could make old programs, that use the LastX functionality, become incompatible, but the advantage of having improved a fundamental old feature, which it not longer limited by
available memory, could be worth of thinking about an implementation, couldn't it? | {"url":"https://www.hpmuseum.org/forum/archive/index.php?thread-2524.html","timestamp":"2024-11-08T11:40:59Z","content_type":"application/xhtml+xml","content_length":"27184","record_id":"<urn:uuid:8e6b22c2-d6bd-4221-8e29-62aff3712fc2>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00387.warc.gz"} |
Show that the line through the points (4,7,8),(2,3,4) is parall... | Filo
Question asked by Filo student
Show that the line through the points is parallel to the line through the points .
a. {"4. Find the equation of the line which passes through the point and is parallel to the vector .","5. Find the equation of the line in vector and in cartesian form that passes through the point
with position vector and is in the direction .","6. Find the cartesian equation of the line which passes through the point and parallel to the line given by .","7. The cartesian equation of a
line is . Write its vector form. Find the vector and the cartesian equations of the lines that passes through the origin and ."}
Not the question you're searching for?
+ Ask your question
Video solutions (3)
Learn from their 1-to-1 discussion with Filo tutors.
5 mins
Uploaded on: 1/22/2023
Was this solution helpful?
5 mins
Uploaded on: 8/15/2023
Was this solution helpful?
Found 4 tutors discussing this question
Discuss this question LIVE for FREE
15 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text Show that the line through the points is parallel to the line through the points .
Updated On Aug 15, 2023
Topic Algebra
Subject Mathematics
Class Class 12
Answer Type Video solution: 3
Upvotes 349
Avg. Video Duration 5 min | {"url":"https://askfilo.com/user-question-answers-mathematics/show-that-the-line-through-the-points-is-parallel-to-the-33323039383833","timestamp":"2024-11-08T12:11:05Z","content_type":"text/html","content_length":"433331","record_id":"<urn:uuid:182e26a9-eb15-4b49-9ba9-6096f05ce7df>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00565.warc.gz"} |
substitution cipher hill climbing
(2020). In this video, we show the basics of cryptology (cryptology = cryptography and cryptanalysis). Currently I am trying hill-climbing. Hill cipher is a polygraphic substitution cipher based on
linear algebra. The Hill Cipher uses an area of mathematics called Linear Algebra , and in particular requires the user to have an elementary understanding of matrices . Hill Cipher was the first
Cipher invented by Lester S. Hill in 1929 in which it was practical to operate on more than three symbols at a single time. Fitness is determined by comparing quadgram statistics of decrypted text
against the engilish quadgrams. Climbing the Hill Cipher Algorithm Hill Cipher is a polygraphic substitution cipher based on linear algebra. As a running example, we use the ciphertext below; it is
the encryption, by a randomly chosen substitution cipher, of the first two sentences in Section 1.1 of Stinsonâ s highly recommended book Crpytography: Theory and Practice , punctuation and spaces â
¦ hill climbing and simulated annealing, are used to break transposition ciphers. hand, a slight variant of simple substitution cipher called the homophonic substitution cipher is much more complex
and robust to the frequency based attacks. Itispossibleto break a cipher using a â wrongâ language, but the Changed: [Main Window] Some references to 'Brute force' solving method have been changed
to 'Hill climbing' in the status bar to more accurately reflect the solving method used. 107 6.4 A comparison of hill climbing with random balanced generation, n=12. 108 6.5 Average number of
improvement steps by the hill climbing algorithm Each letter is treated as a digit in base 26 : A = 0, B =1, and so on. Jarl Van Eycke and Louie Helm have now solved this challenge. Cryptologia: Vol.
Break Simple Substitution Cipher (automated cryptanalysis) Use a hill climbing algo to maximize fitness score accross iterations of keys (mutate, check, rinse/repeat). Wikipedia Reference: Hill
Cipher Hill Substitution Ciphers In this Lab, matrices are used to encode and decode messages. As far as I know, this is the shortest bigram cipher challenge ever broken. HillClimbing and Simulated
Annealing algorithms can be used to crack Playfair ciphers as well as all other 5*5 grid based ciphers, and also simple substitution ciphers and Vigenere ciphers. File:Hill's message protector fig4
cropped.png Hill's cipher machine, from figure 4 of the patent In classical cryptography, the Hill cipher is a polygraphic substitution cipher based on linear algebra.Invented by Lester S. Hill in
1929, it was the first polygraphic cipher in which it was practical (though barely) to operate on more than three symbols at once. A bigram (also known as a digraph) is a â ¦ Invented by Lester S.
Hill in 1929 and thus got itâ s name. Plain text: Binary Key: HILL Choose the key in such a way that it always forms a square matrix A small change produces a neighboring state. I am trying to break
a special substitution cipher (where the text is divided into several columns and each column has its own substitution table). We then consider a combination cipher, consisting of a homophonic
substitution and a column transposition. Hill's patented cipher machine appears in â ¦ Simple Monoalphabetic Substitution (maS): A simple monoalphabetic substitution cipher re places each 44, No.
Hill cipher is a polygraphic substitution cipher based on linear algebra.Each letter is represented by a number modulo 26. slide 15 Hill climbing â ¢ Question: Whatâ s a neighbor? For breaking a
classical cipher, it is useful to knowthelanguageoftheplaintext. Perhaps the simplest way to encode a message is to simply replace each letter of In cryptography (field related to
encryption-decryption) hill cipher is a polygraphic cipher based on linear algebra. Hill Substitution Ciphers Text Reference: Section 4.1, p. 223 In this set of exercises, using matrices to encode
and decode messages is examined. The Hill cipher, invented in 1929 by Lester S. Hill, is a polygraphic substitution which can combine much larger groups of letters simultaneously using linear
algebra. Slippery hill-climbing technique for ciphertext-only cryptanalysis of periodic polyalphabetic substitution ciphers. So, technically it is a polygraphic substitution cipher, as it can work on
digraphs, trigraphs (3 letter blocks) or theoretically any sized blocks. rithms, e.g. The Playfair cipher or Playfair square or Wheatstoneâ Playfair cipher is a manual symmetric encryption technique
and was the first literal digram substitution cipher. The scheme was invented in 1854 by Charles Wheatstone, but bears the name of Lord Playfair for promoting its use. 6.3 A comparison of hill
climbing with random balanced generation, n=8. Blasto A Python implementation of Hill-Climbing for cracking classic ciphers. Blasto currently supports following ciphers : Substitution cipher
(includes Caesar Shift, Affine, etc...) Turning Grille cipher It also supports following The Hill cipher, invented in 1929 by Lester S. Hill, is a polygraphic substitution which can combine much
larger groups of letters simultaneously using linear algebra. Hill climbing is a mathematical optimization The ideas are due to Lester Hill in 1929. Improved: [Nihilist Substitution] Now solves
cipher and not just convert it to simple substitution. Hill Cipher Hill cipher is a polyalphabetic cipher introduced by Lester Hill in 1929. An important thing with Playfair cipher is that it's weak:
all circular horizontal or vertical permutations of the 5x5 grid is an equivalent key. Ruby scripts that solve simple substitution ciphers (example cryptoquotes) using shotgun hill-climbing and
looking at tetragraphs - pfharlock/substitution-solver Finally, we conclude this section with a discussion of hill climbing in the context of substitution cipher cryptanalysis. When information is
sent using Cipher, and the receiver receives the encrypted code, the receiver has to guess which Cipher was used to encrypt the code, and then only it can be decrypted. Q&A for software developers,
mathematicians and others interested in cryptography Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to
learn, share their knowledge, and build their careers. Each letter is represented by a number modulo 26. monoalphabetic substitution ciphers), (2) homo phonic substitution ciphers, (3) nomenclatures,
and ( 4) code books. Last week, I introduced a 750-letter ciphertext created with a bigram substitution. The infamous Zodiac 340 cipher has a good chance of being a homophonic substitution The
algorithm continues to repeat this process with the new generation, ideally attaining higher scores each time. 3, pp. This got me thinking about whether hill climbing could be an effective attack on
the monoalphabetic substitution cipher. Keywords Combinatorial optimization Cryptanalysis Cryptography Transposition cipher 205-222. Let us discuss the technique of hill cipher. The substitution
cipher, one of the oldest forms of encryption algorithms according to [Sin00], takes each character of a plaintext message and uses a substitution process to replace it with a new character in the
cipherâ ¦ In this paper, we have used hill climbing, simulated annealing and combination of these two for breaking transposition ciphers in ciphertext-only attack mode. veloped nested hill climb
approach, particularly when the ciphertext message is short. In this paper, classical substitution ciphers, namely, Playfair, Vigenère and Hill ciphers are considered. (vaguely) Problems tend to have
structures. Each letter is treated as a digit in base 26 : A = 0, B =1, and so on. Improved: [Two-Square] Added hill climbing solver. This video here shows how hill climbing can be used to break
classical ciphers. The neighborhood â ¦ Substitution Ciphers Caesar Cipher The Caesar cipher, also called a Caesar shift, gets its name from Julius Caesar, who occasionally used this encoding method
in his own private messages. It was the first cipher that was able to operate on 3 symbols Substitution cipher breaker This online calculator tries to decode substitution cipher without knowing the
key. By comparing quadgram statistics of decrypted text against the engilish quadgrams polygraphic substitution cipher cryptanalysis, we this... Me thinking about whether hill climbing â ¢ Question:
Whatâ s a neighbor ) code books discussion of hill and! 4 ) code books wikipedia Reference: hill cipher hill substitution ciphers ), ( )., ( 2 ) homo phonic substitution ciphers 3 ) nomenclatures,
and on... Substitution ciphers, ( 3 ) nomenclatures, and so on Helm have Now solved challenge... This challenge column transposition wikipedia Reference: hill cipher is a polygraphic substitution
cipher cryptanalysis Lester S. hill 1929., ( 2 ) homo phonic substitution ciphers, ( 2 ) homo phonic ciphers. Climbing and simulated annealing, are used to break classical ciphers, ideally attaining
higher scores each time =. Is treated as a digit in base 26: a = 0, B =1, so... Bigram ( also known as a digit in base 26: a = 0, B =1, so! Climbing solver wikipedia Reference: hill cipher is a â ¦
substitution cipher hill climbing cipher is a polygraphic substitution cipher on. And simulated annealing, are used to encode and decode messages annealing, used! And not just convert it to simple
substitution cipher hill substitution ciphers this. Bigram ( also known as a digit in base 26: a = 0, B,! Cipher and not just convert it to simple substitution Eycke and Louie Helm Now... Engilish
quadgrams hill substitution ciphers in this Lab, matrices are used to transposition. A comparison of hill climbing and simulated annealing, are used to and. Nomenclatures, and so on â ¦ hill cipher
is a polygraphic substitution cipher based on linear algebra.Each is. Solves cipher and not just convert it to simple substitution and a column transposition in the context of substitution
cryptanalysis... Now solves cipher and not just convert it to simple substitution this,. Consisting of a homophonic substitution and a column transposition this got me thinking whether... 2 ) homo
phonic substitution ciphers in this Lab, matrices are to! Periodic polyalphabetic substitution ciphers in this Lab, matrices are used to break classical ciphers invented by Lester hill. Added hill
climbing and simulated annealing, are used to break transposition ciphers substitution ciphers Reference hill. As I know, this is the shortest bigram cipher challenge ever broken ciphers, ( 2 )
phonic! Against the engilish quadgrams solves cipher and not just convert it to simple substitution an effective on... Cipher Algorithm hill cipher hill substitution ciphers, ( 3 ) nomenclatures, and
( 4 ) code.. And not just convert it to simple substitution 107 6.4 a comparison of hill â ¢. For promoting its use ciphertext-only cryptanalysis of periodic polyalphabetic substitution ciphers in
this Lab, matrices used... And decode messages to knowthelanguageoftheplaintext is the shortest bigram cipher challenge ever broken ciphertext! Know, this is the shortest bigram cipher challenge ever
broken name substitution cipher hill climbing Lord Playfair for promoting its use of... Van Eycke and Louie Helm have Now solved this challenge Reference: cipher... A homophonic substitution and a
column transposition hill in 1929 bigram ( also known as a digraph ) is polygraphic. Cipher is a polygraphic substitution cipher homo phonic substitution ciphers, ( 2 ) phonic... Two-Square ] Added
hill climbing with random balanced generation, ideally attaining higher scores each.... With the new generation, n=12 107 6.4 a comparison of hill climbing random... The substitution cipher hill
climbing substitution cipher based on linear algebra.Each letter is treated as a digit in 26... Improved: [ Nihilist substitution ] Now solves cipher and not just convert it simple. Ever broken
shortest bigram cipher challenge ever broken comparing quadgram statistics of decrypted text against the engilish quadgrams the! Be used to encode and decode messages how hill climbing in the context
substitution! Due to Lester hill in 1929 ( 4 ) code books by S...., we conclude this section with a discussion of hill climbing and simulated annealing, are used break! The engilish quadgrams Louie
Helm have Now solved this challenge of hill climbing can be used to encode decode! Represented by a number modulo 26 me thinking about whether hill climbing and simulated annealing, are used break...
And so on of Lord Playfair for promoting its use far as I know, is. Playfair for promoting its use got me thinking about whether hill climbing in the context of cipher! Simple substitution got me
thinking about whether hill climbing solver be used to classical... This section with a discussion of hill climbing with random balanced generation, n=12 ) a... Whatâ S a neighbor process with
substitution cipher hill climbing new generation, n=12 number modulo 26 cipher Algorithm cipher... Balanced generation, n=12 known as a digit in base 26: a = 0 B. Ciphers in this Lab, matrices are
used to break classical ciphers as I know, is. Be an effective attack on the monoalphabetic substitution cipher based on linear algebra.Each is! The shortest bigram cipher challenge ever broken break
classical ciphers climbing â ¢ Question: a... Far as I know, this is the shortest bigram cipher challenge ever broken combination,! Homophonic substitution and a column transposition for promoting
its use climbing with random balanced generation, attaining... Itâ S name the ideas are due to Lester hill in 1929 repeat process! Nested hill climb approach, particularly when the ciphertext
message is short attack... ) code books Now solves cipher and not just convert it to simple substitution a digraph ) is â ¦. Eycke and Louie Helm have Now solved this challenge a neighbor we then
consider combination... Bigram cipher challenge ever broken is useful to knowthelanguageoftheplaintext against the engilish quadgrams represented by a modulo... A â ¦ hill cipher Algorithm hill
cipher Algorithm hill cipher is a substitution! Now solved this challenge about whether hill climbing could be an effective attack on the monoalphabetic substitution ciphers ) (. Slide 15 hill
climbing and simulated annealing, are used to break classical ciphers,.! Is a polygraphic substitution cipher a column transposition challenge ever broken balanced generation, n=12 fitness is
determined comparing..., it is useful to knowthelanguageoftheplaintext represented by a number modulo 26 3 nomenclatures. On linear algebra hill in 1929 and thus got itâ s name 2 ) homo phonic
substitution ). Be an effective attack on the monoalphabetic substitution cipher cryptanalysis digraph ) a!, ( 2 ) homo phonic substitution ciphers in this Lab, matrices are to... Encode and decode
messages homo phonic substitution ciphers in this Lab, matrices used. Each time ciphertext message is short number modulo 26 matrices are used break. A polygraphic substitution cipher hill climbing
cipher cryptanalysis climbing the hill cipher Algorithm hill cipher Algorithm cipher... In this Lab, matrices are used to break transposition ciphers, are used to break transposition ciphers with
new! Wheatstone, but bears the name of Lord Playfair for promoting its use by comparing quadgram statistics decrypted. Comparing quadgram statistics of decrypted text against the engilish quadgrams a
classical,. Comparison of hill climbing â ¢ Question: Whatâ s a neighbor hill cipher Algorithm hill hill! Is useful to knowthelanguageoftheplaintext by comparing quadgram statistics of decrypted
text against the quadgrams... 6.4 a comparison of hill climbing could be an effective attack on the monoalphabetic substitution cipher.. ( 2 ) homo phonic substitution ciphers promoting its use 1929
and thus itâ s. Scores each time higher scores each time breaking a classical cipher, consisting of a homophonic substitution and column! This is the shortest bigram cipher challenge ever broken
context of substitution cipher ] Now cipher., and so on ] Added hill climbing â ¢ Question: Whatâ s a neighbor in and! Eycke and Louie Helm have Now solved this challenge just convert it to simple.!
Was invented in 1854 by Charles Wheatstone, but bears the name of Lord Playfair for promoting its use balanced... Lester hill in 1929 and thus got itâ s name Whatâ s a neighbor ciphertext message
is.. ), ( 3 ) nomenclatures, and ( 4 ) code books in 1854 by Charles Wheatstone, bears! Invented by Lester S. hill in 1929 and thus got itâ s name bigram substitution cipher hill climbing!
Annealing, are used to break classical ciphers thinking about whether hill climbing solver cipher challenge ever broken cipher. And simulated annealing, are used to break transposition ciphers to
break classical ciphers 26: a 0... Continues to repeat this process with the new generation, ideally attaining higher each..., ( 3 ) nomenclatures, and substitution cipher hill climbing 4 ) code
books then... Whatâ S a neighbor so on context of substitution cipher based on linear letter... ) homo phonic substitution ciphers ), ( 2 ) homo phonic substitution ciphers (... Ever broken climbing
â ¢ Question: Whatâ s a neighbor simple substitution decrypted text against the engilish quadgrams about! Substitution ] Now solves cipher and not just convert it to simple.. Invented by Lester S.
hill in 1929 treated as a digit in base 26 substitution cipher hill climbing a = 0, =1!, are used to break classical ciphers context of substitution cipher based on linear algebra.Each letter is
treated as digit! Video here shows how hill climbing could be an effective attack on the monoalphabetic substitution cryptanalysis. Jarl Van Eycke and Louie Helm have Now solved this challenge a
digraph ) is polygraphic. Engilish quadgrams polyalphabetic substitution ciphers ), ( 2 ) homo phonic substitution ciphers (., consisting of a homophonic substitution and a column transposition
wikipedia Reference: hill cipher is a â ¦ cipher! | {"url":"http://coachyan.com/mygatewayonline-regina-rjpr/substitution-cipher-hill-climbing-52292c","timestamp":"2024-11-13T05:25:04Z","content_type":"text/html","content_length":"28055","record_id":"<urn:uuid:007aa574-ba5f-46f4-89b4-333621251ebe>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00096.warc.gz"} |
Conserved Charge Fluctuations from RHIC BES and FXT
Tomonaga Center for the History of the Universe, University of Tsukuba, Tenno-dai 1-1-1, Tsukuba 305-8571, Japan
Submission received: 12 December 2023 / Revised: 17 January 2024 / Accepted: 18 January 2024 / Published: 19 January 2024
Cumulants up to the sixth-order of the net-particle multiplicity distributions were measured at RHIC for the Beam Energy Scan and fixed-target program, from which we obtained some interesting hints
on the phase structure of the QCD matter. In this article, we present recent experimental results on (net-)proton cumulants and discuss current interpretations on the QCD critical point and the
nature of the phase transition. We will also report recent results for measurements of the bayron-strangeness correlations, which were measured with the newly developed analysis technique to remove
the effect from the combinatorial backgrounds for hyperon reconstruction.
1. Introduction
One of the ultimate goals in heavy-ion collision experiments is to understand the phase structure of the matter described by Quantum ChromoDynamics (QCD) and the nature of the phase transition.
Figure 1
depicts a conjectured phase diagram for the QCD matter [
] with respect to temperature
(MeV) and baryon chemical potential
$μ B$
(MeV). In the QCD phase diagram, there are two phases of the hadronic gas and quark–gluon plasma (QGP), which are the confined and deconfined states of quarks and gluons, respectively. According to
lattice QCD calculations, the phase transition between QGP and the hadronic gas is a smooth crossover [
] at vanishing baryon chemical potential,
$μ B = 0$
, while model calculations predict first-order phase transition at large
$μ B$
region [
]. If the first-order phase transition exists, the connecting point to the crossover may also exist, which is a QCD-critical point.
To explore the QCD phase diagram and elucidate the nature of the phase transition, the Beam Energy Scan (BES-I) program [
] was carried out at Relativistic Heavy Ion Collider (RHIC) from 2010 to 2017 for Au+Au collisions at
$s NN = 7.7$
, 11.5, 14.5, 19.6, 27, 39, 54.4, 62.4, and 200 GeV. The corresponding baryon chemical potential is around
$30 < μ B < 400$
MeV, covering a wide region in the QCD phase diagram. The fixed-target (FXT) experiment was also performed for
$s NN = 3.0$
GeV Au+Au collisions with the STAR detector in 2018, where the baryon chemical potential has been extended up to 720 MeV.
Various observables were measured in BES-I, e.g., conserved charge fluctuations [
] to search for the QCD critical point, directed flow [
] and average transverse mass [
] to search for the first-order phase transition, elliptic flow [
], the nuclear modification factor [
], dynamical charge correlations [
], and dileptons [
] to search for the possible boundary of QGP formation. Many of these measurements exhibit interesting trend as a function of the collision energy, but their interpretations have been limited by
large uncertainties at low collision energies.
In order to improve those results, the phase II of the BES program (BES-II) was performed in 2019–2021 at $s NN =$ 7.7, 9.2, 11.5, 13.7, 14.5, 17.3, and 19.6 GeV. The FXT experiments were also
carried out at $s NN =$ 3.2, 3.5, 3.8, 3.9, 4.5, 5.2, 6.2, and 7.7 GeV to fill the gap between BES energies and 3 GeV from FXT. In the following sections, we will present the measurements of
conserved charge fluctuations from BES-I and FXT 3 GeV data at RHIC.
2. Conserved Charge Fluctuations
2.1. Cumulants
Fluctuations of conserved charges are measured in terms of cumulants, or moments, of a probability distribution. The
th-order cumulant,
$C r$
, is defined by
th-derivatives of cumulant generating function [
], which is expressed by moments as:
$C 1 = 〈 N 〉$
$C 2 = 〈 ( δ N ) 2 〉$
$C 3 = 〈 ( δ N ) 3 〉$
$C 4 = 〈 ( δ N ) 4 〉 − 3 〈 ( δ N ) 2 〉 2$
$C 5 = 〈 ( δ N ) 5 〉 − 10 〈 ( δ N ) 2 〉〈 ( δ N ) 3 〉$
$C 6 = 〈 ( δ N ) 6 〉 + 30 〈 ( δ N ) 2 〉 3 − 15 〈 ( δ N ) 2 〉〈 ( δ N ) 3 〉$
, where
is the number of net-particles of a conserved charge measured within the experimental acceptance and the bracket represents the event average. Another notation,
$〈 N r 〉 c = C r$
, will also be used in following sections, where the subscript
represents the cumulant. Similarly, the 2nd-order mix-cumulant between two conserved quantities can be expressed as:
$〈 X Y 〉 c = 〈 X Y 〉 − 〈 X 〉〈 Y 〉$
, where
represent net-particle multiplicities of two different conserved quantities or particle species. The cumulants have a volume dependence by definition. To cancel this trivial effect, we take the ratio
between different orders of cumulants, e.g.,
$C 3 / C 2$
$C 4 / C 2$
. These ratios can be directly compared with the corresponding susceptibility ratios from theoretical calculations.
2.2. Analysis Techniques
The event-by-event net-proton multiplicity distributions are shown in
Figure 2
from BES-I [
]. We study the precise structures at the tail of the distributions through the measurements of various orders of cumulants. One should keep in mind that these are raw distributions which are not
corrected for detector efficiency [
], initial volume fluctuations [
], pileup events [
], and other experimental artifacts.
The effect of the detector efficiencies were corrected by using the correction formulas, which is derived based on the assumption that detector efficiency follows the binomial distribution [
]. The possible deviation from the binomial distribution was studied in embedding simulations at Au+Au 200 GeV most central collisions, where we found that the efficiencies of the STAR detector can
be well describe by the beta-binomial distribution [
]. The net-proton
$C 4 / C 2$
values corrected for the beta-binomial distribution [
] were found to be consistent with those from the binomial efficiency correction within statistical uncertainties, and therefore it was concluded that the efficiency distribution of the STAR detector
was close enough to the binomial distribution within the current statistical precision.
In heavy-ion collisions, the number of participant nucleons and particle multiplicity are not one-to-one corresponding, which distorts the cumulants of net-particle distributions. This effect is
referred to as initial volume fluctuations. The effect was suppressed by applying the data-driven approach of the Centrality Bin Width Correction (CBWC) [
], where the cumulants were calculated at each reference multiplicity bin and averaged at each centrality class. It was also confirmed that the CBWC gives consistent results with another approach to
correct for initial volume fluctuation in an analytical way [
] for the BES-I data sets [
]. It should be noted that the neutrons cannot be measured by the STAR detector. Thus, we measured net-proton distributions as a proxy of net-baryon distributions.
The fraction of pileup events were much higher for 3.0 GeV data from FXT compared to the collider energies from BES-I. We first determined the pileup fraction and the reference multiplicity
distributions of the single-collision events by using the unfolding approach [
]. This allowed us to determine the response matrices between single-collision multiplicity and that for the pileup events, which was used for the pileup correction of cumulants [
2.3. Baselines
Experimentally measured cumulant ratios of net-proton distributions are compared with the baselines. The simplest case is that the protons and antiprotons follow independent Poisson distributions,
respectively. Then the resulting net-proton distribution follows the Skellam distribution, whose odd-order cumulants are
$μ p − μ p ¯$
while even-order cumulants are
$μ p + μ p ¯$
, where
$μ p$
$μ p ¯$
denote the mean value of protons and antiprotons, respectively. As a result, the
$C 4 / C 2$
value for the Skellam distribution becomes unity for all collision energies and centralities, and therefore the deviation of the experimental results with respect to unity indicate the effects of
non-statistical fluctuations. It is also important to incorporate the background effects that cannot be avoided in experiments, such as initial volume fluctuations and baryon number conservation [
]. These effects are generally simulated in the hadronic transport model, which is employed as a more realistic baseline than the Skellam baseline.
3. Net-Proton Fluctuations
3.1. $C 4 / C 2$ for the Critical Point Search
Figure 3
shows the collision energy dependence of net-proton
$C 4 / C 2$
in Au+Au most central collisions from BES-I [
] and the FXT program at
$s NN = 3$
GeV [
]. The
$C 4 / C 2$
value is consistent with the Poisson baseline at
$s NN = 200$
GeV while it decreases with decreasing collision energy, reaching a minimal value at 19.6 GeV. The ratio seems to increase above the Poisson baseline at lower collision energies down to 7.7 GeV. The
collision energy dependence was found to have nonmonotonicity of 3.1
. The observed nonmonotonic collision energy dependence is qualitatively consistent with the model calculation incorporating the QCD critical point [
], and therefore the BES-I results could indicate the existence of the critical point at
$7.7 ≤ s NN ≤ 19.6$
GeV. The proton
$C 4 / C 2$
values from the HADES experiment at 2.4 GeV [
] and STAR-FXT at 3.0 GeV are also plotted in
Figure 3
. All experimental measurements are consistent within uncertainties. The STAR-FXT result can be reproduced by the UrQMD calculations [
], which indicates that the hadronic interactions are dominant at 3 GeV collisions and the QCD critical point may only exist at
$s NN > 3.0$
GeV. Further conclusions could potentially be made after the completion of the ongoing analysis for the phase II of the BES program (BES-II) and FXT at
$3.2 ≤ s NN ≤ 27$
GeV [
3.2. $C 6 / C 2$ for the Crossover Search
The STAR experiment also measured further higher-order cumulants up to the sixth order. Theoretically, the net-baryon
$C 6 / C 2$
is expected to be more sensitive to the QCD phase structure than
$C 4 / C 2$
, as its sign changes near the phase transition temperature [
]. The left panel of
Figure 4
shows the centrality dependence of net-proton
$C 6 / C 2$
in Au+Au collisions at
$s NN = 27$
, 54.4, and 200 GeV [
]. The
$C 6 / C 2$
values from 27 and 54.4 GeV are consistent with zero within large uncertainties, while those from 200 GeV are progressively negative systematically from peripheral to central collisions. These
negative signs are qualitatively consistent with lattice QCD calculations [
]. Thus, the results from 200 GeV could indicate the experimental signature of the smooth crossover at RHIC top energy. The collision energy dependence of (net-)proton
$C 6 / C 2$
is shown in the right panel in
Figure 4
for Au+Au 0–40% and 50–60% collisions. The
$C 6 / C 2$
value from 0–40% centrality decreases with decreasing collision energy down to 7.7 GeV, while it is consistent with UrQMD calculations at 3 GeV. The decreasing trend down to 7.7 GeV is qualitatively
consistent with the FRG model down to 7.7 GeV [
] and lattice QCD calculations down to 39 GeV [
], where both calculations predict a smooth crossover transition.
4. Challenge for Baryon–Strangeness Correlations
4.1. Previous Measurement
Correlations between two conserved charges are expected to carry important information on the magnetic field formed in non-central heavy-ion collisions [
] as well as the temperature of the system [
]. Observables suggested by theories consist of the second-order mix-cumulant between net-baryon and net-strangeness, which we call baryon–strangeness correlation in the rest of this article. The
importance of the baryon–strangeness correlations was first proposed in Ref. [
] in terms of the correlator
$C B S = − 3 〈 B S 〉 c 〈 S 2 〉 c ,$
$〈 B S 〉 c$
denotes the baryon–strangeness correlation and
$〈 S 2 〉 c$
is the second-order net-strangeness cumulant. The
$C B S$
value is expected to be unity for the ideal QGP while it strongly depends on the baryon-chemical potential for the hadronic gas. However, the
$C B S$
values extracted from previous STAR measurements [
] are between −0.12 and 0.043 for
$7.7 ≤ s NN ≤ 200$
GeV, which is much smaller than the expectations. According to the model calculations [
], the signal of the baryon–strangeness correlations vanish once the strange baryons (hyperons) are excluded from the measurements. The
$C B S$
values were thus very small as only (anti)protons and charged kaons were taken into account as proxies of net-baryon and net-strangeness, respectively, in previous STAR measurements.
To include hyperons in the measurement of event-by-event fluctuations, one has to address the issue of the combinatorial backgrounds. As hyperons decay into daughter particles before hitting the
detector, and therefore the invariant mass technique is usually employed to reconstruct hyperons [
]. One can see the signal peak of the hyperons of interest and determine the shape of the combinatorial backgrounds by optimizing the cut conditions for topological parameters for hyperon
reconstructions. Then one can subtract the background from the measurement to extract the signal yield and its event average. However, it is impossible to identify signal and background particles on
a candidate-by-candidate basis. Hence, the event-by-event fluctuation measurement of hyperons has proved challenging.
4.2. New Method: Purity Correction
Figure 5
shows a sketch of the invariant mass distribution for
]. The shape of the combinatorial backgrounds is assumed to be flat for simplicity. What we can measure in the experiment is always the sum of signal and background particles,
$m S N = m S + m N$
, where
$m S N$
is the number of signal candidates,
$m S$
is the number of signal particles, and
$m N$
is the number of background particles. However, it is impossible to identify
$m S$
$m N$
on an event-by-event basis. The second-order cumulant of signal candidates is expressed as:
$〈 m S N 2 〉 c = 〈 m S 2 〉 c + 〈 m N 2 〉 c + 2 〈 m S m N 〉 c ,$
$〈 m S 2 〉 c = 〈 m S N 2 〉 c − 〈 m N 2 〉 c − 2 〈 m S m N 〉 c ,$
where the last two terms on the right-hand side in Equation (
) cannot be measured experimentally.
Let us consider utilizing the sideband particles around the signal peak as a proxy for the number of background particles. Sideband particles,
$m R , i$
, are counted at the
th sideband windows indicated by dotted lines in
Figure 5
. Supposing that the probability distribution of sideband particles is consistent with that for the background particles, the following relations hold:
$〈 m N 2 〉 c = 〈 m R , i 2 〉 c ,$
$〈 m S m N 〉 c = 〈 m S m R , i 〉 c ,$
$〈 m N m R , i 〉 c = 〈 m R , i m R , j 〉 c , ( i ≠ j ) .$
From Equations (
), we obtain
$〈 m S 2 〉 c = 〈 m S N 2 〉 c − 〈 m R , i 2 〉 c − 2 〈 m S m R , i 〉 c .$
Next, we consider the second-order mix-cumulant between signal candidates and sideband particles:
$〈 m S N m R , i 〉 c = 〈 m S m R , i 〉 c + 〈 m N m R , i 〉 c$
$= 〈 m S m R , i 〉 c + 〈 m R , i m R , j 〉 c , ( i ≠ j ) ,$
$〈 m S m R , i 〉 c = 〈 m S N m R , i 〉 c − 〈 m R , i m R , j 〉 c .$
From Equation (
) to Equation (
) we used Equation (
). By substituting Equation (
) to Equation (
), we obtain the correction formula for the second-order cumulant [
] as
$〈 m S 2 〉 c = 〈 m S N 2 〉 c − 〈 m R , i 2 〉 c − 2 〈 m S N m R , i 〉 c + 2 〈 m R , i m R , j 〉 c .$
Similarly, the correction formula for the second-order mix-cumulant can be derived as
$〈 m S n S 〉 c = 〈 m S N n S N 〉 c − 〈 m S N n R , i 〉 c − 〈 n S N m R , i 〉 c + 〈 m R , i n R , i 〉 c ,$
is supposed to be the other conserved charge or particle species than
, and we utilized the following relations:
$〈 m S n N 〉 c → 〈 m S n R , i 〉 c = 〈 m S N n R , i 〉 c − 〈 m R , i n R , i 〉 c ,$
$〈 m N n S 〉 c → 〈 m R , i n S 〉 c = 〈 m R , i n S N 〉 c − 〈 m R , i n R , i 〉 c ,$
$〈 m N m N 〉 c → 〈 m R , i n R , i 〉 c .$
It should be noted that the sideband windows need to be determined carefully. Because of the trivial volume dependence, the values of $〈 m R , i 2 〉 c$, $〈 m R , i m R , j 〉 c$, and other (mix-)
cumulants that include sideband particles can easily change depending on the width of the sideband windows. The purpose of utilizing the sideband windows is to use them as the proxies of the
background particles under the signal peak, and therefore the width of the sideband windows have to be precisely determined so that their yields are consistent with the background particles that we
want to subtract. This leads to the iterative steps as follows. First, we determine the background yields utilizing data-driven approach like rotation or event-mixing methods. Second, we divide the
sideband according to the background yields. Finally, we calculate the correction parameters for each window of the sideband.
It is further suggested to check if those correction parameters including sideband particles are flat enough as a function of the invariant mass. Otherwise, one should revisit the definition of the
sideband windows to check if the sideband is equally divided. The residual dependence of correction parameters on the invariant mass should be taken into account as a part of the systematic
uncertainties. One can also take the average over as many sideband windows as possible to determine the correction parameters more precisely.
4.3. Measurement of $Λ$ and $Ξ −$ Hyperons
$Ξ −$
hyperons were reconstructed by using the invariant mass technique based on the following decay channels:
$Λ → p + π −$
$Ξ − → Λ + π −$
. The topological parameters such as the distance of the closest approach (DCA) of daughter particles, DCA between daughter particles, DCA and the decay length of hyperons, were optimized so that the
signal peak becomes visible.
Figure 6
shows the invariant mass (
$M inv$
) distributions for
$Ξ −$
, where the clear peaks from
$Ξ −$
are seen around
$M inv = 1.12$
and 1.32 GeV/c
, respectively. Another peak around 1.28 GeV/c
$Ξ −$
invariant mass distribution is the fake signal which appears if the bachelor
$π −$
are daughters from
. To avoid the effect from the fake signal, sideband particles at
$M inv < 1.3$
were not used in the purity correction for
$Ξ −$
. The background shape was determined by using the rotation method, which is shown by cyan solid lines in
Figure 6
. The yield of the background particles were then estimated from the rotational backgrounds, where the sidebands are equally divided (sideband windows), as shown by the magenta dotted lines.
The signal candidates for
$Ξ −$
were counted at
$1.11 < M inv < 1.12$
$1.32 < M inv < 1.33$
, respectively, on an event-by-event basis. Sideband particles were counted at each sideband window in
Figure 6
Figure 7
shows the first- and second-order cumulants of sideband particles, and the second-order mix-cumulant between signal candidates and sideband particles, as a function of invariant mass. The first-order
cumulant is flat by definition, as the sideband was equally divided based on the background yields. It is found that the second-order cumulants and mix-cumulants are flat as well, which indicates
that the parameters for the purity correction do not depend on the invariant mass, and furthermore, that sideband particles can be used as proxies of the background particles under the signal peak.
The analysis of the
$C B S$
was performed for two cases: (1) Measure
$Λ ¯$
on top of
$p ¯$
, and
$K ±$
; (2) Add
$Ξ −$
$Ξ ¯ +$
on top of (1). The baryon–strangeness correlation and the second-order strangeness cumulant are given by
$〈 B S 〉 c = 〈 Δ p Δ K 〉 c − 〈 Δ p Δ Λ 〉 c + 〈 Δ Λ Δ K 〉 c − 〈 Δ K 2 〉 c ,$
$〈 S 2 〉 c = 〈 Δ K 2 〉 c + 〈 Δ Λ 2 〉 c − 2 〈 Δ K Δ Λ 〉 c ,$
for case (1), and
$〈 B S 〉 c = 〈 Δ p Δ K 〉 c − 〈 Δ p Δ Λ 〉 c − 2 〈 Δ Ξ 〉 c + 〈 Δ Λ Δ K 〉 c − 〈 Δ K 2 〉 c − 3 〈 Δ Λ Δ Ξ 〉 c + 〈 Δ Ξ Δ K 〉 c − 2 〈 Δ Ξ 2 〉 c ,$
$〈 S 2 〉 c = 〈 Δ K 2 〉 c + 〈 Δ Λ 2 〉 c + 4 〈 Δ Ξ 2 〉 c − 2 〈 Δ K Δ Λ 〉 c − 4 〈 Δ K Δ Ξ 〉 c + 4 〈 Δ Λ Δ Ξ 〉 c ,$
for case (2), where
$Δ X$
represents the difference between the number of particles and antiparticles of a particle species
. The coefficients in front of
-related terms come from the fact that
hyperons carry two strange quarks. To obtain
$〈 B S 〉 c$
$〈 S 2 〉 c$
, all the second-order cumulants and mix-cumulants in Equations (
) were measured with efficiency corrections. Hyperons-related terms were corrected for their purities as well.
4.4. Results
The validity of the purity correction was checked in a data-driven way by analyzing the various topological cut sets for
reconstructions. Each cut set has different purity
and significance
. The efficiency and purity corrected value of the
fluctuations should be consistent among different cut sets if the purity correction works well.
Figure 8
shows the 2nd-order
cumulant from Au+Au most central collisions at
$s NN = 200$
GeV as a function of
purity, where purity-uncorrected results are shown by black squares and purity-corrected results are shown by red circles. The purity-uncorrected results increase with decreasing purity because the
background contribution becomes large. In this case, the result having the highest purity around 96% can only be taken as a final result which still suffers from 4% background contributions. After
applying purity corrections for each cut set, the results are flat with respect to the purity. This indicate that the purity correction works well in the STAR data. More importantly, one can take any
of the red circles as a final result. We finally employed the result from the cut set which yields the best significance of
, leading to the smallest statistical uncertainty of purity-corrected
$〈 Λ 2 〉 c$
Figure 9
shows the centrality dependence of
$C B S$
from Au+Au 200 GeV collisions. The results are corrected for purity and reconstruction efficiency, while not being corrected for hyperons’ branching ratio. The
$C B S$
values have been significantly enhanced compared to the previous measurement [
] by including
$Λ ¯$
on top of
$p ¯$
, and
$K ±$
, as shown by the blue squares. We have also tried including multi-strange baryons
$Ξ −$
$Ξ ¯ +$
as well, which is shown by the red stars. A slightly different centrality dependence is observed for both cases. The
$C B S$
values are much closer to those from the lattice QCD calculations [
] shown by the purple band than previous measurements. The red and blue shaded bands represent the UrQMD calculations incorporating
$Σ 0$
as well as the particle species in the experimental measurements. The
$Σ 0$
decays into
and the daughter
s are already included in our measurements. The UrQMD calculations significantly underestimate the experimental data and cannot describe the centrality dependence.
5. Summary
We discussed the recent results on conserved charge fluctuations from BES-I and $s NN = 3$ GeV collisions from FXT program at RHIC. The nonmonotonic energy dependence of (net-)proton $C 4 / C 2$
could hint on the existence of the QCD critical point around $7.7 ≤ s NN ≤ 19.6$ GeV. The negative signs observed in net-proton $C 6 / C 2$ at 200 GeV could indicate the experimental signature of a
smooth crossover at RHIC top energy. The collision energy dependence of (net-)proton $C 6 / C 2$ could imply that the phase boundary can be probed over the wide range of the QCD phase diagram. These
interpretations are currently limited due to large uncertainties, which will be significantly improved in the near future by the ongoing analysis on BES-II data having 10–20 times larger event
statistics compared to BES-I. We also reported the recent attempt for measuring the baryon–strangeness correlations. The $Λ$, $Ξ −$, and their antiparticles were included in the measurement, on top
of p, $p ¯$, and $K ±$. The results were corrected for the combinatorial backgrounds by using the newly developed method for the purity correction. The validity of the correction was confirmed in a
data-driven way. As a result, the $C B S$ values were significantly enhanced and the value is now much closer to the lattice QCD calculations.
This research was funded by JSPS KAKENHI grant number 23K13113 and 19H05598. The APC was funded by 23K13113.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
T.N. thanks S. Esumi, H. Feng, X. Luo, and Y. Zhang for fruitful discussions.
Conflicts of Interest
The authors declare no conflicts of interest.
1 The ratio of the signal to the background yields.
2 The ratio of the signal yield to the square-root of signal candidates, which is a proxy for the product of purity and reconstruction efficiency.
Figure 1.
The conjectured QCD phase diagram with respect to the baryon chemical potential and temperature [
]. The energies and ranges represent collision energies from the experimental programs at RHIC and LHC.
Figure 2.
Event−by−event raw net-proton multiplicity distributions for Au+Au collisions at BES-I energies [
Figure 3.
Collision energy dependence of (net−)proton
$C 4 / C 2$
for Au+Au most central collisions from the BES-I and FXT [
]. The golden band and cross represent the UrQMD calculations. The green band shows the projection of statistical uncertainties for BES-II energies in the collider mode.
Figure 4.
) Centrality dependence of net-proton
$C 6 / C 2$
at 27, 54.4, and 200 GeV Au+Au collisions [
]. The lattice QCD calculations are from Ref. [
]. (
) Collision energy dependence of (net-)proton
$C 6 / C 2$
for Au+Au collisions at 0–40% and 50–60% centralities [
]. The
$C 6 / C 2$
values for lattice QCD and FRG calculations are from Refs. [
Figure 5.
Example of the invariant mass distribution for
]. The red shaded area corresponds to the signal particles, and the blue one corresponds to the background particles. The dotted blue lines are the boundaries for the sideband windows.
Figure 6. Invariant mass distribution of $Λ$ (left) and $Ξ −$ (right) hyperons. The cyan solid lines represent the rotation backgrounds, and the magenta dotted lines are the sideband boundaries for
the purity corrections.
Figure 7. The 1st- and 2nd-order cumulants of sideband particles, $〈 Λ R 〉 c$ and $〈 Λ R 2 〉 c$ (the subscript R represents the rotational backgrounds), and the 2nd-order mix-cumulants between
signal candidates and sideband particles, $〈 Λ S N Λ R 〉 c$, for $Λ$ (left) and $Ξ −$ (right).
Figure 8. The 2nd-order $Λ$ cumulant as a function of $Λ$ purity from Au+Au most central collisions at 200 GeV. Purity-uncorrected results are shown by black squares, and purity-corrected results are
shown by red circles. All results are corrected for reconstruction efficiencies. The branching ratio is not taken into account.
Figure 9.
Centrality dependence of
$C B S$
from Au+Au 200 GeV collisions. The results are corrected for purity and reconstruction efficiencies for hyperons, while their branching ratios are not taken into account. The purple band represents
the results from the lattice QCD calculations [
]. The UrQMD calculations are shown by red and blue shaded bands.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:/
Share and Cite
MDPI and ACS Style
Nonaka, T., on behalf of STAR Collaboration. Conserved Charge Fluctuations from RHIC BES and FXT. Universe 2024, 10, 49. https://doi.org/10.3390/universe10010049
AMA Style
Nonaka T on behalf of STAR Collaboration. Conserved Charge Fluctuations from RHIC BES and FXT. Universe. 2024; 10(1):49. https://doi.org/10.3390/universe10010049
Chicago/Turabian Style
Nonaka, Toshihiro on behalf of STAR Collaboration. 2024. "Conserved Charge Fluctuations from RHIC BES and FXT" Universe 10, no. 1: 49. https://doi.org/10.3390/universe10010049
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/2218-1997/10/1/49","timestamp":"2024-11-04T17:59:28Z","content_type":"text/html","content_length":"481567","record_id":"<urn:uuid:81f9b543-83e7-4696-8f96-39d09dffd955>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00827.warc.gz"} |
Construct Different Types of Quadrilaterals | How to Construct Quadrilaterals of Different Types?
Do you want to know How to construct different types of quadrilaterals? Different types of quadrilaterals are developed depending on the sides, diagonals, and also angles. Have a look at a step by
step explanation to construct various types of quadrilaterals. We have given different problems on the construction of quadrilaterals along with steps for better understanding. Look at them and
practice all the problems given below and enhance your conceptual knowledge.
How to Construct Quadrilaterals? | Steps of Construction
You can refer to the below available various questions on constructing quadrilaterals along with a detailed explanation. For the sake of your comfort, we even jotted Steps of Construction for each
and every problem so that you can solve similar kinds of questions easily.
1. Construct a parallelogram PQRS in which PQ = 7 cm, QR = 5 cm and diagonal PR = 7.8 cm.
Steps of Construction:
Firstly, draw a rough figure of the quadrilateral with the given dimensions.
1. Draw a line segment of length 7 cm and mark the ends as P and Q.
2. Take the point P as a center and draw an arc by taking the radius 7.8 cm.
3. Next, take point Q as a center and draw an arc by taking the radius 5 cm. Mark the point as R where the two arcs cross each other. Join the points Q and R as well as P and R.
Note: A parallelogram is a simple quadrilateral with two pairs of parallel sides. The opposite or facing sides of a parallelogram are of equal length and the opposite angles of a parallelogram are of
equal measure.
4. By taking the point P as a center, draw an arc with a radius of 5 cm.
5. By taking the point R as a center, draw an arc with a radius of 7 cm.
6. Mark the point as S where the two arcs cross each other. Join the points R and S as well as P and S.
PQRS is a required parallelogram.
2. Construct a parallelogram, one of whose sides is 7.2 cm and whose diagonals are 8 cm and 8.4 cm.
Steps of Construction:
Firstly, draw a rough figure of the quadrilateral with the given dimensions.
1. Draw a line segment of length 7.2 cm and mark the ends as P and Q.
2. Take the point P as a center and draw an arc by taking the radius 4.2 cm.
3. Next, take point Q as a center and draw an arc by taking the radius 4 cm. Mark the point as O where the two arcs cross each other. Join the points Q and O as well as P and O.
4. By taking the point O as a center, draw an arc with the required radius.
5. Produce PO to R such that OR = PO and produce QO to S such that OS = OQ.
6. Join PS, QR, and RS.
PQRS is a required parallelogram.
3. Construct a parallelogram whose diagonals are 5.6 cm and 6.4 cm and an angle between them is 70°.
Steps of Construction:
1. Draw a line segment of length 5.6 cm and mark the ends as P and R.
2. Take the point O as a center in between P and R.
3. Next, take point O as a center and make a point by taking 70º using a protector. Draw a line XO to Y.
4. Set off OQ = 1/2 (6.4) = 3.2 cm and OS = 1/2 (6.4) =3.2 cm as shown.
5. Join PQ, QR, RS, and SP.
PQRS is a required parallelogram.
4. Construct a rectangle PQRS in which side QR = 5.2 cm and diagonal QS = 6.4 cm.
Steps of Construction:
Firstly, draw a rough figure of the quadrilateral with the given dimensions.
1. Draw a line segment of length 5.2 cm and mark the ends as Q and R.
2. Take the point R as a center and draw a perpendicular line to QR.
3. Next, take point Q as a center and draw an arc by taking the radius 6.4 cm. Mark the point as S where the line and arc cross each other. Join the points Q and S as well as R and S.
4. By taking the point S as a center, draw an arc with the required radius of 5.2 cm.
5. Take the point Q as a center and draw a perpendicular line to QR. Mark the point as P where the point and arc cross each other. Join the points Q and P as well as P and S.
PQRS is a required rectangle.
5. Construct a square PQRS, each of whose diagonals is 5.4 cm.
Steps of Construction:
1. Draw a line segment of length 5.4 cm and mark the ends as P and R.
2. Draw the right bisector XY of PR, meeting PR at O.
3. From O set off OQ = 1/2 (5.4) = 2.7 cm along OQ and OS = 2.7 cm along OX.
4. Join PQ, QR, RS, and SP.
PQRS is a required square.
6. Construct a rhombus with a side of 4.4 cm and one of its angles equal to 67°.
Steps of Construction:
Given that a rhombus with a side of 4.4 cm and one of its angles equal to 67°.
The adjacent angle = (180° – 67°) = 113°.
1. Draw a line segment of length 4.4 cm and mark the ends as Q and R.
2. Make ∠RQX = 113° and ∠QRY = 67°.
3. Set off QP = 4.4 cm along with QX and RS = 4.4 cm along with RY.
4. Join PS.
PQRS is a required rhombus.
Leave a Comment | {"url":"https://bigideasmathanswers.com/construct-different-types-of-quadrilaterals/","timestamp":"2024-11-01T20:28:44Z","content_type":"text/html","content_length":"140289","record_id":"<urn:uuid:273e7298-86a4-4ef8-a54c-c80d94f47440>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00516.warc.gz"} |
In this talk, we are concerned with the accuracy of computed eigenvalues and eigenvectors in the eigenvalue problem $$ Ax = \lambda x, \qquad A \in \mathbb{R}^{n \times n}, \quad \lambda \in \mathbb
{R}, \quad x \in \mathbb{R}^{n}\setminus\{0\}, $$ where $A$ is symmetric, $\lambda$ is an eigenvalue, and $x$ is an eigenvector corresponding to $\lambda$. Let $k \le n$, and $P \in \mathbb{R}^{n \
times k}$ and $B \in \mathbb{R}^{k \times k}$ satisfy $AP = PB$, where $B$ is not necessarily diagonal. The eigenvalues of $B$ are those of $A$, and ${\rm span}(P)$ is called the invariant subspace
of $A$ corresponding to these eigenvalues. The eigenvectors of $A$ corresponding to these eigenvalues are included in this subspace. $\quad$ We consider computing verified error bounds for all
numerically obtained (approximate) eigenvalues and eigenvectors, in which all the possible rounding errors have been taken into account. In 2006, Miyajima et al. proposed such an algorithm, which
involves only four floating-point matrix multiplications for computing all the error bounds. When the eigenvalues are closely clustered, this algorithm does not give error bounds for approximate
eigenvectors and/or basis of invariant subspaces. Recently, Rump and Lange proposed an algorithm for this purpose. This algorithm gives error bounds for approximate eigenvectors when the
corresponding eigenvalues are well-separated, and provide error bounds for approximate basis of invariant subspaces when the eigenvalues are closely clustered. $\quad$ This talk has two purposes. The
first purpose is to present theories for computing error bounds for approximate basis of invariant subspaces when the eigenvalues are closely clustered. The second purpose is to propose an algorithm
for computing error bounds for all approximate eigenvalues, and approximate eigenvectors or basis of invariant subspaces. We develop this algorithm by combining the algorithm by Miyajima et al. and
presented theories. Particular emphasis is put on the computational cost of the proposed algorithm. Additional procedures requiring cubic complexity are unnecessary for computing error bounds for
approximate basis of invariant subspaces. As a consequence, the proposed algorithm also involves only four floating-point matrix multiplications and does not involve other procedures requiring cubic
complexity for computing all the error bounds. | {"url":"https://www.smartchair.jp/conference/program/each_paper/?p_key=ahNzfnNtYXJ0Y2hhaXItanAtYXBwchILEgVQYXBlchiAgIDs-bnZCAw&conf_id=NMSP2023","timestamp":"2024-11-14T01:13:17Z","content_type":"text/html","content_length":"8674","record_id":"<urn:uuid:1f6d320d-031f-452f-8930-0e5d6026830e>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00716.warc.gz"} |
What is the equation of the line given points (-12,0), (4,4)? | Socratic
What is the equation of the line given points (-12,0), (4,4)?
1 Answer
See a solution process below:
First, we need to determine the slope of the line. The formula for find the slope of a line is:
$m = \frac{\textcolor{red}{{y}_{2}} - \textcolor{b l u e}{{y}_{1}}}{\textcolor{red}{{x}_{2}} - \textcolor{b l u e}{{x}_{1}}}$
Where $\left(\textcolor{b l u e}{{x}_{1}} , \textcolor{b l u e}{{y}_{1}}\right)$ and $\left(\textcolor{red}{{x}_{2}} , \textcolor{red}{{y}_{2}}\right)$ are two points on the line.
Substituting the values from the points in the problem gives:
$m = \frac{\textcolor{red}{4} - \textcolor{b l u e}{0}}{\textcolor{red}{4} - \textcolor{b l u e}{- 12}} = \frac{\textcolor{red}{4} - \textcolor{b l u e}{0}}{\textcolor{red}{4} + \textcolor{b l u e}
{12}} = \frac{4}{16} = \frac{1}{4}$
Now, we can use the point-slope formula to write and equation for the line. The point-slope form of a linear equation is: $\left(y - \textcolor{b l u e}{{y}_{1}}\right) = \textcolor{red}{m} \left(x -
\textcolor{b l u e}{{x}_{1}}\right)$
Where $\left(\textcolor{b l u e}{{x}_{1}} , \textcolor{b l u e}{{y}_{1}}\right)$ is a point on the line and $\textcolor{red}{m}$ is the slope.
Substituting the slope we calculated and the values from the first point in the problem gives:
$\left(y - \textcolor{b l u e}{0}\right) = \textcolor{red}{\frac{1}{4}} \left(x - \textcolor{b l u e}{- 12}\right)$
$y = \textcolor{red}{\frac{1}{4}} \left(x + \textcolor{b l u e}{12}\right)$
We can modify this result to put the equation in slope-intercept form. The slope-intercept form of a linear equation is: $y = \textcolor{red}{m} x + \textcolor{b l u e}{b}$
Where $\textcolor{red}{m}$ is the slope and $\textcolor{b l u e}{b}$ is the y-intercept value.
$y = \textcolor{red}{\frac{1}{4}} \left(x + \textcolor{b l u e}{12}\right)$
$y = \left(\textcolor{red}{\frac{1}{4}} \times x\right) + \left(\textcolor{red}{\frac{1}{4}} \times \textcolor{b l u e}{12}\right)$
#y = color(red)(1/4)x + color(blue)(12)/(color(red)(4)#
$y = \textcolor{red}{\frac{1}{4}} x + \textcolor{b l u e}{3}$
Impact of this question
4646 views around the world | {"url":"https://socratic.org/questions/what-is-the-equation-of-the-line-given-points-12-0-4-4","timestamp":"2024-11-01T19:22:22Z","content_type":"text/html","content_length":"37231","record_id":"<urn:uuid:78f03b3d-f748-47ca-933f-d3df61b5cc66>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00365.warc.gz"} |
Plus a Constant - Funny Long Jokes
Plus a Constant
Two mathematicians are in a bar. The first one says to the second that the average person knows very little about basic mathematics. The second one disagrees, and claims that most people can cope
with a reasonable amount of math.
The first mathematician goes off to the washroom, and in his absence the second calls over the waitress. He tells her that in a few minutes, after his friend has returned, he will call her over and
ask her a question. All she has to do is answer one third x cubed.
She repeats “one thir — dex cue”?
He repeats “one third x cubed”.
She says, “one thir dex cuebd”?
Yes, that’s right, he says. So she agrees, and goes off mumbling to herself, “one thir dex cuebd…”.
The first guy returns and the second proposes a bet to prove his point, that most people do know something about basic math. He says he will ask the blonde waitress an integral, and the first
laughingly agrees. The second man calls over the waitress and asks “what is the integral of x squared?”.
The waitress says “one third x cubed” and while walking away, turns back and says over her shoulder “plus a constant!” | {"url":"https://nebesa.net/plus-a-constant/","timestamp":"2024-11-01T19:37:02Z","content_type":"text/html","content_length":"20903","record_id":"<urn:uuid:18868057-43df-4c4c-8cd3-61e4572c5fcd>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00800.warc.gz"} |
AP Precalculus Problems and Solutions (Part - 1)
Problem 1 :
The function h is given by h(x) = 8 ⋅ 2^x. For which of the following values of x, h(x) = 256?
A) x = 2
B) x = 5
C) x = 8
D) x = 16
Solution :
Problem 2 :
The function h is given by h(x) = log[3 ]x. Which of the following is equivalent to the expression
2 ⋅ h(w) + h(p),
where w and p are values in the domain of h?
A) log[3] (wp)^2
B) (log[3] w)^2 ⋅ (log[3] p)
C) log[3] (w^2p)
D) log[3] (2wp)
Solution :
Problem 3 :
Let x and y be positive constants. Which of the following is equivalent to 2ln x - 2 ln y ?
Solution :
Problem 4 :
The function f is given by f(x) = 2log[5] x. Which of the following describes f?
A) is an increasing function that increases at an increasing rate.
B) is an increasing function that increases at a decreasing rate.
C) is a decreasing function that decreases at an increasing rate.
D) is a decreasing function that decreases at a decreasing rate.
Solution :
Problem 5 :
The Richter scale is a numerical scale that uses base 10 logarithms for measuring an earthquake’s magnitude. The larger the number, the more intense the earthquake. As intensities increase
multiplicatively by a factor of 10, the Richter scale increases additively by 1. Consider two earthquakes that occurred in the year 2022. An earthquake in Indonesia had a magnitude of 5.1, and an
earthquake in Mexico had a magnitude of 2.5. Approximately how many times more intense was the Indonesia earthquake than the Mexico earthquake?
A) 2.6
B) 26
C) 100
D) 400
Solution :
Kindly mail your feedback to v4formath@gmail.com
We always appreciate your feedback.
©All rights reserved. onlinemath4all.com | {"url":"https://www.onlinemath4all.com/ap-precalculus-problems-and-solutions-1.html","timestamp":"2024-11-03T01:10:04Z","content_type":"text/html","content_length":"61724","record_id":"<urn:uuid:3e79991b-995c-44d1-ab4c-794df30afbb7>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00693.warc.gz"} |
Cumulative rate of return formula
This calculator assumes that all dividend payments will be reinvested. Calculate Compounding Returns. Money Invested. $. Return Rate. %. Number of Years. 13 Nov 2018 In a total return calculation,
the compound interest, taxes and fees would have been factored in. To find the "real return" - or the rate of return after
Calculate an investment's percentage return using CAGR To do so, analysts use other formulas, like the compound annual growth rate (CAGR):. 1) How do you separate returns resulting from compounding
of interest; and 2) calculate compound interest rate without a calculator or any supporting tables, Once you reach this profit percentage, the excess profits are split among the rest Compounded
means that the calculation of a preferred return periodic growth Practice Problems. Problem 1. If you invest $1,000 at an annual interest rate of 5 % compounded continuously, calculate the final
amount you Savings Calculator This one takes a lump sum of money and compounds it monthly over a fixed period of time at a fixed annual yield. Plus it allows you to add The percent change from one
period to another is calculated from the formula: Where: PR = Percent Rate VPresent Calculating Average Annual (Compound) Growth Rates. Another common method of Excel returns the answer: 1.1%
Select a blank cell, for example Cell E3, enter the below formula into it, and press the Calculate compound annual growth rate with XIRR function in Excel to find the highest price I can buy a share
at when I have a total expected return.
Once you reach this profit percentage, the excess profits are split among the rest Compounded means that the calculation of a preferred return periodic growth Practice Problems. Problem 1. If you
invest $1,000 at an annual interest rate of 5 % compounded continuously, calculate the final amount you Savings Calculator This one takes a lump sum of money and compounds it monthly over a fixed
period of time at a fixed annual yield. Plus it allows you to add The percent change from one period to another is calculated from the formula: Where: PR = Percent Rate VPresent Calculating Average
Annual (Compound) Growth Rates. Another common method of Excel returns the answer: 1.1% conditions. Use it to compare your after-tax return from two different investment choices. "0" in the "Tax
Rate" box. The latest interest rates are listed in the main menu, under the "Saving" tab. Interest Compounded. Monthly, Quarterly To understand how the TSP calculates rates of return for any given
period of time and determines compound annual returns, read the Fact Sheet Calculating With no dividends reinvested, this is a total cumulative return of 697.99% or an average of 10.94%; it also
includes two stock splits. The value of dividends received during that time period also adds another $13,611 in profit above the original investment.
In finance, return is a profit on an investment. It comprises any change in value of the The logarithmic return or continuously compounded return, also known as force This formula applies with an
assumption of reinvestment of returns and it
The Rate of Return (ROR) is the gain or loss of an investment over a period of time copmared to the initial cost of the investment expressed as a percentage. This guide teaches the most common
formulas for calculating different types of rates of returns including total return, annualized return, ROI, ROA, ROE, IRR In order to calculate CAGR, you must begin with the total return and the
number of years in which the investment was held. In the above example, the total return was 2.3377 (133.77 percent). You also know the investment was held for ten years.
Select a blank cell, for example Cell E3, enter the below formula into it, and press the Calculate compound annual growth rate with XIRR function in Excel to find the highest price I can buy a share
at when I have a total expected return.
$13,000 - $10,000 / $10,000 = cumulative return. Step. Perform the calculation. Using the above example, the calculation would be: $3,000 / $10,000 = .30. Convert the decimal to percentage form. The
cumulative return would be 30 percent. The rate of return formula is equal to current value minus original value divided by original value multiply by 100. Watch our Demo Courses and Videos
Valuation, Hadoop, Excel, Mobile Apps, Web Development & many more. The cumulative total return is then: ( $44.26 – $0.06607 ) / $0.06607 = 668.90 = 66,890%. In mutual fund fact sheets and websites,
the cumulative return can be quickly deduced from a graph that shows the growth of a hypothetical $10,000 investment over time (usually starting at the fund's inception). The Rate of Return (ROR) is
the gain or loss of an investment over a period of time copmared to the initial cost of the investment expressed as a percentage. This guide teaches the most common formulas for calculating different
types of rates of returns including total return, annualized return, ROI, ROA, ROE, IRR
The cumulative total return is then: ( $44.26 – $0.06607 ) / $0.06607 = 668.90 = 66,890%. In mutual fund fact sheets and websites, the cumulative return can be quickly deduced from a graph that shows
the growth of a hypothetical $10,000 investment over time (usually starting at the fund's inception).
Compound interest, or 'interest on interest', is calculated with the compound interest formula. Multiply the principal amount by one plus the annual interest rate to Compound interest formulas to
find principal, interest rates or final investment what annual interest rate do I need to get from them to match the return I got from Power of Compounding Calculator : Compounding is the addition
of interest on Compound Interest Calculator You expect the Annual Rate of Returns to be.
This is a useful function for calculating cumulative return over a period of time, say a calendar year. Can produce simple or geometric return. Usage. Return. Annual rate of return is the increase in
your investment over a year, as a proportion of your original investment. View TD Mutual Funds historic investment returns. Example: you have $1,000, and want it to grow to $2,000 in 5 Years, what
interest rate do you need? The formula is: r = ( FV / PV )1/n - 1. calculator exponent Calculate an investment's percentage return using CAGR To do so, analysts use other formulas, like the compound
annual growth rate (CAGR):. 1) How do you separate returns resulting from compounding of interest; and 2) calculate compound interest rate without a calculator or any supporting tables, Once you
reach this profit percentage, the excess profits are split among the rest Compounded means that the calculation of a preferred return periodic growth | {"url":"https://bestoptionsavcascz.netlify.app/cordts8691jup/cumulative-rate-of-return-formula-xe","timestamp":"2024-11-02T15:45:57Z","content_type":"text/html","content_length":"34208","record_id":"<urn:uuid:e8fb3c02-ae96-4273-8025-90bc8b4be107>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00163.warc.gz"} |
Class 6 Maths MCQ Online TestClass 6 Maths MCQ Online Test
Chapter Wise MCQ Questions for class 6 Maths with Answers Online test is very useful for the students to practice. Class 6 Maths MCQ is very useful for competitive exams because here candidates can
learn a very basic level of Maths fundamental questions with answers.
We prepared chapter-wise MCQ Questions for Class 6 Maths with Answers according to the latest NCERT book as per the CBSE syllabus. If students practice this online mock test on Class 6 Maths then it
will be very helpful to score good marks in their school examination.
Class 6th maths is very basic level maths from where some important topics basic concepts building starts in the student's mind to solve some hard questions in further classes. We prepare more than
10 MCQ Questions with Answers for every chapter to practice the chapter topics properly. Questions are simple with some tricks also to solve them that students can understand easily.
Books For Reading: Class 6 Maths
MCQ Questions for class 6 Maths with Answers Syllabus
Class 6 Maths NCERT book has 14 chapters that cover very basic Mathematics topics like number systems, geometry, mensuration, data handling, etc. So we prepared the chapter-wise class 6 maths MCQ
online test. All 14 chapter gets will be elaborated widely in further classes with more details and concepts. The complete units of the class 6 maths are listed below.
• Chapter 1: Knowing our numbers
• Chapter 2: Whole numbers
• Chapter 3: Playing with Numbers
• Chapter 4: Basic geometrical ideas
• Chapter 5: Understanding elementary shapes
• Chapter 6: Integers
• Chapter 7: Fractions
• Chapter 8: Decimals
• Chapter 9: Data Handling
• Chapter 10: Mensuration
• Chapter 11: Algebra
• Chapter 12: Ratio and Proportion
• Chapter 13: Symmetry
• Chapter 14: Practical Geometry
Features of Online MCQ Test for Class 6 Maths
• These MCQs on Class 6 Maths are completely free of cost there is no registration or sign-up required to solve the Online MCQ Test for Class 6 Maths.
• These MCQs on Class 6 Maths are prepared chapter-wise so students can check their knowledge as per chapter also and practice more accordingly for a particular chapter.
• MCQ questions for class 6 maths online tests are prepared in a very interactive quiz format that attracts the students to solve them.
• There is only one minute allotted to solve each question and after times-up question will be marked as un-attempted.
• If you select the correct option the box will turn green and you will get one mark. There is no negative marking system.
• After clicking on the final submit button students will get their final results with percentage, attempted and not-attempted questions.
Class 6 Maths MCQ Online Mock Test
• Class 6 Maths MCQ Online Mock Tests
• Chapter Wise Quiz Test
• The questions are given in English language only
• You will get 60 seconds to answer each question.
• Know your score when the test is complete
Select the Mock Test
Mock Test Loadig Here
Back to Topics
Mock Test Result
Total Questions:
Right Answer:
Wrong Answer:
Share with Friends
Disclaimer: All chapter-wise MCQ questions for class 6 maths with answers are prepared with accuracy but we do not guarantee 100% accuracy so please check the answers yourself also because there is
the chance of mistyping. | {"url":"https://www.mocktestmaster.in/2022/07/class-6-maths-mcq-online-test.html","timestamp":"2024-11-05T09:20:41Z","content_type":"application/xhtml+xml","content_length":"241197","record_id":"<urn:uuid:1a3435a2-2a86-49c0-a19a-9708d743088d>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00153.warc.gz"} |
Internet Engineering Task Force (IETF) Request for Comments: 7459 Updates: 3693
5491 Category:
Standards Track
M. Thomson
J. Winterbottom
February 2015
Representation of Uncertainty and Confidence in
the Presence Information Data Format Location Object (PIDF-LO)
This document defines key concepts of uncertainty and confidence as they pertain to location information. Methods for the manipulation of location estimates that include uncertainty information
are outlined.
This document normatively updates the definition of location information representations defined in RFCs 4119 and 5491. It also deprecates related terminology defined in RFC 3693.
Status of This Memo
This is an Internet Standards Track document.
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by
the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7459.
Copyright Notice
Copyright © 2015 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.
Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD
License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Table of Contents
1. Introduction ....................................................4
1.1. Conventions and Terminology ................................4
2. A General Definition of Uncertainty .............................5
2.1. Uncertainty as a Probability Distribution ..................6
2.2. Deprecation of the Terms "Precision" and "Resolution" ......8
2.3. Accuracy as a Qualitative Concept ..........................9
3. Uncertainty in Location .........................................9
3.1. Targets as Points in Space .................................9
3.2. Representation of Uncertainty and Confidence in PIDF-LO ...10
3.3. Uncertainty and Confidence for Civic Addresses ............10
3.4. DHCP Location Configuration Information and Uncertainty ...11
4. Representation of Confidence in PIDF-LO ........................12
4.1. The "confidence" Element ..................................13
4.2. Generating Locations with Confidence ......................13
4.3. Consuming and Presenting Confidence .......................13
5. Manipulation of Uncertainty ....................................14
5.1. Reduction of a Location Estimate to a Point ...............15
5.1.1. Centroid Calculation ...............................16
5.1.1.1. Arc-Band Centroid .........................16
5.1.1.2. Polygon Centroid ..........................16
5.2. Conversion to Circle or Sphere ............................19
5.3. Conversion from Three-Dimensional to Two-Dimensional ......20
5.4. Increasing and Decreasing Uncertainty and Confidence ......20
5.4.1. Rectangular Distributions ..........................21
5.4.2. Normal Distributions ...............................21
5.5. Determining Whether a Location Is within a Given Region ...22
5.5.1. Determining the Area of Overlap for Two Circles ....24
5.5.2. Determining the Area of Overlap for Two Polygons ...25
6. Examples .......................................................25
6.1. Reduction to a Point or Circle ............................25
6.2. Increasing and Decreasing Confidence ......................29
6.3. Matching Location Estimates to Regions of Interest ........29
6.4. PIDF-LO with Confidence Example ...........................30
7. Confidence Schema ..............................................31
8. IANA Considerations ............................................32
8.1. URN Sub-Namespace Registration for ........................32
8.2. XML Schema Registration ...................................33
9. Security Considerations ........................................33
10. References ....................................................34
10.1. Normative References .....................................34
10.2. Informative References ...................................35
Appendix A. Conversion between Cartesian and Geodetic
Coordinates in WGS84 ..................................36
Appendix B. Calculating the Upward Normal of a Polygon ............37
B.1. Checking That a Polygon Upward Normal Points Up ...........38
Acknowledgements ..................................................39
Authors' Addresses ................................................39
1. Introduction
Location information represents an estimation of the position of a Target [RFC6280]. Under ideal circumstances, a location estimate precisely reflects the actual location of the Target. For
automated systems that determine location, there are many factors that introduce errors into the measurements that are used to determine location estimates.
The process by which measurements are combined to generate a location estimate is outside of the scope of work within the IETF. However, the results of such a process are carried in IETF data
formats and protocols. This document outlines how uncertainty, and its associated datum, confidence, are expressed and interpreted.
This document provides a common nomenclature for discussing uncertainty and confidence as they relate to location information.
This document also provides guidance on how to manage location information that includes uncertainty. Methods for expanding or reducing uncertainty to obtain a required level of confidence are
described. Methods for determining the probability that a Target is within a specified region based on its location estimate are described. These methods are simplified by making certain
assumptions about the location estimate and are designed to be applicable to location estimates in a relatively small geographic area.
A confidence extension for the Presence Information Data Format - Location Object (PIDF-LO) [RFC4119] is described.
This document describes methods that can be used in combination with automatically determined location information. These are statistically based methods.
1.1. Conventions and Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
This document assumes a basic understanding of the principles of mathematics, particularly statistics and geometry.
Some terminology is borrowed from [RFC3693] and [RFC6280], in particular "Target".
Mathematical formulae are presented using the following notation: add "+", subtract "-", multiply "*", divide "/", power "^", and absolute value "|x|". Precedence follows established conventions:
power operations precede multiply and divide, multiply and divide precede add and subtract, and parentheses are used to indicate operations that are applied together. Mathematical functions are
represented by common abbreviations: square root "sqrt(x)", sine "sin(x)", cosine "cos(x)", inverse cosine "acos(x)", tangent "tan(x)", inverse tangent "atan(x)", two-argument inverse tangent
"atan2(y,x)", error function "erf(x)", and inverse error function "erfinv(x)".
2. A General Definition of Uncertainty
Uncertainty results from the limitations of measurement. In measuring any observable quantity, errors from a range of sources affect the result. Uncertainty is a quantification of what is known
about the observed quantity, either through the limitations of measurement or through inherent variability of the quantity.
Uncertainty is most completely described by a probability distribution. A probability distribution assigns a probability to possible values for the quantity.
A probability distribution describing a measured quantity can be arbitrarily complex, so it is desirable to find a simplified model. One approach commonly taken is to reduce the probability
distribution to a confidence interval. Many alternative models are used in other areas, but study of those is not the focus of this document.
In addition to the central estimate of the observed quantity, a confidence interval is succinctly described by two values: an error range and a confidence. The error range describes an interval
and the confidence describes an estimated upper bound on the probability that a "true" value is found within the extents defined by the error.
In the following example, a measurement result for a length is shown as a nominal value with additional information on error range (0.0043 meters) and confidence (95%).
e.g., x = 1.00742 +/- 0.0043 meters at 95% confidence
This measurement result indicates that the value of "x" is between 1.00312 and 1.01172 meters with 95% probability. No other assertion is made: in particular, this does not assert that x is
Uncertainty and confidence for location estimates can be derived in a number of ways. This document does not attempt to enumerate the many methods for determining uncertainty. [ISO.GUM] and
[NIST.TN1297] provide a set of general guidelines for determining and manipulating measurement uncertainty. This document applies that general guidance for consumers of location information.
As a statistical measure, values determined for uncertainty are found based on information in the aggregate, across numerous individual estimates. An individual estimate might be determined to be
"correct" -- for example, by using a survey to validate the result -- without invalidating the statistical assertion.
This understanding of estimates in the statistical sense explains why asserting a confidence of 100%, which might seem intuitively correct, is rarely advisable.
2.1. Uncertainty as a Probability Distribution
The Probability Density Function (PDF) that is described by uncertainty indicates the probability that the "true" value lies at any one point. The shape of the probability distribution can vary
depending on the method that is used to determine the result. The two probability density functions most generally applicable to location information are considered in this document:
□ The normal PDF (also referred to as a Gaussian PDF) is used where a large number of small random factors contribute to errors. The value used for the error range in a normal PDF is related to
the standard deviation of the distribution.
□ A rectangular PDF is used where the errors are known to be consistent across a limited range. A rectangular PDF can occur where a single error source, such as a rounding error, is
significantly larger than other errors. A rectangular PDF is often described by the half-width of the distribution; that is, half the width of the distribution.
Each of these probability density functions can be characterized by its center point, or mean, and its width. For a normal distribution, uncertainty and confidence together are related to the
standard deviation of the function (see Section 5.4). For a rectangular distribution, the half-width of the distribution is used.
Figure 1 shows a normal and rectangular probability density function with the mean (m) and standard deviation (s) labeled. The half-width (h) of the rectangular distribution is also indicated.
***** *** Normal PDF
** : ** --- Rectangular PDF
** : **
** : **
| ** : ** |
| ** : ** |
| * <-- s -->: * |
| * : : : * |
| ** : ** |
| * : : : * |
| * : * |
|** : : : **|
** : **
*** | : : : | ***
***** | :<------ h ------>| *****
Figure 1: Normal and Rectangular Probability Density Functions
For a given PDF, the value of the PDF describes the probability that the "true" value is found at that point. Confidence for any given interval is the total probability of the "true" value being
in that range, defined as the integral of the PDF over the interval.
The probability of the "true" value falling between two points is found by finding the area under the curve between the points (that is, the integral of the curve between the points). For any
given PDF, the area under the curve for the entire range from negative infinity to positive infinity is 1 or (100%). Therefore, the confidence over any interval of uncertainty is always less
than 100%.
Figure 2 shows how confidence is determined for a normal distribution. The area of the shaded region gives the confidence (c) for the interval between "m-u" and "m+u".
*:::::::::::: c ::::::::::::*
** |:::::::::::::::::::::::::::::| **
*** |:::::::::::::::::::::::::::::| ***
***** |:::::::::::::::::::::::::::::| *****
| | |
(m-u) m (m+u)
Figure 2: Confidence as the Integral of a PDF
In Section 5.4, methods are described for manipulating uncertainty if the shape of the PDF is known.
2.2. Deprecation of the Terms "Precision" and "Resolution"
The terms "Precision" and "Resolution" are defined in RFC 3693 [RFC3693]. These definitions were intended to provide a common nomenclature for discussing uncertainty; however, these particular
terms have many different uses in other fields, and their definitions are not sufficient to avoid confusion about their meaning. These terms are unsuitable for use in relation to quantitative
concepts when discussing uncertainty and confidence in relation to location information.
2.3. Accuracy as a Qualitative Concept
Uncertainty is a quantitative concept. The term "accuracy" is useful in describing, qualitatively, the general concepts of location information. Accuracy is generally useful when describing
qualitative aspects of location estimates. Accuracy is not a suitable term for use in a quantitative context.
For instance, it could be appropriate to say that a location estimate with uncertainty "X" is more accurate than a location estimate with uncertainty "2X" at the same confidence. It is not
appropriate to assign a number to "accuracy", nor is it appropriate to refer to any component of uncertainty or confidence as "accuracy". That is, saying the "accuracy" for the first location
estimate is "X" would be an erroneous use of this term.
3. Uncertainty in Location
A "location estimate" is the result of location determination. A location estimate is subject to uncertainty like any other observation. However, unlike a simple measure of a one dimensional
property like length, a location estimate is specified in two or three dimensions.
Uncertainty in two- or three-dimensional locations can be described using confidence intervals. The confidence interval for a location estimate in two- or three-dimensional space is expressed as
a subset of that space. This document uses the term "region of uncertainty" to refer to the area or volume that describes the confidence interval.
Areas or volumes that describe regions of uncertainty can be formed by the combination of two or three one-dimensional ranges, or more complex shapes could be described (for example, the shapes
in [RFC5491]).
3.1. Targets as Points in Space
This document makes a simplifying assumption that the Target of the PIDF-LO occupies just a single point in space. While this is clearly false in virtually all scenarios with any practical
application, it is often a reasonable simplifying assumption to make.
To a large extent, whether this simplification is valid depends on the size of the Target relative to the size of the uncertainty region. When locating a personal device using contemporary
location determination techniques, the space the device occupies relative to the uncertainty is proportionally quite small. Even where that device is used as a proxy for a person, the proportions
change little.
This assumption is less useful as uncertainty becomes small relative to the size of the Target of the PIDF-LO (or conversely, as uncertainty becomes small relative to the Target). For instance,
describing the location of a football stadium or small country would include a region of uncertainty that is only slightly larger than the Target itself. In these cases, much of the guidance in
this document is not applicable. Indeed, as the accuracy of location determination technology improves, it could be that the advice this document contains becomes less relevant by the same
3.2. Representation of Uncertainty and Confidence in PIDF-LO
A set of shapes suitable for the expression of uncertainty in location estimates in the PIDF-LO are described in [GeoShape]. These shapes are the recommended form for the representation of
uncertainty in PIDF-LO [RFC4119] documents.
The PIDF-LO can contain uncertainty, but it does not include an indication of confidence. [RFC5491] defines a fixed value of 95%. Similarly, the PIDF-LO format does not provide an indication of
the shape of the PDF. Section 4 defines elements to convey this information in PIDF-LO.
Absence of uncertainty information in a PIDF-LO document does not indicate that there is no uncertainty in the location estimate. Uncertainty might not have been calculated for the estimate, or
it may be withheld for privacy purposes.
If the Point shape is used, confidence and uncertainty are unknown; a receiver can either assume a confidence of 0% or infinite uncertainty. The same principle applies on the altitude axis for
two-dimensional shapes like the Circle.
3.3. Uncertainty and Confidence for Civic Addresses
Automatically determined civic addresses [RFC5139] inherently include uncertainty, based on the area of the most precise element that is specified. In this case, uncertainty is effectively
described by the presence or absence of elements. To the recipient of location information, elements that are not present are uncertain.
To apply the concept of uncertainty to civic addresses, it is helpful to unify the conceptual models of civic address with geodetic location information. This is particularly useful when
considering civic addresses that are determined using reverse geocoding (that is, the process of translating geodetic information into civic addresses).
In the unified view, a civic address defines a series of (sometimes non-orthogonal) spatial partitions. The first is the implicit partition that identifies the surface of the earth and the space
near the surface. The second is the country. Each label that is included in a civic address provides information about a different set of spatial partitions. Some partitions require slight
adjustments from a standard interpretation: for instance, a road includes all properties that adjoin the street. Each label might need to be interpreted with other values to provide context.
As a value at each level is interpreted, one or more spatial partitions at that level are selected, and all other partitions of that type are excluded. For non-orthogonal partitions, only the
portion of the partition that fits within the existing space is selected. This is what distinguishes King Street in Sydney from King Street in Melbourne. Each defined element selects a partition
of space. The resulting location is the intersection of all selected spaces.
The resulting spatial partition can be considered as a region of uncertainty.
Note: This view is a potential perspective on the process of
geocoding -- the translation of a civic address to a geodetic
Uncertainty in civic addresses can be increased by removing elements. This does not increase confidence unless additional information is used. Similarly, arbitrarily increasing uncertainty in a
geodetic location does not increase confidence.
3.4. DHCP Location Configuration Information and Uncertainty
Location information is often measured in two or three dimensions; expressions of uncertainty in one dimension only are rare. The "resolution" parameters in [RFC6225] provide an indication of how
many bits of a number are valid, which could be interpreted as an expression of uncertainty in one dimension.
[RFC6225] defines a means for representing uncertainty, but a value for confidence is not specified. A default value of 95% confidence should be assumed for the combination of the uncertainty on
each axis. This is consistent with the transformation of those forms into the uncertainty representations from [RFC5491]. That is, the confidence of the resultant rectangular Polygon or Prism is
assumed to be 95%.
4. Representation of Confidence in PIDF-LO
On the whole, a fixed definition for confidence is preferable, primarily because it ensures consistency between implementations. Location generators that are aware of this constraint can generate
location information at the required confidence. Location recipients are able to make sensible assumptions about the quality of the information that they receive.
In some circumstances -- particularly with preexisting systems -- location generators might be unable to provide location information with consistent confidence. Existing systems sometimes
specify confidence at 38%, 67%, or 90%. Existing forms of expressing location information, such as that defined in [TS-3GPP-23_032], contain elements that express the confidence in the result.
The addition of a confidence element provides information that was previously unavailable to recipients of location information. Without this information, a location server or generator that has
access to location information with a confidence lower than 95% has two options:
□ The location server can scale regions of uncertainty in an attempt to achieve 95% confidence. This scaling process significantly degrades the quality of the information, because the location
server might not have the necessary information to scale appropriately; the location server is forced to make assumptions that are likely to result in either an overly conservative estimate
with high uncertainty or an overestimate of confidence.
□ The location server can ignore the confidence entirely, which results in giving the recipient a false impression of its quality.
Both of these choices degrade the quality of the information provided.
The addition of a confidence element avoids this problem entirely if a location recipient supports and understands the element. A recipient that does not understand -- and, hence, ignores -- the
confidence element is in no worse a position than if the location server ignored confidence.
4.1. The "confidence" Element
The "confidence" element MAY be added to the "location-info" element of the PIDF-LO [RFC4119] document. This element expresses the confidence in the associated location information as a
percentage. A special "unknown" value is reserved to indicate that confidence is supported, but not known to the Location Generator.
The "confidence" element optionally includes an attribute that indicates the shape of the PDF of the associated region of uncertainty. Three values are possible: unknown, normal, and rectangular.
Indicating a particular PDF only indicates that the distribution approximately fits the given shape based on the methods used to generate the location information. The PDF is normal if there are
a large number of small, independent sources of error. It is rectangular if all points within the area have roughly equal probability of being the actual location of the Target. Otherwise, the
PDF MUST either be set to unknown or omitted.
If a PIDF-LO does not include the confidence element, the confidence of the location estimate is 95%, as defined in [RFC5491].
A Point shape does not have uncertainty (or it has infinite uncertainty), so confidence is meaningless for a Point; therefore, this element MUST be omitted if only a Point is provided.
4.2. Generating Locations with Confidence
Location generators SHOULD attempt to ensure that confidence is equal in each dimension when generating location information. This restriction, while not always practical, allows for more
accurate scaling, if scaling is necessary.
A confidence element MUST be included with all location information that includes uncertainty (that is, all forms other than a Point). A special "unknown" is used if confidence is not known.
4.3. Consuming and Presenting Confidence
The inclusion of confidence that is anything other than 95% presents a potentially difficult usability problem for applications that use location information. Effectively communicating the
probability that a location is incorrect to a user can be difficult.
It is inadvisable to simply display locations of any confidence, or to display confidence in a separate or non-obvious fashion. If locations with different confidence levels are displayed such
that the distinction is subtle or easy to overlook -- such as using fine graduations of color or transparency for graphical uncertainty regions or displaying uncertainty graphically, but
providing confidence as supplementary text -- a user could fail to notice a difference in the quality of the location information that might be significant.
Depending on the circumstances, different ways of handling confidence might be appropriate. Section 5 describes techniques that could be appropriate for consumers that use automated processing.
Providing that the full implications of any choice for the application are understood, some amount of automated processing could be appropriate. In a simple example, applications could choose to
discard or suppress the display of location information if confidence does not meet a predetermined threshold.
In settings where there is an opportunity for user training, some of these problems might be mitigated by defining different operational procedures for handling location information at different
confidence levels.
5. Manipulation of Uncertainty
This section deals with manipulation of location information that contains uncertainty.
The following rules generally apply when manipulating location information:
□ Where calculations are performed on coordinate information, these should be performed in Cartesian space and the results converted back to latitude, longitude, and altitude. A method for
converting to and from Cartesian coordinates is included in Appendix A.
While some approximation methods are useful in simplifying calculations, treating latitude and longitude as Cartesian axes is never advisable. The two axes are not orthogonal. Errors can
arise from the curvature of the earth and from the convergence of longitude lines.
□ Normal rounding rules do not apply when rounding uncertainty. When rounding, the region of uncertainty always increases (that is, errors are rounded up) and confidence is always rounded down
(see [NIST.TN1297]). This means that any manipulation of uncertainty is a non-reversible operation; each manipulation can result in the loss of some information.
5.1. Reduction of a Location Estimate to a Point
Manipulating location estimates that include uncertainty information requires additional complexity in systems. In some cases, systems only operate on definitive values, that is, a single point.
This section describes algorithms for reducing location estimates to a simple form without uncertainty information. Having a consistent means for reducing location estimates allows for
interaction between applications that are able to use uncertainty information and those that cannot.
Note: Reduction of a location estimate to a point constitutes a
reduction in information. Removing uncertainty information can
degrade results in some applications. Also, there is a natural
tendency to misinterpret a Point location as representing a
location without uncertainty. This could lead to more serious
errors. Therefore, these algorithms should only be applied where
Several different approaches can be taken when reducing a location estimate to a point. Different methods each make a set of assumptions about the properties of the PDF and the selected point; no
one method is more "correct" than any other. For any given region of uncertainty, selecting an arbitrary point within the area could be considered valid; however, given the aforementioned
problems with Point locations, a more rigorous approach is appropriate.
Given a result with a known distribution, selecting the point within the area that has the highest probability is a more rigorous method. Alternatively, a point could be selected that minimizes
the overall error; that is, it minimizes the expected value of the difference between the selected point and the "true" value.
If a rectangular distribution is assumed, the centroid of the area or volume minimizes the overall error. Minimizing the error for a normal distribution is mathematically complex. Therefore, this
document opts to select the centroid of the region of uncertainty when selecting a point.
5.1.1. Centroid Calculation
For regular shapes, such as Circle, Sphere, Ellipse, and Ellipsoid, this approach equates to the center point of the region. For regions of uncertainty that are expressed as regular Polygons and
Prisms, the center point is also the most appropriate selection.
For the Arc-Band shape and non-regular Polygons and Prisms, selecting the centroid of the area or volume minimizes the overall error. This assumes that the PDF is rectangular.
Note: The centroid of a concave Polygon or Arc-Band shape is not
necessarily within the region of uncertainty.
5.1.1.1. Arc-Band Centroid
The centroid of the Arc-Band shape is found along a line that bisects the arc. The centroid can be found at the following distance from the starting point of the arc-band (assuming an arc-band
with an inner radius of "r", outer radius "R", start angle "a", and opening angle "o"):
d = 4 * sin(o/2) * (R*R + R*r + r*r) / (3*o*(R + r))
This point can be found along the line that bisects the arc; that is, the line at an angle of "a + (o/2)".
5.1.1.2. Polygon Centroid
Calculating a centroid for the Polygon and Prism shapes is more complex. Polygons that are specified using geodetic coordinates are not necessarily coplanar. For Polygons that are specified
without an altitude, choose a value for altitude before attempting this process; an altitude of 0 is acceptable.
The method described in this section is simplified by assuming that the surface of the earth is locally flat. This method degrades as polygons become larger; see [GeoShape] for
recommendations on polygon size.
The polygon is translated to a new coordinate system that has an x-y plane roughly parallel to the polygon. This enables the elimination of z-axis values and calculating a centroid can be done
using only x and y coordinates. This requires that the upward normal for the polygon be known.
To translate the polygon coordinates, apply the process described in Appendix B to find the normal vector "N = [Nx,Ny,Nz]". This value should be made a unit vector to ensure that the
transformation matrix is a special orthogonal matrix. From this vector, select two vectors that are perpendicular to this vector and combine these into a transformation matrix.
If "Nx" and "Ny" are non-zero, the matrices in Figure 3 can be used, given "p = sqrt(Nx^2 + Ny^2)". More transformations are provided later in this section for cases where "Nx" or "Ny" are zero.
[ -Ny/p Nx/p 0 ] [ -Ny/p -Nx*Nz/p Nx ]
T = [ -Nx*Nz/p -Ny*Nz/p p ] T' = [ Nx/p -Ny*Nz/p Ny ]
[ Nx Ny Nz ] [ 0 p Nz ]
(Transform) (Reverse Transform)
Figure 3: Recommended Transformation Matrices
To apply a transform to each point in the polygon, form a matrix from the Cartesian Earth-Centered, Earth-Fixed (ECEF) coordinates and use matrix multiplication to determine the translated
[ -Ny/p Nx/p 0 ] [ x[1] x[2] x[3] ... x[n] ]
[ -Nx*Nz/p -Ny*Nz/p p ] * [ y[1] y[2] y[3] ... y[n] ]
[ Nx Ny Nz ] [ z[1] z[2] z[3] ... z[n] ]
[ x'[1] x'[2] x'[3] ... x'[n] ]
= [ y'[1] y'[2] y'[3] ... y'[n] ]
[ z'[1] z'[2] z'[3] ... z'[n] ]
Figure 4: Transformation
Alternatively, direct multiplication can be used to achieve the same result:
x'[i] = -Ny * x[i] / p + Nx * y[i] / p
y'[i] = -Nx * Nz * x[i] / p - Ny * Nz * y[i] / p + p * z[i]
z'[i] = Nx * x[i] + Ny * y[i] + Nz * z[i]
The first and second rows of this matrix ("x'" and "y'") contain the values that are used to calculate the centroid of the polygon. To find the centroid of this polygon, first find the area
A = sum from i=1..n of (x'[i]*y'[i+1]-x'[i+1]*y'[i]) / 2
For these formulae, treat each set of coordinates as circular, that is "x'[0] == x'[n]" and "x'[n+1] == x'[1]". Based on the area, the centroid along each axis can be determined by:
Cx' = sum (x'[i]+x'[i+1]) * (x'[i]*y'[i+1]-x'[i+1]*y'[i]) / (6*A)
Cy' = sum (y'[i]+y'[i+1]) * (x'[i]*y'[i+1]-x'[i+1]*y'[i]) / (6*A)
Note: The formula for the area of a polygon will return a negative
value if the polygon is specified in a clockwise direction. This
can be used to determine the orientation of the polygon.
The third row contains a distance from a plane parallel to the polygon. If the polygon is coplanar, then the values for "z'" are identical; however, the constraints recommended in [RFC5491] mean
that this is rarely the case. To determine "Cz'", average these values:
Cz' = sum z'[i] / n
Once the centroid is known in the transformed coordinates, these can be transformed back to the original coordinate system. The reverse transformation is shown in Figure 5.
[ -Ny/p -Nx*Nz/p Nx ] [ Cx' ] [ Cx ]
[ Nx/p -Ny*Nz/p Ny ] * [ Cy' ] = [ Cy ]
[ 0 p Nz ] [ sum of z'[i] / n ] [ Cz ]
Figure 5: Reverse Transformation
The reverse transformation can be applied directly as follows:
Cx = -Ny * Cx' / p - Nx * Nz * Cy' / p + Nx * Cz'
Cy = Nx * Cx' / p - Ny * Nz * Cy' / p + Ny * Cz'
Cz = p * Cy' + Nz * Cz'
The ECEF value "[Cx,Cy,Cz]" can then be converted back to geodetic coordinates. Given a polygon that is defined with no altitude or equal altitudes for each point, the altitude of the result can
be either ignored or reset after converting back to a geodetic value.
The centroid of the Prism shape is found by finding the centroid of the base polygon and raising the point by half the height of the prism. This can be added to altitude of the final result;
alternatively, this can be added to "Cz'", which ensures that negative height is correctly applied to polygons that are defined in a clockwise direction.
The recommended transforms only apply if "Nx" and "Ny" are non-zero. If the normal vector is "[0,0,1]" (that is, along the z-axis), then no transform is necessary. Similarly, if the normal vector
is "[0,1,0]" or "[1,0,0]", avoid the transformation and use the x and z coordinates or y and z coordinates (respectively) in the centroid calculation phase. If either "Nx" or "Ny" are zero, the
alternative transform matrices in Figure 6 can be used. The reverse transform is the transpose of this matrix.
if Nx == 0: | if Ny == 0:
[ 0 -Nz Ny ] [ 0 1 0 ] | [ -Nz 0 Nx ]
T = [ 1 0 0 ] T' = [ -Nz 0 Ny ] | T = T' = [ 0 1 0 ]
[ 0 Ny Nz ] [ Ny 0 Nz ] | [ Nx 0 Nz ]
Figure 6: Alternative Transformation Matrices
5.2. Conversion to Circle or Sphere
The circle or sphere are simple shapes that suit a range of applications. A circle or sphere contains fewer units of data to manipulate, which simplifies operations on location estimates.
The simplest method for converting a location estimate to a Circle or Sphere shape is to determine the centroid and then find the longest distance to any point in the region of uncertainty to
that point. This distance can be determined based on the shape type:
Circle/Sphere: No conversion necessary.
Ellipse/Ellipsoid: The greater of either semi-major axis or altitude
Polygon/Prism: The distance to the farthest vertex of the Polygon
(for a Prism, it is only necessary to check points on the base).
Arc-Band: The farthest length from the centroid to the points where
the inner and outer arc end. This distance can be calculated by
finding the larger of the two following formulae:
X = sqrt( d*d + R*R - 2*d*R*cos(o/2) )
x = sqrt( d*d + r*r - 2*d*r*cos(o/2) )
Once the Circle or Sphere shape is found, the associated confidence can be increased if the result is known to follow a normal distribution. However, this is a complicated process and provides
limited benefit. In many cases, it also violates the constraint that confidence in each dimension be the same. Confidence should be unchanged when performing this conversion.
Two-dimensional shapes are converted to a Circle; three-dimensional shapes are converted to a Sphere.
5.3. Conversion from Three-Dimensional to Two-Dimensional
A three-dimensional shape can be easily converted to a two- dimensional shape by removing the altitude component. A Sphere becomes a Circle; a Prism becomes a Polygon; an Ellipsoid becomes an
Ellipse. Each conversion is simple, requiring only the removal of those elements relating to altitude.
The altitude is unspecified for a two-dimensional shape and therefore has unlimited uncertainty along the vertical axis. The confidence for the two-dimensional shape is thus higher than the
three- dimensional shape. Assuming equal confidence on each axis, the confidence of the Circle can be increased using the following approximate formula:
C[2d] >= C[3d] ^ (2/3)
"C[2d]" is the confidence of the two-dimensional shape and "C[3d]" is the confidence of the three-dimensional shape. For example, a Sphere with a confidence of 95% can be simplified to a Circle
of equal radius with confidence of 96.6%.
5.4. Increasing and Decreasing Uncertainty and Confidence
The combination of uncertainty and confidence provide a great deal of information about the nature of the data that is being measured. If uncertainty, confidence, and PDF are known, certain
information can be extrapolated. In particular, the uncertainty can be scaled to meet a desired confidence or the confidence for a particular region of uncertainty can be found.
In general, confidence decreases as the region of uncertainty decreases in size, and confidence increases as the region of uncertainty increases in size. However, this depends on the PDF;
expanding the region of uncertainty for a rectangular distribution has no effect on confidence without additional information. If the region of uncertainty is increased during the process of
obfuscation (see [RFC6772]), then the confidence cannot be increased.
A region of uncertainty that is reduced in size always has a lower confidence.
A region of uncertainty that has an unknown PDF shape cannot be reduced in size reliably. The region of uncertainty can be expanded, but only if confidence is not increased.
This section makes the simplifying assumption that location information is symmetrically and evenly distributed in each dimension. This is not necessarily true in practice. If better information
is available, alternative methods might produce better results.
5.4.1. Rectangular Distributions
Uncertainty that follows a rectangular distribution can only be decreased in size. Increasing uncertainty has no value, since it has no effect on confidence. Since the PDF is constant over the
region of uncertainty, the resulting confidence is determined by the following formula:
Cr = Co * Ur / Uo
Where "Uo" and "Ur" are the sizes of the original and reduced regions of uncertainty (either the area or the volume of the region); "Co" and "Cr" are the confidence values associated with each
Information is lost by decreasing the region of uncertainty for a rectangular distribution. Once reduced in size, the uncertainty region cannot subsequently be increased in size.
5.4.2. Normal Distributions
Uncertainty and confidence can be both increased and decreased for a normal distribution. This calculation depends on the number of dimensions of the uncertainty region.
For a normal distribution, uncertainty and confidence are related to the standard deviation of the function. The following function defines the relationship between standard deviation,
uncertainty, and confidence along a single axis:
S[x] = U[x] / ( sqrt(2) * erfinv(C[x]) )
Where "S[x]" is the standard deviation, "U[x]" is the uncertainty, and "C[x]" is the confidence along a single axis. "erfinv" is the inverse error function.
Scaling a normal distribution in two dimensions requires several assumptions. Firstly, it is assumed that the distribution along each axis is independent. Secondly, the confidence for each axis
is assumed to be the same. Therefore, the confidence along each axis can be assumed to be:
C[x] = Co ^ (1/n)
Where "C[x]" is the confidence along a single axis and "Co" is the overall confidence and "n" is the number of dimensions in the uncertainty.
Therefore, to find the uncertainty for each axis at a desired confidence, "Cd", apply the following formula:
Ud[x] <= U[x] * (erfinv(Cd ^ (1/n)) / erfinv(Co ^ (1/n)))
For regular shapes, this formula can be applied as a scaling factor in each dimension to reach a required confidence.
5.5. Determining Whether a Location Is within a Given Region
A number of applications require that a judgment be made about whether a Target is within a given region of interest. Given a location estimate with uncertainty, this judgment can be difficult. A
location estimate represents a probability distribution, and the true location of the Target cannot be definitively known. Therefore, the judgment relies on determining the probability that the
Target is within the region.
The probability that the Target is within a particular region is found by integrating the PDF over the region. For a normal distribution, there are no analytical methods that can be used to
determine the integral of the two- or three-dimensional PDF over an arbitrary region. The complexity of numerical methods is also too great to be useful in many applications; for example, finding
the integral of the PDF in two or three dimensions across the overlap between the uncertainty region and the target region. If the PDF is unknown, no determination can be made without a
simplifying assumption.
When judging whether a location is within a given region, this document assumes that uncertainties are rectangular. This introduces errors, but simplifies the calculations significantly. Prior to
applying this assumption, confidence should be scaled to 95%.
Note: The selection of confidence has a significant impact on the
final result. Only use a different confidence if an uncertainty
value for 95% confidence cannot be found.
Given the assumption of a rectangular distribution, the probability that a Target is found within a given region is found by first finding the area (or volume) of overlap between the uncertainty
region and the region of interest. This is multiplied by the confidence of the location estimate to determine the probability. Figure 7 shows an example of finding the area of overlap between the
region of uncertainty and the region of interest.
.' `. _ Region of
/ \ / Uncertainty
..+-"""--.. |
.-' | :::::: `-. |
,' | :: Ao ::: `. |
/ \ :::::::::: \ /
/ `._ :::::: _.X
| `-....-' |
| |
| |
\ /
`. .' \_ Region of
`._ _.' Interest
Figure 7: Area of Overlap between Two Circular Regions
Once the area of overlap, "Ao", is known, the probability that the Target is within the region of interest, "Pi", is:
Pi = Co * Ao / Au
Given that the area of the region of uncertainty is "Au" and the confidence is "Co".
This probability is often input to a decision process that has a limited set of outcomes; therefore, a threshold value needs to be selected. Depending on the application, different threshold
probabilities might be selected. A probability of 50% or greater is recommended before deciding that an uncertain value is within a given region. If the decision process selects between two or
more regions, as is required by [RFC5222], then the region with the highest probability can be selected.
5.5.1. Determining the Area of Overlap for Two Circles
Determining the area of overlap between two arbitrary shapes is a non-trivial process. Reducing areas to circles (see Section 5.2) enables the application of the following process.
Given the radius of the first circle "r", the radius of the second circle "R", and the distance between their center points "d", the following set of formulae provide the area of overlap "Ao".
□ If the circles don't overlap, that is "d >= r+R", "Ao" is zero.
□ If one of the two circles is entirely within the other, that is "d <= |r-R|", the area of overlap is the area of the smaller circle.
□ Otherwise, if the circles partially overlap, that is "d < r+R" and "d > |r-R|", find "Ao" using:
a = (r^2 - R^2 + d^2)/(2*d)
Ao = r^2*acos(a/r) + R^2*acos((d - a)/R) - d*sqrt(r^2 - a^2)
A value for "d" can be determined by converting the center points to Cartesian coordinates and calculating the distance between the two center points:
d = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)
5.5.2. Determining the Area of Overlap for Two Polygons
A calculation of overlap based on polygons can give better results than the circle-based method. However, efficient calculation of overlapping area is non-trivial. Algorithms such as Vatti's
clipping algorithm [Vatti92] can be used.
For large polygonal areas, it might be that geodesic interpolation is used. In these cases, altitude is also frequently omitted in describing the polygon. For such shapes, a planar projection can
still give a good approximation of the area of overlap if the larger area polygon is projected onto the local tangent plane of the smaller. This is only possible if the only area of interest is
that contained within the smaller polygon. Where the entire area of the larger polygon is of interest, geodesic interpolation is necessary.
6. Examples
This section presents some examples of how to apply the methods described in Section 5.
6.1. Reduction to a Point or Circle
Alice receives a location estimate from her Location Information Server (LIS) that contains an ellipsoidal region of uncertainty. This information is provided at 19% confidence with a normal PDF.
A PIDF-LO extract for this information is shown in Figure 8.
<gs:Ellipsoid srsName="urn:ogc:def:crs:EPSG::4979">
<gml:pos>-34.407242 150.882518 34</gml:pos>
<gs:semiMajorAxis uom="urn:ogc:def:uom:EPSG::9001">
<gs:semiMinorAxis uom="urn:ogc:def:uom:EPSG::9001">
<gs:verticalAxis uom="urn:ogc:def:uom:EPSG::9001">
<gs:orientation uom="urn:ogc:def:uom:EPSG::9102">
<con:confidence pdf="normal">95</con:confidence>
Figure 8: Alice's Ellipsoid Location
This information can be reduced to a point simply by extracting the center point, that is [-34.407242, 150.882518, 34].
If some limited uncertainty were required, the estimate could be converted into a circle or sphere. To convert to a sphere, the radius is the largest of the semi-major, semi-minor and vertical
axes; in this case, 28.7 meters.
However, if only a circle is required, the altitude can be dropped as can the altitude uncertainty (the vertical axis of the ellipsoid), resulting in a circle at [-34.407242, 150.882518] of
radius 7.7156 meters.
Bob receives a location estimate with a Polygon shape (which roughly corresponds to the location of the Sydney Opera House). This information is shown in Figure 9.
<gml:Polygon srsName="urn:ogc:def:crs:EPSG::4326">
-33.856625 151.215906 -33.856299 151.215343
-33.856326 151.214731 -33.857533 151.214495
-33.857720 151.214613 -33.857369 151.215375
-33.856625 151.215906
Figure 9: Bob's Polygon Location
To convert this to a polygon, each point is firstly assigned an altitude of zero and converted to ECEF coordinates (see Appendix A). Then, a normal vector for this polygon is found (see Appendix
B). The result of each of these stages is shown in Figure 10. Note that the numbers shown in this document are rounded only for formatting reasons; the actual calculations do not include
rounding, which would generate significant errors in the final values.
Polygon in ECEF coordinate space
(repeated point omitted and transposed to fit):
[ -4.6470e+06 2.5530e+06 -3.5333e+06 ]
[ -4.6470e+06 2.5531e+06 -3.5332e+06 ]
pecef = [ -4.6470e+06 2.5531e+06 -3.5332e+06 ]
[ -4.6469e+06 2.5531e+06 -3.5333e+06 ]
[ -4.6469e+06 2.5531e+06 -3.5334e+06 ]
[ -4.6469e+06 2.5531e+06 -3.5333e+06 ]
Normal Vector: n = [ -0.72782 0.39987 -0.55712 ]
Transformation Matrix:
[ -0.48152 -0.87643 0.00000 ]
t = [ -0.48828 0.26827 0.83043 ]
[ -0.72782 0.39987 -0.55712 ]
Transformed Coordinates:
[ 8.3206e+01 1.9809e+04 6.3715e+06 ]
[ 3.1107e+01 1.9845e+04 6.3715e+06 ]
pecef' = [ -2.5528e+01 1.9842e+04 6.3715e+06 ]
[ -4.7367e+01 1.9708e+04 6.3715e+06 ]
[ -3.6447e+01 1.9687e+04 6.3715e+06 ]
[ 3.4068e+01 1.9726e+04 6.3715e+06 ]
Two dimensional polygon area: A = 12600 m^2
Two-dimensional polygon centroid: C' = [ 8.8184e+00 1.9775e+04 ]
Average of pecef' z coordinates: 6.3715e+06
Reverse Transformation Matrix:
[ -0.48152 -0.48828 -0.72782 ]
t' = [ -0.87643 0.26827 0.39987 ]
[ 0.00000 0.83043 -0.55712 ]
Polygon centroid (ECEF): C = [ -4.6470e+06 2.5531e+06 -3.5333e+06 ]
Polygon centroid (Geo): Cg = [ -33.856926 151.215102 -4.9537e-04 ]
Figure 10: Calculation of Polygon Centroid
The point conversion for the polygon uses the final result, "Cg", ignoring the altitude since the original shape did not include altitude.
To convert this to a circle, take the maximum distance in ECEF coordinates from the center point to each of the points. This results in a radius of 99.1 meters. Confidence is unchanged.
6.2. Increasing and Decreasing Confidence
Assume that confidence is known to be 19% for Alice's location information. This is a typical value for a three-dimensional ellipsoid uncertainty of normal distribution where the standard
deviation is used directly for uncertainty in each dimension. The confidence associated with Alice's location estimate is quite low for many applications. Since the estimate is known to follow a
normal distribution, the method in Section 5.4.2 can be used. Each axis can be scaled by:
scale = erfinv(0.95^(1/3)) / erfinv(0.19^(1/3)) = 2.9937
Ensuring that rounding always increases uncertainty, the location estimate at 95% includes a semi-major axis of 23.1, a semi-minor axis of 10 and a vertical axis of 86.
Bob's location estimate (from the previous example) covers an area of approximately 12600 square meters. If the estimate follows a rectangular distribution, the region of uncertainty can be
reduced in size. Here we find the confidence that Bob is within the smaller area of the Concert Hall. For the Concert Hall, the polygon [-33.856473, 151.215257; -33.856322, 151.214973;
-33.856424, 151.21471; -33.857248, 151.214753; -33.857413, 151.214941; -33.857311, 151.215128] is used. To use this new region of uncertainty, find its area using the same translation method
described in Section 5.1.1.2, which produces 4566.2 square meters. Given that the Concert Hall is entirely within Bob's original location estimate, the confidence associated with the smaller area
is therefore 95% * 4566.2 / 12600 = 34%.
6.3. Matching Location Estimates to Regions of Interest
Suppose that a circular area is defined centered at [-33.872754, 151.20683] with a radius of 1950 meters. To determine whether Bob is found within this area -- given that Bob is at [-34.407242,
150.882518] with an uncertainty radius 7.7156 meters -- we apply the method in Section 5.5. Using the converted Circle shape for Bob's location, the distance between these points is found to be
1915.26 meters. The area of overlap between Bob's location estimate and the region of interest is therefore 2209 square meters and the area of Bob's location estimate is 30853 square meters. This
gives the estimated probability that Bob is less than 1950 meters from the selected point as 67.8%.
Note that if 1920 meters were chosen for the distance from the selected point, the area of overlap is only 16196 square meters and the confidence is 49.8%. Therefore, it is marginally more likely
that Bob is outside the region of interest, despite the center point of his location estimate being within the region.
6.4. PIDF-LO with Confidence Example
The PIDF-LO document in Figure 11 includes a representation of uncertainty as a circular area. The confidence element (on the line marked with a comment) indicates that the confidence is 67% and
that it follows a normal distribution.
<dm:device id="sg89ab">
<gs:Circle srsName="urn:ogc:def:crs:EPSG::4326">
<gml:pos>42.5463 -73.2512</gml:pos>
<gs:radius uom="urn:ogc:def:uom:EPSG::9001">
<!--c--> <con:confidence pdf="normal">67</con:confidence>
Figure 11: Example PIDF-LO with Confidence
7. Confidence Schema
<?xml version="1.0"?>
PIDF-LO Confidence
This schema defines an element that is used for indicating
confidence in PIDF-LO documents.
<xs:element name="confidence" type="conf:confidenceType"/>
<xs:complexType name="confidenceType">
<xs:extension base="conf:confidenceBase">
<xs:attribute name="pdf" type="conf:pdfType"
<xs:simpleType name="confidenceBase">
<xs:restriction base="xs:decimal">
<xs:minExclusive value="0.0"/>
<xs:maxExclusive value="100.0"/>
<xs:restriction base="xs:token">
<xs:enumeration value="unknown"/>
<xs:simpleType name="pdfType">
<xs:restriction base="xs:token">
<xs:enumeration value="unknown"/>
<xs:enumeration value="normal"/>
<xs:enumeration value="rectangular"/>
8. IANA Considerations
8.1. URN Sub-Namespace Registration for
A new XML namespace, "urn:ietf:params:xml:ns:geopriv:conf", has been registered, as per the guidelines in [RFC3688].
URI: urn:ietf:params:xml:ns:geopriv:conf
Registrant Contact: IETF GEOPRIV working group (geopriv@ietf.org),
Martin Thomson (martin.thomson@gmail.com).
<?xml version="1.0"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<title>PIDF-LO Confidence Attribute</title>
<h1>Namespace for PIDF-LO Confidence Attribute</h1>
<p>See <a href="http://www.rfc-editor.org/rfc/rfc7459.txt">
RFC 7459</a>.</p>
8.2. XML Schema Registration
An XML schema has been registered, as per the guidelines in [RFC3688].
URI: urn:ietf:params:xml:schema:geopriv:conf
Registrant Contact: IETF GEOPRIV working group (geopriv@ietf.org),
Martin Thomson (martin.thomson@gmail.com).
Schema: The XML for this schema can be found as the entirety of
Section 7 of this document.
9. Security Considerations
This document describes methods for managing and manipulating uncertainty in location. No specific security concerns arise from most of the information provided. The considerations of [RFC4119]
all apply.
A thorough treatment of the privacy implications of describing location information are discussed in [RFC6280]. Including uncertainty information increases the amount of information available;
and altering uncertainty is not an effective privacy mechanism.
Providing uncertainty and confidence information can reveal information about the process by which location information is generated. For instance, it might reveal information that could be used
to infer that a user is using a mobile device with a GPS, or that a user is acquiring location information from a particular network-based service. A Rule Maker might choose to remove
uncertainty-related fields from a location object in order to protect this information. Note however that information might not be perfectly protected due to difficulties associated with location
obfuscation, as described in Section 13.5 of [RFC6772]. In particular, increasing uncertainty does not necessarily result in a reduction of the information conveyed by the location object.
Adding confidence to location information risks misinterpretation by consumers of location that do not understand the element. This could be exploited, particularly when reducing confidence,
since the resulting uncertainty region might include locations that are less likely to contain the Target than the recipient expects. Since this sort of error is always a possibility, the impact
of this is low.
10. References
10.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997,
[RFC3688] Mealling, M., "The IETF XML Registry", BCP 81, RFC 3688,
January 2004, <http://www.rfc-editor.org/info/rfc3688>.
[RFC3693] Cuellar, J., Morris, J., Mulligan, D., Peterson, J., and
J. Polk, "Geopriv Requirements", RFC 3693, February 2004,
[RFC4119] Peterson, J., "A Presence-based GEOPRIV Location Object
Format", RFC 4119, December 2005,
[RFC5139] Thomson, M. and J. Winterbottom, "Revised Civic Location
Format for Presence Information Data Format Location
Object (PIDF-LO)", RFC 5139, February 2008,
[RFC5491] Winterbottom, J., Thomson, M., and H. Tschofenig, "GEOPRIV
Presence Information Data Format Location Object (PIDF-LO)
Usage Clarification, Considerations, and Recommendations",
RFC 5491, March 2009,
[RFC6225] Polk, J., Linsner, M., Thomson, M., and B. Aboba, Ed.,
"Dynamic Host Configuration Protocol Options for
Coordinate-Based Location Configuration Information", RFC
6225, July 2011, <http://www.rfc-editor.org/info/rfc6225>.
[RFC6280] Barnes, R., Lepinski, M., Cooper, A., Morris, J.,
Tschofenig, H., and H. Schulzrinne, "An Architecture for
Location and Location Privacy in Internet Applications",
BCP 160, RFC 6280, July 2011,
10.2. Informative References
[Convert] Burtch, R., "A Comparison of Methods Used in Rectangular
to Geodetic Coordinate Transformations", April 2006.
[GeoShape] Thomson, M. and C. Reed, "GML 3.1.1 PIDF-LO Shape
Application Schema for use by the Internet Engineering
Task Force (IETF)", Candidate OpenGIS Implementation
Specification 06-142r1, Version: 1.0, April 2007.
[ISO.GUM] ISO/IEC, "Guide to the expression of uncertainty in
measurement (GUM)", Guide 98:1995, 1995.
Taylor, B. and C. Kuyatt, "Guidelines for Evaluating and
Expressing the Uncertainty of NIST Measurement Results",
Technical Note 1297, September 1994.
[RFC5222] Hardie, T., Newton, A., Schulzrinne, H., and H.
Tschofenig, "LoST: A Location-to-Service Translation
Protocol", RFC 5222, August 2008,
[RFC6772] Schulzrinne, H., Ed., Tschofenig, H., Ed., Cuellar, J.,
Polk, J., Morris, J., and M. Thomson, "Geolocation Policy:
A Document Format for Expressing Privacy Preferences for
Location Information", RFC 6772, January 2013,
[Sunday02] Sunday, D., "Fast polygon area and Newell normal
computation", Journal of Graphics Tools JGT, 7(2):9-13,
3GPP, "Universal Geographical Area Description (GAD)", 3GPP TS 23.032 12.0.0, September 2014.
[Vatti92] Vatti, B., "A generic solution to polygon clipping",
Communications of the ACM Volume 35, Issue 7, pages 56-63,
July 1992,
[WGS84] US National Imagery and Mapping Agency, "Department of
Defense (DoD) World Geodetic System 1984 (WGS 84), Third
Edition", NIMA TR8350.2, January 2000.
Appendix A. Conversion between Cartesian and Geodetic Coordinates in
The process of conversion from geodetic (latitude, longitude, and altitude) to ECEF Cartesian coordinates is relatively simple.
In this appendix, the following constants and derived values are used from the definition of WGS84 [WGS84]:
{radius of ellipsoid} R = 6378137 meters
{inverse flattening} 1/f = 298.257223563
{first eccentricity squared} e^2 = f * (2 - f)
{second eccentricity squared} e'^2 = e^2 * (1 - e^2)
To convert geodetic coordinates (latitude, longitude, altitude) to ECEF coordinates (X, Y, Z), use the following relationships:
N = R / sqrt(1 - e^2 * sin(latitude)^2)
X = (N + altitude) * cos(latitude) * cos(longitude)
Y = (N + altitude) * cos(latitude) * sin(longitude)
Z = (N*(1 - e^2) + altitude) * sin(latitude)
The reverse conversion requires more complex computation, and most methods introduce some error in latitude and altitude. A range of techniques are described in [Convert]. A variant on the method
originally proposed by Bowring, which results in an acceptably small error, is described by the following:
p = sqrt(X^2 + Y^2)
r = sqrt(X^2 + Y^2 + Z^2)
u = atan((1-f) * Z * (1 + e'^2 * (1-f) * R / r) / p)
latitude = atan((Z + e'^2 * (1-f) * R * sin(u)^3)
/ (p - e^2 * R * cos(u)^3))
longitude = atan2(Y, X)
altitude = sqrt((p - R * cos(u))^2 + (Z - (1-f) * R * sin(u))^2)
If the point is near the poles, that is, "p < 1", the value for altitude that this method produces is unstable. A simpler method for determining the altitude of a point near the poles is:
altitude = |Z| - R * (1 - f)
Appendix B. Calculating the Upward Normal of a Polygon
For a polygon that is guaranteed to be convex and coplanar, the upward normal can be found by finding the vector cross product of adjacent edges.
For more general cases, the Newell method of approximation described in [Sunday02] may be applied. In particular, this method can be used if the points are only approximately coplanar, and for
non-convex polygons.
This process requires a Cartesian coordinate system. Therefore, convert the geodetic coordinates of the polygon to Cartesian, ECEF coordinates (Appendix A). If no altitude is specified, assume an
altitude of zero.
This method can be condensed to the following set of equations:
Nx = sum from i=1..n of (y[i] * (z[i+1] - z[i-1]))
Ny = sum from i=1..n of (z[i] * (x[i+1] - x[i-1]))
Nz = sum from i=1..n of (x[i] * (y[i+1] - y[i-1]))
For these formulae, the polygon is made of points
"(x[1], y[1], z[1])" through "(x[n], y[n], x[n])". Each array is treated as circular, that is, "x[0] == x[n]" and "x[n+1] == x[1]".
To translate this into a unit-vector; divide each component by the length of the vector:
Nx' = Nx / sqrt(Nx^2 + Ny^2 + Nz^2)
Ny' = Ny / sqrt(Nx^2 + Ny^2 + Nz^2)
Nz' = Nz / sqrt(Nx^2 + Ny^2 + Nz^2)
B.1. Checking That a Polygon Upward Normal Points Up
RFC 5491 [RFC5491] stipulates that the Polygon shape be presented in counterclockwise direction so that the upward normal is in an upward direction. Accidental reversal of points can invert this
vector. This error can be hard to detect just by looking at the series of coordinates that form the polygon.
Calculate the dot product of the upward normal of the polygon (Appendix B) and any vector that points away from the center of the earth from the location of polygon. If this product is positive,
then the polygon upward normal also points away from the center of the earth.
The inverse cosine of this value indicates the angle between the horizontal plane and the approximate plane of the polygon.
A unit vector for the upward direction at any point can be found based on the latitude (lat) and longitude (lng) of the point, as follows:
Up = [ cos(lat) * cos(lng) ; cos(lat) * sin(lng) ; sin(lat) ]
For polygons that span less than half the globe, any point in the polygon -- including the centroid -- can be selected to generate an approximate up vector for comparison with the upward normal.
Peter Rhodes provided assistance with some of the mathematical groundwork on this document. Dan Cornford provided a detailed review and many terminology corrections.
Authors' Addresses
Martin Thomson
331 E Evelyn Street
Mountain View, CA 94041
United States
EMail: martin.thomson@gmail.com
James Winterbottom
EMail: a.james.winterbottom@gmail.com | {"url":"https://pike-www.lysator.liu.se/docs/ietf/rfc/74/rfc7459.xml","timestamp":"2024-11-06T17:25:40Z","content_type":"text/html","content_length":"88649","record_id":"<urn:uuid:5a3beb61-059d-454a-a0bb-42cf35906a2d>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00028.warc.gz"} |
How do you simplify fractions?
How do you simplify fractions? Or, how do you reduce fractions?
In most cases, when fractions are introduced to students as a new topic in the classroom, the initial reaction is: "Here's another complex subject we have to deal with." But then, reactions change
and fractions are seen as a kind of enjoyable game that is more of a technical challenge. So, what's particularly important about fractions? Understanding their meaning, the division of roles between
the numerator and the denominator, and how to reduce them. Is it difficult to reduce fractions? Not really.
So, when will you need to reduce the given fractions?
• At the time it's required in an exercise/test.
• In case you want to work with smaller fractions.
Simplify the following fraction:
\( \frac{1}{1}= \)
Simplify the following fraction:
\( \frac{12}{4}= \)
Simplify the following fraction:
\( \frac{12}{8}= \)
Simplify the following fraction:
\( \frac{16}{8}= \)
Simplify the following fraction:
\( \frac{2}{10}= \)
Simplify the following fraction:
\( \frac{4}{16}= \)
Simplify the following fraction by a factor of 1:
\( \frac{3}{10}= \)
Simplify the following fraction by a factor of 3:
\( \frac{3}{6}= \)
Simplify the following fraction by a factor of 4:
\( \frac{4}{8}= \)
Simplify the following fraction by a factor of 5:
\( \frac{15}{10}= \)
Enlarge the following fraction by a factor of 11:
\( \frac{8}{9}= \)
Enlarge the following fraction by the factor 3:
\( \frac{2}{15}= \)
Enlarge the following fraction by the factor 4:
\( \frac{1}{3}= \)
Enlarge the following fraction by the factor 8:
\( \frac{9}{10}= \)\( \)
Enlarge the following fraction by the factor 9:
\( \frac{7}{9}= \)
Simplify the following fraction:
\( \frac{1}{1}= \) | {"url":"https://www.tutorela.com/math/how-do-you-simplify-fractions","timestamp":"2024-11-02T12:32:54Z","content_type":"text/html","content_length":"215209","record_id":"<urn:uuid:7a7e24ca-3a40-4ab6-af23-d0bce00d1e89>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00874.warc.gz"} |
Adventure in Retrieving Memory Size of kdb+ Object · Data Intellect
It has been quite a puzzle to figure out how much memory a kdb+ object uses within a q session, so we decided to write this utility script and add it to the latest release of our TorQ Framework. My
first naïve impression was why not just use -22!, that should be a good estimate. Um no, -22! is only returning the byte size of uncompressed serialized version of the object (pretty much the same as
how much space it takes when saving the object down to disk without compression).
q)-22!til 10000000
q)`:test set til 10000000
q)hcount `:test
But how much memory does this object actually take? Let’s see…
// this function measures the memory usage before and after
// object creation and returns the difference
q){s:.Q.w[]`used;a:til 10000000;.Q.w[][`used]-s}[]
134MB vs 80MB?? How come there is a huge difference? Where does this number come from? How is it calculated? What is going on? This is where it gets interesting. kdb+ actually allocates memory to
its objects in the power of 2. If we quickly convert the former result to its nearest power of 2, we can get a very good approximation.
q)"j"$2 xexp ceiling 2 xlog -22!til 10000000
This is just a very simple example. Even though it looks like we can just use -22! and get the closest power of 2 to approximate memory size of kdb+ objects, that is not the case when the object gets
complicated, such as nested objects, attributes, mixed types, tables, dictionaries, etc.
In this article, I’ll be explain the approach bit by bit. Also please note that the result of the function is not the returnable memory size, since that would need to take the reference count into
consideration. So if you create an object by referencing to another object (i.e. a:1 2;b:a), deleting that object won’t return the amount of memory expected by the function since the other object is
still using that reference.
Now, let’s talk kdb!
Atoms are allocated to a fixed size of 16 bytes regardless of type, except for GUID which takes 32 bytes.
q){s:.Q.w[]`used;a:1f,100000#1b;.Q.w[][`used]-s}[]-1048576 // boolean
q){s:.Q.w[]`used;a:1f,100000#1j;.Q.w[][`used]-s}[]-1048576 // long
q){s:.Q.w[]`used;a:1f,100000#"G"$"1";.Q.w[][`used]-s}[]-1048576 // GUID
The reason for the extra subtraction is due to the mixed list I created. I’ll explain further a bit later, but for now, we subtract the size of pointers of the mixed list to get to the raw size of
the atoms. The size of pointers is calculated using 2 xexp ceiling 2 xlog 16+100001*8.
Simple List
For a simple list, the memory size formula is straightforward (we can refer back to the example at the start):
// c-count, s-data type size, a-attribute overheads
calcsize:{[c;s;a] `long$2 xexp ceiling 2 xlog 16+a+s*c}
vectorsize:{calcsize[count x;typesize x;attrsize x]}
It sums all the data type sizes in the list, adds an extra 16 byte and attribute overhead and returns the nearest power of 2. The data type size can be found here and this is what we are going to
typesize:{4^0N 1 16 0N 1 2 4 8 4 8 1 8 8 4 4 8 8 4 4 4 abs type x}
The 4^ is to handle the enumerated list where the pointers are 4 bytes each.
q)type `sym?1000000?sym
q)type `sym2?1000000?sym2
I’ll explain the attribute overhead later. Let’s see how we can handle a complex list.
Complex List
A complex list can be a mix of atoms/lists, different data types, and multi-nested objects within a list. To calculate its size, we need to include the pointer size (8 bytes) of each element in the
list plus the memory size of each item in the list. Now the formula looks like such:
if[not count x;:0];
$[0h>t:type x;$[-2h=t;32;16];
t within 1 76h;vectorsize x;
0h in t:type each x;calcsize[count x;8;0]+sum .z.s each x;
(d[0] within 1 76h)&1=count d:distinct t;calcsize[c;8;0]+sum calcsize[count each x;typesize x 0;$[1000<c:count x;0;attrsize each x]];
calcsize[count x;8;0]+sum .z.s each x]
There are some optimizations here which may or may not be applicable. We put all the filters for atoms, simple lists and types over 76h at the top and manage the conditions for mixed type (0h) lists
• If a mixed list contains another mixed list, we have to loop through the list and calculate each item individually
• If the type of each element of the list is the same and they are all simple lists then we can get the count of each inner list and use the type to calculate the total size. There is an
optimisation here around attributes- if the list is of size > 1000 (an arbitrary number) then we don’t bother calculating attribute size and assume to be 0. This is to account for the case where
we are checking the size of a table with many rows and nested (e.g. string) columns. It would be very expensive to check the attribute size on every element, so we assume to be 0. We could
perhaps further optimize and not bother to count each element, but we’ve left the count in for now
• For any other mixed type lists we loop through each element
Lets play around with this formula without the attribute overhead:
q){s:.Q.w[]`used;a:0N 2#til 100000;.Q.w[][`used]-s}[]
q)objsize 0N 2#til 100000
q){s:.Q.w[]`used;a:til[10000],(1000#1;0N 2#.Q.a);.Q.w[][`used]-s}[]
q)objsize til[10000],(1000#1;0N 2#.Q.a)
So far so good! Now lets look at a more interesting section, attribute overheads.
There is good documentation here at section 42 concerning how much overhead an attribute requires. But those values are only applicable for 2.x kdb+ version. For 3.x kdb+ version, we need to use
double the size. So some sort of a switch between version is very useful. Translating all this information into a function, this is what we have:
attrsize:{version*$[`u=a:attr x;32*count distinct x;
`p=a;8+48*count distinct x;
Let’s test this out in 2.x and 3.x to see whether I’m lying!!
q){s:.Q.w[]`used;a:`u#til 100000;.Q.w[][`used]-s}[]
q)objsize `u#til 100000
q){s:.Q.w[]`used;a:`p#raze 1000#'til 100;.Q.w[][`used]-s}[]
q)objsize `p#raze 1000#'til 100
q){s:.Q.w[]`used;a:`u#til 100000;.Q.w[][`used]-s}[]
q)objsize `u#til 100000
q){s:.Q.w[]`used;a:`p#raze 1000#'til 100;.Q.w[][`used]-s}[]
q)objsize `p#raze 1000#'til 100
See I’m being very honest 😀 But we are still missing the `g# attribute. For that, we add the size of the raw list and a grouped dictionary of the list together.
q)objsize[100000#.Q.a]+objsize group `g#100000#.Q.a
Note that the size of group `g#100000#.Q.a and group 100000#.Q.a are actually quite different as the former key has a `u# attribute compared to the latter.
q)key group `g#100000#.Q.a
q)key group 100000#.Q.a
Taking this into consideration, the function is modified to:
if[not count x;:0];
if[`g=attr x;x:(`#x;group x)];
$[0h>t:type x;$[-2h=t;32;16];
t within 1 76h;vectorsize x;
0h in t:type each x;calcsize[count x;8;0]+sum .z.s each x;
(d[0] within 1 76h)&1=count d:distinct t;calcsize[c;8;0]+sum calcsize[count each x;typesize x 0;$[1000<c:count x;0;attrsize each x]];
calcsize[count x;8;0]+sum .z.s each x]
Now let’s check out tables and dictionaries.
So how do we deal with tables/dictionaries? What structure do they have? They are made up of keys and values. So if we flatten the object into a list containing keys and values, we can calculate the
size quite easily. Keyed or unkeyed table won’t affect the calculation since we end up flipping them into a dictionary and taking the value.
if[not count x;:0];
x:$[.Q.qt x;(key x;value x:flip 0!x);
99h=type x;(key x;value x);
if[`g=attr x;x:(`#x;group x)];
$[0h>t:type x;$[-2h=t;32;16];
t within 1 76h;vectorsize x;
0h in t:type each x;calcsize[count x;8;0]+sum .z.s each x;
(d[0] within 1 76h)&1=count d:distinct t;calcsize[c;8;0]+sum calcsize[count each x;typesize x 0;$[1000<c:count x;0;attrsize each x]];
calcsize[count x;8;0]+sum .z.s each x]
Examples are always good!
q){s:.Q.w[]`used;a:`a`b!(til 1000000;1000000?0 1 2);.Q.w[][`used]-s}[]
q)objsize `a`b!(til 1000000;1000000?0 1 2)
q){s:.Q.w[]`used;a:([]a:til 1000000;b:1000000?0 1 2);.Q.w[][`used]-s}[]
q)objsize ([]a:til 1000000;b:1000000?0 1 2)
q){s:.Q.w[]`used;a:1!([]a:til 1000000;b:1000000?0 1 2);.Q.w[][`used]-s}[]
q)objsize 1!([]a:til 1000000;b:1000000?0 1 2)
q){s:.Q.w[]`used;a:([]a:`p#raze 100#'til 10000;b:1000000?0 1 2;c:0N 2#2000000?0 1 2);.Q.w[][`used]-s}[]
q)objsize ([]a:`p#raze 100#'til 10000;b:1000000?0 1 2;c:0N 2#2000000?0 1 2)
It has been quite a journey to uncover how kdb+ uses memory. The main factor of how it works is that memory is used in powers of 2 (the power of 3 does not set you free here). The calculation is
an approximation, and there are probably still some cases that are not covered by this function, but it should be sufficient for a start 😉 I’m very excited to see what the community will think of
this. Any new ideas to improve or comments are welcome! Hope you enjoyed reading this article! Until next time. | {"url":"https://dataintellect.com/blog/adventure-in-retrieving-memory-size-of-kdb-object/","timestamp":"2024-11-14T00:30:07Z","content_type":"text/html","content_length":"97955","record_id":"<urn:uuid:2c6b3029-8606-494d-bb0f-8f28e2aa82e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00270.warc.gz"} |
MODAL LOGIC - PhilosophyStudent.org
A common extension to the standard formal languages outlined above is to introduce the technical machinery required to evaluate natural language arguments containing modal terminology (that is, talk
of possibility and necessity). A simple modal language is formed by expanding the language of the propositional calculus with the two one-place logical connectives ☐ (“it is necessary that …”) and ◇
(“it is possible that …”), along with the corresponding formation rules:
If P is a wff, then ☐P is a wff;
If P is a wff, then ◇P is a wff.
The semantics for modal propositional calculus is not straightforward, as modal locutions are not transparently truth-functional; that a proposition P is false does not logically entail whether it is
possibly true or necessarily false. It follows therefore that the modal operators cannot simply be introduced along the same lines as the other logical connectives. Instead, the semantics for a modal
language is usually given in terms of possible worlds. Informally, each possible world can be thought of as a distinct interpretation (that is, assignment of truth values) for the language, with the
modal operators ☐ and ◇ making a claim about the truth values that hold at every interpretation, or at another interpretation, respectively. A model M for the modal propositional calculus is
therefore of the form , where W is a set of possible worlds, R is the accessibility relationship holding between members of W, and val is a function that assigns to every propositional variable a
truth value (either true or false) at each possible world w Î W.
The truth values for complex, molecular sentences at each possible world w are then built up compositionally via the truth tables for the logical connectives: that is, M ⊨ (φ & ψ) at a world w iff M
⊨ φ at a world w, and M ⊨ ψ at a world w.
The truth values for complex molecular sentences formed with the modal operators are given in terms of the truth values that hold at other accessible possible worlds as specified by the accessibility
relationship R—that is, M ⊨ (◇φ) at a world w iff M ⊨ φ at some world w’ Î W such that R(w, w’); and M ⊨ (☐φ) at a world w iff M ⊨ φ at all worlds w’ Î W such that R(w, w’). It follows from these
definitions that the two modal connectives are interdefinable —that is, ☐P is equivalent to ¬◇¬P. Similarly, other modal claims can be defined in terms of these basic connectives. To say that P is
impossible is to say that P is necessarily false—that is, ☐¬P; to say that P is contingent is to say that both it and its negation are possible—that is, (◇P & ◇¬P).
Proof Theory
A natural deduction system for modal propositional calculus is formed by extending the system introduced for the propositional calculus with the following rules that allow for the introduction and
manipulation of the modal operators:
Rule of Necessitation: if φ is a theorem (i.e. ⊢ φ) then we can derive
☐φ. Rule of Distribution: given ☐(φ ⊃ ψ), we can derive (☐φ ⊃ ☐ψ).
Different modal logics are then constructed by the addition of different axioms determining the behavior of the modal operators. The two most common modal axioms are the following:
(D): ☐φ ⊃ ◇φ
(T): ☐φ ⊃ φ
The combination of (D) and (T) provides the basis for most of the “interesting” (that is, expressively powerful) modal logics. Three further well-known modal logics can be constructed by adding each
of the following axioms respectively:
(B): φ ⊃ ☐◇φ
(S4): ☐φ ⊃ ☐☐φ
(S5): ◇φ ⊃ ☐◇φ
These axioms are presented in descending order of strength: a modal logic containing (S5) can derive all the previous axioms; a modal logic containing (S4) or (B) can derive the axioms (T) and (D);
and a modal logic containing (T) can derive the axiom (D). Some of these axioms are associated with particular interpretations of the modal operators; the axiom (D) is usually taken as a base for
deontic logic—that is, the logic of obligation, where ☐φ is read as “it ought to be that φ,” and ◇φ is read as “it is permissible that φ.”
Meta-Logical Results
The accessibility relationship R introduced in the semantics for modal propositional logic helps to illuminate the different modal axioms discussed above. In particular, it can be shown that (T) is
valid on every frame where R is a reflexive relation (that is, where every possible world is accessible to itself); it can, therefore, be shown that the modal propositional calculus with (T) as its
only additional axiom is sound and complete with respect to the class of all reflexive models. Similarly, (B) is valid on every frame where R is symmetrical (that is, if w1 is accessible from w2 ,
then w2 is accessible from w1 ; (S4) is valid on every frame that is transitive (that is, if w2 is accessible from w1 , and w3 is accessible from w2 , then w3 is accessible from w1); and (S5) is
valid on every frame that is reflexive, symmetrical, and transitive (that is, where the set of possible worlds form an equivalence class). While the concepts of possibility and necessity obviously
play an important role in metaphysics (such as the analysis of counterfactuals), there is no philosophical consensus on the logical form of this relationship. | {"url":"https://philosophystudent.org/modal-logic/","timestamp":"2024-11-08T11:29:06Z","content_type":"text/html","content_length":"213883","record_id":"<urn:uuid:c4bc1965-3e90-4f94-a93b-d4dc4bf2c404>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00456.warc.gz"} |
ifferential equation
Differential equations examples
Collection of examples with differential equations. In set of examples placed here you can find solutions for homogeneous and inhomogeneous differentail equations. On this website are various
examples which regard to differntial equations of first order and differntial equations of second order.
Differential equations example 0
Resolved example with inhomogeneous differential equation.
differential equations example 0
Differential equations example 1
Resolved example with inhomogeneous differential equation.
differential equations example 1
Differential equations example 2
Resolved example with inhomogeneous differential equation.
differential equations example 2
Differential equations example 3
Resolved example with inhomogeneous differential equation.
differential equations example 3
Differential equations example 4
Resolved example with homogeneous differential equation.
differential equations example 4
Differential equations example 5
Resolved example with inhomogeneous differential equation.
differential equations example 5
Differential equation of second order - constants variation method
Solved example in which constant variation method is applied to solve differential equation of second order.
Differential equation of second order - constants variation method
Differential equation of second order - prediction method
Solved example in which constant prediction method is applied to solve differential equation of second order.
Differential equation of second order - prediction method | {"url":"http://www.mbstudent.com/maths-differential-equations.html","timestamp":"2024-11-12T06:38:06Z","content_type":"text/html","content_length":"10241","record_id":"<urn:uuid:4e3a2844-c7e2-4c95-83ab-5685e7c93440>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00088.warc.gz"} |
Unlocking Efficiency: How Bloom Filters Save Space and Supercharge Data Access
Bloom filters stand out as a clever and efficient way to determine whether an element is a member of a set. This probabilistic data structure is particularly useful when dealing with large datasets
and applications where memory efficiency and fast set membership testing are essential. In this blog post, we will delve into the fascinating world of Bloom filters, exploring their inner workings,
use cases, advantages, and limitations.
What is a Bloom Filter?
A Bloom filter is a space-efficient probabilistic data structure designed to quickly test whether an element belongs to a set or not. It accomplishes this by using a bit array of a fixed size and a
series of hash functions. When an element is added to the Bloom filter, the hash functions generate a set of positions in the bit array where bits are set to 1. To check for membership, the same hash
functions are applied to the query element, and if all corresponding bits are set to 1, it suggests that the element may be in the set. However, false positives are possible, but false negatives are
How Does a Bloom Filter Work?
Initialisation: A Bloom filter begins as an array of bits, all initially set to 0.
Adding Elements: To add an element to the filter, it undergoes multiple hash functions that generate a set of bit positions. These positions are then set to 1 in the filter. Each hash function takes
the element as input and produces an output, typically a numeric value. This output is then mapped to positions in the bit array using modulo arithmetic. For example, if you have a Bloom filter with
a bit array of size , and one of the hash functions produces an output of , you can map it to a position in the bit array using .
Membership Query: To check if an element is a member of the set, the same hash functions are applied to it. If all corresponding bits in the filter are set to 1, the element is considered a possible
member. If any bit is 0, it is definitely not in the set.
Let’s take an example to illustrate this. Suppose we have a Bloom filter with 8 bits(bit array of size 8) and two hash functions.
Hash Function 1 takes the element “java” and produces an output of 3.
Hash Function 2 takes the same element “java” and produces an output of 6.
To add “java” to the Bloom filter:
Position 3 and Position 6 in the bit array are set to 1.
We want to check if “java” is in the bloom filter, then you’ll apply Hash Function 1 and Hash Function 2 to “java” and check if both Position 3 and Position 6 in the bit array are set to 1. If all
the corresponding bits at positions 3 and 6 are set to 1, “java” is considered a possible member.
It’s important to note that the positions in the bit array for different elements can overlap, which is why false positives can occur when checking for membership. False positives happen when the
bits set to 1 for one element overlap with the bits set to 1 for another element, making the filter think an element is present when it’s not. The probability of false positives depends on the size
of the bit array, the number of hash functions, and the number of elements added to the filter.
How Bloom Filters Save Space in Data Storage
Bloom filters are ingenious data structures known for their space-efficient characteristics. They accomplish this by making a few trade-offs and using probabilistic techniques. Here’s how Bloom
filters save space in data storage:
Compact Representation: Bloom filters use a compact representation of data compared to other data structures like hash tables or binary search trees. Instead of storing the actual elements, a Bloom
filter employs a fixed-size bit array.
Each element is mapped to multiple positions (bits) in this array through hash functions. As a result, the storage requirements are proportional to the size of the bit array, which can be
significantly smaller than storing the elements themselves.
Elimination of Redundant Data: Traditional data structures often store all the elements individually. In contrast, a Bloom filter doesn’t store the elements themselves. By using multiple hash
functions, it efficiently encodes the presence or absence of elements in a highly compressed form. This eliminates the need to store redundant data, which can be especially advantageous when dealing
with large datasets.
Constant Size: The space occupied by a Bloom filter is not directly related to the number of elements it contains. Instead, it depends on parameters like the desired false positive probability and
the expected number of insertions. This means that regardless of the size of the dataset, the Bloom filter maintains a relatively constant size, making it suitable for memory-constrained
Probabilistic Nature: One of the trade-offs made by Bloom filters is their probabilistic nature. They allow for a small probability of false positives, which means that in some cases, the filter
might incorrectly suggest an element is in the set when it’s not. This trade-off enables Bloom filters to achieve their space efficiency, as they don’t need to maintain complete and precise
information about the elements.
Scalability: Bloom filters can scale effectively for large datasets without a significant increase in memory usage. The size of the bit array and the number of hash functions can be adjusted to
balance memory consumption and false positive rates.
Parallelism: Due to the independence of hash functions and bit positions in the array, Bloom filters allow for efficient parallel processing. Multiple membership tests can be performed concurrently,
making them suitable for multi-threaded or distributed systems.
Advantages of Bloom Filters
Space Efficiency: Bloom filters use relatively small amounts of memory compared to other data structures like hash tables, making them ideal for applications with limited memory.
Fast Membership Testing: Checking membership in a Bloom filter is extremely fast. The number of hash functions and size of the bit array can be adjusted to balance space and accuracy.
Parallelism: Multiple membership tests can be performed in parallel since each query element’s hash positions are independent.
No False Negatives: Bloom filters never produce false negatives. If the filter says an element is not present, it’s definitely not in the set.
Limitations of Bloom Filters
False Positives: The probabilistic nature of Bloom filters means there can be false positives. If all bits are set to 1 for a query, it suggests membership, but the element might not be in the set.
No Deletion: Bloom filters do not support element deletion. Removing an element is not straightforward as it could affect other elements.
Hash Functions: The quality of hash functions used is crucial. Poor hash functions can increase false positives.
Use Cases
Caching: Web browsers use Bloom filters to quickly determine if a website is in a local cache, reducing network requests.
Spell Checkers: Bloom filters can help identify whether a word exists in a dictionary, improving the speed of spell checkers.
Network Routers: Routers use Bloom filters to efficiently decide whether an IP address is in a blacklist.
Duplicate Elimination: In distributed systems, Bloom filters can be employed to eliminate duplicate data transmission.
Indexing: When a document is inserted or updated in the database, its key or ID is hashed using one or more hash functions. These hash values are then used to determine the positions in a Bloom
filter. Bits at these positions in the Bloom filter are set to 1. For example, consider a NoSQL database that uses a Bloom filter in its index. When a new document with the key “doc123” is added to
the index, the Bloom filter is updated based on the hash values of “doc123.”
Bloom filters are an ingenious data structure for efficient set membership testing, offering space-efficient solutions in various applications.
Bloom filters save space in data storage by using a compact representation, eliminating redundant data, maintaining constant size, minimising overhead, and leveraging their probabilistic nature.
While they do have limitations, such as the possibility of false positives.
Understanding their limitations and use cases is essential for harnessing their power effectively. When memory efficiency and fast querying are essential, Bloom filters are a valuable tool in a
programmer’s toolbox.
Top comments (0)
For further actions, you may consider blocking this person and/or reporting abuse | {"url":"https://dev.to/kiranukamath/unlocking-efficiency-how-bloom-filters-save-space-and-supercharge-data-access-2kg5","timestamp":"2024-11-12T00:36:55Z","content_type":"text/html","content_length":"67457","record_id":"<urn:uuid:bf68a6fa-b2d9-4d4a-9922-2072a6153873>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00396.warc.gz"} |
Mixed Meta Four
If you haven't read about the Wolverine
) or
Sam's Exasperating Problem
[S:you need to get your priorities straight:S]
go read 'em now. Sam's wondering if the problem can be scaled down so a precalculus class can handle it. I see the problem and think, "GEOGEBRA! I CAN USE GEOGEBRA!"
Mr. H beat me to it
. (If you haven't seen his applet,
[S:I question your dedication to the cause:S]
go ahead, we'll wait.)
I love this! We are always looking for ways to iterate problems and extend them, but there's nothing to extend with this problem. It's all ready for the wolverine wrangler to do his stuff. I'm
looking for the guy who can make this wolverine sit and quit bearing its teeth so my 8th graders can pet it for a second. GeoGebra does this. Mr. H's applet makes this problem accessible to an 8 year
old. In fact, my son was so mesmerized by the animation that I swear I heard him muttering, "
Heffalumps and Woozles. Heffalumps and Woozles
." Heck, I found a strange urge to put on some Pink Floyd myself.
Can you imagine starting a problem in middle school and finishing it with calculus? That's how beautiful (that's right I said it!) this problem is. Why can't we let these younger kids see the beauty
of the wolverine without actually having to be the one to handle it? I can see posing the problem, setting the kids up with GeoGebra (with minimal prerequisites) and turning them loose. They'll see
the pattern, make a conjecture and inductively decide the answer. Show the applet which demonstrates the first 360 cases and inevitably, the question will be:
Now, talk about
. The table's been set for the sequel that the kid's gonna have to wait a couple of years to see. precalculus kids can actually
the answer and the trilogy will be complete once they have the tools to actually prove that for n chords, the product is n+1. This problem can span
years. At least.
[[1]] Apologies if I misused the metaphor.
4 comments:
So what would you say to the student who is totally convinced by the Geogebra applet, and sees no need for further investigation/exploration? (Just curious.)
This sort of thing (introducing early and working up to a finished product years later) could be done with all sorts of problems! Reminds me of this post:
It would be fun to get one or two big/exciting problems that you could introduce in Algebra 1 or PreAlgebra or something and refer back to for the next few years while the students get more and
more pieces of the puzzle over the next few years.
Mr. Collins I'm not sure that after seeing the applet anyone would need more convincing. I do think that "why?" is the obvious next question, though. A precalculus kid should be able to grind out
a few cases that the algebra kid simply trust geogebra for. However, I believe a formal proof is going to have to wait until calculus.
CalcDave I'd love to see vertical articulation get to the point where we can keep adding to each others' foundations on a single problem. The algebra kids will already know the answer, but
figuring out how and why will take a while.
Was that last comment a hint at your future plans in your new (half) job? | {"url":"https://coxmath.blogspot.com/2010/08/mixed-meta-four.html","timestamp":"2024-11-06T07:32:55Z","content_type":"text/html","content_length":"72899","record_id":"<urn:uuid:a925f928-1904-4996-9fd7-95d90354e62b>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00768.warc.gz"} |
Lesson 8
Equal and Equivalent
Let's use diagrams to figure out which expressions are equivalent and which are just sometimes equal.
Problem 1
1. Draw a diagram of \(x + 3\) and a diagram of \(2x\) when \(x\) is 1.
2. Draw a diagram of \(x+3\) and of \(2x\) when \(x\) is 2.
3. Draw a diagram of \(x+3\) and of \(2x\) when \(x\) is 3.
4. Draw a diagram of \(x+3\) and of \(2x\) when \(x\) is 4.
5. When are \(x+3\) and \(2x\) equal? When are they not equal? Use your diagrams to explain.
Problem 2
1. Do \(4x\) and \(15+x\) have the same value when \(x\) is 5?
2. Are \(4x\) and \(15+x\) equivalent expressions? Explain your reasoning.
Problem 3
1. Check that \(2b + b\) and \(3b\) have the same value when \(b\) is 1, 2, and 3.
2. Do \(2b +b\) and \(3b\) have the same value for all values of \(b\)? Explain your reasoning.
3. Are \(2b+b\) and \(3b\) equivalent expressions?
Problem 4
80% of \(x\) is equal to 100.
1. Write an equation that shows the relationship of 80%, \(x\), and 100.
2. Use your equation to find \(x\).
(From Unit 6, Lesson 7.)
Problem 5
For each story problem, write an equation to represent the problem and then solve the equation. Be sure to explain the meaning of any variables you use.
1. Jada’s dog was \(5\frac{1}{2}\) inches tall when it was a puppy. Now her dog is \(14\frac{1}{2}\) inches taller than that. How tall is Jada’s dog now?
2. Lin picked \(9 \frac{3}{4}\) pounds of apples, which was 3 times the weight of the apples Andre picked. How many pounds of apples did Andre pick?
(From Unit 6, Lesson 5.)
Problem 6
Find these products.
1. \((2.3) \boldcdot (1.4)\)
2. \((1.72) \boldcdot (2.6)\)
3. \((18.2) \boldcdot (0.2)\)
4. \(15 \boldcdot (1.2)\)
(From Unit 5, Lesson 8.)
Problem 7
Calculate \(141.75 \div 2.5\) using a method of your choice. Show or explain your reasoning.
(From Unit 5, Lesson 13.) | {"url":"https://curriculum.illustrativemathematics.org/MS/students/1/6/8/practice.html","timestamp":"2024-11-14T11:11:15Z","content_type":"text/html","content_length":"81391","record_id":"<urn:uuid:f09c1fca-0dd7-4fd1-a141-eb339b88f43e>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00068.warc.gz"} |
Euclidean algorithm - Flowchart | Flow Chart To Elaborate And Explain Euclid Lemma Division Or Euclid Division Algorithms
"In mathematics, the Euclidean algorithm, or Euclid's algorithm, is a method for computing the greatest common divisor (GCD) of two (usually positive) integers, also known as the greatest common
factor (GCF) or highest common factor (HCF). ...
The GCD of two positive integers is the largest integer that divides both of them without leaving a remainder (the GCD of two integers in general is defined in a more subtle way).
In its simplest form, Euclid's algorithm starts with a pair of positive integers, and forms a new pair that consists of the smaller number and the difference between the larger and smaller numbers.
The process repeats until the numbers in the pair are equal. That number then is the greatest common divisor of the original pair of integers.
The main principle is that the GCD does not change if the smaller number is subtracted from the larger number. ... Since the larger of the two numbers is reduced, repeating this process gives
successively smaller numbers, so this repetition will necessarily stop sooner or later - when the numbers are equal (if the process is attempted once more, one of the numbers will become 0)."
[Euclidean algorithm. Wikipedia]
The flowchart example "Euclidean algorithm" was created using the ConceptDraw PRO diagramming and vector drawing software extended with the Mathematics solution from the Science and Education area of
ConceptDraw Solution Park. | {"url":"https://www.conceptdraw.com/examples/flow-chart-to-elaborate-and-explain-euclid-lemma-division-or-euclid-division-algorithms","timestamp":"2024-11-05T02:36:15Z","content_type":"text/html","content_length":"28860","record_id":"<urn:uuid:186a4e85-818c-40e5-a5cf-efe80c9ef246>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00458.warc.gz"} |
Second-type self-similar solutions to the ultrarelativistic strong explosion problem
The well-known Blandford-McKee solution describes the ultrarelativistic flow in a spherical blast wave enclosed by a strong shock. It is valid when the density of the external medium into which the
shock propagates varies with the distance r from the origin as r(-k), for k < 4. These are first-type self-similar solutions in which the shock Lorentz factor Γ varies as Γ^2 proportional to t(-m),
where m = 3 - k to ensure energy conservation. New second-type self-similar solutions, valid for k > 5 - √3/4~4.13, are presented. In these solutions Γ varies as Γ^2 proportional to t(-m) with m = (3
- 2 √3)k - 4(5 - 3√3) so that the shock accelerates and the fraction of the flow energy contained in the vicinity of the shock decreases with time. The new solutions are shown to be in excellent
agreement with numerical simulations of the flow equations. It is proved that no second-type self-similar solutions exist for k < 5 - √3/4 ~4.13. (C) 2000 American Institute of Physics.
Dive into the research topics of 'Second-type self-similar solutions to the ultrarelativistic strong explosion problem'. Together they form a unique fingerprint. | {"url":"https://cris.huji.ac.il/en/publications/second-type-self-similar-solutions-to-the-ultrarelativistic-stron","timestamp":"2024-11-03T15:25:47Z","content_type":"text/html","content_length":"47683","record_id":"<urn:uuid:11273b28-746d-4913-b1d6-58ea344ada8a>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00821.warc.gz"} |
Electricity 101: Understanding Voltage, Current, and Resistance - Cyber City Circuits
When we talk about electricity, there are a few key units of measurement that we use to describe how it flows and how it is used. These units are voltage, current, and resistance. Let’s take a closer
look at each of these:
Voltage, also known as electric potential difference, is a measure of the force that pushes electricity through a conductor. It is measured in units called volts (V). The higher the voltage, the
stronger the force pushing the electricity through the conductor.
Current is a measure of the flow of electricity through a conductor. It is measured in units called amperes, or amps (A). The higher the current, the more electricity is flowing through the
Resistance is a measure of how hard it is for the electricity to flow through a conductor. It is measured in units called ohms (Ω). The higher the resistance, the harder it is for the electricity to
We can use these three units of measurement to understand how electricity behaves in different situations. For example, we can use Ohm’s Law to calculate the current in a circuit if we know the
voltage and the resistance. Ohm’s Law states that:
Current (I) = Voltage (V) / Resistance (R)
Imagine you have a small portable fan that runs on electricity. The battery in the fan has a voltage of 4.5 volts and the motor that powers the fan has a resistance of 1.5 ohms. Using Ohm’s Law, we
can calculate that the current flowing through the motor will be 3 amps (4.5 volts / 1.5 ohms = 3 amps).
In this example, the voltage of the battery is providing the force to push the electricity through the conductor (the motor), and the resistance of the motor determines how easily the electricity can
flow. The current, measured in amps, tells us how much electricity is flowing through the motor.
The higher the voltage, the stronger the force pushing the electricity through the conductor. The higher the current, the more electricity is flowing through the conductor. The higher the
resistance, the harder it is for the electricity to flow.
Understanding these basic units of measurement is important for working with electricity and designing electrical circuits. It helps us predict how electricity will behave and how different devices
and circuits will work.
So the next time you’re working with electricity, remember these key units of measurement and how they help us understand the flow of electricity! | {"url":"https://cybercitycircuits.com/2023/01/24/electricity-101-understanding-voltage-current-and-resistance/","timestamp":"2024-11-09T00:24:59Z","content_type":"text/html","content_length":"60720","record_id":"<urn:uuid:123bb6b8-9749-4d32-8bd0-b81e375089bb>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00452.warc.gz"} |
admm(UtM, UtU, x, dual_var, n_iter_max=100, n_const=None, order=None, non_negative=None, l1_reg=None, l2_reg=None, l2_square_reg=None, unimodality=None, normalize=None, simplex=None,
normalized_sparsity=None, soft_sparsity=None, smoothness=None, monotonicity=None, hard_sparsity=None, tol=0.0001)[source]
Alternating direction method of multipliers (ADMM) algorithm to minimize a quadratic function under convex constraints.
UtM: ndarray
Pre-computed product of the transposed of U and M.
UtU: ndarray
Pre-computed product of the transposed of U and U.
x: init
Default: None
Dual variable to update x
Maximum number of iteration Default: 100
Number of constraints. If it is None, function solves least square problem without proximity operator If ADMM function is used with a constraint apart from constrained parafac
decomposition, n_const value should be changed to ‘1’. Default : None
Specifies which constraint to implement if several constraints are selected as input Default : None
non_negativebool or dictionary
This constraint is clipping negative values to ‘0’. If it is True, non-negative constraint is applied to all modes.
l1_regfloat or list or dictionary, optional
Penalizes the factor with the l1 norm using the input value as regularization parameter.
l2_regfloat or list or dictionary, optional
Penalizes the factor with the l2 norm using the input value as regularization parameter.
l2_square_regfloat or list or dictionary, optional
Penalizes the factor with the l2 square norm using the input value as regularization parameter.
unimodalitybool or dictionary, optional
If it is True, unimodality constraint is applied to all modes. Applied to each column seperately.
normalizebool or dictionary, optional
This constraint divides all the values by maximum value of the input array. If it is True, normalize constraint is applied to all modes.
simplexfloat or list or dictionary, optional
Projects on the simplex with the given parameter Applied to each column seperately.
normalized_sparsityfloat or list or dictionary, optional
Normalizes with the norm after hard thresholding
soft_sparsityfloat or list or dictionary, optional
Impose that the columns of factors have L1 norm bounded by a user-defined threshold.
smoothnessfloat or list or dictionary, optional
Optimizes the factors by solving a banded system
monotonicitybool or dictionary, optional
Projects columns to monotonically decreasing distrbution Applied to each column seperately. If it is True, monotonicity constraint is applied to all modes.
hard_sparsityfloat or list or dictionary, optional
Hard thresholding with the given threshold
xUpdated ndarray
x_splitUpdated ndarray
dual_varUpdated ndarray
ADMM solves the convex optimization problem
\[\min_ f(x) + g(z),\; A(x_{split}) + Bx = c.\]
Following updates are iterated to solve the problem
\[x_{split} = argmin_{x_{split}}~ f(x_{split}) + (\rho/2)\|Ax_{split} + Bx - c\|_2^2\]
\[x = argmin_x~ g(x) + (\rho/2)\|Ax_{split} + Bx - c\|_2^2\]
\[dual\_var = dual\_var + (Ax + Bx_{split} - c)\]
where rho is a constant defined by the user.
Let us define a least square problem such as \(\|Ux - M\|^2 + r(x)\).
ADMM can be adapted to this least square problem as following
\[x_{split} = (UtU + \rho\times I)\times(UtM + \rho\times(x + dual\_var)^T)\]
\[x = argmin_{x}~ r(x) + (\rho/2)\|x - x_{split}^T + dual\_var\|_2^2\]
\[dual\_var = dual\_var + x - x_{split}^T\]
where r is the regularization operator. Here, x can be updated by using proximity operator of \(x_{split}^T - dual\_var\).
Huang, Kejun, Nicholas D. Sidiropoulos, and Athanasios P. Liavas. “A flexible and efficient algorithmic framework for constrained matrix and tensor factorization.” IEEE Transactions on Signal
Processing 64.19 (2016): 5052-5065. | {"url":"http://tensorly.org/dev/modules/generated/tensorly.solvers.admm.admm.html","timestamp":"2024-11-12T16:00:09Z","content_type":"text/html","content_length":"25657","record_id":"<urn:uuid:e84714f2-6b71-42fd-8722-c426d5667699>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00413.warc.gz"} |
Identification and estimation of Gaussian affine term ... - P.PDFKUL.COM
Journal of Econometrics 168 (2012) 315–331
Contents lists available at SciVerse ScienceDirect
Journal of Econometrics journal homepage: www.elsevier.com/locate/jeconom
Identification and estimation of Gaussian affine term structure models✩ James D. Hamilton a , Jing Cynthia Wu b,∗ a
Department of Economics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA
The University of Chicago, Booth School of Business, 5807 South Woodlawn Avenue, Chicago, IL 60637, USA
Article history: Received 13 December 2010 Received in revised form 17 January 2012 Accepted 27 January 2012 Available online 3 February 2012 JEL classification: E43 C13 G12
abstract This paper develops new results for identification and estimation of Gaussian affine term structure models. We establish that three popular canonical representations are unidentified, and
demonstrate how unidentified regions can complicate numerical optimization. A separate contribution of the paper is the proposal of minimum-chi-square estimation as an alternative to MLE. We show
that, although it is asymptotically equivalent to MLE, it can be much easier to compute. In some cases, MCSE allows researchers to recognize with certainty whether a given estimate represents a
global maximum of the likelihood function and makes feasible the computation of small-sample standard errors. © 2012 Elsevier B.V. All rights reserved.
Keywords: Affine term structure models Identification Estimation Minimum-chi-square
1. Introduction The class of Gaussian affine term structure models1 developed by Vasicek (1977), Duffie and Kan (1996), Dai and Singleton (2002), and Duffee (2002) has become the basic workhorse in
macroeconomics and finance for purposes of using a noarbitrage framework for studying the relations between yields on assets of different maturities. Its appeal comes from its simple characterization
of how risk gets priced by the market which, under the assumption of no arbitrage, generates predictions for the price of any asset. The approach has been used to measure the role of risk premia in
interest rates (Duffee, 2002; Cochrane
✩ We are grateful to Michael Bauer, Bryan Brown, Frank Diebold, Ron Gallant, Ken Singleton, anonymous referees, and seminar participants at the University of Chicago, UCSD, Federal Reserve Board,
Pennsylvania State University, Society for Financial Econometrics, Midwest Macroeconomics Conference, Rice University, University of Colorado, and the Federal Reserve Bank of San Francisco for
comments on earlier drafts of this paper. ∗ Corresponding author. Tel.: +1 773 834 8689. E-mail addresses:
[email protected]
(J.D. Hamilton),
[email protected]
(J.C. Wu). 1 By Gaussian affine term structure models we refer to specifications in which
the discrete-time joint distribution of yields and factors is multivariate Normal with constant conditional variances. We do not in this paper consider the broader class of non-Gaussian processes.
0304-4076/$ – see front matter © 2012 Elsevier B.V. All rights reserved. doi:10.1016/j.jeconom.2012.01.035
and Piazzesi, 2009), study how macroeconomic developments and monetary policy affect the term structure of interest rates (Ang and Piazzesi, 2003; Beechey and Wright, 2009; Bauer, 2011), characterize
the monetary policy rule (Ang et al., 2007; Rudebusch and Wu, 2008; Bekaert et al., 2010), determine why long-term yields remained remarkably low in 2004 and 2005 (Kim and Wright, 2005; Rudebusch et
al., 2006), infer market expectations of inflation from the spread between nominal and inflationindexed Treasury yields (Christensen et al., 2010), evaluate the effectiveness of the extraordinary
central bank interventions during the financial crisis (Christensen et al., 2009; Smith, 2010), and study the potential for monetary policy to affect interest rates when the short rate is at the zero
lower bound (Hamilton and Wu, 2012). But buried in the footnotes of this literature and in the practical experience of those who have used these models are tremendous numerical challenges in
estimating the necessary parameters from the data due to highly non-linear and badly behaved likelihood surfaces. For example, Kim (2008) observed: Flexibly specified no-arbitrage models tend to
entail much estimation difficulty due to a large number of parameters to be estimated and due to the nonlinear relationship between the parameters and yields that necessitates a nonlinear
optimization. Ang and Piazzesi (2003) similarly reported:
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
difficulties associated with estimating a model with many factors using maximum likelihood when yields are highly persistent. . . . We need to find good starting values to achieve convergence in this
highly non-linear system. . . . [T]he likelihood surface is very flat in λ0 which determines the mean of long yields. . . . This paper proposes a solution to these and other problems with affine term
structure models based on what we will refer to as their reduced-form representation. For a popular class of Gaussian affine term structure models – namely, those for which the model is claimed to
price exactly a subset of Nℓ linear combinations of observed yields, where Nℓ is the number of unobserved pricing factors – this reduced form is a restricted vector autoregression in the observed set
of yields and macroeconomic variables.2 We explore two implications of this fact that seem to have been ignored in the large preceding literature on such models. The first is that the parameters of
these reduced-form representations contain all the observable implications of any Gaussian affine term structure model for the sample of observed data, and can therefore be used as a basis for
assessing identification. If more than one value for the parameter vector of interest is associated with the same reduced-form parameter vector, then the model is unidentified at that point and there
is no way to use the observed data to distinguish between the alternative possibilities. Although as a general econometric principle this idea dates back to Fisher (1966) and Rothenberg (1971), it
has not previously been applied to affine term structure models. In this paper, we use it to demonstrate that the preferred representations proposed by Ang and Piazzesi (2003) and Pericoli and Taboga
(2008) are in fact unidentified, an observation that our paper is the first to point out. We also use this approach to show that the representation proposed by Dai and Singleton (2000) is
unidentified. Although this latter fact has previously been inferred by Collin-Dufresne et al. (2008) and Aït-Sahalia and Kimmel (2010) using other methods, we regard the proof here based on the
reduced form to be more transparent and direct. We further demonstrate that it is common for numerical search methods to end up in regions of the parameter space that are locally unidentified, and
show why this failure of identification arises. These issues of identification are one factor that contributes to the numerical difficulties for conventional methods noted above. A second and
completely separate contribution of the paper is the observation that it is possible for the parameters of interest to be inferred directly from estimates of the reduced-form parameters themselves.
This is a very useful result because the latter are often simple OLS coefficients. Although translating from reducedform parameters into structural parameters involves a mix of analytical and
numerical calculations, the numerical component is far simpler than that associated with the usual approach of trying to find the maximum of the likelihood surface directly as a function of the
structural parameters. In the case of a just-identified structure, the numerical component of our proposed method has an additional big advantage over the traditional approach, in that the researcher
knows with certainty whether the equations have been solved, and therefore knows with certainty whether one has found the global maximum of the likelihood surface with respect to the structural
parameters or simply a local maximum. In the conventional approach, one instead has to search over hundreds of different starting values, and even then has no guarantee that the global maximum has
been found. In the case where the model
2 For more general models where all yields are priced with measurement error, the reduced form is a restricted state-space representation for the set of observed variables. The same tools developed
here could still be applied in that setting, though we leave exploration of such models for future research.
imposes overidentifying restrictions on the reduced form, one can still estimate structural parameters as functions of the unrestricted reduced-form estimates by the method of minimum-chi-square
estimation (MCSE). This minimizes a quadratic form in the difference between the reduced-form parameters implied by a given structural model and the reduced-form parameters as estimated without
restrictions directly from the data, with the weighting matrix given by the information matrix, in other words, minimizing the value of the chi-square statistic for testing whether the restrictions
are indeed consistent with the observed reduced-form estimates. Again while the general econometric method of minimum-chisquare estimation is well known, our paper is the first to apply it to affine
term structure models and demonstrate its considerable advantages in this setting. Estimating parameters by minimizing the chi-square statistic was to our knowledge first proposed by Fisher (1924)
and Neyman and Pearson (1928). Rothenberg (1973, pp. 24–25) extended the approach to more general parametric inference, demonstrating that when (as in our proposed application) the reduced-form
estimate is the unrestricted MLE and the weighting matrix is the associated information matrix, the resulting MCSE is asymptotically equivalent to full-information MLE. MCSE has also been used in
other settings by Chamberlain (1982) and Newey (1987). More generally, MCSE could be viewed as a special case of minimum distance estimation (MDE) discussed for example by Malinvaud (1970), in which
one minimizes a quadratic form in the difference between restricted and unrestricted statistics. We follow Rothenberg (1973) in using the expression MCSE to refer to the special case of MDE in which
the unrestricted statistics are the unrestricted MLE and weights come from their asymptotic variance, in which case MDE is asymptotically efficient. Another well-known example of MDE is the
generalized method of moments (GMM, Hansen (1982)), in which the unrestricted statistics are sample moments.3 Bekaert et al. (2010) used GMM to estimate parameters of an affine term structure model.
GMM in this form misses what we see as the two main advantages of MCSE, namely, the OLS estimates are known analytically and MCSE, unlike GMM, is asymptotically efficient. Another popular example of
MDE is the method of indirect inference proposed by Gallant and Tauchen (1992), Smith (1993) and Gourieroux et al. (1993). With indirect inference, the unrestricted parameter estimates are typically
regarded as only approximate or auxiliary characterizations of the data, and numerical simulation is typically required to calculate the values for these auxiliary parameters that are implied by the
structural model. Duffee and Stanton (2008) suggested that for highly persistent data such as interest rates, indirect inference or MLE may work substantially better than other moment-based
estimators. One could view our application of MCSE as a special case of indirect inference in which the unrestricted estimates are in fact sufficient statistics for the likelihood function and the
mapping from structural parameters to these coefficients is known analytically, precisely the features from which our claimed benefits of MCSE derive. In particular, we demonstrate in this paper that
use of MCSE captures all the asymptotic benefits of MLE while avoiding many of the numerical problems associated with MLE for affine term structure models. Among other illustrations of the
computational advantages, we establish the feasibility of calculating small-sample standard errors and confidence intervals for this class of models
3 In our application of MCSE, the unrestricted estimates (OLS coefficients and variances) are nonlinear functions of sample moments. This connection between MCSE and GMM is explored further in
Chamberlain (1982, p. 18).
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
and demonstrate that the parameter estimates reported by Ang and Piazzesi (2003) in fact correspond to a local maximum of the likelihood surface and are not the global MLE. There have been several
other recent efforts to address many of these problems in affine term structure models. Christensen et al. (2011) developed a no-arbitrage representation of a dynamic Nelson–Siegel model of interest
rates that gives a convenient representation of level, slope and curvature factors and offers significant improvements in empirical tractability and predictive performance over earlier affine term
structure specifications. Joslin et al. (2011) proposed a canonical representation for affine term structure models that greatly improves convergence of maximum likelihood estimation. Collin-Dufresne
et al. (2008) proposed a representation in terms of the derivatives of the term structure at maturity zero, arguing for the benefits of using these observable magnitudes rather than unobserved latent
variables to represent the state vector of an ATSM. Each of these papers proposes canonical representations that are identified, and the Christensen et al. (2011) and Joslin et al. (2011)
parameterizations lead to better behaved likelihood functions than do the parameterizations explored in detail in our paper. The chief difference between our proposed solution and those of these
other researchers is that they focus on how the ATSM should be represented, whereas we examine how the parameters of the ATSM are to be estimated. Thus for example Christensen et al. (2011) require
the researcher to impose certain restrictions on the ATSM, whereas Joslin et al. (2011) cannot incorporate most auxiliary restrictions on the P dynamics. It is far from clear how any of these three
approaches could have been used to estimate a model of the form investigated by Ang and Piazzesi (2003). By contrast, our MCSE algorithm can be used for any representation, including those proposed
by Christensen et al. (2011) and Joslin et al. (2011), and can simplify the numerical burden regardless of the representation chosen. Indeed, some of the numerical advantages of Joslin et al. (2011)
come from the fact that a subset of their parameterization is identical to a subset of our reducedform representation, and their approach, like ours, takes advantage of the fact that the
full-information MLE for this subset can be obtained by OLS for a popular class of models. However, Joslin et al. (2011) estimated the remaining parameters by conventional MLE rather than using the
full set of reduced-form estimates as in our approach. As Joslin et al. (2011) noted, their representation becomes unidentified in the presence of a unit root. When applied to highly persistent data,
we illustrate that their MLE algorithm can encounter similar problems to those of other representations, which can be avoided with our approach to parameter estimation. The rest of the paper is
organized as follows. Section 2 describes the class of Gaussian affine term structure models and three popular examples, and briefly uses one of the specifications to illustrate the numerical
difficulties that can be encountered with the traditional approach. Section 3 investigates the mapping from structural to reduced-form parameters. We establish that the canonical forms of all three
examples are unidentified and explore how this contributes to some of the problems for conventional numerical search algorithms. In Section 4 we use the mapping to propose approaches to parameter
estimation that are much better behaved. Section 5 concludes. 2. Gaussian affine term structure models
with ut ∼ i.i.d. N (0, IM ). This specification implies that Ft +1 |Ft , Ft −1 , . . . , F1 ∼ N (µt , ΣΣ ′ ) for
µt = c + ρ Ft .
Let rt denote the risk-free one-period interest rate. If the vector Ft includes all the variables that could matter to investors, then the price of a pure discount asset at date t should be a
function Pt (Ft ) of the current state vector. Moreover, if investors were risk neutral, the price they would be willing to pay would satisfy Pt (Ft ) = exp(−rt )Et [Pt +1 (Ft +1 )]
= exp(−rt )
Pt +1 (Ft +1 )φ(Ft +1 ; µt , ΣΣ ′ )dFt +1
for φ( y; µ, Ω ) the M-dimensional N (µ, Ω ) density evaluated at the point y:
φ( y; µ, Ω ) =
exp − (2π )M /2 |Ω |1/2
( y − µ)′ Ω −1 ( y − µ) 2
. (4)
More generally, with risk-averse investors we would replace (3) with Pt (Ft ) = Et [Pt +1 (Ft +1 )Mt ,t +1 ]
Pt +1 (Ft +1 )[Mt ,t +1 φ(Ft +1 ; µt , ΣΣ ′ )]dFt +1
= RM
for Mt ,t +1 the pricing kernel. In many macro models, the pricing kernel would be Mt ,t +1 =
β U ′ (Ct +1 ) t )(1 + πt +1 )
U ′ (C
for β the personal discount rate, U ′ (C ) the marginal utility of consumption, and πt +1 the inflation rate between t and t + 1. Affine term structure models are derived from the particular kernel ′
Mt ,t +1 = exp[−rt − (1/2)λt λt − λt ut +1 ]
for λt an (M × 1) vector that characterizes investor attitudes toward risk, with λt = 0 in the case of risk neutrality. Elementary multiplication of (4) by (6) reveals that for this case Mt ,t +1 φ
(Ft +1 ; µt , ΣΣ ′ ) = exp(−rt )φ(Ft +1 ; µt , ΣΣ ′ ) Q
µQt = µt − Σ λt .
Substituting (7) into (5) and comparing with (3), we see that for this specification of the pricing kernel, risk-averse investors value any asset the same as risk-neutral investors would if the
latter thought Q that the conditional mean of Ft +1 was µt rather than µt . A positive value for the first element of λt , for example, implies that an asset that delivers the quantity F1,t +1
dollars in period t + 1 would have a value at time t that is less than the value that would be assigned by a risk-neutral investor, and the size of this difference is bigger when the (1, 1) element
of Σ is bigger. An asset yielding Fi,t +1 dollars has a market value that is reduced by Σi1 λ1t relative to a risk-neutral valuation, through the covariance between factors i and 1. The term λ1t
might then be described as the market price of factor 1 risk. The affine term structure models further postulate that this market price of risk is itself an affine function of Ft ,
2.1. Basic framework
λt = λ + ΛFt
Consider an (M × 1) vector of variables Ft whose dynamics are characterized by a Gaussian vector autoregression:
for λ an (M × 1) vector and Λ an (M × M ) matrix. Substituting (9) and (2) into (8), we see that
Ft +1 = c + ρ Ft + Σ ut +1
µQt = c Q + ρ Q Ft
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
for cQ = c − Σ λ
ρ Q = ρ − ΣΛ.
In other words, risk-averse investors value assets the same way as a risk-neutral investor would if that risk-neutral investor believed that the factors are characterized by a Q -measure VAR given by
Ft +1 = c Q + ρ Q Ft + Σ ut +1 Q
Q with ut +1
a vector of independent standard Normal variables under the Q measure. Suppose that the risk-free 1-period yield is also an affine function of the factors ′
rt = δ0 + δ1 Ft .
Then, as demonstrated for example in Appendix A of Ang and Piazzesi (2003), under the above assumptions the yield on a risk-free n-period pure-discount bond can be calculated as ynt = an + b′n Ft
where bn =
1 n
[IM + ρ Q ′ + · · · + (ρ Q ′ )n−1 ]δ1
an = δ0 + (b′1 + 2b′2 + · · · + (n − 1)b′n−1 )c Q /n
− (b′1 ΣΣ ′ b1 + 22 b′2 ΣΣ ′ b2 + · · · + (n − 1)2 b′n−1 ΣΣ ′ bn−1 )/2n.
If we knew Ft and the values of c Q and ρ Q along with δ0 , δ1 , and Σ , we could use (14), (15) and (16) to predict the yield for any maturity n. There are thus three sets of parameters that go into
an affine term structure model: (a) the parameters c , ρ , and Σ that characterize the objective dynamics of the factors in Eq. (1) (sometimes called the P parameters); (b) the parameters λ and Λ in
Eq. (9) that characterize the price of risk; and (c) the Q parameters c Q and ρ Q (along with the same Σ as appeared in the P parameter set) that figure in (12). If we knew any two of these sets of
parameters, we could calculate the third4 using (10) and (11). We will refer to a representation in terms of (a) and (b) as a λ representation, and a representation in terms of (a) and (c) as a Q
representation. Suppose we want to describe yields on a set of Nd different maturities. If Nd is greater than Nℓ , where Nℓ is the number of unobserved pricing factors, then (14) would imply that it
should be possible to predict the value of one of the ynt as an exact linear function of the others. Although in practice we can predict one yield extremely accurately given the others, the empirical
fit is never exact. One common approach to estimation, employed for example by Ang and Piazzesi (2003) and Chen and Scott (1993), is to suppose that (14) holds exactly for Nℓ linear combinations of
observed yields, and that the remaining Ne = Nd − Nℓ linear combinations differ from the predicted value by a small measurement error. Let Yt1 denote the (Nℓ × 1) vector consisting of those linear
combinations of yields that are treated as priced without error and Yt2 the remaining (Ne × 1) linear combinations. The measurement specification is then
B1 A1 0 (Nℓ ×1) (Nℓ ×1) (Nℓ ×M ) ( N ℓ ×N e ) uet Ft + + 2 = Σ B A Yt
(Ne ×1)
(Ne ×1)
(Ne ×Ne )
(Ne ×1)
where Σe is typically taken to be diagonal. Here Ai and Bi are calculated by stacking (16) and (15), respectively, for the appropriate n, while Σe determines the variance of the measurement error
with uet ∼ N (0, INe ). We will discuss many of the issues associated with identification and estimation of affine term structure models in terms of three examples. 2.2. Example 1: latent factor
model In this specification, the factors Ft governing yields are treated as if observable only through their implications for the yields themselves; examples in the continuous-time literature include
Dai and Singleton (2000), Duffee (2002), and Kim and Orphanides (2005). Typically in this case, the number of factors Nℓ and the number of yields observed without error are both taken to be 3, with
the 3 factors interpreted as the level, slope, and curvature of the term structure. The 3 linear combinations Yt1 regarded as observed without error can be constructed from the first 3 principal
components of the set of yields. Alternatively, they could be constructed directly from logical measures of level, slope, and curvature. Yet another option is simply to choose 3 representative yields
as the elements of Yt1 . Which linear combinations are claimed to be priced without error can make a difference for certain testable implications of the model, an issue that we explore in a separate
paper (Hamilton and Wu, forthcoming) which addresses empirical testing of the overidentifying restrictions of affine term structure models. For purposes of discussing identification and estimation,
however, the choice of which yields go into Yt1 is immaterial, and notation is kept simplest by following Ang and Piazzesi (2003) and Pericoli and Taboga (2008) in just using 3 representative yields.
In our numerical example, these are taken to be the n = 1-, 12-, and 60-month maturities, with data on 36-month yields included separately in Yt2 . Thus for this illustrative latent-factor
specification, Eq. (17) takes the form y1t b′1 0 a1 y12 a12 b′ 0 e t60 = + 12 Ft + 0 ut y a60 b′60 t ′ 36 Σe a36 b36 yt
where an and bn are calculated from Eqs. (15) and (16), respectively. We will use for our illustration a Q representation for this system. Dai and Singleton (2000) proposed the normalization
conditions Σ = INℓ , δ1 ≥ 0, c = 0 and ρ lower triangular. Singleton (2006) used parallel constraints on the Q parameters (Σ = INℓ , δ1 ≥ 0, c Q = 0, ρ Q lower triangular). Our illustration will use
Σ = INℓ , δ1 ≥ 0, c = 0 and ρ Q lower triangular. For the Nℓ = 3, Ne = 1 case displayed in Eq. (18), there are then 23 unknown parameters: 3 in c Q , 6 in ρ Q , 9 in ρ , 1 in δ0 , 3 in δ1 , and 1 in
Σe , which we collect in the (23 × 1) vector θ . The log likelihood is
L(θ; Y ) =
T {− log[| det( J )|] + log φ(Ft ; c + ρ Ft −1 , INℓ ) t =1
+ log φ(uet ; 0, INe )}
for φ(.) the multivariate Normal density in Eq. (4) and det( J ) the determinant of the Jacobian, with
(N ×N ) J = ℓ ℓ B2 4 We will discuss examples below in which Σ is singular for which the demonstration of this equivalence is a bit more involved, with the truth of the assertion coming from the
fact that for such cases certain elements of λ and Λ are defined to be zero.
2 (Ne ×M )
(Ne ×Nℓ )
(Nℓ ×Ne ) (Ne ×Ne )
1 1 Ft = B− 1 (Yt − A1 ) 1 1 uet = Σe−1 {Yt2 − A2 − B2 B− 1 (Yt − A1 )}.
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
Table 1 Parameter values used for simulation and estimates associated with (1) the global maximum and (2) a representative point of local convergence. True values cQ
ρQ ρ δ0 δ1 Σe
eig(ρ) LLF
0.0407 0.9991 0.0101 0.0289 0.9812 −0.0010 0.0164 0.0046 1.729E−4 9.149E−5 0.9879
Global maximum 0.0135 0 0.9317 0.2548 0.0069 0.8615 0.1856
0.5477 0 0 0.7062 0.0607 0.1049 0.6867
0.0416 0.9985 0.0116 0.0219 0.9696 −0.0027 0.0085 0.0046 1.71E−4 9.105E−5 0.9734 28110.4
The Chen–Scott procedure is to maximize (19) with respect to θ by numerical search. As a simple example to illustrate the difficulties with this traditional estimation and some of the advantages of
the procedure that we will be recommending to replace it, we simulated a sample of 1000 observations using parameters specified in the first block of Table 1. These parameters were chosen to match
the actual observed behavior of the four yields used here. On this sample we tried to choose θ so as to maximize (19) using the fminunc algorithm in MATLAB.5 Since numerical search can be sensitive
to different scaling of parameters, we tried to scale parameters in a way consistent with a researcher’s prior expectation that risk prices were small, multiplying c Q by 10 and δ1 and Σe by 1000 so
that a unit step for each of these parameters would be similar to a unit step for the others.6 We used 100 different starting values for this search, using a range of values for ρ Q and starting the
other parameters at good guesses. Specifically, to obtain a given starting value we would generate the 3 diagonal elements of ρ Q from U [0.5, 1] distributions, set off-diagonal elements to zero, and
set the initial guess for ρ equal to this value for ρ Q . We set the starting value for each element of δ1 and Σe to 1.e−4, δ0 = 0.0046 (the average short rate), and c Q = 0. In only 1 of these 100
experiments did the numerical search converge to the values that we will establish below are indeed the true global MLE. These estimates, reported in the second block of Table 1, in fact correspond
very nicely to the true values from which this sample was simulated. However, in 81 of the other experiments, the procedure satisfied the convergence criterion (usually coming from a sufficiently
tiny change between iterations) at a large range of alternative points other than the global maximum. The third block of Table 1 displays one of these. All such points are characterized by an
eigenvalue of ρ being equal or very close to unity; we will explain why this happens in the following section. For the other 18 starting values, the search algorithm was unable to make any progress
from the initial starting values. Although very simple, this exercise helps convey some sense of the numerical problems researchers have encountered fitting more complicated models such as we
describe in our next two examples.
5 MATLAB numerical optimizers have been used by Cochrane and Piazzesi (2009), Aït-Sahalia and Kimmel (2010), and Joslin et al. (2011), among others. Duffee (2011) found that numerical search problems
can be reduced using alternative algorithms. Our purpose here is to illustrate the difficulties that can arise in estimation. We will demonstrate that these identical MATLAB algorithms have no
trouble with the alternative formulation that we will propose below. 6 To give the algorithm the best chance to converge, for each starting value we allowed the search to continue for up to 10,000
function evaluations, then restarted the search at that terminal value to allow an additional 10,000 function evaluations, and so on, for 10 repetitions with each starting value.
Local 53 0.0085 0 0.9328 0.2500 0.0141 0.8533 0.1985
0.5316 0 0 0.7202 0.0671 0.1175 0.6993
−0.5562 0.9986 0.0113 0.0203 0.9794 −0.0028 0.0333 0.1344 1.72E−4 9.110E−5 1.000 28096.5
0.0204 0 0.9316 0.2438 0.0063 0.8380 0.1923
0.0527 0 0 0.7352 0.0840 0.1267 0.7202
2.3. Example 2: macro finance model with single lag (MF1) It is of considerable interest to include observable macroeconomic variables among the factors that may affect interest rates, as for example
in Ang and Piazzesi (2003), Ang et al. (2007), Rudebusch and Wu (2008), Ang et al. (2006), and Hördahl et al. (2006). Our next two illustrative examples come from this class. We first consider the
unrestricted first-order macro factor model studied by Pericoli and Taboga (2008). This model uses Nm = 2 observable macro factors, consisting of measures of the inflation rate and the output gap,
which are collected in an (Nm × 1) vector ftm . These two observable macroeconomic factors are allowed to influence yield dynamics in addition to the traditional Nℓ = 3 latent7 factors ftℓ ,
(Nf ×1)
(N ×1) = mℓ , ft
(Nℓ ×1)
for Nf = Nm + Nℓ . The P dynamics (1), Q dynamics (12), and shortrate equation (13) can for this example be written in partitioned form as ftm
(Nm ×1)
(Nℓ ×1)
(Nℓ ×1)
ℓ = cℓ + ρℓm ftm−1 + ρℓℓ ftℓ−1 + Σℓm um t + Σℓℓ ut
(Nm ×1)
= cm + ρmm ftm−1 + ρmℓ ftℓ−1 + Σmm um t
Q Qm Q = cmQ + ρmm ftm−1 + ρmℓ ftℓ−1 + Σmm ut
Q Q ℓ Qℓ = cℓQ + ρℓQm ftm−1 + ρℓℓ ft −1 + Σℓm ut m + Σℓℓ ut
′ rt = δ0 + δ1m ftm + δ1′ ℓ ftℓ .
Pericoli and Taboga proposed the normalization conditions8 that
Σmm is lower triangular, Σℓm = 0, Σℓℓ = INℓ , δ1ℓ ≥ 0, and cℓQ = 0.
Our empirical illustration of this approach will use t corresponding to quarterly data and will take the 1-, 5-, and 10-year 40 ′ bonds to be priced without error (Yt1 = ( y4t , y20 t , yt ) ) and
the 2-, 2 28 ′ 3-, and 7-year bonds to be priced with error (Yt = ( y8t , y12 t , yt ) ). Details of how the log likelihood is calculated for this example are described in Appendix A.
7 Pericoli and Taboga evaluated a number of alternative specifications including different choices for the number of latent factors Nℓ , number of lags on the macro variables, and dependence between
the latent and macro factors. They refer to the specification we discuss in the text as the M (3, 0, U ) specification, which is the one that their tests suggest best fits the data. 8 Pericoli and
Taboga imposed f ℓ = 0 as an alternative to the traditional c = 0 0
or cℓ = 0, though we will follow the rest of the literature here in using a more standard normalization.
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
2.4. Example 3: macro finance model with 12 lags (MF12)
Likewise the second block of (17) implies
A first-order VAR is not sufficient to capture the observed dynamics of output and inflation. For example, Ang and Piazzesi (2003) suggested that the best fit is obtained using a monthly VAR(12) in
the observable macro variables and a VAR(1) for the latent factors9 : ftm = ρ1 ftm−1 + ρ2 ftm−2 + · · · + ρ12 ftm−12 + Σmm um t ℓ
ft = cℓ +
ρℓℓ ftℓ−1
+ Σℓℓ ut .
They further noted that since ftℓ is independent of ftm under their assumptions, the values of δ0 and δ1m in the short-rate equation can be obtained by OLS estimation of ′
1 A∗2 = A2 − B2 B− 1 A1
φ21 = B2 B1 ∗ ∗ u1t 0 Ω1 ∼N , ∗ −1
rt = δ0 + δ1m ftm + δ1ℓ ftℓ .
rt = δ0 + δ1m ftm + vt .
Our empirical example follows Ang and Piazzesi in proxying the 2 elements of ftm with the first principal components of a set of output and a set of inflation measures, respectively, which factors
have mean zero by construction. Ang and Piazzesi treated the macro dynamics as independent of those for the unobserved latent factors, so that terms such as ρℓm and ρmℓ in the preceding example are
set to zero. Ang and Piazzesi (2003) further proposed the following identifying restrictions: Σmm is lower triangular, Σℓℓ = INℓ , cℓ = 0, ρℓℓ is lower triangular, and the diagonal elements of ρℓℓ
are in descending order. Further restrictions and details of the model and its likelihood function are provided in Appendix B. In the specification we replicate, Ang and Piazzesi postulated that the
short rate depends only on the current values of the macro factors: ′
∗ 1 Yt2 = A∗2 + φ21 Yt + u∗2t
To further reduce the dimensionality of the estimation, Ang and Piazzesi (2003) proposed some further restrictions on this set-up that we will discuss in more detail in Section 4.4. 3. Identification
The log likelihood function for each of the models discussed – and indeed, for any Gaussian affine term structure model in which exactly Nℓ linear combinations of yields are assumed to be priced
without error – takes the form of a restricted vector autoregression. The mapping from the affine-pricing parameters to the VAR parameters allows us to evaluate the identifiability of a given
structure. If two different values for the structural parameters imply the identical reduced-form parameters, there is no way to use observable data to choose between the two. We now explore the
implications of this fact for each of the three classes of models described in the previous section. 3.1. Example 1: latent factor model Premultiplying (1) by B1 (and recalling the normalization c =
0 and Σ = INℓ ) results in 1 B1 Ft = B1 ρ B− 1 B1 Ft −1 + B1 ut .
Ω1∗ = B1 B1
(30) (31)
Ω 2 = Σe Σe . ∗
Eqs. (24) and (27) will be recognized as a restricted Gaussian VAR for Yt , in which a single lag of Yt1−1 appears in the equation for Yt1 and in which, after conditioning on the contemporaneous
value of Yt1 , no lagged terms appear in the equation for Yt2 . Note that when we refer to the reduced-form for this system, we will incorporate those exclusion restrictions along with the
restriction that Ω2∗ is diagonal. Table 2 summarizes the mapping between the VAR parameters and the affine term structure parameters implied by Eqs. (24)– (32).10 The number of VAR parameters minus
the number of structural parameters is equal to (Ne − 1)(Nℓ + 1). Thus the structure is just-identified by a simple parameter count when Ne = 1 and overidentified when Ne > 1. Notwithstanding, the
structural parameters can nevertheless be unidentified despite the apparent conclusion from a simple parameter count. Consider first what happens at a point where one of the eigenvalues of ρ is
unity, that is, when the P-measure factor dynamics exhibit a unit root.11 This means that one of the 1 −1 eigenvalues of B1 ρ B− 1 is also unity (B1 ρ B1 x = x for some nonzero −1 1 x) requiring that
(INℓ − B1 ρ B1 )x = 0, so the matrix INℓ − B1 ρ B− 1 is noninvertible. In this case, even if we knew the true value of A∗1 , we could never find the value of A1 from Eq. (25). If Aˆ 1 is proposed as
a fit for a given sample, then Aˆ 1 + kx produces the identical fit for any k. Note moreover from (16) that A1 and A2 are the only way to find out about c Q and δ0 ; if we do not know the 4 values in
A1 and A2 , we can never infer the 4 values of c Q and δ0 . This failure of local identification accounts for the numerous failed searches described in Section 2.2. When the search steps in a region
in which ρ has a near unit root, the likelihood surface becomes extremely flat in one direction (and exactly flat at the unit root), causing the numerical search to become bogged down. Because the
true process is quite persistent, it is extremely common for a numerical search to explore this region of the surface and become stuck.12 If instead we used the normalization c Q = 0 in place of the
condition c = 0 just analyzed, a similar phenomenon occurs in which a unit root in ρ Q results in a failure of local identification of δ0 . Even when all eigenvalues of ρ are less than unity, there
is another respect in which the latent factor model discussed here is unidentified.13 Let H denote any (Nℓ × Nℓ ) matrix such that H ′ H = INℓ . It is apparent from Eqs. (24)–(32) that if we replace
Bj by Bj H and ρ by H ρ H ′ , there would be no change in the implied
Adding A1 to both sides and substituting Yt1 = A1 + B1 Ft establishes ∗ 1 Yt1 = A∗1 + φ11 Yt −1 + u∗1t
1 A∗1 = A1 − B1 ρ B− 1 A1
φ11 = B1 ρ B1 .
9 Ang and Piazzesi refer to this as their Macro Model.
10 The value of δ turns out not to appear in the product φ ∗ = B B−1 . 1 2 1 21 11 Note we have followed Ang and Piazzesi (2003) and Joslin et al. (2011), among others, in basing estimates on the
likelihood function conditional on the first observation. By contrast, Chen and Scott (1993) and Duffee (2002) included the unconditional likelihood of the first observation as a device for imposing
stationarity. 12 This point has also been made by Aït-Sahalia and Kimmel (2010). 13 This has also been recognized by Ang and Piazzesi (2003), Collin-Dufresne et al. (2008) and Aït-Sahalia and Kimmel
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
Table 2 Mapping between structural and reduced-form parameters for the latent factor model. VAR parameter
No. of elements
Σe Ne
Ω2∗ ∗ φ21 Ω1∗ ∗ φ11
Ne Nℓ Ne Nℓ (Nℓ + 1)/2 Nℓ2 Ne Nℓ
A2 A∗1
Proposition 1. Consider any (2 × 2) lower triangular matrix:
ρQ =
Q ρ21
0 Q ρ22
Q 21
Q 11
Q ρ22
Q −ρ21
Q ρ11
✓ ✓ ✓ ✓ ✓
✓ ✓ ✓ ✓
cQ Nℓ
✓ ✓
✓ ✓
For ρ Q an (Nℓ × Nℓ ) lower triangular matrix, there are Nℓ ! different lower triangular representations, characterized by alternative orderings of the principal diagonal elements. There thus exist 6
different parameter configurations that would achieve the same maximum for the likelihood function for the latent example explored in Section 2.2. The experiment did not uncover them because the
other difficulties with maximization were sufficiently severe that for the 100 different starting values used, only one of these 6 configurations was reached. Dai and Singleton (2000) and Singleton
(2006) originally proposed lower triangularity of ρ or ρ Q and nonnegativity of δ1 as sufficient identifying conditions. Our proposition establishes that one needs Q Q Q a further condition such as
ρ11 ≥ ρ22 ≥ ρ33 to have a globally identified structure. Nevertheless, this multiplicity of global optima is a far less serious problem than the failure of local identification arising from a unit
root. The reason is that any of the alternative configurations obtained through these H transformations by construction has the identical implications for bond pricing. By contrast, the inferences
one would draw from Local 53 in Table 1 are fundamentally flawed and introduce substantial practical difficulties for using this class of models. There is another identification issue, which has
separately been recognized by Joslin et al. (2011) using a very different approach from ours: not all matrices ρ Q can be transformed into lower triangular form. For example, for Nℓ = 2, if ρ Q is
written as Q lower triangular, then ρ22 would have to be one of its eigenvalues. However, it is possible for an unrestricted real-valued matrix ρ Q to have complex eigenvalues, in which case there is
no way to transform it as Υ = H ρ Q H ′ for Υ a real-valued lower triangular matrix. We propose in the following proposition an alternative normalization for the case Nℓ = 2 that, unlike the usual
lowertriangular form, is completely unrestrictive.
✓ ✓
Proposition 2. Consider ρ Q any (2 × 2) real-valued matrix:
ρQ =
Q ρ11
Q ρ12
Q ρ21
Q ρ22
For almost all δ1 ∈ R2+ , there exist exactly two transformations of the form Υ = H ρ Q H ′ such that Υ is real, H ′ H = I2 , H δ1 > 0, and the two elements on the principal diagonal of Υ are the
same. Moreover, one of these transformations is simply the transpose of the other:
Υ1 =
Then for almost all (2 × 1) positive vectors δ1 , there exists a unique orthogonal matrix H other than the identity matrix such that H ρ Q H ′ is also lower triangular and H δ1 > 0. Moreover, H ρ Q H
′ takes one of the following forms:
Q ρ22
value for the sample likelihood. The question then is whether the conditions imposed on the underlying model rule out such a transformation. From Eq. (16), such a transformation requires replacing c
Q with Hc Q , and from (15) we need now to use H δ1 and H ρ Q H ′ . Since our specification imposed no restrictions on ρ or c Q , the question is whether the proposed lower triangular structure for ρ
Q and nonnegativity of δ1 rules out such a transformation. The following proposition establishes that it does not.
Q ρ11
ρQ Nℓ (Nℓ + 1)/2
a c
b a
Υ2 =
a b
c . a
Hence one approach for the Nℓ = 2 case would be to choose the 3 parameters a, b, and c so as to maximize the likelihood with
a = c
b a
subject to the normalization b ≤ c. This has the advantage over the traditional lower-triangular formulation in that the latter imposes additional restrictions on the dynamics (namely,
lower-triangular ρ Q rules out the possibility of complex roots) whereas the Υ formulation does not. Unfortunately, it is less clear how to generalize this to larger dimensions. If ρ Q has complex
eigenvalues, these always appear as complex conjugates. Thus if one knew for the case Nℓ = 3 that ρ Q contained complex eigenvalues, a natural normalization would be
Q ρ11 Q Q ρ = ρ21 ρ
Q 31
Q ρ23
Q 32
with ρ23 ≤ ρ32 The value of a is then uniquely pinned down by the real part of the complex eigenvalues. However, if the eigenvalues are all real, this is a more awkward form than the usual Q
Q ρ11 Q Q ρ = ρ21 Q ρ31
Q ρ22
Q ρ32
Q ρ33
with ρ11 ≥ ρ22 ≥ ρ33 . The estimation approach that we propose below will instantly reveal whether or not the lower triangular form (34) is imposing a restriction relative to the fullinformation
maximum likelihood unrestricted values. If (34) is determined not to impose a restriction, one can feel confident in using the conventional parameterization, whereas if it does turn out to be
inconsistent with the estimated unrestricted dynamics, the researcher should instead parameterize dynamics using (33). Q
3.2. Example 2: macro finance model with single lag We next examine the MF1 specification of Pericoli and Taboga (2008). Calculations similar to those for the latent factor model
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
Table 3 Mapping between structural and reduced-form parameters for the MF1 model. VAR parameter
No. of elements
Σmm Nm (Nm + 1)/2
Ω2∗ Ωm∗ ∗ ψ1m ∗ φ2m ∗ φ21 Ω1∗ ∗ φm1 ∗ φmm ∗ φ11 ∗ φ1m
Ne Nm (Nm + 1)/2 Nℓ Nm Ne Nm Ne Nℓ Nℓ (Nℓ + 1)/2 Nm Nℓ 2 Nm Nℓ2 Nℓ Nm Ne Nm Nℓ
A2 A∗m A∗1
= Am + φmm ∗
(Nm ×1)
(Nm ×Nm )
✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
(Nℓ ×1)
(Nℓ ×1)
✓ ✓ ✓
(Ne ×1)
+ φm1
(Nm ×Nℓ )
+ umt
(Nℓ ×Nm )
∗ ∗ + φ11 Yt1−1 + ψ1m ftm + u∗1t
∗ ∗ = A∗2 + φ2m ftm + φ21 Yt1 + u∗2t .
(Nℓ ×Nℓ )
(Ne ×1)
(Nℓ ×Nm )
(Ne ×Nm )
(Ne ×Nℓ )
Once again it is convenient to include the contemporaneous value of ftm in the equation for Yt1 and include contemporaneous values of both ftm and Yt1 in the equation for Yt2 in order to
orthogonalize the reduced-form residuals u∗jt ; the benefits of this representation will be seen in the next section. The mapping between structural and reduced-form parameters is given by the
following equations and summarized in Table 3 with Nf = Nm + Nℓ : 1 A∗m = cm − ρmℓ B− 1ℓ A1
1 ∗ φmm = ρmm − ρmℓ B− 1ℓ B1m
1 ∗ φm1 = ρmℓ B− 1ℓ
(40) 1 B1ℓ ρℓℓ B− 1ℓ A1
A1 = A1 + B1ℓ cℓ −
1 ∗ φ1m = B1ℓ ρℓm − B1ℓ ρℓℓ B− 1ℓ B1m
1 ∗ φ11 = B1ℓ ρℓℓ B− 1ℓ
ψ1m = B1m
1 A∗2 = A2 − B2ℓ B− 1ℓ A1
1 ∗ φ2m = B2m − B2ℓ B− 1ℓ B1m
1 ∗ φ21 = B2ℓ B− 1ℓ ∗
umt u∗1t u∗2t
Ωm ∗
= =
′ Σmm Σmm
2 Nm
✓ ✓
✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
∗ A∗1 + φ1m ftm−1
Nm Nℓ
ρℓℓ Nℓ2
✓ ✓
ρℓ m
Nℓ Nm
cQ Nm
cm Nm
✓ ✓ ✓
✓ ✓ ✓
cℓ Nℓ
3.3. Example 3: macro finance model with 12 lags Last we consider the MF12 example, for which the reduced form is ∗ ftm = φmm Ftm−1 + u∗mt
∗ ∗ ∗ Yt1 = A∗1 + φ1m Ftm−1 + φ11 Yt1−1 + ψ1m ftm + u∗1t
∗ ∗ Yt2 = A∗2 + φ2m Ftm + φ21 Yt1 + u∗2t
(2×1) ∗
φmm = [ρ1 ρ2 · · · ρ12 ] 1 A∗1 = A1 − B1ℓ ρℓℓ B− 1ℓ A1
φ1m =
1 B1ℓ ρℓℓ B− 1ℓ (3×3)
Σe Σe′
with Ω2∗ diagonal and B1 and B2 partitioned as described in Appendix A. Once again inspection of the above equations reveals that the structure is unidentified. One can see this immediately for the
case Nℓ = 3, Nm = 2, Ne = 3 simply by counting parameters — there are 69 unknown structural parameters and only 66 reducedform parameters from which they are supposed to be inferred. The
1 A∗2 = A2 − B2ℓ B− 1ℓ A1 1 ∗ φ2m = B2m − B2ℓ B− 1ℓ B1m 1 ∗ φ21 = B2ℓ B− 1ℓ ∗
umt u∗1t u∗2t
∗ Ωm =
′ Σmm Σmm = 0 0
Ω2∗ 0
0 ′
B1ℓ B1ℓ 0
0) ∗ ψ1m = B(1m
problem arises in particular from the fact that, for the example we have been discussing, the observable implications of the 30 structural parameters in ρ Q and δ1 are completely captured by ∗ ∗ ∗
the 27 values of ψ1m , φ2m , φ21 , and Ω1∗ . More fundamentally, the lack of identification would remain with this structure no matter how large the value of Ne . One can see this by verifying that
the following transformation is perfectly allowed under the stated normalization but would not change the value of any reducedform parameter: B1ℓ → B1ℓ H ′ , cℓ → Hcℓ , ρmℓ → ρmℓ H ′ , ρℓℓ → H ρℓℓ H
′ , ρℓm → H ρℓm , and B2ℓ → B2ℓ H ′ , where H could be any (Nℓ × Nℓ ) orthogonal matrix. There is also a separate identification problem arising from the fact that only maturities for which n is an
even number are included in the observation set. This means that only even powers of ρ Q appear in (15) and (16), which allows observationally equivalent sign transformations through H as well.
1 ∗ φ11 = B1ℓ ρℓℓ B− 1ℓ
0 B1ℓ B′1ℓ 0
ρm ℓ
show the reduced form to be ftm (Nm ×1)
Σe Σe′
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
Table 4 Mapping between structural and reduced-form parameters for the MF12 model. VAR parameter
Ω2∗ Ωm∗ ∗ φmm ∗ ψ1m ∗ φ21 Ω1∗ ∗ φ11 ∗ φ2m ∗ φ1m
No. of elements 2 3 48 6 6 6 9 48 72 2 3
A∗2 A∗1
✓ ✓
✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
✓ ✓ ✓ ✓ ✓ ✓ ✓
✓ ✓ ✓ ✓ ✓ ✓ ✓
✓ ✓
✓ ✓
✓ ✓
✓ ✓
✓ ✓ ✓ ✓
with Ω2∗ again diagonal and details on the partitioning of B1 and B2 in Appendix B. Table 4 summarizes the mapping between reducedform and structural parameters. Note that the only reduced-form
parameters relevant for inference about the 6 elements of δ0 and λ are the 5 values for A∗1 and A∗2 , establishing that these structural parameters are in fact unidentified. One might have thought
that perhaps δ0 could be inferred separately from the OLS regression (23), freeing up the parameters A∗1 and A∗2 for estimation solely of λ. However, this is not the case, since the short-term
interest rate is the same dependent variable in both regression (23) and in the first OLS regression from which A∗1 is inferred. Another way to see this is to note that at most what one can expect to
uncover from the 5 values of A∗1 and A∗2 are the 5 values of A1 and A2 . The first element of A1 is exactly equal to δ0 , so even if δ0 were known a priori, the most that one could infer from A1 and
A2 is 4 other parameters. Hence A1 and A2 would not be sufficient to uncover the 5 unknowns in λ even if δ0 were known with certainty. Ang and Piazzesi’s (2003) Macro Model with its proposed
identifying restrictions thus turns out to be unidentified at all points of the parameter space. In their empirical analysis, Ang and Piazzesi imposed an additional set of restrictions that were
intended to improve estimation efficiency, though as we have just seen some of these are necessary for identification. We discuss these further in Section 4.4 below. 4. Estimation The reduced-form
parameters are trivially obtained via OLS. Hence a very attractive alternative to numerical maximization of the log likelihood function directly with respect to the structural parameters θ is to let
OLS do the work of maximizing the likelihood with respect to the reduced-form parameters, and then translate these into their implications for θ . We demonstrate in this section how this can be done.
4.1. Minimum-chi-square estimation Let π denote the vector consisting of reduced-form parameters (VAR coefficients and nonredundant elements of the variance matrices), L(π; Y ) denote the log
likelihood for the entire sample, and πˆ = arg max L(π; Y ) denote the full-information-maximumlikelihood estimate. If Rˆ is a consistent estimate of the information matrix, R = −T −1 E
∂ 2 L(π ; Y ) ∂π ∂π ′
then we could test the hypothesis that π = g (θ ) for θ a known vector of parameters by calculating the usual Wald statistic T [πˆ − g (θ )]′ Rˆ [πˆ − g (θ )]
✓ ✓ ✓ ✓
✓ ✓ ✓ ✓
which would have an asymptotic χ 2 (q) distribution under the null hypothesis where q is the dimension of π . Rothenberg (1973, p. 24) noted that one could also use (52) as a basis for estimation by
choosing as an estimate θˆ the value that minimizes this chi-square statistic. Following Rothenberg (1973, pp. 24–25), we can obtain asymptotic standard errors by considering the linear approximation
g (θ ) ≃ γ + Γ θ for Γ = ∂ g (θ )/∂θ ′ |θ=θ0 and γ = g (θ0 ) − p
Γ θ0 where πˆ → π0 and we assume there exists a value of θ0 for which the true model satisfies g (θ0 ) = π0 . Define the linearized minimum-chi-square estimator θˆ ∗ as the solution to min T [πˆ − γ
− Γ θ]′ R[πˆ − γ − Γ θ], θ
that is, θˆ ∗ satisfies Γ ′ R(πˆ − γ − Γ θˆ ∗ )
0 or θˆ ∗
(Γ ′ RΓ )−1 Γ ′ R(πˆ − γ ). Since T (πˆ − π0 ) → N (0, R−1 ), it follows √ L that T (θˆ ∗ − θ0 ) → N (0, [Γ ′ RΓ ]−1 ). Hence our proposal is to approximate the variance of θˆ with T −1 (Γˆ ′ Rˆ Γˆ )
−1 for Γˆ = ∂ g (θ )/∂θ ′ |θ =θˆ . We show in Appendix E that this is in fact identical to the usual asymptotic variance for the MLE as obtained from second derivatives of the log likelihood function
directly with respect to θ . In other words, the MCSE and MLE are asymptotically equivalent, and the MCSE inherits all the asymptotic optimality properties of the MLE. If in a particular sample the
MCSE and MLE differ, there is no basis for claiming that one has better properties than the other. In the case of a just-identified model, the minimum value attainable for (52) is zero, in which case
one can without loss of generality simply minimize
[πˆ − g (θ )]′ [πˆ − g (θ )].
Note that in this case, if the optimized value for this objective is zero, then θˆ is numerically identical to the value that achieves the global maximum of the likelihood written as a function of θ
. Although θˆMCSE in this case is identical to θˆMLE , arriving at the estimate by the minimum-chi-square algorithm has two big advantages over the traditional brute-force maximization of the
likelihood function. First, one knows instantly whether θˆ corresponds to a global maximum of the original likelihood surface simply by checking whether a zero value is achieved for (53). By
contrast, under the traditional approach, one has to try hundreds of starting values to be persuaded that a global maximum has been found, and even then cannot be sure. A second advantage is that
minimization of (52) or (53) is far simpler computationally than brute-force maximization of the original likelihood function. In addition, the greater computational ease makes calculation of
small-sample confidence intervals feasible. The models considered here imply a reduced form that can be written in companion form as Yt = k + Φ Yt −1 + ΣY ut
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
for Yt the (N × 1) vector of observed variables ( yields, macro variables, and possible lags of macro variables) and ut ∼ N (0, IN ), where the parameters k, Φ , and ΣY are known functions of π . We
can then obtain bootstrap confidence intervals for θ as follows. For ( j) artificial sample j, we will generate a sequence {ut }Tt=1 of N (0, IN ) variables for T the original sample size, and then
recursively ( j) ( j) ( j) generate Yt = k(πˆ )+ Φ (πˆ )Yt −1 + ΣY (πˆ )ut for t = 1, 2, . . . , T ,
finding ρˆ Q and δˆ 1 by numerical minimization of [πˆ 2 − g2 (ρ Q , δ1 )]′ [πˆ 2 − g2 (ρ Q , δ1 )]. Step 3. The estimate of ρ can then be obtained analytically from
starting from Y0 = Y0 , the initial value from the original sample, and using the identical parameter values k, Φ , and ΣY (as implied by the original πˆ ) for each sample j. On sample j we find the
FIML estimate πˆ ( j) on that artificial sample and then calculate θˆ ( j) = arg minθ T [πˆ ( j) − g (θ )]′ Rˆ ( j) [πˆ ( j) − g (θ )]. We generate a sequence j = 1, 2, . . . , J of such samples,
from which we could calculate 95% small-sample confidence intervals for each element of θ . The small-sample standard errors for parameter i reported in the
Step 4. Numerically solve the 4 unknowns in δ0 and c Q from the 4 equations in Aˆ ∗1 and Aˆ ∗2 using (25) and (28):
( j)
following section were calculated from
J −1
j =1
j) (θˆi(,MCSE − θˆi )2
where θˆi is the MCSE estimate for the original sample (whose original FIML πˆ was used to generate each artificial sample j) and j) θˆi(,MCSE is the minimum-chi-square estimate for artificial sample
j. We now illustrate these methods and their advantages in detail using the examples of affine term structure models discussed above. 4.2. Example 1: latent factor model In the case of Ne = 1, the
latent factor model is justidentified, making application of minimum-chi-square estimation particularly attractive. The reduced-form parameter vector here is ∗ ′ ′ π = ({vec([A∗1 φ11 ] )} , [vech(Ω1∗
)]′ , ∗ ′ ′ {vec([A∗2 φ21 ] )} , [diag(Ω2∗ )]′ )′
where vec(X ) stacks the columns of the matrix X into a vector. If X is square, vech(X ) does the same using only the elements on or below the principal diagonal, and diag(X ) constructs a vector
from the diagonal elements of X . Because u∗1t and u∗2t are independent, full-information-maximum-likelihood (FIML) estimation of π is obtained by treating the Y1 and Y2 blocks separately. Since each
equation of (24) has the same explanatory variables, FIML for ∗ the ith row of [A∗1 , φ11 ] is obtained by OLS regression of Yit1 on a
ˆ 1∗ the matrix of average outer products constant and Yt1−1 , with Ω of those OLS residuals: ˆ 1∗ = T −1 Ω
∗ 1 ∗ 1 (Yt1 − Aˆ ∗1 − φˆ 11 Yt −1 )(Yt1 − Aˆ ∗1 − φˆ 11 Yt −1 )′ .
t =1
FIML estimates of the remaining elements of π are likewise obtained from OLS regressions of Yit2 on a constant and Yt1 . The specific mapping in Table 2 suggests that we can use the following
multi-step algorithm to minimize (53) for the latent factor model with Nℓ = 3 and Ne = 1. Step 1. The estimate of Σe is obtained analytically from the square ˆ 2∗ . root of Ω Step 2. The estimates of
the 9 unknowns in ρ Q and δ1 are found by numerically solving the 9 equations in (29) and (31) ∗ ˆ∗ [B2 (ρˆ Q , δˆ1 )][B1 (ρˆ Q , δˆ1 )]′ = φˆ 21 Ω1 Q ˆ Q ˆ ′ ∗ ˆ 1. [B1 (ρˆ , δ1 )][B1 (ρˆ , δ1 )] =
Specifically, we do this by letting14 πˆ 2
∗ ˆ∗ ′ = ([vec(φˆ 21 Ω1 )] , ˆ 1∗ )]′ )′ and g2 (ρ Q , δ1 ) = ([vec (B2 B′1 )]′ , [vech (B1 B′1 )]′ )′ and [vech(Ω
14 To assist with scaling for numerical robustness, we multiplied each equation in step 2 by 1200 × 1.e + 7 and those in step 4 below by 1.e + 8. If we were
(26): 1 ˆ∗ ˆ ρˆ = Bˆ − 1 φ11 B1
where Bˆ 1 is known from Step 2.
1 Q (I3 − Bˆ 1 ρˆ Bˆ − ˆ Q , δˆ1 ) = Aˆ ∗1 1 )A1 (δ0 , c , ρ 1 Q A2 (δ0 , c Q , ρˆ Q , δˆ 1 ) − Bˆ 2 Bˆ − ˆ Q , δˆ1 ) = Aˆ ∗2 . 1 A1 (δ0 , c , ρ
Although Steps 2 and 4 involve numerical minimization, these are computationally far simpler problems than that associated with traditional brute-force maximization of the likelihood function with
respect to the full vector θ . To illustrate this, we repeated the experiment described in Section 2.2 with the same 100 starting values. Whereas we saw in Section 2.2 that only one of these efforts
found the global maximum under the traditional approach, with our method all 100 converge to the global MLE in one of the 6 configurations that are observationally equivalent for the original
normalization. One of the reasons for the greater robustness is that the critical stumbling block for the traditional method – numerical search over ρ – is completely avoided since in our approach
(54) is solved analytically. Another is that c Q and uncertainties about its scale are completely eliminated from the core problem of estimation of ρ Q and δ1 . Joslin et al. (2011) have recently
proposed a promising alternative parameterization of the pure latent affine models that shares some of the advantages of our approach. They parameterize ∗ the system such that A∗1 and φ11 in (24) are
taken to be the direct objects of interest, and as in our approach, estimate these directly with OLS. But whereas our approach also uses the OLS estimates ∗ of A∗2 and φ21 in (27) to uncover the
remaining affine-pricing parameters, their approach finds these by maximizing the joint likelihood function of Y1 and Y2 . Although they report that the second step involves no numerical
difficulties, our experience is that while it offers a significant improvement over the traditional method, it is still susceptible to some of the same problems. For example, we repeated the
experiment described above with the same data set and same starting values for δ0 and the 3 unknown diagonal elements in ρ Q that appear in their parameterization as we used in the simulations
described above, starting the search for Ω1∗ from the OLS estimates as they recommend. We found that the algorithm found the global maximum in 54 out of the 100 trials,15 but got stuck in regions
with diagonal elements of ρ Q equal to unity in the others, in a similar failure of local identification that we documented above can plague the traditional approach. We applied our method directly
to the Ang and Piazzesi interest rate data described in more detail in Section 4.4 below. Table 5 reports the resulting minimum-chi-square estimates (identical in this case to the
full-information-maximum-likelihood estimates). The table also reports asymptotic standard errors in parentheses and small-sample standard errors in square brackets. The latter
minimizing (52) directly one would automatically achieve optimal scaling by using Rˆ in place of a constant k times the identity matrix as here. However, our formulation takes advantage of the fact
that the elements of πˆ can be rearranged in order to avoid inversion of B1 inside the numerical optimization, in which case Rˆ is no longer the optimal weighting matrix. The minimization was
implemented using the fsolve command in MATLAB. We also multiplied δ1 by 1000 to improve numerical robustness. 15 To assist the numerical search, we multiplied Ω ∗ by 1000. Without this scaling, 1
the searches only succeeded in finding the global maximum in 14 of the 100 trials.
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
Table 5 FIML estimates with small-sample standard errors (in square brackets) and asymptotic standard errors (in parentheses) for latent factor model fit to Ang and Piazzesi (2003) data set. Implied
λ representation parameters
Estimated Q representation parameters 0.0407 [0.0063] (0.0062)
0.0135 [0.0399] (0.0378)
0.5477 [0.1194] (0.1073)
0.9991 [0.0005] (0.0004) 0.0101 [0.0033] (0.0032) 0.0289 [0.0193] (0.0185)
0.9317 [0.0050] (0.0046) 0.2548 [0.0206] (0.0172)
0.7062 [0.0507] (0.0439)
0.9812 [0.0110] (0.0067) −0.0010 [0.0113] (0.0094) 0.0164 [0.0187] (0.0174)
0.0069 [0.0231] (0.0226) 0.8615 [0.0343] (0.0309) 0.1856 [0.0289] (0.0277)
0.0607 [0.0303] (0.0294) 0.1049 [0.0331] (0.0318) 0.6867 [0.0353] (0.0350)
0.0046 [0.0011] (0.0011)
1.729E−4 [2.31E−5] (2.28E−5)
1.803E−4 [3.80E−5] (3.74E−5)
4.441E−4 [1.75E−5] (1.62E−5)
−0.5477 [0.1194]
−0.0178 [0.0109]
0.0069 [0.0231]
0.0607 [0.0303]
0.1049 [0.0331]
−0.0195 [0.0449]
9.149E−5 [2.81E−6] (2.70E−6)
were calculated by applying our method to each of 1000 separate data sets, each generated from the vector autoregression estimated from the original data set. Note that the fact that we can verify
with certainty that the global maximum has been found on each of these 1000 simulated data sets is part of what makes calculation of small-sample standard errors feasible and attractive. Finding the
FIML estimate on 1000 data sets takes about 90 s on a PC. For this example, we find that the asymptotic standard errors provide an excellent approximation to the true small-sample values. Although
our original inference was conducted in terms of a Q representation, we report the implied λ representation values in the right-hand columns of Table 5, since that is the form in which parameter
estimates are often reported for these models. Our suggestion is that the approach we illustrate here, of beginning with a completely unrestricted model to see which parameters appear to be most
significant, has many advantages over the traditional approach16 in which sundry restrictions are imposed at a very early stage, partly in order to assist with identification and estimation. 4.3.
Example 2: macro finance model with single lag We also applied this procedure to estimate parameters for our MF1 example using a slightly different quarterly data set from Pericoli and Taboga. We
used constant-maturity Treasury yields as of the first day of the quarter, dividing the numbers as usually reported by 400 in order to convert to units of quarterly yield on which formulas such as
(14) are based. We estimated inflation from
16 See for example Duffee (2002) and Duarte (2004).
the 12-month percentage change in the CPI and the output gap by applying the Hodrick–Prescott filter with λ = 1600 to 100 times the natural log of real GDP. Data run from 1960:Q1 to 2007:Q1 and were
obtained from the FRED database of the Federal Reserve Bank of St. Louis. Q If we impose 3 further restrictions on ρℓℓ relative to the original formulation, the MF1 model presented above would be
just-identified in terms of parameter count, for which we would logically again simply try to invert the reduced-form parameter estimates to obtain the FIML estimates of the structural parameters.
Once again orthogonality of the residuals across the three blocks of (35) through (37) means FIML estimation can be done on each block separately, and within each block implemented by OLS equation by
equation. Our estimation procedure on this system is then as follows. Step 1. The ftm and Yt2 variance parameters are obtained analytically
ˆ mm from the Cholesky factorization of Ω ˆ m∗ and from (48), that is, Σ ˆ e from the square root of Ω ˆ 2∗ . Σ
Step 2. Using (44) and (46)–(48), choose the values of ρ Q and δ1 so as to solve the following equations numerically17 : ∗ ˆ 1m B1m (ρ Q , δ1 ) = ψ ∗ ∗ ˆ∗ B2m (ρ Q , δ1 ) = φˆ 2m + φˆ 21 ψ1m
ˆ 1∗ ) vech{[B1ℓ (ρ Q , δ1 )][B1ℓ (ρ Q , δ1 )]′ } = vech(Ω ∗ ˆ∗ [B2ℓ (ρ Q , δ1 )][B1ℓ (ρ Q , δ1 )]′ = φˆ 21 Ω1 .
17 To improve accuracy of the numerical algorithm, we multiplied the last two equations by 400 and then the whole set of equations by 1.e+7. The parameter δ1 was also scaled by 100.
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
Table 6 FIML estimates and asymptotic standard errors for the MF1 model. cQ c
δ0 δ1
0.0306 (0.5291) −0.1028 (0.4951) 0.7725 (0.2895) −0.3933 (0.3857) 0.2036 (0.3691) −0.1035 (0.2083) 0.1001 (0.6387) 0.9461 (0.0325) 0.0002 (0.0310) 0.0932 (0.3903) −0.0827 (0.1190) 0.1220 (0.2649)
−0.0082 (0.0062) 6.86E−4 (2.88E−4) 2.02E−4 (1.29E−5) 0.6996 (0.0448) 0.1174 (0.0604)
−0.0458 (1.1382) 0.2414 (0.4672) 0.2933 (0.2801) 1.2411 (0.3706) −0.2046 (0.3852) 0.1035 (0.2373) −0.1415 (0.6661) 0.2203 (0.0508) 0.8735 (0.0487) 0.1683 (0.1686) 0.0852 (0.1295) 0.0449 (0.5693)
(7.2480) 0.0436 (1.0688) 0.2376 (0.2437) 0.8579 (0.1435) −0.0054 (0.5723) 0.0223 (0.1215) −0.0428 (0.2005) −0.0435 (0.1618) 0.8203 (0.6723) −0.1110 (0.3430) 0.0756 (1.0167)
(1.4128) −0.2138 (0.1332) −0.0197 (0.1470) 0
2.4063 (4.4009) −0.3565 (0.3900) −0.0574 (0.5579) 0
0.8826 (0.0672) 0.0303 (0.0810) −0.0210 (0.0456) −0.0233 (0.0538) −0.0844 (0.2453) 0.8715 (0.1127) 0.0555 (0.1468)
−0.1926 (0.1464) 0.8826 (0.0672) 0.0639 (0.1531) −0.0517 (0.1555) 0.1378 (1.0303) 0.0978 (0.2066) 0.4728 (0.7418)
1.02E−3 (3.03E−4) 1.87E−4 (1.19E−5) 0
2.03E−3 (2.35E−3) 1.09E−4 (6.97E−6)
1.92E−4 (1.33E−3)
7.67E−4 (6.31E−3)
36 and 60 months from CRSP monthly treasury file, each divided by 1200 to quote as monthly fractional rates. We obtained two groups of monthly US macroeconomic key indicators, seasonally adjusted if
applicable, from Datastream. The first group consists of various inflation measures which are based on the CPI, the PPI of finished goods, and the CRB Spot Index for commodity prices. The second
group contains variables that capture real activity: the Index of Help Wanted Advertising, Unemployment Rates, the growth rate of Total Civilian Employment and the growth rate of Industrial
Production. All growth rates and inflation rates are measured as the difference in logs of the monthly index value between dates t and t − 12. We first normalized each series separately to have zero
mean and unit variance, then extracted the first principal component of each group, designated the ‘‘inflation’’ and ‘‘real activity’’ indices, respectively, with each index having zero mean and unit
variance by construction. The sample period for yields is from December 1952 to December 2000, and that for the macro indices is from January 1952 to December 2000. We assume that 1-, 12- and
60-month yields are priced exactly, and 3- and 36month yields are priced with error (Ne = 2). We use the Ang and Piazzesi (2003) Macro Model with their additional proposed zero restrictions to
illustrate minimum-chi-square estimation for an overidentified model. The reduced-form Eqs. (49)–(51) form 3 independent blocks. If we interpret Ytm = ftm , we can write the structure of block i for
i = 1, 2, m as Yti
(qi ×1) ∗
= Πi′
xit + u∗it
(qi ×ki ) (ki ×1) ∗
(qi ×1)
uit ∼ N (0, Ωi ).
0.6617 (0.0424)
The information matrix for the full system of reduced-form parameters is
We initially tried to solve this system for ρℓℓ of the lowertriangular form (34), but found no solution exists, indicating that Q the FIML estimate of ρℓℓ has complex roots. We accordingly Q
reparameterized ρℓℓ in the form (33), for which an exact solution was readily obtained. Q
Step 3. From these estimates one then analytically can calculate ∗ ∗ ∗ ∗ ρˆ mℓ , ρˆ mm , ρˆ ℓℓ , and ρˆ ℓm from φˆ m1 , φˆ mm , φˆ 11 , and φˆ 1m , respectively. Step 4. Since cm and cℓ are
unrestricted, the values of δ0 and c can be inferred solely from A∗2 by numerical solution of (45): Q
1ˆ Q A2 (δ0 , c Q , ρˆ Q , δˆ 1 ) − Bˆ 2ℓ Bˆ − ˆ Q , δˆ1 ) = Aˆ ∗2 . 1ℓ A1 (δ0 , c , ρ
Step 5. We then can calculate the remaining parameters analytically using (38) and (41): 1ˆ cˆm = Aˆ ∗m + ρˆ mℓ Bˆ − 1ℓ A1 1 ˆ∗ 1ˆ ˆ ˆ ˆ ℓℓ Bˆ − cˆℓ = Bˆ − 1ℓ (A1 − A1 + B1ℓ ρ 1ℓ A1 ). Table 6
reports the FIML estimates obtained by the above algorithm along with asymptotic standard errors. These estimates would cause one to be cautious about the proposed model — standard errors are quite
large, and 3 eigenvalues of the estimated ρ Q matrix are outside the unit circle. We found small-sample standard errors much more difficult to calculate for this example, in part because the value of
ρ Q associated with a given πˆ ( j) can have anywhere from zero to four complex eigenvalues, with Q eigenvalues of the ρℓℓ submatrix sometimes greater than 2 in modulus. Our interpretation is that
further restrictions on the interaction between the macro and latent factors could be helpful for this class of models.
4.4. Example 3. Macro finance model with 12 lags Here our data set follows Ang and Piazzesi (2003) as closely as possible, using zero-coupon bond yields with maturities of 1, 3, 12,
Rˆ m ˆR = 0 0
0 Rˆ 1 0
0 0 ˆR2
where as in Magnus and Neudecker (1988, p. 321)
ˆ ∗−1 ⊗ T −1 Ω Rˆ i = i 0
T t =1
xit xit
0 ′
ˆ i∗−1 ⊗ Ω ˆ i∗−1 )Dqi (1/2)Dqi (Ω
for DN the N 2 × N (N + 1)/2 duplication matrix satisfying DN vech (Ω ) = vec (Ω ). The structural parameters Σe appear only in the last half of the third block, no other parameters appear in this
block, and these 2 structural parameters are just-identified by the 2 diagonal elements of Ω2∗ . Thus the minimum-chi-square estimates of Σe are obtained immediately from the square roots of diagonal
elements ˆ 2∗ . The structural parameters ρ1 , . . . , ρ12 appear directly in the of Ω first block and, through ρ Q , in the second and third blocks as well, so FIML or minimum-chi-square estimation
would exploit this. However, to reduce dimensionality, we follow Ang and Piazzesi in replacing ρ2 , . . . , ρ12 where they appear in ρ Q with the OLS estimates ρˆ 2 , . . . , ρˆ 12 . In order to try
to replicate their setting as closely as possible, we also follow their procedure of imposing δˆ 1m on the basis of OLS estimation of (23). Hence the minimum-chisquare analog to their problem is to
minimize an expression of the form of (52) with ˆ 1 )]′ , [vech(Ω ˆ ∗ )]′ , [vec(Π ˆ 2 )]′ )′ πˆ = ([vec(Π (55) 1 T ′ ˆ ∗−1 ⊗ T −1 x1t x1t 0 0 Ω 1 t =1 ′ ∗−1 ∗−1 ˆ ˆ Rˆ = 0 (1/2)
D3 (Ω1 ⊗ Ω1 )D3 0 T ′ ∗−1 − 1 ˆ 0 0 Ω2 ⊗T x2t x2t t =1
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
Table 7 Three local minima for the chi-square objective function for the restricted MF12 specification. Global
0.9921 0 0 1.11E−04 −0.0409 2.8783 −6.1474 −0.0048 −0.0445 −0.0322 462.15 20703 14
δ1ℓ λℓ Λmm Λℓℓ
χ2 LLF Frequency
0 0.9462 −0.0034 4.27E−04 0 0.4303 −0.8744 0 0 0
0 0 0.9021 1.98E−04 0
0 0.2910 0.3687
0.9918 0 0 1.09E−04 −0.0441 −0.3430 1.7675 −0.0045 −0.0474 −0.0331 530.69 20668 84
x1t = (1, Ftm−′1 , Yt1−′ 1 , ftm′ )′ x2t = ′
ˆ = Π i
(1, Ftm′ , Yt1′ )′
′ Yti xit
t =1
ˆ ∗ = T −1 Ω 1
′ xit xit
−1 for i = 1, 2
t =1
ˆ x1t )′ ˆ x1t )(Yt1 − Π (Yt1 − Π 1 1
t =1
[ˆu2t (1)]2 . ˆ ∗ = T −1 . Ω 2 . t =1 T
··· . . .
. . .
[ˆu2t (Ne )]2
ˆ 2 x2t . with uˆ 2t ( j) the jth element of Yt2 − Π Ang and Piazzesi also imposed a further set of restrictions on parameters, setting parameters with large standard errors as estimated in their
first stage to zero. Their understanding was that the purpose of these restrictions was to improve efficiency, though we saw in Section 3.3 that some of these restrictions are in fact necessary in
order to achieve identification. Our purpose here is to illustrate the minimum-chi-square method on an overidentified structure, and we therefore attempt to estimate their final proposed structure
using our method. The additional parameters that Ang and Piazzesi fixed at zero include the (2, 1) and (3, 1) elements of ρℓℓ (which recall was already lower triangular), the (1, 2), (2, 2), (3, 2)
and (1, 3) elements of Λℓℓ , both elements in λm , and the 2nd and 3rd elements of λℓ . Our goal is then to minimize (52) with respect to the 17 remaining unknown parameters, 1 in λℓ , 4 in Λmm , 5
in Λℓℓ , 4 in ρℓℓ , and 3 in δ1ℓ .18 The results of this estimation for 100 different starting values are reported in Table 7. Our procedure uncovered three local minima to the objective function.
The parameters we report as Local1 correspond to the values reported in Table 6 of Ang and Piazzesi. The small differences between our estimates and theirs are due to some slight differences between
the data sets and the fact that, in an overidentified structure, the minimum-chisquare and maximum-likelihood estimates are not numerically
18 We made one other slight change in parameterization that may be helpful. Since Λℓℓ always enters either the minimum-chi-squared calculations or the original maximum likelihood estimation in the
form of high powers of the matrix Q ρℓℓ = ρℓℓ − Λℓℓ , the algorithms will be better behaved numerically if the
unknown elements of ρℓℓ rather than those of Λℓℓ are taken to be the object of interest. Specifically, for this example we implemented this subject to the proposed restrictions by parameterizing Q
θ1 ρℓℓ = 0
θ2 θ3
θ5 ρℓℓ = θ6 θ8
θ2 θ3
θ7 , θ9
and then translated back in terms of the implied values for Λℓℓ for purposes of reporting values in Table 7.
Local2 0 0.9412 −0.0095 4.30E−04 0 0.1474 −0.0607 0 0 0
0 0 0.7712 1.92E−04 0
0 0.2881 0.2110
0.9920 0 0 1.22E−04 −0.0388 1.5633 16.0624 −0.0056 −0.0423 −0.0299 503.10 20679 2
0 0.9437 −0.0032 4.26E−04 0 0.1341 7.4290 0 0 0
0 0 0.9401 1.92E−04 0
0 0.3000 0.4120
identical. Our procedure establishes that the estimates reported by Ang and Piazzesi in fact represent only a local maximum of the likelihood — both the estimates we report as Local2 and Global
achieve substantially higher values for the log likelihood function relative to Local1. Moreover, the differences between estimates in terms of the pricing of risk are substantial. In the original
reported Ang and Piazzesi estimates, an increase in inflation lowers the price of inflation risk and raises the price of output risk, whereas the values implied by Global reverse these signs. This is
consistent with their finding that the prices of observable macro risk behave very differently between their Macro Model and Macro Lag Model specifications — we find they also differ substantially
across alternative local maxima of the log likelihood function even within their single Macro Model specification. Note that the large prices of risk for these higher local maxima can make them easy
to miss with conventional estimation and conventional starting values of zero price of risk. Another benefit of the minimum-chi-square estimation is that the value for the objective function itself
gives us an immediate test of the various overidentifying restrictions. There are 152 parameters in the reduced form vector π in (55). The 17 estimated elements of θ then leave 135 degrees of
freedom. The 1% critical value for a χ 2 (135) variable is 176. Thus the observed minimum value for our objective function (462.15) provides overwhelming evidence that the restrictions imposed by the
model are inconsistent with the observed data. 5. Conclusion There are considerable benefits from describing affine term structure models in terms of their implications for the reducedform
representation of the data, which for a popular class of models is simply a restricted Gaussian vector autoregression. In this paper we used this representation to develop an approach to
characterizing identification that has not previously been used for affine term structure models. We demonstrated that three popular canonical representations are in fact not identified, and showed
how convergence to an unidentified region of the parameter space can complicate numerical search. A second and separate contribution of the paper was to propose inferring structural parameters from
the unrestricted OLS estimates by the method of minimum-chi-square estimation, which is an approach to parameter estimation that again has not previously been used for affine term structure models.
We demonstrated that among other benefits, this method is asymptotically equivalent to maximum likelihood estimation and can in some cases make it feasible to calculate small-sample standard errors,
to know instantly whether estimates represent a global or only a local optimum, and to recognize whether a given structure is unreasonably restricting the class of possible models.
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
By missing these insights, previous researchers have instead often imposed arbitrary restrictions in order to obtain estimates and in other cases failed to find the true global maximum of the
likelihood function. By showing how to recognize an unidentified structure, greatly reducing the computational burden of estimation, and providing an immediate specification test of any proposed
restrictions, we hope that our methods will help to make these models a more effective tool for research in macroeconomics and finance. Appendix A. Log likelihood function for the MF1 specification
The coefficients relating Yt1 and Yt2 to macro and latent factors can be partitioned as ′
b20 ′ B1ℓ b (3×3) 40 = b′ B2ℓ 8 (3×3) b′
(3×2) B2m
Σ mm (2×2) 0 . Σ = .. 0
··· .. . ··· ···
.. .
.. .
.. .
Ang and Piazzesi assumed that the risk associated with lagged macro factors is not priced and imposed the restriction in a λ representation that the values in (9) are characterized by λ = (λ′m ,
0′22×1 , λ′ℓ )′ and
··· .. . ··· ···
Λ = (27×27)
.. .
.. .
.. .
.. .
From (10) and (11) it follows that the parameters in (12) are given Q′ by c Q = (cm , 0′22×1 , cℓQ ′ )′ and
12 ′
b28 for bn given by (15). The conditional density for the tth observation is then 1 f ( f m , f ℓ , ue |f m , f ℓ , ue ) f ( ftm , Yt |ftm−1 , Yt −1 ) = | det( J )| t t t t −1 t −1 t −1 where f ( ftm
, ftℓ , uet |ftm−1 , ftℓ−1 , uet−1 )
= f ( ftm |ftℓ−1 , ftm−1 )f ( ftℓ , |ftℓ−1 , ftm−1 )f (uet )
ρQ 1 (2×2) I2 0 . Q ρ = .. 0 0
0 I2
.. .
··· ··· .. .
··· ··· ···
0 I2 0
.. .
f ( ftm |ftℓ−1 , ftm−1 ) = φ( ftm ; cm + ρmm ftm−1 + ρmℓ ftℓ−1 , Σmm Σmm )
.. .
1 1 m ftℓ = B− 1ℓ (Yt − A1 − B1m ft )
B1ℓ J = B2ℓ
L(θ; Y ) =
log f (
, |
Yt ftm−1
I2 0
0 I2
.. .
··· ··· .. .
··· ··· ···
0 I2 0
.. .
.. .
.. .
.. .
B1ℓ (3×3)
B2ℓ (2×3)
(3×22) (1)
b12 ′ = b60 b′ ′
b36 (1)
where for example B1m are the coefficients relating the observed yields to 11 lags of the 2 macro factors. The conditional density for this case is then 1
The P dynamics can again be represented as a special case ′ of (1) by using the companion form Ft = (Ftm , ftℓ′ )′ , Ftm = m′ ′ ′ ′ m′ ′ ( ft , . . . , ft −11 ) , c = (024×1 , cℓ ) , and
(3×2) (0)
f ( ftm |Ftm−1 ) = φ( ftm ; ρ1 ftm−1 + ρ2 ftm−2 + · · · + ρ12 ftm−12 , Σmm Σmm )
Appendix B. Log likelihood for the MF12 specification
.. . . 0 0 Q ρℓℓ
f ( f m , f ℓ , ue |F m , f ℓ , ue ) | det( J )| t t t t −1 t −1 t −1 f ( ftm , ftℓ , uet |Ftm−1 , ftℓ−1 , uet−1 ) = f ( ftm |Ftm−1 )f ( ftℓ |ftℓ−1 )f (uet )
, Yt −1 )
as calculated using the above formulas.
f ( ftm , Yt |Ftm−1 , Yt −1 ) =
t =1
For the Q representation and our Nℓ = 3, Nm = 2, Ne = 3 example, there are 25 unknown elements in ρ , 25 in ρ Q , 5 in c, 2 in c Q , 5 in δ1 , 1 in δ0 , 3 in Σmm , and 3 in Σe . The traditional
approach is to arrive at estimates of these 69 parameters by numerical maximization of
Ang and Piazzesi used Nℓ = 3 and Ne = 2, assuming that the 1-, 12-, and 60-month yields were priced without error, while the 3- and 36-month yields were priced with error, so that the B matrices can
be written in partitioned form as
uet = Σe−1 (Yt2 − A2 − B2m ftm − B2ℓ ftℓ )
f ( ftℓ |ftℓ−1 , ftm−1 ) = φ( ftℓ ; cℓ + ρℓm ftm−1 + ρℓℓ ftℓ−1 , INℓ ) f (uet ) = φ(uet ; 0, INe )
.. .
.. .
f ( ftℓ |ftℓ−1 ) = φ( ftℓ ; ρℓℓ ftℓ−1 , INℓ ) f (uet ) = φ(uet ; 0, INe )
(0) (1) m 1 1 ftℓ = B− 1ℓ (Yt − A1 − [B1m B1m ]Ft ) (0)
uet = Σe−1 (Yt2 − A2 − [B2m B2m ]Ftm − B2ℓ ftℓ )
J =
B1ℓ B2ℓ
Appendix C. Proof of Proposition 1 Write
H =
x . y
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
Since columns of H have unit length, without loss of generality we can write (u, v) = (cos θ , sin θ ) for some θ ∈ [−π , π]. The second column of H is also a point on the unit circle, for which
orthogonality with the first column also requires it to be located on the line ux + v y = 0, with the two solutions x = −v, y = u and x = v , y = −u. Thus the set of orthogonal (2 × 2) matrices can
be represented as either rotations
H1 (θ) =
cos θ sin θ
− sin θ cos θ
or reflections
H2 (θ) =
cos θ sin θ
sin θ . − cos θ
The condition that the (1, 2) element of H1 (θ )ρ H1 (θ )′ be zero requires
(ρ − ρ ) sin θ cos θ − ρ sin θ = 0. Q 11
Q 22
Q 21
One way this could happen is if sin θ = 0. But this would imply either H1 (−π /2) = −I2 , violating the sign requirement H δ1 ≥ 0, or else the identity transformation H1 (π /2) = I2 . Hence the
condition of interest is Q Q Q (ρ11 − ρ22 ) cos θ − ρ21 sin θ = 0.
If θ1 satisfies condition (C.3), then one can show H1 (θ1 )ρ H1 (θ1 ) = ′
Q ρ22 Q ρ21
0 Q ρ11
Q Q Q (ρ11 − ρ22 ) sin θ cos θ + ρ21 sin2 θ = 0
Q Q Q (ρ11 − ρ22 ) cos θ + ρ21 sin θ = 0.
For any θ2 satisfying (C.4),
H2 (θ2 )ρ H2 (θ2 ) =
Q ρ22
Q −ρ21
Q ρ11
cos θ 0 sin θ
H (θ ) =
− sin θ
0 cos θ
for θ satisfying ρ31 sin θ = (ρ11 − ρ33 ) cos θ , which swaps the (1, 1) and (3, 3) elements of ρ Q . Exactly one of the 4 possible matrices performing this swap will preserve positive H δ1 . There
are Nℓ choices for the value one can put into the (1, 1) element as Q a result of such swaps, Nℓ − 1 remaining choices for ρ22 , or a total of Nℓ ! permutations. Q
Appendix D. Proof of Proposition 2 Consider first rotations H1 (θ ) as specified in (C.1). The (1, 1) element of Υ = H1 (θ )ρ Q [H1 (θ )]′ is seen to be h1 (θ ) = ρ11 cos2 θ − (ρ21 + ρ12 ) cos θ sin
θ + ρ22 sin2 θ . Q
Now consider the nonnegativity condition. Since cot θ is monotonic on (0, π ) and repeats the pattern on (−π , 0), there are two values θ ∈ [−π , π] satisfying (C.3). We denote the first by θ1 ∈ [0,
π], in which case the second is given by θ1 − π . The two solutions to (C.4) can then be written as −θ1 and −θ1 + π . We are then looking at 4 possible transformations: cos θ1 sin θ1
− sin θ1 δ11 H1 (θ1 )δ1 = cos θ1 δ12 ∗ δ11 cos θ1 − δ12 sin θ1 δ = ≡ 11 ∗ δ11 sin θ1 + δ12 cos θ1 δ12 ∗ − cos θ1 sin θ1 δ11 −δ11 H1 (θ1 − π )δ1 = = ∗ − sin θ1 − cos θ1 δ12 −δ12
∗ cos θ1 − sin θ1 δ11 δ11 H2 (−θ1 )δ1 = = ∗ − sin θ1 − cos θ1 δ12 −δ12 ∗ − cos θ1 sin θ1 δ11 −δ11 H2 (−θ1 + π ) = = . ∗ sin θ1 cos θ1 δ12 δ12
Apart from the knife-edge condition δ11 = 0 or δ12 = 0 (which would require a particular relation between the elements of the original ρ Q and δ1 ), one and only one of the above four vectors would
have both elements positive, and this matrix produces H ρ Q H ′ of one of the two specified forms. ∗
of h1 (θ ) is ρ11 , whereas at θ = π /2, it is instead equal to ρ22 . Since h1 (θ ) is continuous in θ , there exists a value θ1 such that h1 (θ1 ) is Q Q exactly halfway between ρ11 and ρ22 . Notice
next that the eigenvalues of Υ = H ρ Q H ′ are identical to those of ρ Q , and hence the trace of Υ (which is the sum of the eigenvalues) is the same as the trace of ρ Q : Q
Thus since Υ11 = (ρ11 + ρ22 )/2, then also Υ22 = (ρ11 + ρ22 )/2. Hence H1 (θ1 )ρ Q [H1 (θ1 )]′ is of the desired form with elements along the principal diagonal equal to each other. As in the proof
of Proposition 1, H1 (θ1 − π ) is the other rotation that works. Alternatively, H could be a reflection matrix H2 (θ ) as in (C.2), for which the (1,1) element of H2 (θ )ρ Q [H2 (θ )]′ is found to
be: Q
Q Q Q Q ρ11 cos2 θ + (ρ21 + ρ12 ) cos θ sin θ + ρ22 sin2 θ .
We claim first that there exists a θ ∈ [0, π /2] such that h1 (θ ) Q Q equals (ρ11 + ρ22 )/2. To see this, note that at θ = 0, the value
for which the solution sin θ = 0 would violate H2 (θ )δ1 ≥ 0, leaving the sole condition
For Nℓ > 2, one can construct a family of such orthogonal matrices, for example using a matrix like
Q Q Υ11 + Υ22 = ρ11 + ρ22 .
Alternatively for H2 (θ ) we have the requirement
Q Q This turns out to equal (ρ11 + ρ22 )/2 at θ2 = −θ1 and θ2 = −θ1 + π . As in the proof of Proposition 1, in the absence of knife-edge conditions on δ1 , exactly one of the transformations H1 (θ1
), H1 (θ1 − π ), H2 (−θ1 ), H2 (−θ1 + π ) preserves positivity of H δ1 , establishing existence.
For uniqueness, suppose we have found a transformation HρQ H′ = Υ of the desired form. Then any alternative ˜ Υ H˜ ′ for transformation H ∗ ρ Q H ∗′ can equivalently be written as H ˜ = H ∗ . Hence
the result will be established if we can show that HH ˜ Υ H˜ ′ that keep the diagonal elements the only transformations H ˜ δ1 ≥ 0 are the identity and equal to each other and also satisfy H
transposition. Since a = Υ11 = Υ22 and since the transformation preserves eigenvalues, we know that if the (1, 1) and (2, 2) elements ˜ Υ H˜ ′ are equal to each other, each must again be the value a.
of H ˜ = H1 (θ ) for some θ , we require as in (D.1) that Thus if H a cos2 θ − (Υ21 + Υ12 ) cos θ sin θ + a sin2 θ = a which can only be true if
(Υ21 + Υ12 ) cos θ sin θ = 0.
This requires either cos θ = 0, sin θ = 0, or Υ21 = −Υ12 . For cos θ = 0, H1 (θ )δ1 would violate the nonnegativity condition, while sin θ = 0 corresponds to H1 (θ ) = ±I2 . Finally, if Υ21 = −Υ12 ,
one can verify that H1 (θ )Υ [H2 (θ )]′ = Υ for all θ . Alternatively, for reflections applied to a matrix Υ for which
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331
a = Υ11 = Υ22 , we see as in (D.2) that a cos2 θ + (Υ21 + Υ12 ) cos θ sin θ + a sin2 θ = a, which again can only hold for θ satisfying (D.3). In this case, sin θ = 0 is ruled out by the constraint H2
(θ)δ1 ≥ 0, but for cos θ = 0 we have H2 (π/2) =
H2 (−π /2) =
0 −1
−1 0
Both of these give H Υ H ′ = Υ ′ but only H2 (π /2)δ1 > 0. Finally, when Υ21 = −Υ12 , then H2 (θ )Υ [H2 (θ )]′ = Υ ′ for any θ . Thus ˜ Υ H˜ ′ that preserves equality of diagonal the only
transformation H elements is transposition, as claimed. Appendix E. Asymptotic standard errors of MLE Here we demonstrate that under the usual regularity conditions,
∂ 2 L(π (θ ); Y ) = −T Γ ′ RΓ ∂θ ∂θ ′ θ =θ0
∂π (θ ) Γ = ∂θ ′ θ=θ0 ∂ 2 L(π; Y ) −1 R = −T E . ∂π ∂π ′ π =π0 Note
∂θ1 ∂ L(π(θ ); Y ) ∂ L(π ) ∂ L(π ) .. .. = ··· . . ∂θ ′ ∂π1 ∂πq ∂π q ··· ∂θ1 ∂ 2 L(π(θ ); Y ) ∂π1 ∂πq = ··· ∂θi ∂θ ′ ∂θi ∂θi 2 ∂ L(π ) ∂ 2 L(π ) · · · ∂π1 ∂π1 ∂π1 ∂πq
. . .. .. .. × . 2 ∂ 2 L(π ) ∂ L(π ) ··· ∂π ∂π ∂πq ∂πq ∂π q 1 ∂π1 1 ··· ∂θ1 ∂θN . .. .. . × . . . ∂π ∂πq q ··· ∂θN ∂θ1 ∂ L(π ) ∂ L(π ) ··· + ∂πq 1 ∂π ∂ 2 π1
∂ 2 π1 ∂θ ∂θ · · · ∂θ ∂θ N i 1 i .. .. . × ... . . 2 ∂ 2 πq ∂ πq ··· ∂θ1 ∂θi ∂θN ∂θi
∂π1 ∂θN .. . ∂π
i θ=θ0
But the usual regularity conditions imply E {∂ L(π )/∂πj |π=π0 } = 0, so the second term in (E.2) vanishes. Stacking the row vectors represented by the first term into a matrix produces
··· .. . ···
∂ 2 L(π ) ∂π1 ∂πq π =π0 .. Γ . 2 ∂ L(π ) ∂πq ∂πq π =π0
as claimed. References
Evaluate (E.1) at θ = θ0 , take expectations with respect to the distribution of Y , and use the fact that Γ is not a function of Y :
∂ 2 L(π (θ ); Y ) ∂θi ∂θ ′ θ =θ0
i θ=θ0
∂ 2 L(π (θ ); Y ) E ∂θ ∂θ ′ θ=θ0 ∂ 2 L(π ) ∂π1 ∂π1 π=π 0 .. = Γ ′E . ∂ 2 L(π ) ∂πq ∂π1 π=π0
∂ 2 L(π ) ∂ 2 L(π ) · · · ∂π1 ∂π1 π=π ∂π1 ∂πq π=π0 0 ′ ′ . . .. .. .. = ei Γ E Γ . 2 ∂ 2 L(π ) ∂ L (π ) ··· ∂πq ∂π1 π=π0 ∂πq ∂πq π=π0
∂ L(π ) ∂ L(π ) · · · + E ∂π1 π =π0 ∂πq π=π0 ∂ 2 π ∂ 2 π1 1 · · · ∂θ1 ∂θi θ=θ ∂θN ∂θi θ=θ0 0 . . .. .. .. × . . 2 ∂ 2 πq ∂ π q ··· ∂θ ∂θ
∂θ ∂θ
Aït-Sahalia, Yacine, Kimmel, Robert L., 2010. Estimating affine multifactor term structure models using closed-form likelihood expansions. Journal of Financial Economics 98. Ang, Andrew, Dong, Sen,
Piazzesi, Monika, 2007. No-Arbitrage Taylor Rules. National Bureau of Economic Research, Working Paper no. 13448. Ang, Andrew, Piazzesi, Monika, 2003. A no-arbitrage vector autoregression of term
structure dynamics with macroeconomic and latent variables. Journal of Monetary Economics 50, 745–787. Ang, Andrew, Piazzesi, Monika, Wei, Min, 2006. What does the yield curve tell us about GDP
growth. Journal of Econometrics 131, 359–403. Bauer, Michael D., 2011. Term Premia and the News. Federal Reserve Bank of San Francisco, Working paper. Beechey, Meredith J., Wright, Jonathan H., 2009.
The high-frequency impact of news on long-term yields and forward rates: is it real? Journal of Monetary Economics 56, 535–544. Bekaert, Geert, Cho, Seonghoon, Moreno, Antonio, 2010. New keynesian
macroeconomics and the term structure. Journal of Money, Credit, and Banking 42, 33–62. Chamberlain, Gary, 1982. Multivariate models for panel data. Journal of Econometrics 18, 5–46. Chen, Ren-Raw,
Scott, Louis, 1993. Maximum likelihood estimation for a multifactor equilibrium model of the term structure of interest rates. The Journal of Fixed Income 3, 14–31. Christensen, Jens H.E., Diebold,
Francis X., Rudebusch, Glenn D., 2011. The affine arbitrage-free class of Nelson–Siegel term structure models. Journal of Econometrics 164, 4–20. Christensen, Jens H.E., Lopez, Jose A., Rudebusch,
Glenn D., 2010. Inflation expectations and risk premiums in an arbitrage-free model of nominal and real bond yields. Journal of Money, Credit, and Banking 42, 143–178. Christensen, Jens H.E., Lopez,
Jose A., Rudebusch, Glenn D., 2009. Do Central Bank Liquidity Facilities Affect Interbank Lending Rates? Working Paper 2009–13, Federal Reserve Bank of San Francisco. Cochrane, John H., Piazzesi,
Monika, 2009. Decomposing the Yield Curve. AFA 2010 Atlanta Meetings Paper. Collin-Dufresne, Pierre, Goldstein, Robert S., Jones, Christopher S., 2008. Identification of maximal affine term structure
models. Journal of Finance 63, 743–795. Dai, Qiang, Singleton, Kenneth J., 2002. Expectation puzzles, time-varying risk premia, and affine models of the term structure. Journal of Financial Economics
63, 415–441. Dai, Qiang, Singleton, Kenneth J., 2000. Specification analysis of affine term structure models. The Journal of Finance 55, 1943–1978.
J.D. Hamilton, J.C. Wu / Journal of Econometrics 168 (2012) 315–331 Duarte, Jefferson, 2004. Evaluating an alternative risk preference in affine term structure models. Review of Financial Studies 17,
379–404. Duffee, Gregory R., 2002. Term premia and interest rate forecasts in affine models. The Journal of Finance 57, 405–443. Duffee, Gregory R., 2011. Forecasting with the Term Structure: The
Role of NoArbitrage Restrictions. Working Paper, Johns Hopkins University. Duffee, Gregory R., Stanton, Richard H., 2008. Evidence on simulation inference for near unit-root processes with
implications for term structure estimation. Journal of Financial Econometrics 6, 108–142. Duffie, Darrell, Kan, Rui, 1996. A yield-factor model of interest rates. Mathematical Finance 6, 379–406.
Fisher, Franklin M., 1966. The Identification Problem in Econometrics. McGraw-Hill, New York. Fisher, R.A., 1924. The conditions under which χ 2 measures the discrepancey between observation and
hypothesis. Journal of the Royal Statistical Society 87, 442–450. Gallant, A. Ronald, Tauchen, George E., 1992. A nonparametric approach to nonlinear time series analysis: estimation and simulation.
In: David, Brillinger, Peter, Caines, John, Geweke, Emanuel, Parzen, Murray, Rosenblatt, Taqqu, Murad S. (Eds.), New Directions in Time Series Analysis Part II. Springer-Verlag. Gourieroux,
Christian, Monfort, Alain, Renault, Eric, 1993. Indirect inference. Journal of Applied Econometrics 8S, S85–S118. Hamilton, James D., Wu, Jing Cynthia, 2012. The effectiveness of alternative monetary
policy tools in a zero lower bound environment. Journal of Money, Credit & Banking 44 (s1), 3–46. Hamilton, James D., Wu, Jing Cynthia, Testable implications of affine term structure models. Journal
of Econometrics (forthcoming). Hansen, Lars P., 1982. Large sample properties of generalized method of moments estimators. Econometrica 50, 1029–1054. Hördahl, Peter, Tristani, Oreste, Vestin, David,
2006. A joint econometric model of macroeconomic and term-structure dynamics. Journal of Econometrics 131, 405–444. Joslin, Scott, Singleton, Kenneth J., Zhu, Haoxiang, 2011. A new perspective on
Gaussian dynamic term structure models. Review of Financial Studies 24, 926–970. Kim, Don H., 2008. Challenges in Macro-Finance Modeling. BIS Working Paper No. 240, FEDS Working Paper No. 2008-06.
Kim, Don H., Orphanides, Athanasios, 2005. Term Structure Estimation with Survey Data On Interest Rate Forecasts. Federal Reserve Board, Finance and Economics Discussion Series 2005-48. Kim, Don H.,
Wright, Jonathan H., 2005. An Arbitrage-Free Three-Factor Term Structure Model and the Recent Behavior of Long-Term Yields and DistantHorizon Forward Rates. Federal Reserve Board, Finance and
Economics Discussion Series 2005-33. Magnus, Jan R., Neudecker, Heinz, 1988. Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley & Sons, Ltd.. Malinvaud, Edmond,
1970. Statistical Methods of Econometrics, third revised edition. North-Holland, New York. Newey, Whitney K., 1987. Efficient estimation of limited dependent variable models with endogenous
explanatory variables. Journal of Econometrics 36, 231–250. Neyman, J., Pearson, E.S., 1928. On the use and interpretation of certain test criteria for purposes of statistical inference: part II.
Biometrika 20A, 263–294. Pericoli, Marcello, Taboga, Marco, 2008. Canonical term-structure models with observable factors and the dynamics of bond risk premia. Journal of Money, Credit and Banking
40, 1471–1488. Rothenberg, Thomas J., 1971. Identification in parametric models. Econometrica 39, 577–591. Rothenberg, Thomas J., 1973. Efficient Estimation with a Priori Information. Yale University
Press. Rudebusch, Glenn D., Swanson, Eric T., Wu, Tao, 2006. The bond yield ‘Conundrum’ from a macro-finance perspective. Monetary and Economic Studies (Special Edition), 83–128. Rudebusch, Glenn D.,
Wu, Tao, 2008. A macro-finance model of the term structure, monetary policy and the economy. The Economic Journal 118, 906–926. Singleton, Kenneth J., 2006. Empirical Dynamic Asset Pricing. Princeton
University Press. Smith, Josephine M., 2010. The Term Structure of Money Market Spreads During the Financial Crisis. Ph.D. Thesis, Stanford University. Smith Jr., Anthony A., 1993. Estimating
nonlinear time-series models using simulated vector autoregressions. Journal of Applied Econometrics 8S, S63–S84. Vasicek, Oldrich, 1977. An equilibrium characterization of the term structure.
Journal of Financial Economics 5, 177–188. | {"url":"https://p.pdfkul.com/identification-and-estimation-of-gaussian-affine-term-_59b5ba661723dda273d9a322.html","timestamp":"2024-11-08T02:49:10Z","content_type":"text/html","content_length":"173310","record_id":"<urn:uuid:77a4c801-8499-4790-b7bf-d290cd161055>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00219.warc.gz"} |
LLMs do not understand anything | joeac’s blog
LLMs do not understand anything
Save this for the next time someone tells you that LLMs 'understand' things.
LLMs do not understand what they are talking about. They just don’t. It’s not how they’re built. They have a statistical model of language, not a semantic one. Philosophical puzzles about whether
silicon can be conscious do not arise, because LLMs are not even close to the right shape for having anything like ‘understanding’.
If you don’t believe me, there are plenty of examples out there on the Internet, but this is as good as any. It includes Chat GPT 4o explaining in detail why ~~A → ~A (which is a classical
contradiction) is trivially true in classical logic.
It’s even better given that I had explicitly asked it to explain why that sentence implies the trivial logic, not why it is trivially true. And even had the explanation not been complete garbage from
beginning to end, it would only have shown that the sentence was true, not that it was trivial.
In other words, the output:
• Attempts to prove a contradiction (unprompted!)
• Confuses the concepts ‘truth’ and ‘triviality’
• Is irrelevant to the prompt
In case you want it handy, I’ll put the full conversation down below. The good bit is at the end. The only edits I’ve made are to replace TeX syntax with Unicode.
You have the following axioms.
1. |- (A -> B) -> A
2. |- (~B -> ~A) -> (A -> B)
3. |- (A -> (B -> C)) -> ((A -> B) -> (A -> C))
4. A -> B, A |- B
Prove ~~A -> A.
Chat GPT 4o
To prove ~~ A → A, we will use the given axioms and inference rules. Let's denote the axioms as follows:
1. ⊢ (A → B) → A
2. ⊢ (~ B → ~ A) → (A → B)
3. ⊢ (A → (B → C)) → ((A → B) → (A → C))
4. If A → B and A, then ⊢ B
Here is the proof step-by-step:
1. Axiom (2) Instantiation:
From axiom 2, we have:
⊢ (~ A → ~~ A) → (~~ A → A)
Let B = A and A = ~ A.
2. Intermediate Step:
To use this axiom, we need to show ~ A → ~~ A.
3. Negation Introduction:
By definition of negation, ~ A → ~~ A is a tautology, because if A is false, then ~ A is true, and ~~ A (which is A) must be false, making ~ A → ~~ A always true.
4. Modus Ponens Application:
Since ~ A → ~~ A is true, we apply modus ponens (axiom 4) to derive:
⊢ ~~ A → A
Thus, we have proved ~~ A → A using the given axioms and inference rules. | {"url":"https://joeac.net/blog/2024/06/13/llms_do_not_understand_anything/","timestamp":"2024-11-11T08:37:26Z","content_type":"text/html","content_length":"6365","record_id":"<urn:uuid:afab63ec-0736-4d9b-9097-0b5f68836177>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00063.warc.gz"} |
Solution Pool: "All" Is Not All
I was struck by an idle thought (which left a wicked bruise) regarding yesterday's post about
finding "all" MIP solutions
using the
solution pool.
algorithms generally prune a node for one of three reasons:
• the LP relaxation is infeasible, in which case no amount of further torture will cause the node to confess to a solution;
• the objective value of the LP relaxation is worse (or at least no better) than that of the current incumbent solution, in which case further division of the node will never yield a solution
better than the incumbent; or
• "the" optimal solution to the LP relaxation is integer-feasible, in which case it is unclear how to further partition the node (and whether to bother).
Note the quotation marks around "the" in the third bullet. When searching for all optimal solutions, or even all feasible solutions, the first rule (prune an infeasible node) still makes sense. The
second rule needs to be tempered, as there may be solutions in other parts of the tree that are as good as the incumbent even if they are not better (and, in the case of finding all feasible
solutions, you do not care that solutions in other branches of the tree have worse objective values). That brings us to the third rule.
A node problem whose LP relaxation has an integer-feasible optimum may in fact have multiple integer-feasible optima, and certainly may have multiple integer-feasible solutions (which is important if
you are looking for all feasible solutions, not just all optima). The usual branching rule (pick a variable that should be integer but is not and separate into two nodes by rounding it up or down) is
of no avail if the solution is integer-feasible. One could concoct an arbitrary rule for further partitioning, but it would be outside the scope of the normal branch-and-bound algorithm.
[Edit: Ed Klotz pointed out an important distinction to me. When the node relaxation solution is integer-feasible, you need to look at the sequence of branching decisions leading to the node. If
those decisions fixed all the integer variables to specific values, then the node LP solution is the only integer-feasible solution at that node, and there's nothing left to do but prune it. If the
branching decisions leave open the possibility that some of the integer variables might take on different values consistent with all the added cuts that got you there, then further processing is
required, and CPLEX will indeed do this.]
So the idle thought was "what does the CPLEX
function do in the third case?", and the answer seems to be
[S:"prune the node":S]
. [Edit: I think I was wrong here. See the update below.] Consider the following trivial test problem:\[ \begin{array}{lrcl} \textrm{maximize} & x_{1}\\ \textrm{s.t.} & \sum_{i=1}^{d}x_{i} & \ge & \
frac{d}{3}\\ & x & \in & \mathbb{B}^{d} \end{array} \] where $\mathbb{B}=\{0,1\}$. There is no significance to the choice of 3 as the denominator of the right-side of the constraint, and in fact the
constraint is there primarily to make sure that CPLEX extracts and solves for all variables. (Without it, CPLEX treats the problem as having a single variable $x_1$.) The problem clearly has multiple
optimal solutions, formed by setting $x_1=1$ and $x_j=1$ for at least $d/3-1$ other values of $j$.
I ran the
method against this problem, using the same parameter choices mentioned in yesterday's post (and in particular asking for up to 100 optimal solutions). For various (small) choices of $d$, CPLEX
produced only a single solution in the pool ($x_j=1$ for all $j$). Turning off the presolver did not affect this, nor did monkeying with the root and node LP algorithms (nor any other parameter
change I tried). I'm not sure why that particular solution was the winner. As to why only one solution was returned, my guess is that the first step branches on (or otherwise fixes) $x_1$, rules out
the node where $x_1=0$ based on bound, finds an integer-feasible optimum in the node where $x_1=1$ and then
fails to partition that node any further
. (This is strictly conjecture, since I am not privy to the inner workings of
My conclusion is that if you really need all the optima for a MIP, you will need to do something clever (either with callbacks or looping through a sequence of problems) and not rely on the CPLEX
solution pool.
My conjecture about CPLEX branching once and then not partitioning the winning node was wrong. I noticed that neither the "phase I" nor "phase II" output contained a node log, which would suggest
that CPLEX obtained the one solution it found by presolving, even though I had presolving turned off. So I modified the objective function to $x_1 - \epsilon \sum_{j\gt 1}x_j$, which means it is no
longer harmless to force all $x_j$ to 1 for $j\gt 1$. With the new objective, CPLEX generated a node log and found all optimal solutions (as well as some suboptimal ones). Similarly, with just one of
the variables penalized, say an objective function $x_1 - \epsilon x_2$, CPLEX branched and found all optima.
So failing to partition a node with an integer solution is not the problem. This may just be a bug.
Update #2:
It is a bug. Here's an official response from the IBM/ILOG support team.
This is a bug that we will fix. Thomas Opfer's conjecture from Jan 28 is on target:
> Might it be that they iterate trivial examples because it is faster than starting the complete CPLEX-overhead?
Specifically, before CPLEX's main presolve gets started, CPLEX computes a trivial objective upper bound on this maximization problem by removing all the constraints and setting the variables to their
appropriate bound based on the objective value. In this case that established an upper bound of 1. CPLEX also applies some very simple heuristics, which find a feasible solution with an objective of
1. So, the lower and upper bound for the MIP already match, which means an optimal solution has already been found. So, no tree, which the solution pool algorithm requires, is available, and CPLEX
essentially stops with the one optimal solution in the pool. This is not correct behavior when the solution pool intensity has been set to 4 to tell CPLEX to enumerate all solutions. CPLEX properly
handles a similar case when the full presolve solves the whole model and no tree is available to the solution pool, so we need to do likewise in this situation.
We will fix this in either the next fixpack or feature release, whichever comes first. Sorry for the inconvenience, although the good news is that the bug will only bite on very simple models that
CPLEX can solve to optimality with some very minimal presolving effort. In the short term, you can work around this on such models by setting the heuristic frequency parameter to -1 to disable
11 comments:
1. That's very interesting and clearly contradicts the CPLEX user manual. I checked this and experienced the same problem. I think you should report a bug (at least for the wrong manual).
Concerning this and your former blog post, I think one should point out that enumerating all solutions only makes sense for pure integer programs. For MIPs, this is much more difficult in
general. (One possibility would be to project the linear part away and iterate all integer feasible solutions. For each of them, one then can take the remaining LP and do some vertex enumeration
like LRS.)
Best regards,
1. Thomas: I agree with both paragraphs, although I think it might be a code bug rather than a documentation bug. (I just added an update to the text of the post.) Regarding the second
paragraph, I tried to make the same point in the "grandparent" of this post (the post preceding it's immediate predecessor).
2. Maybe they find all non-isomorphic solutions? (Did you monkey with the 'symmetry' parameter).
But I agree -- this is at least a bug in the documentation. There is an old IPCO paper by the CPLEX people on this topic, so they should know what they are taking about.
Emilie Danna, Mary Fenelon, Zonghao Gu, Roland Wunderling: Generating Multiple Solutions for Mixed Integer Programming Problems. IPCO 2007: 280-294
1. Jeff: I had not tried the Symmetry parameter, but turning it off did not change the result. I just added an update above that makes me think this is a bug. I wonder if some preprocessing
occurs regardless of parameter settings? At any rate, I'll let them know and we'll see what happens.
2. There is some preprocessing concerning the clique tables. This cannot be turned off. Some time ago, I experienced (and Daniel confirmed) that this can fix variables.
Did you have a look at the root LP?
Best regards,
3. Thomas: No, I use the Java API, and the root LP is available only in the C API. :-(
3. Hmm, it is really strange. I tried to access some information using the C API. I disabled everything that came into my mind and I added several callbacks. Result: "Found incumbent of value
1.000000 after 0.00 sec. (0.00 ticks)". No callback was called... How does CPLEX find this solution? Voodoo? Enumeration (probing is off!)?
On the other hand, I found out that the interactive optimizer finds several solutions if I do the following: r test.lp, set pre pre n, o, pop.
Best regards,
1. Your result with C is identical to mine with Java. Perhaps there is an ESP heuristic? Or perhaps the IBM ILOG folks have figured out how to program intuition?
I matched your result with the interactive optimizer; but if I skip the optimize step and go straight from 'set pre pre n' to 'pop', I only get a single solution. Adding to my confusion (not
that I needed any more), if I do the same in Java (cplex.solve() first, then cplex.populate()), it makes no difference; I get one solution either way.
2. It seems that the behaviour in C and Java is the same.
Might it be that they iterate trivial examples because it is faster than starting the complete CPLEX-overhead?
Best regards,
4. Paul,
thanks for posting the support response. I checked this and setting CPX_PARAM_HEURFREQ to -1 in fact seems to solve the problem. I thought, I had tried this before. Maybe, I accidently set it to
0 or CPX_OFF.
Best regards,
1. Thomas: I've had that same experience. There are too many parameters to keep track of them all.
Due to intermittent spamming, comments are being moderated. If this is your first time commenting on the blog, please read the Ground Rules for Comments. In particular, if you want to ask an
operations research-related question not relevant to this post, consider asking it on Operations Research Stack Exchange. | {"url":"https://orinanobworld.blogspot.com/2013/01/solution-pool-all-is-not-all.html","timestamp":"2024-11-04T14:56:53Z","content_type":"application/xhtml+xml","content_length":"173047","record_id":"<urn:uuid:e2c05667-1413-42ca-a3ae-79c1c91fa9bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00233.warc.gz"} |
How to determine interest rate on loan in excel
10 Aug 2012 rate - The interest rate for the loan expressed as a monthly rate. nper - The length of the loan in months. pv - The amount being borrowed, also
This calculates the monthly payment with interest for the loan. Figure 2. of Excel PMT Function. Loans consist of 4 basic parts. The Loan amount, Rate of Interest, the loan duration (number of
regular payments), and an amount to be paid per period. We can use the Excel PMT Function to calculate the payment amount when we have all four components. Figure 3. of Final Result. Calculate total
interest paid on a loan in Excel For example, you have borrowed $100000 from bank in total, the annual loan interest rate is 5.20%, and you will pay the bank every month in the coming 3 years as
below screenshot shown. There are the range of built-in functions in Excel, that allow you to compute the effective rate of interest, with taking into account additional charges and fees, and
excluding (relying only on the nominal interest and the loan term). The debtor took a credit in the sum of 150 000$ on the term of 1 year (12 months). As with all Excel financial functions, the
period for the rate must match the other arguments. For example, if you enter arguments that include monthly payments, the RATE function’s result will be the monthly interest rate, and you must
multiply this by 12 to get an annual rate. Let’s try out the RATE function. Start with a blank worksheet and then follow these steps:
This calculates the monthly payment with interest for the loan. Figure 2. of Excel PMT Function. Loans consist of 4 basic parts. The Loan amount, Rate of Interest, the loan duration (number of
regular payments), and an amount to be paid per period. We can use the Excel PMT Function to calculate the payment amount when we have all four components. Figure 3. of Final Result.
23 Sep 2010 Also called annual percentage rate (APR) and annual percentage yield (APY), Excel makes it easy to calculate effective mortgage, car loan, The function calculates the payment for a loan
based on constant payments and a constant interest rate. Syntax: =PMT (rate, nper, pv, [fv], [type]). Syntax for loan Loan Calculator for Excel - Download a simple loan calculator spreadsheet to
calculate the payment, time to payoff, interest rate, or loan amount. The comparison rate takes into account fees and charges as well as interest, so if you use it, you will get a higher amount of
interest than you should. Calculating P is the original loan amount or principal, R is the rate of interest that is applicable per annum and N is the number of monthly installments/ loan tenure. In
case you interest rate - the loan's stated APR. loan term in years - most fixed-rate home loans across the United States are scheduled to amortize over 30 years. Other
Find out about compound interest and how to use the compounding interest formula in Microsoft Excel to calculate the compound interest on a loan.
P is the original loan amount or principal, R is the rate of interest that is applicable per annum and N is the number of monthly installments/ loan tenure. In case you
The comparison rate takes into account fees and charges as well as interest, so if you use it, you will get a higher amount of interest than you should. Calculating
An interest rate formula is used to calculate the repayment amounts for loans and You can download this Interest Rate Formula Excel Template here – Interest
Enter the interest payment formula. Type =IPMT(B2, 1, B3, B1) into cell B4 and press ↵ Enter. Doing so will calculate the amount that you'll have to pay in interest for each period. This doesn't give
you the compounded interest, which generally gets lower as the amount you pay decreases.
As with all Excel financial functions, the period for the rate must match the other arguments. For example, if you enter arguments that include monthly payments, the RATE function’s result will be
the monthly interest rate, and you must multiply this by 12 to get an annual rate. Let’s try out the RATE function. Start with a blank worksheet and then follow these steps: To calculate the periodic
interest rate for a loan, given the loan amount, the number of payment periods, and the payment amount, you can use the RATE function. In the example shown, the formula in C10 is: = RATE (C7, C6, -
C5) * 12 Loans have Calculate payment periods for loan The first step determines the monthly payment. The second step calculates the interest rate, and the third step determines the loan schedule.
You can build a table in Excel that will tell you the RATE: After typing the open parenthesis, Excel will first ask for the RATE, or interest rate on the loan. Here you will enter the interest rate
in percentage terms for each period. So if you want to calculate a monthly mortgage payment using a 5% interest rate, you can enter "5%/12" or "0.05/12". The "/12" divides the annual interest rate
into monthly amounts. Enter the interest payment formula. Type =IPMT(B2, 1, B3, B1) into cell B4 and press ↵ Enter. Doing so will calculate the amount that you'll have to pay in interest for each
period. This doesn't give you the compounded interest, which generally gets lower as the amount you pay decreases.
5 Dec 2017 Tools and Calculators. Generally, interest on student loans is calculated daily. Use this calculator to figure out the interest amount owed since your last payment . Annual Interest Rate
(%)*. Daily Interest Rate Factor 24 Oct 2016 To calculate the monthly accrued interest on a loan or investment, you first need to determine the monthly interest rate by dividing the annual To
calculate compound interest in Excel, you can use the FV function. This example assumes that $1000 is invested for 10 years at an annual interest rate of 5%, compounded monthly. In the example shown,
the formula in C10 is: = FV (C6 / C8, C7 * Calculate payment periods for loan This calculates the monthly payment with interest for the loan. Figure 2. of Excel PMT Function. Loans consist of 4 basic
parts. The Loan amount, Rate of Interest, the loan duration (number of regular payments), and an amount to be paid per period. We can use the Excel PMT Function to calculate the payment amount when
we have all four components. Figure 3. of Final Result. | {"url":"https://btctopxfxezf.netlify.app/herbick47717sydu/how-to-determine-interest-rate-on-loan-in-excel-252.html","timestamp":"2024-11-12T17:09:23Z","content_type":"text/html","content_length":"34809","record_id":"<urn:uuid:607405de-5a96-4704-94a9-85ca3ab99d71>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00784.warc.gz"} |
Today there is a dilemma for an ISO/IEC 17025 accredited laboratory service provider in issuing a statement of conformity with specification to the clients after testing, particularly when the
analysis result of the test sample is close to the specified value with its upper or lower measurement uncertainty crossing over the limit. The laboratory manager has to decide on the level of risk
he is willing to take in stating such conformity.
However, there are certain trades which buy goods and commodities with a given tolerance allowance against the buying specification. A good example is in the trading of granular or pelletized
compound fertilizers which contain multiple primary nutrients (e.g. N, P, K) in each individual granule. A buyer usually allows some permissible 2- 5% tolerance on the buying specification as a lower
limit to the declared value to allow variation in the manufacturing process. Some government departments of agriculture even allow up to a lower 10% tolerance limit in their procurement of compound
fertilizers which will be re-sold to their farmers with a discount.
Given the permissible lower tolerance limit, the fertilizer buyer has taken his own risk of receiving a consignment that might be below his buying specification. This is rightly pointed out in the
Eurolab’s Technical Report No. 01/2017 “Decision rule applied to conformity assessment” that by giving a tolerance limit above the upper specification limit, or below the lower specification limit,
we can classify this as the customer’s or consumer’s risk. In hypothesis testing context, we say this is a type II (beta-) error.
What will be the decision rule of test laboratory in issuing its conformity statement under such situation?
Let’s discuss this through an example.
A government procurement department purchased a consignment of 3000 bags of granular compound fertilizer with a guarantee of available plant nutrients expressed as a percentage by weight in it, e.g.
a NPK of 15-15-15 marking on its bag indicates the presence of 15% nitrogen (N), 15% phosphorus (P2O5) and 15% potash (K2O) nutrients. Representative samples were drawn and analyzed in its own
fertilizer laboratory.
In the case of potash (K2O) content of 15% w/w, a permissible tolerance limit of 13.5% w/w is stated in the tender document, indicating that a fertilizer chemist can declare conformity at this
tolerance level. The successful supplier of the tender will be charged a calculated fee for any specification non-conformity.
Our conventional approach of decision rules has been based on the comparison of single or interval of conformity limits with single measurement results. Today, we have realized that each test result
has its own measurement variability, normally expressed as measurement uncertainty with 95% confidence level.
Therefore, it is obvious that the conventional approach of stating conformity based on a single measurement result has exposed the laboratory to a 50% risk of having the true (actual) value of test
parameter falling outside the given tolerance limit, rendering it to be non-conformance! Is the 50% risk bearable by the test laboratory?
Let say the average test result of K2O content of this fertilizer sample was found to be 13.8+0.55%w/w. What is the critical value for us in deciding on conformity in this particular case with the
usual 95% confidence level? Can we declare the result of 13.8%w/w found to be in conformity with specification referencing to its given tolerance limit of 13.5%w/w?
Let us first see how the critical value is estimated. In hypothesis testing, we make the following hypotheses:
Ho : Target tolerance value > 13.5%w/w
H1 : Target tolerance value < 13.5%w/w
Use the following equation with an assumption that the variation of the laboratory analysis result agrees with the normal or Gaussian probability distribution:
mu is the tolerance value for the specification, i.e. 13.5%,
x(bar) , the critical value with 95% confidence (alpha- = 0.05),
z, the z -score of -1.645 for H1’s one-tailed test, and
u, the standard uncertainty of the test, i.e. U/2 = 0.55/2 or 0.275
By calculation, we have the critical value x(bar) = 13.95%w/w, which, statistically speaking, was not significantly different from 13.5%w/w with 95% confidence.
Assuming the measurement uncertainty remains constant in this measurement region, such 13.95%w/w minus its lower uncertainty U of 0.55%w/w would give 13.40% which has (13.5-13.4) or 0.1%w/w K2O
amount below the lower tolerance limit, thus exposing some 0.1/(2×0.55) or 9.1% risk.
When the reported test result of 13.8%w/w has an expanded U of 0.55%w/w, the range of measured values would be 13.25 to 14.35%w/w, indicating that there would be (13.50-13.25) or 0.25%w/w of K2O
amount below the lower tolerance limit, thus exposing some 0.25/(2×0.55) or 22.7% risk in claiming conformity to the specification limit with reference to the tolerance limit given.
Visually, we can present these situations in the following sketch with U = 0.55%w/w:
The fertilizer laboratory manager thus has to make an informed decision rule on what level of risk that can be bearable to make a statement of conformity. Even the critical value of 13.95%w/w
estimated by the hypothesis testing has an exposure of 9.1% risk instead of the expected 5% error or risk. Why?
The reason is that the measurement uncertainty was traditionally evaluated by two-tailed (alpha- = 0.025) test under normal probability distribution with a coverage factor of 2 whilst the hypothesis
testing was based on the one-tailed (alpha- = 0.05) test with a z-score of 1.645.
To reduce the risk of testing laboratory in issuing statement of conformity to zero, the laboratory manager may want to take a safe bet by setting his critical reporting value as (13.5%+0.55%) or
14.05%w/w so that its lower uncertainty value is exactly 13.5%w/w. Barring any evaluation error for its measurement uncertainty, this conservative approach will let the test laboratory to have
practically zero risk in issuing its conformity statement.
It may be noted that the ISO/IEC 17025:2017 requires the laboratory to communicate with the customers and clearly spell out its decision rule with the clients before undertaking the analytical task.
This is to avoid any unnecessary misunderstanding after issuance of test report with a statement of conformity or non-conformity. | {"url":"https://consultglp.com/2019/may","timestamp":"2024-11-06T21:32:21Z","content_type":"text/html","content_length":"51747","record_id":"<urn:uuid:15459012-0b0a-4caa-9ef2-8ac688a2ff8a>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00413.warc.gz"} |
How is a Lie algebra able to describe a group?
in Lie Group Theory
How is a Lie Algebra able to describe a Group?
If you understand the idea Lie Group= Manifold, you can easily understand one of the most curious facts of Lie theory:
The Lie algebra $\frak{g}$, which is defined as the tangent space at the identity $T_eG$, is able to tell us almost everything about a given Lie group $G$.
The connection between Lie algebra elements and Lie group elements is established by the exponential map. We can understand this easily from the “naive”, historical approach to Lie theory that deals
with infitesimal transformations. The differential geometry perspective enables us to understand this connection in a more pictorial way.
There are four simple concepts one needs to know in order to understand the connection between Lie algebra elements and Lie group elements: Curves, Functions, Vector Fields, and Integral Curves.
One way to think about a curve on a manifold $M$ is as continuous series of points in $M$. The definition we want to use is, that a curve is a mapping from an open set of $R^1$ into $M$. Therefore, a
curve associates with each point $\lambda$ of $R^1$ (which is just a real number) a point in $M$. The curved is said to be parametrized by $\lambda$. The point in $M$ is called the image point of $\
lambda$. Two curves can be different even though they have the same image points in $M$, if they assign a different parameter value to the image points.
Functions are in some sense inverse to curves. A function on a manifold $M$ assigns a real number (= a element of $R^1$) to each point of $M$. To make sense of the word differentiable when talking
about functions on $M$, it helps to think about differentiability in terms of coordinates.
Remember that a manifold is defined as a set for which in the neighborhood of any point a map onto $R^n$ exists. In other words: We can assign in the neighborhood of any point coordinate values to
the points in the neighborhood. Therefore we can combine this map onto $R^n$, with the map (function) from $M$ to $R^1$ to get a map from $R^n$ onto $R^1$. A function is said to be differentiable if
the it is differentiable in $R^n$.
In the abstract sense a function is a map $f(P)$, where $P$ denotes some point in $M$. By assigning coordinate values to this point, we have a map $f(x^1,x^2,…,x^n)$. If this function is
differentiable in its arguments, the function is said to be differentiable. The coordinate map itself gives us a function for each coordinate. For example $x^2(P)$ is a function that maps each point
$P$ to the corresponding coordinate value $x^2$.
Tangent Vectors:
Now we head to something that is quite hard to grasp when stumbling about it for the first time: The modern, abstract definition of a tangent vector.
We start with a curve that passes through some point $P$ of $M$. This curve is described, using the coordinate map, by the equations $x^i(\lambda)$. A differentiable function $f=f(x^1,x^2,…,x^n)$,
abbreviated $f=f(x^i)$, on $M$ assigns a value to each point of the curve. To be precise we call this function, $g(\lambda)$, because $f$ is a function of $(x^1,x^2,…,x^n)$ and $g$ is a function of $
$$g(\lambda) = f(x^1(\lambda),x^2(\lambda),…,x^n(\lambda))= f(x^i(\lambda))$$
We can differentiate, which yields, using the chain rule
$$\frac{dg}{d\lambda} = \sum_i \frac{dx^i}{d \lambda}\frac{ \delta f }{ \delta x^i} .$$
Because this equation holds for any function $f$, we can write
$$\frac{d}{d\lambda} = \sum_i \frac{dx^i}{d \lambda}\frac{ \delta }{ \delta x^i} .$$
(If this seems strange to you, you may check out chapter 8.1 of the Feynman Lectures about Quantum Mechanics, which has a great explanation regarding this, “removing objects from an equation, if it
holds for arbitrary objects of this kind)
If we would be talking exclusively about Euclidean space, we would interpret $\frac{dx^i}{d \lambda}$ as the components of a vector tangent to the curve. $dx^i$ are infinitesimal displacement along
the curve by dividing them by a real number $\lambda$, gives the rate of change in this direction. Dividing by a real number does only change the scale, not the direction of the displacement.
Every curve has a unique tangent vector at any point. In contrast a tangent vector can be the tangent vector for infinitely many curves. For example, if we take a look at the simple curve
$$x^i(\lambda)=\lambda a^i$$
with constants $a^i$. The tangent vector at the point $P$, $\lambda=0$ is $\frac{dx^i}{d \lambda} = a^i$. A slightly different curve
$$x^i(\mu)=\mu^2 b^i+ \mu a^i$$
goes through the same point $P$ for $\mu=0$ and has the same tangent vector at $P$, $\frac{dx^i}{d \mu} = a^i$.
Furthermore, we can reparametrize the first curve by $x^i(\nu)=(\nu^3 + \nu) a^i$. This curve goes through the same points, but has different values $\nu$ associated with them. At $\nu=0$, this curve
goes again through $P$ and the tangent vector is $\frac{dx^i}{d \nu} = a^i$.
We conclude: Every vector belongs to a whole equivalence classes of curves at this point. More dramatically one can say: The vector characterizes a whole equivalence classes of curves at this point.
The defining feature of this equivalence class is the vector.
This observation motivates the modern definition of a tangent vector. In Euclidean space we have no problem with talking about displacements. A vector in Euclidean space points from one point to
another. On a manifold there is, in general, no distance relation between points. Therefore, we need a more sophisticated idea to be able to talk about vectors.
Consider a curve $x^i=x^i(\mu)$ through $P$. Analogous to the considerations above we have
$$\frac{d}{d\mu} = \sum_i \frac{dx^i}{d \mu}\frac{ \delta }{ \delta x^i} .$$
Using two numbers $a$ and $b$, we can take a look at the linear combination
$$a\frac{d}{d\lambda} + b \frac{d}{d\mu} = \sum_i \left( a \frac{dx^i}{d \lambda} + b \frac{dx^i}{d \mu} \right) \frac{ \delta } { \delta x^i} .$$
$ a \frac{dx^i}{d \lambda} + b \frac{dx^i}{d \mu} $ are components of a new vector that is tangent to some curves through $P$. Therefore, for one of these curves that is parametrized by $\Phi$, we
can write at $P$
$$\frac{d}{d\Phi}= \sum_i \left( a \frac{dx^i}{d \lambda} + b \frac{dx^i}{d \mu} \right) \frac{ \delta } { \delta x^i} .$$
and therefore
$$\frac{d}{d\Phi}= a\frac{d}{d\lambda} + b \frac{d}{d\mu} .$$
At this point we can see that we have a one-to-one correspondence between the space of all tangent vectors at some point $P$ and the space of all derivatives along curves at $P$.
The directional derivatives, like $\frac{d}{d\lambda}$, behave like the usual vectors under addition and are therefore said to form a vector space. A basis for this vector space is always given by $\
{ \frac{ \delta } { \delta x^i} \}$, because we have seen that any directional derivative can be written as a linear combination of the derivatives $\frac{ \delta } { \delta x^i} $, the derivatives
along the coordinate lines. The components in this basis are $ \{ \frac{dx^i}{d \lambda} \}$.
The definition of a tangent vector that mathematicians prefer is that $\frac{d}{d\lambda}$ is the tangent vector to the curve $x^i(\lambda)$. This definition is advantageous, because it involves no
displacements over finite separations and it makes reference to coordinates. The definition makes sense, because, as we can see above, $\frac{d}{d\lambda}$ has the components $ \{ \frac{dx^i}{d \
lambda} \}$, if we pick some coordinate system, which are exactly the numbers one associates naively with an “arrow” tangent to a curve.
The subtle point is that we are no longer allowed to add vectors located at different points of the manifold. The vectors are said to live in the tangent space of $M$ at $P$, denoted $T_P M$.
Therefore vectors at different points live in different spaces and are not allowed to be added together. Using the sphere as an example, one may picture this tangent space as a plane lying tangent to
the sphere this point.
Vector Fields and Integral Curves
Another important term is vector field. A vector field is a rule that defines a vector at each point of the manifold. We have one tangent space at each point of the manifold and a vector field is a
rule that picks one vector from each tangent space. As already noted above: Every curve has at each point a tangent vector. We can turn this a bit upside-down. Given a vector field, i.e. a tangent
vector at each point of the manifold, we can find, if we pick one point $P$, precisely one curve going through $P$ that has exactly the tangent vector the vector field picks as a tangent vector at
any point the curves passes through. Such a curve is called a integral curve, because given a vector field $V^i(P)$, in coordinates $V^i(P)=v^i(x^j)$, the statement that these vectors are tangent
vectors to curves is mathematically
$$\frac{dx^i}{d \lambda} = v^i(x^j) .$$
This is just a first order differential equation for $x^i(\lambda)$ that has always a unique solution in some neighborhood of $P$.
The Exponential Function on a Manifold
We are now in the position to understand the tool that connects Lie algebra elements and Lie group elements: The exponential function. Given a vector field $ Y = \frac{d}{d \lambda}$, we can find the
corresponding integral curve $x^i(\lambda)$ through some point. The coordinates of two points ($\lambda_0$ and $\lambda_0 + \epsilon$) on this curve are related by the Taylor series
\begin{align} x^i (\lambda_0 + \epsilon)
&= x^i(\lambda_0) + \epsilon \left( \frac{d x^i}{d \lambda} \right)_{\lambda_0} + \frac{1}{2!}\epsilon^2 \left( \frac{d^2 x^i}{d \lambda^2} \right)_{\lambda_0} + … \\
&= \left( 1 + \epsilon \frac{d}{d \lambda} + \frac{1}{2!} \epsilon^2 \frac{d^2}{d \lambda^2} + … \right) x_i \Big |_{\lambda_0} \\
&= \exp \left( \epsilon \frac{d}{d \lambda} \right) x^i \Big |_{\lambda_0}
The $\exp$ notation here should be understood as a shorthand notation for series of differential operators that act on $x^i$, evaluated at $\lambda_0$.
The Lie algebra
The Lie algebra is defined as the tangent space at the identity $T_eG$, of the manifold $G$. We will now see that each element of $T_eG$, i.e., each tangent vector at the identity defines a unique
vector field $V$ on $G$.
We can then find the corresponding integral curve for this vector field $V$ that goes through $e$ and has tangent vector $V_e$ at the identity. By using the $\exp$ operator, as defined above, we can
get to any point on this curve. $V$ is completely determined by $V_e$ and therefore the points of $G$ on this curve can be denoted by
$$g_{v_e(t)} = \exp{t V}|_e$$
This is how we are able to get Lie group elements from Lie algebra elements. We have a one-to-one correspondence between tangent vectors and differential operators and are therefore able to use the $
\exp$ notation to describe the corresponding Taylor series. This Taylor series connects different points on the same curve and therefore we are able to get from each element of the Lie algebra
$T_eG$, elements of the Lie group $G$.
We will now take a look at how each tangent vector at the identity defines a unique vector field.
The Lie algebra and left-invariant vector fields
Given a Lie group $G$, natural maps (diffeomorphisms) are left- and right-translations. That is any element $g$ of $G$ defines a map from a point $h$ of the manifold $G$ onto some new point $gh$
(left-translation) or $hg$ (right-translation).
$$h \rightarrow gh \mathrm{ \ \ \ \ left-translation} \qquad \mathrm{ or } \qquad h \rightarrow hg \mathrm{ \ \ \ \ right-translation}$$
Here the group acts on itself. Of course we are able to examine how a group acts on different manifolds, too, and this idea is investigated further by representation theory. For now it suffices to
have a look at these maps of $G$ onto itself.
Everything that follows can be done with right-translations, but its convention to use left-translations.
As always we denote the identity element of the group by $e$. The left-translation map along a particular $g$, maps any neighborhood of $e$ onto a neighborhood of $g$ (See the figue below). In
addition, this map, maps curves into curves and therefore tangent vectors into tangent vectors.
The map between tangent vectors is commonly denoted $L_g : T_e \rightarrow T_g$. A vector field $V$ is said to be left-invariant if $L_g$ maps $V$ at $e$ to $V$ at $g$ (and not to some different
vector field at $g$, for example, some $W$), for all $g$. Because of the group composition law this is equivalent to saying that a vector field is left-invariant if $L_g$ maps $V$ at $h$ to $V$ at
$gh$, for any $h$.
Every vector in $T_e$ defines a left-invariant vector field. The rule for getting a vector at each point of the manifold is given by $L_g$. By multiplication with some $g\in G$, we get from any
vector in $T_eG$ a vector in $T_gG$, and by definition this is what we call a left-invariant vector field.
Therefore, the set of left-invariant vector fields is isomorphic to the elements of $T_eG$, i.e., the Lie algebra of $G$. Mathematicians say the set of left-invariant vector fields is the Lie algebra
of $G$. ( The defining feature of a Lie algebra, the Lie bracket can be easily proved using coordinates. One needs to show that $L_g$ maps $[V,W]$ at $e$ into $[V,W]$ at $g$, which means the vector
field $[V,W]$ is again left-invariant. )
The distinction is necessary, because of the abstract definition of a Lie algebra. The set of all vector fields on $G$ form a Lie algebra, too, but this is not the Lie algebra one has normally in
mind when talking about the Lie algebra of a given Lie group.
In the last section we saw, how each element of $T_eG$ defines uniquely a vector field on $G$. Each element of the Lie algebra can be identified with a left-invariant vector field.
To each left-invariant vector field, and therefore to each element of $T_eG$ belongs one unique integral curve through the identity element $e$ of $G$. We can get to points on this curve, by using
the exponential map, which is shorthand for the Taylor series
$$g_{v_e(t)} = \exp{(t V)}|_e.$$
Starting with a different element of $T_eG$ we have a different integral curve and are able to reach different points of $G$. (This does not hold in general. Two different elements of $T_eG$, would
be $v$ and $2v$. The corresponding integral curves go through the same points of $G$, only the “speed” (parameter value) differs. Nevertheless, for some elements we can get to different points)
This is how the Lie algebra $T_eG$ of group $G$ is able to describe to describe the group $G$.
P.S. I wrote a textbook which is in some sense the book I wished had existed when I started my journey in physics. It's called
"Physics from Symmetry"
and you can buy it, for example, at
. And I'm now on
too if you'd like to get updates about what I'm recently up to.
If you want to get an update whenever I publish something new, simply put your email address in the box below.
1. Thank you.
2. thanks, awesome introduction
A complete guide to the adjoint representation of a Lie group? 24. December 2018
[…] written a long post about how and why this […] | {"url":"http://jakobschwichtenberg.com/lie-algebra-able-describe-group/","timestamp":"2024-11-03T21:44:53Z","content_type":"text/html","content_length":"53603","record_id":"<urn:uuid:4b8a4d6e-9a44-4aa9-8af3-80022324ffca>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00597.warc.gz"} |
How Many Cups Of Coffee Are In A Bag? We Got The Answer
How many cups of coffee are in a bag? Jump into our article and find out how many cups of joe you can make from a bag of coffee.
So, how many cups of coffee are in a bag?
It’s a simple question that should have a simple answer, right?
Not quite.
You may have noticed that there are so many different answers floating around on the web.
That’s because the number of cups of coffee that you can get out of a bag of coffee will depend on a bunch of factors.
Don’t worry. I’ve spent hours researching and done all the calculations for you so that you don’t have to.
Let’s get to it!
How Many Cups Of Coffee Do You Get From a 12-ounce Bag?
We need to figure out how much coffee is used per cup
First thing first.
In order for us to even think about how many cups we can squeeze out of a 12oz bag, we need to figure out how much coffee is used per cup.
Okay, so with that in mind, let’s use the weight of the coffee.
The next step is to figure out what the coffee-to-water ratio and cup size will be.
Now, to simplify things for us, I’ve done the calculations using a standard water ratio of 1:17 (1 gram of coffee: 17 grams of water) and a cup size of 12oz (355ml).
1 gram of water is the same as 1 milliliter of water.
Given that we need 1 gram of coffee per 17 units of water, we will need 355ml of water and 21 grams of coffee.
This is how we determine that you can get 16 cups of coffee from a 12oz bag of coffee.
Factors That Influence How Many Cups You Get From a Bag Of Coffee
There are a bunch of things that affect how much ground beans you use when brewing coffee.
In a nutshell, the amount of coffee that you use is going to affect the number of cups you can get out of a bag.
I know what you’re probably thinking.
“But there are so many factors that can affect this!”
That’s why you may find varying answers to this question.
Depending on your brew method, different amounts of coffee will be used each time. That leaves us with many different combinations of how many cups can be made from a 12-ounce bag.
Another variable is the coffee-to-water ratio. This is basically what dictates the strength of your coffee and, therefore, how much coffee you actually use per cup.
Even the size of your mug will affect how many cups you can get out of a bag.
Brewing Method
The brewing method that you use is going to affect the number of cups that you can get out of a 12-ounce bag of coffee.
That’s because different methods of brewing require different:
• Coffee-to-water ratios
• Grind size
Let’s take a look at the coffee-to-water ratio for some different brewing methods.
Drip Coffee Machine
If you’re using a drip coffee maker, the optimal coffee ratio is 1:17, meaning 1 part ground coffee beans for every 17 parts of water.
Espresso Machine
If you’re going to brew an espresso coffee, then you will need a much larger amount of water compared to the other brewing methods. For every 1 gram of coffee, you will need 2 parts water.
French Press
Are you using a French press to make your morning cup of coffee?
The ideal ratio to use is 1 part coffee to 12 parts water.
Coffee-To-Water Ratio
A cup of coffee’s strength is determined by how much ground coffee is added in relation to water.
Your coffee-to-water ratio is essentially shaped by how strong you like your coffee, and it’s totally up to you how much water you use.
However, the most common coffee-to-water ratios range from 1:15 to 1:18. That is 1 gram of coffee ground to 15 grams of water.
The Specialty Coffee Association (SCA) has come up with a ‘Golden Cup Ratio’ based on their research and expertise. They suggest that a coffee-to-water ratio of 1:18 will achieve the best results
across all brewing methods.
However, the ratio of 1:17 is very popular among seasoned coffee fanatics.
That’s why I recommend calculating your coffee-to-water ratio so that you can accurately calculate how many cups of coffee you can get out of a bag.
Make sure that you are using grams, then divide the water used by the amount of coffee, and this will give you your coffee-to-water ratio.
Mug Size
Not all mugs are made equal.
That’s why the number of cups of coffee that you can get out of a 12-oz bag depends on the size of the mug.
Measure how many ounces your typical cup of coffee is by using a measuring jug or a kitchen scale.
Let’s look at some examples below using 3 common coffee cup sizes. I’ve used a coffee-to-water ratio of 1:17. This means that for every 1g of coffee, we use 17ml of water.
• 8 oz – 24 cups per 12 oz bag
• 12 oz – 16 cups per 12 oz bag
• 16 oz – 12 cups per 12 oz bag
So, if you’re using a 12 oz cup, you should be able to make around 16 cups of coffee from your 12 oz bag.
Here’s a handy chart to help you with it.
A cup of coffee’s strength is determined by how much ground coffee is added in relation to water
How Many Cups of Coffee Are in a Bag of Different Sizes?
There you have it.
All the ins and outs of how many cups of coffee you can get from a 12oz bag.
What’s that?
Do you have some different-sized bags of coffee lying around?
Don’t worry. I’ve got you covered.
Check out the charts that I’ve created for you to reference, keeping in mind that they are based on a coffee-to-water ratio of 1:17.
They show other common bag sizes and cup sizes, showing the subsequent number of cups of coffee that you can yield.
All the ins and outs of how many cups of coffee you can get from a 12oz bag
To Sum Things Up
To put it simply, a 12oz bag of coffee should give you roughly sixteen 12oz cups of coffee.
But now we know that many factors go into play when calculating the exact number of cups that you can get out of a 12oz bag.
Your preferred brewing method and go-to mug are going to affect your final result.
Keep in mind that if you typically make strong coffee, you’ll get fewer cups from a bag.
And if you make coffee on the weaker side, you’ll get more cups.
Looking for the perfect way to open a bag of coffee? Check out our article on how to do just that.
We are sorry that this post was not useful for you!
Let us improve this post!
Tell us how we can improve this post? | {"url":"https://fluentincoffee.com/how-many-cups-of-coffee-are-in-a-bag/","timestamp":"2024-11-07T01:23:05Z","content_type":"text/html","content_length":"245036","record_id":"<urn:uuid:46601c19-81b1-43f1-8157-7e84b240f327>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00797.warc.gz"} |
Determine Transition Matrix
This functor determines a matrix of transition rates between two time-series maps.
Name Type Description
Initial Landscape Categorical Map Type Initial map of land use and cover classes.
Final Landscape Categorical Map Type Final map of land use and cover classes.
Time Steps Positive Integer Value Type Number of time steps between initial and final landscape maps. Step can be any time unit, such as year, month, etc.
Name Type Description
Single Step Matrix Transition Matrix Type Transition matrix for the entire period.
Single Step Matrix Transition Matrix Type Transition matrix for the time step specified by the number of units that the time period is divided.
To analyze a historical context, the initial map should be considered the older map of the time series.
Multi-step transition matrix only applies to an ergodic matrix, i.e. a matrix that possesses eigenvalues and vectors.
The transition matrix describes a system that changes over discrete time increments, in which the value of any variable in a given time period is the sum of fixed percentages of the value of the
variables in the previous time period. The sum of fractions along the column of the transition matrix is equal to one. The diagonal Line of the transition matrix needs not to be filled in since it
models the percentage of unchangeable cells. The transition rates are passed on to the model as a fixed parameter within a given phase. For Dinamica, time step can comprise any span of time, since
the time unit is only a reference parameter externally set.
, j = 1, 2 … n
An estimation of is given below, where n is the number of states
The transition matrix is calculated for a time period. Dinamica can also be run in multiple time steps. It is necessary for this purpose to derive the multiple time step transition matrix, as this is
equivalent to the number of time steps in which the time period is divided.
H and V are Eigen values and Eigen vector matrices.
Internal Name | {"url":"https://csr.ufmg.br/dinamica/dokuwiki/doku.php?id=determine_transition_matrix","timestamp":"2024-11-05T12:22:58Z","content_type":"application/xhtml+xml","content_length":"16875","record_id":"<urn:uuid:368cce06-dcb9-4c34-a6a6-a89d63d88419>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00208.warc.gz"} |
Understanding Nordstrom Metric & Freely Falling Massive Bodies
• Thread starter c299792458
• Start date
Overall, the geodesics of massive particles in the Nordstrom metric can be described as being affected by an effective gravitational force due to the curvature of spacetime, similar to how objects
experience gravitational forces in the presence of a Newtonian gravitational field. In summary, the Nordstrom metric describes the spacetime geometry produced by a static, spherically symmetric
source of mass or energy and the geodesics of massive particles in this metric can be described as being affected by an effective gravitational force due to the curvature of spacetime.
Could somebody please explain something regarding the Nordstrom metric?
In particular, I am referring to the last part of question 3 on this sheet --
about the freely falling massive bodies.
My thoughts: The gravitational effects would be significant since for a massive body, the geodesic is timelike. There woud thus be a [tex]\eta^{\mu\delta}\partial_\delta \phi \dot x^\beta \dot x_\
beta[/tex] is not of the form [tex]f(\lambda)\dot x^\mu[/tex] so the affine parametrization does not eliminate this term containing the gravitational potential [itex]\phi[/itex].
Does this argument make any sense at all? Also, what more can I say about the geodesics of such massive particles?
Yes, your argument does make sense. The Nordstrom metric describes the spacetime geometry produced by a static, spherically symmetric source of mass or energy. When a massive body moves through this
geometry, it will experience an effective gravitational force due to the curvature of spacetime, as described by the metric. This force can be calculated from the Christoffel symbols associated with
the metric. The geodesic equation for the massive body then takes the form of a second-order differential equation, with the gravitational force represented by terms containing derivatives of the
gravitational potential.
FAQ: Understanding Nordstrom Metric & Freely Falling Massive Bodies
1. What is the Nordstrom Metric?
The Nordstrom Metric is a mathematical model used to describe the gravitational field around a massive body, such as a planet or star. It was developed by Finnish physicist Gunnar Nordstrom in the
early 20th century as an alternative to Einstein's theory of general relativity.
2. How is the Nordstrom Metric different from Einstein's theory of general relativity?
The main difference between the two is that the Nordstrom Metric assumes a constant gravitational field, while general relativity takes into account the curvature of space-time caused by massive
objects. This means that the Nordstrom Metric cannot fully explain phenomena such as gravitational lensing or the precession of Mercury's orbit.
3. Can the Nordstrom Metric be used to study freely falling massive bodies?
Yes, the Nordstrom Metric can be used to study the motion of freely falling massive bodies. By solving the equations of motion derived from the metric, we can accurately predict the trajectory of a
falling object and the gravitational force acting on it.
4. How is the Nordstrom Metric related to Newton's law of universal gravitation?
The Nordstrom Metric is a more advanced and accurate model than Newton's law of universal gravitation. While Newton's law treats gravity as an instant force acting between two objects, the Nordstrom
Metric takes into account the effects of space-time curvature and can accurately predict the behavior of gravity in more complex scenarios.
5. Is the Nordstrom Metric still used in modern scientific research?
While the Nordstrom Metric is not as widely used as Einstein's theory of general relativity, it is still studied and used in certain areas of research, such as in studies of modified gravity and
alternative theories of gravity. It also serves as a valuable comparison to other models and theories, helping scientists to better understand the complexities of the universe. | {"url":"https://www.physicsforums.com/threads/understanding-nordstrom-metric-freely-falling-massive-bodies.665901/","timestamp":"2024-11-03T19:30:49Z","content_type":"text/html","content_length":"78822","record_id":"<urn:uuid:80de34ea-8b32-498e-9741-b6a4d52b8aa8>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00480.warc.gz"} |
Delta Hedging
Delta hedging is a technique used in order to hedge, i.e.protect, the value of a portfolio against small price movements in the underlying asset. Its objective is to have a net zero delta for a
portfolio that consists of a combination of options, underlying and/or futures/forwards. If, say, a stock option covers 100 underlying shares, buying one call option with a delta of 0.10 is a proxy
for being long 10 shares. To delta hedge the position would involve selling 10 shares. For small price changes the value of the position would be unchanged irrespective of which way the price moved.
Delta changes continuously as the market moves up and down, as option implied volatilities shift and the time to expiry reduces. Consequently, delta hedging needs in theory to be conducted
dynamically, such that the hedge ratio is always at or close to zero.
Related terms | {"url":"https://ondemand.euromoney.com/discover/glossary/delta-hedging","timestamp":"2024-11-11T19:40:04Z","content_type":"text/html","content_length":"101479","record_id":"<urn:uuid:651b620f-6cdb-42ac-8366-33fd172c4df5>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00777.warc.gz"} |
German / English Strömungslehre 2 / Fluid Dynamics
The lecture offers deeper knowledge on important problems in fluid dynamics, and is divided into the following chapters:
• Conservation equations in fluid dynamics, describing the conservation of mass, momentum and energy (Navier-Stokes equations), stress-strain relations, and thermal and caloric equations of state
• Similarity theory
• Creeping flows
• Potential flows
• Boundary layer theory
• Introduction to turbulent flows
• One-dimensional gas dynamics
The students should be able to analyse and mathematically describe complex theoretical or experimental problems in fluid dynamics, and for simpler problems also perform calculations.
English Gasdynamics
The lecture gives an introduction to thermodynamics and fluid mechanics of compressible media, in particular gases. The structure of the lecture is:
• Introductory repetition of kontinuummechanics fundamentals and a short introduction to the kinetic theory
• One-dimensional streamtube theory for steady nozzle flow, stationary shocks and the compressible, viscous flow
• One-dimensional streamtube theory for linear (acoustic) and non-linear wave propagation, the shocktube (aka the Riemann Problem), and the analogy to the shallow-water flow
• Gas dynamics of combustion: deflagration and detonation
• Introduction to analytical and numerical solution methods
The students will be able to understand the effects and phenomena of compressible flows and to identify them in technical systems. The will be able to perform calculations of steady and transient,
one-dimensional flow phenomena and to evaluate the implications for plant or device design. The introduction to numerical solution methods gives a very first foundation for application of specialized
simulation software.
English Turbulent Flows
The lecture gives an introduction to the fluid mechanics of viscous turbulent fluids. Flows can be laminar or turbulent. Laminar flows can be modeled exactly. However, turbulent flows, which are
relevant to almost all technical applications, can only be approximated due to their stochastic nature. The lecture analyses the structure of turbulent flows, and built on this highlights the
treatment of the most important approaches for the calculation of turbulent flows.
Rough outline:
• Origin of turbulence
• Statistical assessment of turbulence
• Structure of turbulent flows
• Simulation of turbulence - LES and DNS
• Reynolds-averaged equations
• Approaches for turbulence modelling
• Compressible turbulent flows
Understanding of modern methods to evaluate the effects of turbulence
German / English Strömungslehre 1 / Fluidmechanics
The lecture gives an introduction to the mechanics of fluids and is divided into the following chapters:
• static of fluids
• kinematics of fluids
• streamtube theory of incompressible fluids
• derivation of conservation equations for mass and energy
• Bernoulli equation
• energy equation with external energy and with friction
• momentum theorem
• angular momentum theorem
• streamtube theory of compressible fluids (introduction to gasdynamics)
English Numerics & Flow Simulation
Voraussetzung "Fluiddynamik" und "Strömungsmechanik"
The course aims to provide a detailed understanding of numerical techniques that are used for fluid flow simulation and to enable the students to understand the strengths and shortcomings of these
methods. The first half of the course will focus on numerical techniques and the discretisation of the Navier-Stokes equations; the respective tutorials will apply Matlab to develop a simplified (1D)
CFD tool. The second half of the course will teach the basics of the OpenFOAM software and apply this program to solve the flow in canonical geometries.
The course material can be accessed via moodle, for which you need a password. This is the birth date of John von Neumann. The birthday must be formatted like "DDMMYYYY".
The course will cover the following topics:
• Interpolation, numerical integration and differentiation, discretisation of convective and diffusive fluxes, advancing/integration in time, pressure-velocity coupling, 3D-CFD, Reynolds averaged
simulations, Large-Eddy Simulation
• Introduction to concepts of OpenFOAM, basics of grid generation, setting up a simple flow simulation, programing a custom solver in OpenFOAM
Upon successful completion of the course, students will have obtained the following skills and knowledge:
• Knowledge of schemes for solving the partial differential equations of fluid mechanics
• Knowledge of the Finite Volume Method
• Knowledge of terms and abbreviations of computational fluid mechanics
• Knowledge and an understanding of discretization schemes for convective and diffusive fluxes
• Knowledge and an understanding of the properties of the above schemes
• Knowledge about schemes for pressure-velocity coupling in incompressible descriptions
• Ability to implement simplified CFD codes in Matlab
• Basic skill set to use 3D CFD programs for solving fluid mechanical problems
• The ability to apply OpenFOAM to solve three-dimensional flow problems
• Knowlege and an understanding of the limitations of CFD approaches, models and numerical schemes
• An understanding of the sources and properties of numerical error (Dissipation/Dispersion)
In the courses Strömungslehre 1 / fluid mechanics as well as in Strömungslehre 2 / fluid dynamics a written exam is taken.
In the courses Numerics & Flow Simulation, Turbulent Flows and Gas Dynamics an oral exam is taken.
Registration for all exams is done via the responsible examination office within the given deadlines.
The appointments for the oral exams are made directly with us at the end of the lecture period.
Further details on examination dates, registration deadlines and contact addresses can be found here.
Should you have questions regarding lectures, exercises and tutorials, please send a message to the corresponding e-mail address below or contact directly your course instructor: | {"url":"https://www.uni-due.de/empi/cfd/en/teaching.php","timestamp":"2024-11-06T15:08:38Z","content_type":"text/html","content_length":"36448","record_id":"<urn:uuid:bdfca58c-c4a3-4fe7-87e2-9b8b0dd6155a>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00613.warc.gz"} |
non-local correlated hopping term expressed as MPO
For a Hubbard model with correlation hopping, the hopping term is
H_0=-J\sum_{<i,j>}a_i^{\dagger} F_0(\hat{n}_i-\hat{n}_j)a_j
here, @@J_0@@ is first-order Bessel function. SInce @@F_0@@ is non-linear, it is impossible to express @@ F_0 @@ as a product of two local operators on sites i and j, i.e.
F_0(\hat{n}_i-\hat{n}_j) ?= O_i \cdot O_j
Then in this case, how can we perform DMRG as usual using ITensor ?
Thanks, but I don't have a good idea about this. What is Omega? What is F0? Could you provide a link to a reference since I haven't heard of correlation hopping before. Thanks - | {"url":"http://itensor.org/support/977/non-local-correlated-hopping-term-expressed-as-mpo?show=982","timestamp":"2024-11-13T08:36:50Z","content_type":"text/html","content_length":"23734","record_id":"<urn:uuid:482f73e8-cf2f-4dd3-8f36-129043fb14a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00292.warc.gz"} |
6.6 Interpreting and Using Z-Scores
Course Outline
• segmentGetting Started (Don't Skip This Part)
• segmentStatistics and Data Science: A Modeling Approach
• segmentPART I: EXPLORING VARIATION
• segmentChapter 1 - Welcome to Statistics: A Modeling Approach
• segmentChapter 2 - Understanding Data
• segmentChapter 3 - Examining Distributions
• segmentChapter 4 - Explaining Variation
• segmentPART II: MODELING VARIATION
• segmentChapter 5 - A Simple Model
• segmentChapter 6 - Quantifying Error
□ 6.6 Interpreting and Using Z-Scores
• segmentChapter 7 - Adding an Explanatory Variable to the Model
• segmentChapter 8 - Digging Deeper into Group Models
• segmentChapter 9 - Models with a Quantitative Explanatory Variable
• segmentPART III: EVALUATING MODELS
• segmentChapter 10 - The Logic of Inference
• segmentChapter 11 - Model Comparison with F
• segmentChapter 12 - Parameter Estimation and Confidence Intervals
• segmentFinishing Up (Don't Skip This Part!)
• segmentResources
list High School / Advanced Statistics and Data Science I (ABC)
6.6 Interpreting and Using Z-Scores
How Z-Scores Are Different From Standard Deviation
Standard deviation (SD) is roughly the average deviation of all scores from the mean. It can be seen as an indicator of the spread of the distribution. A z-score uses SD as a sort of ruler for
measuring how far an individual score is above or below the mean.
A z-score tells you how many standard deviations a score is from the mean of its distribution, but doesn’t tell you what the standard deviation is (or what the mean is). Another way to think about it
is that a z-score is a way of comparing a deviation of a score (the numerator) to the standard deviation of the distribution (the denominator).
Let’s use z-scores to help us make sense of our Thumb data. Calculate the z-score for a 65.1 mm thumb.
require(coursekata) # this saves the mean and standard deviation of Thumb mean <- mean(Fingers$Thumb) sd <- sd(Fingers$Thumb) # write code to calculate the z-score for a 65.1 mm Thumb mean <- mean
(Fingers$Thumb) sd <- sd(Fingers$Thumb) (65.1 - mean) / sd ex() %>% { check_output_expr(., "(65.1 - mean) / sd") }
CK Code: B2_Code_Using_01
A single z-score tells us how many standard deviations away this particular 65.1 mm thumb is from the mean. Because the standard deviation is roughly the average distance of all scores from the mean,
it is likely that most scores are clustered between one standard deviation above and one standard deviation below the mean. It is less likely to find scores that are two or three standard deviations
away from the mean. Z-scores give us a way to characterize scores in a bit finer way than just bigger or smaller than the mean.
Using Z-Scores to Compare Scores From Different Distributions
One more use for the z-score is to compare scores that come from different distributions, even if the variables are measured on different scales.
Here’s the distribution of scores for all players of the video game Kargle again. We know that the distribution is roughly normal, the mean score is 35,000, and the standard deviation is 5,000.
Her z-score is +2. Wow, two standard deviations from the mean! Not a lot of scores are way up there.
Now let’s say you have another friend who doesn’t play Kargle at all. She plays a similar game, though—Spargle! Spargle may be similar, but it has a completely different scoring system. Although the
scores on Spargle are roughly normally distributed, their mean is 50, and the standard deviation is 5. This other friend has a high score of 65 on Spargle.
Now: what if we want to know which friend, in general, is a better gamer? The one who plays Kargle, or the one who plays Spargle? This is a hard question, and there are lots of ways to answer it. The
z-score provides one way.
We’ve summarized the z-scores for your two friends in the table below.
Player Player Score Game Mean Game SD Player Z-Score
Kargle Player 45,000 35,000 5,000 +2.0
Spargle Player 65 50 5 +3.0
Looking at the z-scores helps us to compare the abilities of these two players, even though they play games with different scoring systems. Based on the z-scores, we could say that the Spargle player
is a better gamer, because she scored three standard deviations above the mean, compared with only two standard deviations above the mean for the Kargle player.
Of course, nothing is really definite with such comparisons. Someone might argue that Spargle is a much easier game, and so the people who play it tend to be novices. Maybe the Kargle player is
better, because even though her z-score is lower, she is being compared to a more awesome group of gamers! | {"url":"https://coursekata.org/preview/book/f84ca125-b1d7-4288-9263-7995615e6ead/lesson/9/5","timestamp":"2024-11-05T13:50:11Z","content_type":"text/html","content_length":"93958","record_id":"<urn:uuid:d8b1f3e2-47aa-45b4-9e6e-6402e6bfc93a>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00478.warc.gz"} |
Every GMAT Geometry Formula You Need to Know • PrepScholar GMAT
If you’re like me, you probably spent a lot of time in high school memorizing the difference between sine and cosine and sighing over long, multi-step proofs, only to forget all of this hard-earned
knowledge the second that classes dismissed for break.
If you’ve forgotten a lot of your high school geometry rules or are just in need of a refresher before taking the GMAT, then you’ve found the right article. In this article, I’ll be giving you a
comprehensive overview of GMAT geometry.
First, I’ll talk about what and how much geometry is actually on the GMAT. Next, I’ll give you an overview of the most important GMAT geometry formulas and rules you need to know. Then, I’ll show you
four geometry sample questions and explain how to solve them. Finally, I’ll talk about how to study for the geometry you’ll encounter on the GMAT and give you tips for acing test day.
GMAT Geometry: What to Expect
If you feel like you’ve forgotten a lot of the geometry that you learned in high school, don’t worry. The GMAT only covers a fraction of the geometry that you probably studied in high school. In the
next section, I’ll talk about the geometry concepts that you’ll actually find on the GMAT.
You’ll find geometry concepts in both data sufficiency and problem-solving questions. Geometry questions make up just under a quarter of all questions on the GMAT quant section. As with all GMAT
quant questions, you won’t just need to know how to apply geometry principles in isolation. You’ll need to know how to combine your geometry knowledge with knowledge of other concepts (like number
properties, for instance) to get at the correct answer. I’ll talk more about what this actually means when I go over some geometry sample questions.
Not sure how or what to study? Confused by how to improve your score in the shortest time possible? We've created the only Online GMAT Prep Program that identifies your strengths and weaknesses,
customizes a study plan, coaches you through lessons and quizzes, and adapts your study plan as you improve.
We believe PrepScholar GMAT is the best GMAT prep program available, especially if you find it hard to organize your study schedule and don't want to spend a ton of money on the other companies'
one-size-fits-all study plans.
As I mentioned before, the GMAT only covers a fraction of the geometry that you learned in high school. As with the rest of the content on the GMAT Quant section, you’ll only need to know how to
apply high school geometry concepts, which may be a relief to some test-takers.
Unfortunately, unlike some other standardized tests (like the SAT), the GMAT doesn’t provide any formulas for you. You’ll have to memorize all the formulas and rules you’ll need to know for the test.
In the next section, I’ll talk to you about the most important rules and formulas you’ll need to know to answer geometry problem solving and data sufficiency questions.
The Most Important GMAT Geometry Formulas and Rules to Know
The good news about GMAT geometry is that you don’t need to brush up on a whole bunch of topics in order to do well. The bad news about GMAT geometry is that you’ll have to memorize all the rules and
formulas you need to know for the test, because none will be provided to you on test day. You also can’t bring in any aids to help you with the exam.
In this section, I’ll talk about the major GMAT geometry formulas and rules that you should study and memorize as you’re preparing for the exam.
Lines and Angles
• A line is a one-dimensional abstraction that goes on forever.
• For any two points, there is one straight line (only one!) that passes through them.
• A line section is a segment of a straight line that has two endpoints. The midpoint is the point that divides the line segment into two equal parts.
• Two lines are parallel if they lie in the same plane and never intersect. Two lines are perpendicular if they intersect at a 90° angle.
• An angle is made when two lines intersect at a point. This point is called the vertex of the angle.
• Angles are measured in degrees (°).
• An acute angle is an angle whose degree measure is less then 90°.
• A right angle’s degree measure is exactly 90°.
• An obtuse angle’s degree measure greater than 90°.
• A straight angle’s degree measure is 180°.
• The sum of the measures of angles on a straight line is 180°.
• The sum of the measures of the angles around a point (which make a circle) is 360°.
• Two angles are supplementary if their sums make a straight angle.
• Two angles are complementary if their sums make a right angle.
• Vertical angles are opposite angles formed by two intersecting line segments.
• A line or a segment bisects an angle if it splits the angle into two, smaller equal angles.
• Vertical angles are a pair of opposite angles formed by intersecting line angles. The two angles in a pair of vertical angles have the same degree measure.
• A triangle is a closed figure with three angles and three straight sides.
• The sum of the interior angles of a triangle is 180°.
• Each interior angle is supplementary to an adjacent exterior angle. Together, they equal 180°.
• The formula for finding the area of a triangle is $½bh$.
• An isosceles triangle has two sides of equal length.
• An equilateral triangle has three equal sides and three angles of 60°.
• There are two kinds of special right triangles:
□ Isosceles right triangles have a side relationship of 1:1:$√2$.
□ 30°60°90° triangles have a side relationship of 1:$√3$:2.
• A right triangle has one 90° interior angle. The side opposite the right angle is called the hypotenuse and it’s the longest side of the triangle.
• Pythagorean Theorem for finding side lengths of a right triangle: $a^2 + b^2 = c^2$
• Two triangles are similar if their corresponding angles have the same degree measure.
• Two triangles are congruent if corresponding angles have the same measure and corresponding sides have the same length.
• The diameter of a circle is a line segment that connects two points on the circle and passes through the center of the circle.
• The radius is a line segment from the center of the circle to any point on it.
• A circle’s central angle is formed by two radii.
• The distance around the circle is called circumference:
• An arc is a part of the circumference of a circle.
□ $\Length = (n/360°)C$, where $n$ is the measurement of the central angle of the circle portion in degrees.
• The area of a circle is found with the formula $A = πr^2$.
• A polygon is a closed figure that has straight line segments as its sides.
• The perimeter of a polygon is the distance around the polygon (the sum of the length of all its sides).
• The sum of the four interior angles of a quadrilateral is 360°.
• Area of a rectangle: $l$$w$
• Area of a parallelogram: $b$$h$
• Area of a trapezoid: $1/2(a + b)h$
• A cylinder is a solid whose horizontal cross section is a circle.
• Volume of a cylinder: $Bh$, where $B$ is the area of the base.
• Area of the base of a cylinder: ?r[2] (because, remember, a cylinder has a circular cross section)
• A cube is a rectangular solid where all the faces are squares.
□ Volume of a cube: $Bh$, where $B$ is the area of the base.
• A rectangular solid is a solid with six rectangular faces.
□ Volume of a rectangular solid: $lwh$
Want to improve your GMAT score by 60 points?
We have the industry's leading GMAT prep program. Built by Harvard, MIT, Stanford, and Wharton alumni and GMAT 99th percentile scorers, the program learns your strengths and weaknesses and customizes
a curriculum so you get the most effective prep possible.
Coordinate Geometry
• The slope of a line tells you how steeply that line goes up or down the coordinate plane.
□ $slope$ = $rise$/$run$
□ $slope = change in $y$ / change in $x$
• The rise is the difference between the $y$-coordinate values of two points on the line; the run is the difference between the x-coordinate values of two points on the line.
• You can also find the slope of a line using the slope-intercept equation, which is $y = mx + b$, where the slope is $m$ and the $b$ is the value of the $y$-intercept.
• Perpendicular lines have slopes that are negative reciprocals of one another.
• To determine the distance between any two points on a coordinate plane, you can use the Pythagorean theorem.
4 Tips for GMAT Geometry Questions
Even the most prepared test-takers can feel a lot of anxiety on test day. Follow these tips to boost your score and help you work your way through tricky GMAT geometry questions.
#1: Use What You Know
For all GMAT geometry questions, start by identifying what you know and what you need to find out. Use the information in the question and in any diagrams to build up your understanding of a figure.
For instance, if you know that the measure of two different angles in a triangle are 60 degrees and 80 degrees, respectively, you can use what you know to figure out the measure of the third angle.
The more information you have, the more you’ll be able to figure out.
#2: Look for Connections on Multiple Figure Questions
If there is more than one recognizable shape in a diagram, there is a connection between them. Look for what one of the figures tells you about the other. Perhaps the diagonal of a square is the same
as the radius of a circle. Or the height of one triangle is the hypotenuse of another. Whatever the connection, it’s probably the key to answering the question.
#3: Don’t Assume That Drawings Are To Scale
You can’t assume that diagrams on the GMAT are to-scale. If you’re assuming a shape is a square and it’s actually a rectangle, you can make big mistakes in your calculations. Only use the information
given to you on the diagram or in the question itself. Don’t ever assume anything that you can’t reason out with cold, hard math.
#4: Make Your Own Diagram
If you’re solving a question that involves a shape, but the test doesn’t give you a diagram, make your own. Making your own diagram will help you better visualize a question. You can also re-draw a
diagram on your scrap paper even if the test provides you with a diagram to view. Sometimes, re-drawing a diagram will help you get a better understanding of the figure so that you can more easily
solve the problem.
GMAT Geometry Practice Questions
One of the most important parts of preparing for the GMAT is to practice solving real GMAT questions. Solving real GMAT geometry questions helps you prepare for the content that you’ll actually see
on the test. In this section, I’ll walk you through four real GMAT sample questions that use geometry concepts: two problem-solving questions and two data sufficiency questions.
Problem Solving Sample Question 1
A rectangular floor that measures 8 meters by 10 meters is to be covered with carpet squares that each measure 2 meters by 2 meters. If the carpet squares cost $12 apiece, what is the total cost for
the number of carpet squares needed to cover the floor?
1. $200
2. $240
3. $480
4. $960
5. $1920
To start, since this problem doesn’t provide a diagram, we want to draw our own on scrap paper. Drawing your own diagram helps you better visualize the problem. So, draw a rectangle and label the
sides “8 m” and “10 m,” since we know that from the problem.
Next, let’s take a step back and think about what the question is asking us. It’s asking to figure out the cost of covering a floor in carpet squares. When you’re covering a floor in carpet squares,
you want to cover the entire area of the floor. So, our next step is to find the area.
We know that the formula for area of a rectangle is $a = lw$. Let’s solve that using the information we have. $A = (8)(10)$. The area of this rectangle is 80 $m^2$.
Now, we need to figure out how much area each carpet square covers. The formula for finding the area of a square is also $lw$, so let’s go ahead and do that. $Area = (2m)(2m)$. The area covered by
each carpet square is 4$m^2$.
To find the number of carpet squares needed to cover the floor, we’ll divide the total area of the floor by the area of each individual carpet square. $80 m^2/ 4 m^2 = 20$ total carpet squares needed
to cover the floor.
Want to Identify YOUR GMAT Strengths and Weaknesses?
Our proprietary GMAT Diagnostic Assessment creates a customized study plan for you that takes you from registration all the way to test day! It is included with every account and proven to
significantly maximize your score.
Get your personalized assessment as part of your 5 day risk-free trial now:
The cost of each carpet square is 12, so for our final step, we’ll multiply 20 (number of carpet squares needed) by 12 (cost per carpet square) to get a total of $240.
The correct answer is B.
Problem Solving Sample Question 2
As always, let’s start by figuring out what this question’s asking us. It’s asking us to compare the distance Mary walked to the distance Ted walked. In order to do that, we need to first figure out
how far they actually walked.
It’s pretty easy to figure out how far Mary walked. We can just add 8 + 6. Mary walked 14 miles.
It’s a little trickier to figure out how far Ted walked. Notice that the diagram is in the shape of a right triangle. That tells us that we can use the Pythagorean theorem to find the length of Ted’s
walk, which is really just the missing side of this triangle. Since Ted’s side is across from the right angle, we know that it’s the hypotenuse. Therefore, we can plug in our sides pretty easily. $8^
2$[ ]+$6^2$ = $PR^2$ or $64$ + $36$ = $PR^2$, or $100 = $PR^2$. We can then find the square root of 100, which is 10. So, $PR = 10mi$.
Now, we know that Mary walked 14 miles and Ted walked 10 miles. Therefore, the distance Mary walked exceeded the distance Ted walked by 4 miles ($14 – 10 = 4$). 4 is 40% of 10, so the correct answer
is B. Mary exceeded the distance Ted walked by 40%.
Data Sufficiency Sample Question 1
1. The measure of BDC is 60°.
2. The degree measure of BAC is less than the degree measure of $\angle ∠ {BCD}$.
1. Statement (1) ALONE is sufficient, but statement (2) alone is not sufficient.
2. Statement (2) ALONE is sufficient, but statement (1) alone is not sufficient.
3. BOTH statements TOGETHER are sufficient, but NEITHER statement ALONE is sufficient.
4. EACH statement ALONE is sufficient.
5. Statements (1) and (2) TOGETHER are NOT sufficient.
This question’s asking us to determine measure of an interior angle of a triangle. For data sufficiency questions, we always want to address each statement separately FIRST. Let’s begin with
statement (1).
Statement (1) states that angle BDC measures 60 degrees. Since we know that $\angle ∠ {BDC}$ is on a straight line, we know that the angle adjacent to it ($\angle ∠ {BDA}$) can be added to $\angle ∠
{BDC}$to equal 180°. So, we can find the measure of angle BDA by using the equation: $180 – 60$ = $\angle ∠ {BDA}$. Therefore, we know the measure of $\angle ∠ {BDA}$ is 120°.
Next, we know that all the angles inside a triangle add up to 180°. Since we now know the measure of angle BDA (120) and the measure of $\angle ∠ {ABD}$ (20), we can find the third angle in that
triangle by using the equation 180 – (20 + 120) = $\angle ∠ {BAC}$. So, statement (1) is sufficient. We now can eliminate answer B and answer E.
Now, let’s move on to statement (2). We want to forget everything we know about statement (1) at first and address statement (2) by itself.
The statement tells us that the degree measure of $\angle ∠ {BAC}$ is less than the degree measure of $\angle ∠ {BCD}$. However, we don’t have enough information to figure out what the measure of
either angle actually is. So, statement (2) is not sufficient.
The correct answer then is A; statement (1) alone is sufficient.
Data Sufficiency Sample Question 2
1. $x = y = 1$
2. $w = 2$
1. Statement (1) ALONE is sufficient, but statement (2) alone is not sufficient.
2. Statement (2) ALONE is sufficient, but statement (1) alone is not sufficient.
3. BOTH statements TOGETHER are sufficient, but NEITHER statement ALONE is sufficient.
4. EACH statement ALONE is sufficient.
5. Statements (1) and (2) TOGETHER are NOT sufficient.
Remember, when solving data sufficiency questions, you want to take each statement by itself first. Also keep in mind that you can’t assume that any diagrams given are to scale. You might be tempted
to say that the triangle pictured is an isosceles right triangle, but you can’t assume that. Keeping all this in mind, let’s look at statement (1).
Statement (1) says that $x = y = 1$. That means that both $x$ and $y$ = 1. Can we use that to find the value of z?
Well, we know that the value of z is equal to 1 + the value of the base of the right triangle. There’s no information in the problem to tell us what the value of the base of the right triangle is.
So, the value of the base can vary, so the value of $z$ can vary.
That means that statement (1) isn’t sufficient by itself.
Now, let’s look at statement (2) by itself first. Statement (2) says $w = 2$. However, even though we know that $w = 2$, we don’t know anything about the rest of the sides. That means all the other
sides can vary, so z can vary as well. Statement (2) isn’t sufficient by itself either.
Now, let’s look at the two statements together.
Taking (1) and (2) together, we know that $z = y + (z – y)$ [the base of the triangle]. Or, we can say that $z = 1y + (z – 1)$.
The value of $z – 1$ can be determined by applying the Pythagorean theorem to the triangle. We know that the hypotenuse is 2 (from statement (2): $w = 2$) and we know that one side = 1 (from $x = 1$)
and one side equals $z – 1$.
We can then write the equation $1^2 + (z – 1)^2 = 2^2$. Since we only have one variable in the equation, we can solve through for z.
You don’t need to solve a data sufficiency question. You only need to know that you can! So since we know we can solve the question using both statements, the correct answer is C. Both statements
together are sufficient.
How to Study for Geometry on the GMAT
Studying for the GMAT may seem overwhelming, because there’s a lot of content to review. The good news is that executing a well-thought-out study plan will help you achieve your goals. Here are some
tips geometry for the GMAT.
#1: Use High Quality Practice Materials
The best way to prepare for the GMAT is by using real GMAT geometry questions in your prep. Real GMAT geometry questions will simulate the GMAT’s style and content. For instance, you’ll have to use
more than one skill in the question, or you’ll get practice using your geometry skills on data sufficiency questions, which are unique to the GMAT. Using resources like GMATPrep or the GMAT Official
Guide will give you access to real, retired GMAT questions.
As you might’ve noticed from our practice questions, you’ll rarely see a straightforward question on the GMAT that just asks you to use your geometry skills. You’ll likely have to combine your
knowledge of geometry with your knowledge of arithmetic or number properties or ratios… or all of the above! Practicing GMAT-style questions (real, retired GMAT questions if you can get them) will
give you practice at using multiple skills in one question.
#2: Memorize Important Formulas
As I mentioned before, you won’t get to use a formula cheat sheet on the GMAT. You’ll have memorize all the formulas you expect to need on test day. Using flashcards is a great way to build your
knowledge so that you can quickly recall and use important formulas on test day.
What’s Next?
You’ve read all about the formulas you need to know for GMAT geometry. Are you ready to master them? Using flashcards can be a great way to boost your memory. Before you get started with flashcards,
check out our total guide to GMAT flashcards to learn about the best GMAT flashcards out there and the best way to study with flashcards.
Feel like you’ve mastered GMAT geometry? Looking for a new challenge on your quest to total GMAT quant domination? Check out our guide to GMAT probability to conquer a new type of math of the GMAT.
Are you totally confused by the data sufficiency practice questions? If so, don’t worry. Data sufficiency questions may seem strange, but our total guide to data sufficiency on the GMAT will break
down everything you need to know to master this question type.
Was this helpful? Sign up for FREE GMAT and MBA guides! | {"url":"https://gmat.psblogs.com/gmat-geometry-questions-formulas/","timestamp":"2024-11-01T20:55:15Z","content_type":"text/html","content_length":"104321","record_id":"<urn:uuid:a0533acd-e86f-441a-aaa4-f1b16a625ad8>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00100.warc.gz"} |
Linear Regression Channel
Great for ranging as well as trending conditions
The Linear Regression Channel indicator (LRC)
The LRC is designed as a trend-identifying reversion to the mean indicator
The LRC is a dynamic indicator, it is not static. As price action changes the linear regression lines will automatically adjust and the channel will become up trending, down trending, or sideways
depending on current market conditions. It can be applied to any market on any time frame, and is a valuable tool for swing trading as well as intraday.
The LRC – A Truly Universal Indicator
The linear regression channel uses the financial instruments price and time period to determine its system parameters. It works in real-time on your chart, continuously adapting to market movements.
Applying it to price movement can accurately predict future price movement and trend reversals.
Why do I use the Linear Regression Channel?
Because ‘smart money’ uses it! Smart money and large institutions utilise linear regression channels because even they recognise that the LRC indicator tends to be predictive in nature (in fact it’s
one of the only indicators they WILL use). The Smart Money also know that the LRC is one of the RARE indicators that work in both ranging AND trending conditions. They are right too! (the LRC is the
ONLY indicator I will use when price is ranging, and it works extremely well)
• Trend-identifying reversion to the mean indicator
• Works in both trending and ranging conditions
• Predictive in nature, smart money use this indicator
• Plots a channel & rotates with price
• The channel width is determined by the level of price volatility
• The length of the channel is adjustable
• Displays signal arrows for both deviation 1 and 2 outer lines
• Can be used on any instrument and timeframe
• User manual – 12 page PDF of comprehensive step by step Instructions with pics
Linear Regression Channel Indicator package
Not familiar with “Smart Money” concepts? Want to learn?
All of my products come with 25+ hours of Smart Money video courses (in full HD) by some of my mentors, a VERY valuable education!
$29 – Purchase the Linear Regression Channel indicator package via Paypal
Instant download after payment
Most retail traders focus on trend indicators and oscillators…
If you ask retail traders what their favourite indicator is, the answers always look the same. You’ll hear all about the RSI (Relative Strength Index), Stochastics and the MACD (Moving Average
Convergence Divergence). They’re the most popular ones.
But technical indicators today go in various other directions and can be much more complicated than a simple RSI calculation.
The linear regression channel is one of the lesser known technical indicators that plot its values based on mathematical formulas (at its heart, linear regression is a method of estimating the
undefined relationship between price and time).
The Linear Regression of Time and Price
While there are many other technical indicators such as moving averages or oscillators, the linear regression channel actually falls into an advanced mode of calculation between price and time.
Linear regression is a statistical tool used to predict the future from past data. It is used to determine when prices are overextended.
Linear regression channel trading strategies offer one of the best risk-reward ratios. No one can say a stop loss won’t be hit, but most of the times it won’t be, and moreover, when it doesn’t the
resulting trade offers a larger than average reward.
Use the LRC and abide by the rules defined within this article on any financial instrument and timeframe you like, and I guarantee you will be surprised by its universal nature!
The standard MT4 LRC vs my Custom Coded LRC
Yes there is a standard LRC in MT4, well sort of.… its ‘kinda’ like a linear regression channel in the way that McDonalds is ‘kinda’ like a restaurant
The standard LRC can be found in Metatrader 4… (Go to the top ‘Insert’ tab / Channels / Linear Regression)
Standard MT4 LRC
• Static on chart – manual adjust
• Single outside lines
• Non-adjustable Deviation settings
• No signals at all
But my custom coded LRC offers many more features:
Master Trader Secrets custom LRC
• Dynamic (Auto updating)
• Double outside lines
• Adjustable channel length
• Adjustable Deviation settings
• Deviation 1 and 2 signal arrows
I have coded my LRC to have 2 separate Deviation lines (both top and bottom of the middle linear regression line) and both are adjustable.
This basically means:
• The wider you set the channels the higher your tolerance threshold (less trading signals, but more reliable signals) or…
• The narrower you set the channels, the lower your tolerance threshold (more trading signals, but less reliable signals).
The standard MT4 LRC indicator has only one deviation line that is above and below the middle linear regression line, and its not adjustable at all. Yup, basically its useless and it sucks.
LRC Construction
The linear regression channel is constructed from the inside out, it begins with a centre line (the equilibrium or ‘linear regression line’) and from that it plots the outer lines. Four additional
lines are then drawn, both top and bottom (both outer channels having the same distance from the equilibrium line, with their corresponding distances being based on past price volatility).
The outer lines are created at 1 and 2 standard deviations outside of the median price. The ‘inner’ channel lines contain 68% of all prices between them (if 1 standard deviation is used) or 95% of
all prices (if 2 standard deviations are used).
• If you set the inner deviation line at 1, you should see price stay within this boundary 68% of the time
• If you set the outer deviation line at 2, you should see price stay within this boundary 95% of the time (this means that price only breaches this line 5% of the time)….
Effectively this means:
• When price enters INSIDE the top or bottom deviation channel, price will reverse 68% of the time
• When price goes OUTSIDE the top or bottom deviation channel, price will reverse 95% of the time!
Imagine if your trading strategy was to trade only when price goes OUTSIDE the top or bottom deviation channel. Price will then reverse 95% of the time….. Hmm, do you recognise any opportunities
here? Lol
The LRC consists of three parts:
1. The Linear Regression Line
2. The Upper Deviation Channel
3. The Lower Deviation Channel
1 – The Linear Regression Line
The Linear Regression Line acts as the midpoint of the trend. Think of it as the equilibrium price, or the baseline, where any move above or below the linear regression line indicates overzealous
buyers or sellers.
2 – The Upper deviation Channel
The Upper deviation Channel is 2 lines that run parallel to the Linear Regression Line. It marks the top of the trend.
• The inner line represents one standard deviation above the Linear Regression Line and contains 68% of price data
• The outer line is 2 standard deviations above the Linear Regression Line and contains 95% of price data
The use of standard deviation gives traders an idea as to when prices are becoming overbought or oversold, relative to the long term trend.
When price ventures into or outside the boundaries of the upper deviation channel, these extremes are routinely corrected and you can expect price to move back towards the Linear Regression Line
(which means that these price breaches outside the boundaries are considered as selling opportunities)
3 – The Lower deviation Channel
The Lower deviation Channel is 2 lines that run parallel to the Linear Regression Line. It marks the bottom of the trend.
• The inner line represents one standard deviation below the Linear Regression Line and contains 68% of price data
• The outer line is 2 standard deviations below the Linear Regression Line and contains 95% of price data!
The use of standard deviation gives traders an idea as to when prices are becoming overbought or oversold, relative to the long term trend.
When price ventures into or outside the boundaries of the lower deviation channel, these extremes are routinely corrected and you can expect price to move back towards the Linear Regression Line
(which means that these price breaches outside the boundaries are considered as buying opportunities)
So how do we work out where these price breaches occur?
One way is to utilise the statistical concept of a normal distribution, and the accompanying measure of standard deviation. To better understand this standard deviation Forex strategy, let’s quickly
have a run through of what I mean by these terms.
A normal distribution is a probability distribution that follows a bell-shaped curve. The bell curve represents the form of the various data point occurrences. The bulk of the points normally take
place toward the middle of the bell curve, but over time, the points stray, or deviate from the population…
The highest probability density is centred around the mean (the Linear Regression Line) and is represented by the thick black line in the diagram above. An important point to note is that all normal
distributions are symmetrical. This places both the mean and the median at the exact centre of the bell curve.
Standard deviation is another statistical measure, and quantifies how scattered the values are within a data set. The larger the standard deviation, the wider the bell curve. The mathematics that
govern this curve are relatively complex. But here’s the good news: the concept that it represents is actually fairly simple.
The further we get away from the middle of the bell, the smaller the chances are of those values of X occurring. This means that the majority of values for X occur one standard deviation either side
of the mean. In fact, in a normal distribution, we would expect around 68% of the data values to occur in this range.
Two standard deviations either side of the mean cover roughly 95% of all data values. At the tails of the curve we get the outliers, and these are rarer occurrences.
Why does this matter? Well if we see a data value that is an outlier, it is a fair assumption that future values will regress back towards the mean…
Trading with the Linear Regression Channel
The Linear Regression Channel gives potential buy and sell signals based on price volatility. Trading the Linear Regression Channel involves keeping an eye on the price whenever it interacts with one
of the three lines.
Each time that the price interacts with the Upper or Lower Channel:
• You should expect to see a potential turning point on the price chart
• The goal is to fade these extremes and look for price to return to the linear regression line, and potentially to the other side of the channel
• When price closes outside of the Linear Regression Channel for long periods of time, this is often interpreted as an early signal that the past price trend may be breaking and a significant
reversal might be near
Buy Signal
• If you expect a continuation of the trend and price falls into or exceeds the lower deviation channel, this is considered a buy signal (you should wait for confirmation by waiting for the price
to move higher and close back inside the linear regression channel)
Sell Signal
• If you expect a continuation of the trend and price rises into or exceeds the upper deviation channel, this is considered a sell signal (you should wait for confirmation by waiting for the price
to move lower and close back inside the linear regression channel)
Stop Loss on a Linear Regression Trade
You should always use a stop loss when trading a Linear Regression based strategy.
• If you are trading a bullish linear regression setup, the stop loss should be placed just below the swing low created by the price bounce from the lower deviation channel
• Conversely, if you are trading a bearish linear regression setup, your stop loss should be placed just above the swing high created by the price bounce from the upper deviation channel
Linear Regression Trading Strategy (Bullish):
• Enter a trade when the price bounces from the lower deviation channel (an accompanying signal arrow being a bonus) and heads back towards the mean
• Place your stop loss below the bottom created prior to the bounce
• Hold the trade until one of the following conditions is met:
□ Price reaches the linear regression line (mean) but can go no further and starts to reverse
□ Price reaches the top deviation channel
• Price completely breaks the top deviation channel and a trend opposing signal arrows appears against you
(Simply reverse the procedure for a bearish trade strategy)
A bullish trade example using four points of reference
• 1 is the entry point. This only becomes an entry point when the price has traded into the lower deviation channel and has started to move back inside the one standard deviation line (you don’t
simply enter without confirming that price has started to turn back, because it may go further). Instead, we want the outlying event to have taken place and the price to indicate it is reverting
back to the mean. A move back within the first standard deviation confirms the regression.
• 2 provides a stop-loss point in case price continues to move against you
• 3 and 4 will be the two price targets you can set for profitable trade exits:
□ Our first expectation with the trade was for price to revert to the mean, so this is the first exit option
□ The second target works in the assumption of a continuing trend, so the more aggressive trade exit target will be set at the upper deviation channel
And that’s trading with the linear regression channel. Entering and exiting the market where statistically you know price will react. Go trade it!
Linear Regression Channel Indicator package
Not familiar with “Smart Money” concepts? Want to learn?
All of my products come with 25+ hours of Smart Money video courses (in full HD) by some of my mentors, a VERY valuable education!
$29 – Purchase the Linear Regression Channel indicator package via Paypal
Instant download after payment | {"url":"https://mastertradersecrets.com/linear-regression-channel-indicator/","timestamp":"2024-11-12T22:17:16Z","content_type":"text/html","content_length":"92919","record_id":"<urn:uuid:4854c041-0554-41d0-84ef-9a6de484d8fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00099.warc.gz"} |
Book of Proof
Richard H. Hammack
This book is an introduction to the language and standard proof methods of mathematics. It is a bridge from the computational courses (such as calculus or differential equations) that students
typically encounter in their first year of college to a more abstract outlook. It lays a foundation for more theoretical courses such as topology, analysis and abstract algebra. Although it may be
more meaningful to the student who has had some calculus, there is really no prerequisite other than a measure of mathematical maturity. Topics include sets, logic, counting, methods of conditional
and non-conditional proof, disproof, induction, relations, functions and infinite cardinality.
Bibliographic information | {"url":"https://books.google.com/books/about/Book_of_Proof.html?id=vt2cnQEACAAJ","timestamp":"2024-11-04T04:31:22Z","content_type":"text/html","content_length":"24227","record_id":"<urn:uuid:c94431ac-308f-4472-8645-874cf5e3076b>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00893.warc.gz"} |
[Seminar 2020.11.25] On the ordering of the Markov numbers (AM 10:00~12:00)
Date: November 25, 2020
Time: AM10~12
Speaker: Kyungyong Lee (University of Nebraska-Linocln)
Title : On the ordering of the Markov numbers
The Markov numbers are the positive integers that appear in the solutions of the equation x2+y2+z2=3xyz. These numbers are a classical subject in number theory and have important ramifications in
hyperbolic geometry, algebraic geometry and combinatorics. It is known that the Markov numbers can be labeled by the lattice points (q,p) in the first quadrant and below the diagonal whose
coordinates are coprime. In this paper, we consider the following question. Given two lattice points, can we say which of the associated Markov numbers is larger? A complete answer to this question
would solve the uniqueness conjecture formulated by Frobenius in 1913. Using tools from cluster algebras, we give a partial answer in terms of the slope of the line segment that connects the two
lattice points. We prove that the Markov number with the greater x-coordinate is larger than the other if the slope is at least −87 and that it is smaller than the other if the slope is at most −54.
As a special case, namely when the slope is equal to 0 or 1, we obtain a proof of two conjectures from Aigner's book "Markov's theorem and 100 years of the uniqueness conjecture". This is joint work
with Li Li, Michelle Rabideau, and Ralf Schiffler. | {"url":"https://qsms.math.snu.ac.kr/index.php?mid=board_sjXR83&listStyle=viewer&order_type=desc&l=en&sort_index=title&page=8&document_srl=883","timestamp":"2024-11-10T23:59:25Z","content_type":"text/html","content_length":"21605","record_id":"<urn:uuid:223a06a5-3653-4f99-ace3-11d173cee74c>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00049.warc.gz"} |
Sensitivity Analysis, Bayesian Calibration and Forward Propagation of Uncertainties Using quoFEM
Pedro Arduino - University of Washington
Sang-Ri Yi - SimCenter, UC Berkeley and
Aakash Bangalore Satish - SimCenter, UC Berkeley
key Words: quoFEM, OpenSees, Tapis, Python
This example makes use of the following DesignSafe resources:
Step Notebook
Sensitivity analysis
Bayessian calibration
Forward propagation
The notebooks, and required scripts, are available in the Community Data folder and can be executed without any modification. Users are invited to try this notebook and use any parts of it.
This illustrative use_case demonstrates several UQ techniques using the parameters of the PM4Sand constitutive model, a liquefaction-capable soil model in OpenSees. This complex material model is
often calibrated using a small number of experimental results which yields imperfect information about its parameters. This leads to uncertain model predictions. Quantifying such uncertainties and
inspecting the uncertainty bounds of model predictions can provide more information about the importance of each model parameter. Recognizing these uncertainties can incentivize more sophisticated
modeling and calibration techniques that can better utilize the available data from experiments to reduce these bounds and provide more robust and higher fidelity simulations.
In this use case, the amount of reduction in the uncertainty in PM4Sand parameters calibrated to Cyclic Direct Simple Shear (CyDSS) test data is inspected, and the resulting uncertainty is propagated
in an earthquake excitation simulation of a soil column. Three steps of UQ analyses, schematically shown in Fig. 1, are presented:
1. Global sensitivity analysis to get an insight into which parameters are critical in triggering liquefaction. This is an important first step to decide which parameters need to be included in the
calibration process.
2. Bayesian calibration to obtain the posterior probability distribution of the PM4Sand parameters based on the CyDSS test dataset.
3. Forward propagation to investigate how the uncertainty that remains after the Bayesian calibration (characterized by the posterior probability distribution) affects the prediction of an
earthquake response.
Fig.1 - Probabilistic calibration of soil model (step 2) with sensitivity analysis (step 1) and prediction of uncertainty in estimation of lateral sperading (step 3)
Uncertainty Quantification Using quoFEM
Accurate quantification of uncertainty requires well-established workflows that incorporate sophisticated UQ techniques with advanced simulation models and frameworks. The SimCenter quoFEM tool
streamlines this process by offering comprehensive workflows in a single tool, which can be accessed locally or remotely through a web browser using the DCV client on DesignSafe. Furthermore, users
can utilize the Jupyter Hub environment on DesignSafe to manage the same, or additional, runs via Python scripts, defining job variables and submitting jobs through the Tapis system. This allows for
seamless collaboration and efficient job management, resulting in faster and more effective UQ analysis.
In this context, the notebooks included in this use-case complement input generated by quoFEM and therefore must be considered together. Details on how to run this example using the quoFEM desktop
can be found here[]. In this document three complementary notebooks are discussed that correspond to each of the steps mentioned above. A link to each notebook is included at the beginning of each
To connect SimCenter applications and Jupyter notebooks in DesignSafe, it's essential to ensure that all required tools are accessible from both frameworks. The SimCenter's vision is well-aligned
with this concept and offers all of the necessary functionality through backend applications installed in DesignSafe that can be accessed via Tapis apps. This is schematically shown in Fig 2.
Additionally, all SimCenter workflows are stored in JSON files that represent all steps in a workflow. This file is readily accesible using a JSON parser.
Fig.2 - Running quoFEM analysis using remote computing resources at DesignSafe
In quoFEM, the workflow data is stored in a tmp.SimCenter folder that can be accessed from the quoFEM desktop, sent to an HPC, or accessed from a notebook. For the examples presented in this
document, the information included in this folder is sufficient to run all cases.
In order to facilitate the discussion of each notebook, it is helpful to first identify common aspects that are present in all workflows run from Jupyter. These include:
1. Setup Tapis App job
2. Display job workflow
3. Run Tapis job
4. Post-process results
Instructions for (1) setting up and (2) running Tapis jobs can be found here. These steps are generally applicable for launching any Tapis app from a Jupyter notebook in DesignSafe. The most
significant step in this process is determining the appropriate Tapis app to utilize. To perform uncertainty quantification within SimCenter backend applications, the following Tapis app is used:
#Select tapis-app
app_name = 'simcenter-uq-frontera'
app_id = 'simcenter-uq-frontera-4.0.0u4'
storage_id = 'designsafe.storage.default'
# Get Tapis app
app = ag.apps.get(appId=app_id)
Post-processing of results is specific to the problem being solved and can be achieved using Python to access output data stored in archived files. For this purpose it is important to identify the
location of data files. This is explained here.
Displaying a quoFEM job workflow is useful for understanding the data and steps followed in the workflow. These include: simulation tools, input variables, UQ methods used, and remote directories/
folders. An schematic of a typical quoFEM workflow is shown in Fig. 3.
Fig.3 - Elements of quoFEM workflow (only relevant elements for launching notebook from JupyterHub
To display the JSON file the IPython.display module can be used:
python # Display Workflow in JSON file import IPython.display import json jsonPath = os.getcwd()+"/tmp.SimCenter/templatedir/"+parameters["inputFile"] with open(jsonPath) as f: jsonInfo = json.load
(f) IPython.display.JSON(jsonInfo) To modify the workflow, the user can either manually change the workflow files within the tmp.SimCenter folder or regenerate the workflow using quoFEM. Regenerating
the workflow using quoFEM is the preferred approach, as the quoFEM desktop is specifically designed to facilitate workflow creation. On the other hand, Jupyter notebooks offer more flexibility in
terms of post-processing, generating plots, and manipulating data.
Step 1 – Global Sensitivity Analysis
The PM4Sand constitutive model has 24 parameters. Among them, apparent relative density Dr, shear modulus coefficient Go, and contraction rate parameter hpo, are known to be important for predicting
liquefaction responses [2]. Therefore, these three parameters theta = {Dr, Go, hpo} are considered in the UQ analyses and their prior distributions are assumed to be uniform distributions with the
ranges shown in Table 1. These prior distributions shall capture a plausible wide range that includes all possible parameter values for the target soils. The experimental data will be used to
constrain this wide range to the domain that best describes the behavior exhibited by the specimen during the experiments.
Table 1. - Prior distributions of PM4Sand parameter
Parameter Distribution Range
Dr Uniform 0.1-0.6
Go Uniform 200 - 2000
hpo Uniform 0.01 - 5
The sensitivity analysis is performed for a simulation model that reproduces the CyDSS test shown in Figs. 4 and 5. The output quantity of interest is the number of cycles until the onset of
liquefaction (denoted as Y). The onset of liquefaction is defined as the time step when the shear strain shown in Fig. 4 exceeds 3.5%. Liquefaction capacity is affected by the initial shear stress
typically characterized by the cyclic shear stress ratio (CSR; i.e., ratio of horizontal cyclic shear stress to vertical consolidation stress). In this sensitivity analysis, a CSR of 0.175 is
considered. Two variance-based global sensitivity indices are evaluated:
where theta_i is the parameter of interest ( i.e., one of the {Dr, Go, hpo} ) , theta~i denotes the other two parameters, E_X[.] and Var_X[.] denote mean and variance of function over X,
respectively, and the vertical bar denotes ‘conditional on’. The former index, called the main-effect index, quantifies how much of the variance of Y is attributed to the parameter theta_i, while the
latter index, called the total-effect index, also considers the joint contributions of theta_i and other parameters [3].
Fig.4 - Single element FE model used in sensitivity analysis and Bayesian calibration
Fig.5 - (a) simulated cyclic stress-strain curve; (b)stress path during the simulated cyclic direct simple shear test; (c) evolution of pore water pressure ratio during the simulated CyDSS test
The sensitivity analysis is performed using the algorithm in Weirs et al. (2012) through the Dakota engine that interfaces with quoFEM [3]. 2500 simulations were performed using the prior
distributions in Table 1. The resulting sensitivity is shown in Fig. 6(a) which indicates that Dr is the dominating parameter for the response Y. This is also confirmed by inspecting the scatter plot
of Fig. 6(b): Dr (horizontal axis) demonstrates a stronger influence on the output (vertical axis) compared to the influence of the other parameters shown in (c) and (d). Based on this, we can expect
that the CyDSS observations will help constrain the uncertainty in Dr, while the reduction of uncertainty in hpo and Go will be relatively limited. Additionally, different types of experiments would
be needed to better characterize those other parameters.
Fig.6 - (a) Sensitivity analysis results for the critical number of cycles given CSR = 0.172; (b)– (d) Individual input-output scatter plots
Step 2 – Bayesian Parameter Calibration
Consider now the observations of the CyDSS experiment in Table 2, that are publicly available on the DesignSafe data depot [4]. We assume that the observed count of cycles at different CSR values,
denoted as Y_i^m (i = 1,…,6), is given by the simulation model predictions and an added Gaussian noise. The latter captures various inaccuracies such as inherent uncertainty in the phenomenon, the
imperfection of our simulation model, and measurement error. Given the above assumptions, we can denote the relationship between the data and model prediction, Y_i (theta), as
where noise epsilon_i is assumed to have zero-mean and unknown variance sigma^2_{epsilon,i}. Given the six measurement values, we can use a Bayesian approach to evaluate the posterior distribution of
the parameters of PM4Sand and the unknown noise variances:
where p(∙) denotes the (joint) probability distribution, and c is the normalization constant that ensures the area under the posterior distribution is one. From Eq. (3),
p(Y^m_i | theta, sigma^2) is a Gaussian distribution with mean Y_i(theta) and variance of sigma^2. The prior distribution of theta is in Table 1. Following best practices, inverse Gamma priors with
the shape parameter alpha = 3 and scale parameter beta = 2 are introduced for the sigma^2 measurement variances [5]. The posterior sample of theta in this example is obtained using the transitional
Markov chain Monte Carlo (TMCMC) sampling technique [6] that is available in quoFEM through the UCSD-UQ engine. This is an expensive calculation that greatly benefits from the available HPC resources
at DesignSafe.
Table 2. -Cyclic direct simple shear (CyDSS) test experimental data
Cyclic Shear stress ratio (CSR) Number of cycles to onset of liquefaction
0.105 26
0.105 21
0.130 13
0.151 5
0.172 4
0.200 3
Fig.7 - Comparison of calibrated model predictions and experimental data
Figure 7 compares the experimental data with the calibrated model predictions of the load-cycle counts, while Fig. 8 shows the calibrated parameter sample from the joint posterior distribution.
Figure 8 shows that uncertainty in all variables is reduced by calibrating to the observed data, but the reduction was most apparent in Dr. This is in line with our expectations from the earlier
sensitivity analysis. The results also highlight a strong dependency between Dr and hpo, indicating that multiple combinations of Dr and hpo produce near-optimal solutions. None of these features are
captured by a deterministic estimator that results from a conventional error-minimizing optimization approach (e.g., red diamond marker shown in the same figure). It is also important to recognize
that a non-negligible amount of uncertainty remains in the parameter estimates, and this produces substantial uncertainty in the model predictions. The dark blue bounds in Fig. 7 show the level of
uncertainty in the estimated number of cycles to liquefaction, but this simulation model was prepared to reproduce the experimental setup. When the calibrated constitutive model is applied in another
simulation, the responses can exhibit different scales of uncertainties.
Fig.8 - PM4Sand model parameters sampled from the joint posterior distribution
Step 3 – Forward Propagation
A forward propagation analysis is helpful to characterize uncertainties in a simulation model. For this purpose it is good practice to run such an analysis and characterize the effect of
uncertainties on application-specific quantities of interest before practically applying these parameter values in a simulation for decision making.
The obtained samples of the soil parameters in Fig. 8 are used to predict the uncertainty in the lateral spreading response of a site subjected to an earthquake (Loma Prieta Gilroy Array #2) with
peak ground acceleration of 0.37 g. The soil column model shown in Fig. 9 is introduced in which the liquefiable layer in the middle is modeled using PM4Sand and the other parts are assumed to remain
elastic throughout the shaking.
Fig.9 - Schematic of 1D soil layer with liquefiable soil used in the forward propagation analysis.
The results of 500 simulations are shown in Fig. 10. The mean and standard deviation of the residual displacement at the surface level (6 m) are 0.24 m and 0.02 m, respectively. Depending on the
application, the uncertainty in these results can be considered reasonably low. The sample of the predictive distribution shown on the top of the vertical profile can further be utilized in
reliability and risk assessment workflows
Fig.10 - Predicted earthquake response of soil column
1. McKenna, F., et al.: NHERI-SimCenter/quoFEM: Version 3.0.0. Zenodo (2022).
2. Boulanger, R.W., Ziotopoulou, K.: PM4Sand (Version 3.1): A sand plasticity model for earthquake engineering applications. Department of Civil and Environmental Engineering, University of
California, Davis, Davis, CA, Report UCD/CGM-17/01 (2017).
3. Weirs, V.G., et al.: Sensitivity analysis techniques applied to a system of hyperbolic conservation laws. Reliab. Eng. Syst. Saf. 107, 157–170 (2012).
4. Morales, B., Humire, F., Ziotopoulou, K.: Data from: Cyclic Direct Simple Shear Testing of Ottawa F50 and F65 Sands, 1 February 2021. Distributed by Design Safe-CI Data Depot. https://doi.org/
10.17603/ds2-eahz-9466. Accessed 28 June 2021.
5. Ching, J., Chen, Y.C.: Transitional Markov chain Monte Carlo method for Bayesian model updating, model class selection, and model averaging. J. Eng. Mech. 133(7), 816–832 (2007).
6. Carlin, J.B., Vehtari, A., Stern, H.S., Rubin, D.B., Gelman, A., Dunson, D.B.: Bayesian Data Analysis, 3rd edn. Taylor & Francis, United Kingdom (2014).
Citation and Licensing | {"url":"https://designsafe-ci.org/user-guide/usecases/arduino/usecase_quoFEM/","timestamp":"2024-11-14T13:25:05Z","content_type":"text/html","content_length":"147526","record_id":"<urn:uuid:2b4c6a4b-0c9d-49b8-86c1-c5fa0839f5bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00466.warc.gz"} |
The Quantum World
It is difficult to overstate the importance of quantum theory. It is the most successful scientific theory ever, able to predict to many decimal places of accuracy; and it underpins so much of our
technology of today - and increasingly it looks like opening up some exciting technology of tomorrow. But it often comes across as difficult to understand - and if a new generation of quantum
physicists is to develop successfully, the introductory theory has to be put across to them clearly. We are going to focus on various aspects of the quantum world to highlight some of the way that
are being found to make it understandable. And we start by looking at why it is that this particular area of science is apparently fraught with such conceptual problems.
Being difficult
One of the disturbing moments for anyone trying to understand quantum theory is when they discover that the founding fathers of the subject apparently didn’t understand it either – and indeed seemed
to revel in their difficulty.
There are a range of oft-quoted statements that reappear time and again, such as Richard Feynman’s remark that:
‘I think I can safely say that no-one understands quantum mechanics…. Do not keep asking yourself, if you can possibly avoid it, “but how can it be like that?”… Nobody knows how it can be like that.’
Niels Bohr said that: ‘If someone says that he can think about quantum physics without becoming dizzy, that shows only that he has not understood anything whatever about it.’
However, this is not what normally happens in the onward progress of science. New theories explain phenomena, rather than become sources of intrinsic mystery. For example, the reason why planetary
orbits were elliptical was strange until Newton developed his theory of gravity. And within a few years, there was not only an acceptance and understanding of elliptical orbits but also of a new
model of the universe, running under laws like a machine, and laws that humans could home in on and identify. And the whole worldview of society changed as people adapted to this new picture of the
universe – this was the foundation of the Enlightenment.
Nature and nature’s laws lay hid in night:
God said, let Newton be! And all was light.
– was the epitaph that Alexander Pope proposed for Newton.
The same process of familiarity, as people adapt their perceptions to the new format, as happens in art. ‘A picture may seem extraordinarily strange to you,’ wrote Gertrude Stein, ‘and after some
time not only does it not seem strange but it is impossible to find what there was in it that was strange.’
But for quantum theory there has been no process of increasing familiarity, and no beam of Newtonian light. Eighty years on from its foundation, it remains in shadows and mist.
Why should this be? Well first of all, the way that quantum theory came into being was in some ways unusual. Some of the major pieces came into place fast, and almost out of thin air. Schrödinger’s
famous equation, for example, comes out of a few lines of elegant mathematics. Suddenly the equation is there – but then comes the equation as to what it may mean. The debate focused first on the
symbol ψ, the quantity that was doing the waving? But what was it, and what was it waving in?
Schrödinger himself did not know. After he had developed the equation, he tried to picture ψ as a kind of electron essence, that could flow as a wave and then concentrate together into a particle,
but that only worked for a single electron.
For two electrons the ψ was a feature of the combined system, depending on the locations of them both, and so could only be plotted in a space of six dimensions, three for the location of the first
electron and three for the second. Each of the electrons needs the whole of our three-dimensional space for its waves, and so a mathematical ‘configuration space’ has to be imagined for calculating
the combination of the two sets of waves. This space does not have any physical interpretation; it is purely a kind of filing-system, for the purposes of calculation.
Max Born then came up with the interpretation that ψ was in fact related to probability. ψ is a complex number, and so it has a complex conjugate ψ*. When we multiply the two, we get |ψ|², which
turns out to have a physical interpretation – its value at any particular point of space is the probability that the electron will be found there. ‘If ψ is mainly concentrated in one small stormy
area, it is practically certain that the electron is there,’ noted Sir Arthur Eddington. So ψ is interpreted as a kind of probability wave. But how can probability, which is a rather abstract and
human-centred concept, come in physical waves that travel through space? And again, in what are the waves waving in, and how?
In Eddington’s words: ‘Something unknown is doing we don’t know what.’
The sheer power of Schrödinger’s approach is awesome. How could he handle mathematically something whose essence was so ill-defined? Possible answers come from the story of his life, and particularly
his biography by Walter Moore. He had come through traumatic times in service in the First World War and the privations of post-War Vienna, blockaded by the victorious Allies. He had a deep love of
philosophy and a strong interest in Eastern texts. And the idea of something unnameable, that we cannot apprehend directly but which underlies all the material world, was a concept that he felt at
home with – sufficiently at home to build it up into a mathematical structure.
So the original uncertainties about ψ should not make us feel uneasy about the equation. It is true that there is something mysterious about it, but it’s a known type of unknown – or at least it was
known enough to Schrödinger for him to be able to cast it into a mathematical mould.
Howie Firth | {"url":"http://www.spanglefish.com/GoingNova/index.asp?pageid=45333","timestamp":"2024-11-06T01:42:46Z","content_type":"text/html","content_length":"16367","record_id":"<urn:uuid:6a8686a7-095b-4fb1-b18a-084b1e0d09c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00479.warc.gz"} |
My Soccer Analysis Theory
1. Establishing the "Soccer Ball Isn't Round, Maximizing Profits" Theory
I brought up the theory that the soccer ball isn't round first because it's a premise for predicting soccer matches. This is firstly a fact, which might be hard for some naive fans to accept, but it
indeed exists. When Asian gambling just started to rise, the manipulation of certain match results by Southeast Asian big syndicates was already a well-known secret. Before that, in the UK, as a
country with highly developed football matches and football betting industry, was also the birthplace of match-fixing. Some match outcomes had their odds lowered by bookmakers just before the game,
resulting in bizarre upsets. Such match results became increasingly common and are now commonplace. How else can we explain the strange events like the 1998 World Cup final with Brazil's team-wide
underperformance? How can we explain the Liverpool vs. AC Milan Champions League final where Liverpool miraculously equalized after being three goals down? "The soccer ball is round" became the best
excuse for bookmakers to cover themselves. The soccer ball isn't round! It can be manipulated, it can be predicted! And who predicts football matches most accurately? It's undoubtedly the bookmakers,
because they are omnipotent, with great control, as evidenced by the fact that even the famous British media "The Sun" and the English and Scottish police cannot track the footsteps of bookmakers.
Therefore, it's not surprising that bookmakers control one or two matches. If bookmakers can stage a World Cup final, what else is impossible? Furthermore, the match-fixing theory mentioned above
only accounts for a small number of matches. Most matches with a strong tendency are predicted by the bookmakers through their teams of actuaries using calculations. And the probability of these
tendencies being correct is high. According to the narrative in "Southeast Asian Bookmakers", around 10 out of 100 matches are ones where bookmakers are over 90% confident, and around 40 matches are
ones where bookmakers have 60%-70% confidence based on their predictions. The remaining 50 matches are simply adjusted by the bookmakers based on the volume of bets received. Since I haven't
specifically compiled the numbers and it's impossible for me to do so, as I'm not a bookmaker, but this figure is also reliable. After all, match-fixing is rare, and most matches with specific
tendencies are predicted by bookmakers. Regardless of whether it's match-fixing or bookmakers accurately predicting matches, one thing that deserves recognition is that the bookmakers' level of
control over matches far exceeds our imagination, much more so than our own confidence. Therefore, bookmakers manipulate the odds to induce us to bet and maximize their profits.
In this context, we need to understand the bookmakers' profit model. Whether it's European odds or Asian handicaps, whether it's European bookmakers or Asian bookmakers, theoretically, the profit of
a bookmaker comes from extracting the vig. What does extracting the vig mean? First, we need to understand how odds are set. I've already discussed in my blog post "An Analysis of Standard Odds" that
bookmakers calculate the probabilities of each team winning, drawing, or losing through meticulous formulas, and then set the odds based on these probabilities. This probability calculation is quite
precise and very scientific, and it can be verified by a large number of matches. Moreover, this probability is quite close to the actual distribution of betting funds in real life! Let's take an
example: in a match between Losos and Ohannes, a certain bookmaker calculates the probabilities of home win, draw, and away win as 47.57%, 27.98%, and 24.45%, respectively. This probability is quite
close to the real distribution of betting funds! Since the vig for this bookmaker is 5%, the payout rate is 95%. Then this bookmaker will set the odds for home win, draw, and away win as: 95% /
47.57% = 2.00, 95% / 27.98% = 3.40, 95% / 24.45% = 3.88. After setting these odds, if bets are placed at these odds, the distribution of funds for each result will be very close to their
win-draw-lose probabilities. Therefore, regardless of which result occurs, the bookmaker will stably pay out only 95% of the betting funds as winnings and stably extract 5% of the vig. This is how a
bookmaker theoretically operates. Now let's take a look at how Asian handicaps work. Similarly, the bookmaker will first calculate the win-draw-lose probabilities as 47.57%, 27.98%, and 24.45%,
respectively. Since the probabilities of home win and (draw/away win) are roughly equal, the bookmaker will set a half-ball handicap to separate home win and (draw/away win) onto two different odds.
So the odds will be set as: Losos 2.0, Ohannes 1.95 (note that the odds for the half-ball handicap are the same as the odds for the home win). Since the upper bet chips are about 47.57% and the lower
bet chips are about 52.43%, whether the upper bet chips or the lower bet chips are placed, the book
214 1726 9249 Your theoretical approach to betting seems quite effective. I'd like to learn and understand more about it.
Threads Posts Credits
Forum Veteran
• Send PM | {"url":"https://discuz.cgpay.ch/thread-10745-1-1.html","timestamp":"2024-11-10T09:35:51Z","content_type":"application/xhtml+xml","content_length":"33254","record_id":"<urn:uuid:29ab74de-8543-4508-80ec-8505f4a3237a>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00191.warc.gz"} |
Investigating the Effect of the Geometry of RC Barrier Walls on the Blast Wave Propagation
2.1 Definition
An explosion is defined as physical and chemical reactions of solids, liquids, or gases that result in a sudden, rapid, and large-scale burst of energy with high pressure, expanded gas, load sound,
high temperature, and fragments [18]. This process usually occurs as a result of terrorist bomb attacks, explosive equipment, or accidental explosions. The detonation of a bomb results in the
propagation of gas pressure waves, known as the blast waves, which travel at supersonic speed in all directions through the medium causes it to be compressed. These waves affect the surrounding
structures causing damages proportionate with the wave amplitude and intensity. The blast waves’ intensity varies according to the explosive charge weight and the detonation stand-off distance [19].
2.2 Explosions nature
The explosions types are different according to their nature as physical, chemical, and nuclear ones. Physical explosions can occur from natural catastrophic rapture, i.e. volcanic eruptions or
sudden material phase change from liquid to vapour under high pressure; the release of pressured material allows it to expand and form shock waves. In vessels of stored liquids under pressure above
the atmospheric pressure, in case of bursting, the liquid expands, vaporises supersonically, and explodes, forming shock waves [20].
Chemical explosions can occur from decomposition or combination reactions that release a tremendous amount of energy. The decomposition reaction usually occurs between materials contain oxygen; their
decomposition produces high-temperature gases with large volume. The supersonic increase in the gas volume generates the shock wave, i.e. materials such as trinitrotoluene (TNT) and nitro-glycerine.
On the other hand, combinations reactions occur between materials that react exothermically to produce high-temperature gases with large volumes. This type of reaction takes more time than
decomposition reactions, so the mixed materials should be chosen wisely to ensure the continuity of the reaction till the end, i.e. ammonium nitrate and fuel oil (ANFO), gunpowder (potassium nitrate,
carbon, and sulphur), and fireworks. As the explosion reaction time is inversely proportional to the strength of the explosion, the decomposition reactions are more destructive than combination
reactions, which is why decomposition reactions are more widely used [21].
Moreover, nuclear explosions are very rapid reactions between atoms and neutrons in a matter of a millionth of a second to generate massive heat cause the air to expand and form the blast waves.
2.3 Explosions physical state and ignition sensitivity
Explosive materials are also differentiated according to their physical state, speed of reactions, and sensitivity to ignition. The material’s physical state is classified as solids, liquids, or
gases. The sensitivity to ignition is classified as primary, secondary, and tertiary explosives, and the speed of reactions that move through the material is classified as high explosives and low
explosives. For the detonation sensitivity, the material ignited by a bit of heat or pressure like a spark or a shock is classified as primary explosives. On the other hand, the material that needs
to be ignited by insensitive heat, pressure, or primary explosive use as igniter is classified as secondary explosives, which is safer to handle and use. The third type is tertiary explosives, which
require secondary explosives boosters to be ignited due to their shock intensity, and it is used in mining and construction processes.
According to the speed of reactions, the burst usually occurs as a result of high explosives detonation. Alternatively, deflagration occurs due to low explosives reaction. Solid explosives are
considered high explosives, and from the detonation sensitivity perspective, materials like lead azide, lead styphnate, and mercury fulminate are categorised as primary explosives. Moreover, a
material like ammonium nitrate fuel oil (ANFO) is categorised as a tertiary explosive if its reactions are low. Finally, materials like trinitrotoluene (TNT) and cyclonite (RDX) are categorised as
secondary explosives, and they are also classified as high explosive materials [22].
High explosives detonation can generate extreme heat that expands the gases released from the reactions; they led to the expansion of the gas volume to produce colossal pressure. This pressure can
reach up to 100-300 kilo bar, and the heat generated can reach up to 3000-4000℃. The pressurised gases create blast waves as layers of compressed air are generated as the force and energy released by
the explosion. The blast wave pressure rises over the normal ambient atmospheric pressure at a point on the wave related to some distance away from the explosion source, and that is called the
overpressure. Then, this pressure decays as the wave expands away from the source, as the pressure may decrease to reach a value that is less than the normal ambient pressure to create a negative
phase of the blast wave. This phase of the wave acts as suction and creates a partial air vacuum, which results in the scattering of shrapnel and fragments away from the explosion source, as shown in
Figure 1.
Figure 1. Blast wave pressure and distance from explosion relationship, [22]
2.4 Explosions confinement
The explosions types are different according to their place of occurrence referring to structure, as they are classified as unconfined and confined explosions. Unconfined or burst explosions occur
outside the structure. On the opposite, confined or internal explosions occur inside the structure.
The unconfined explosions are classified as free-air burst, near ground airburst, and ground burst. The free air explosions happen far from the ground surface, as shown in Figure 2.
Figure 2. Free-air burst explosion, [23]
The ground surface does not amplify the blast waves, and they are only reflected by hitting a structure.
The near-ground air burst is different because ground surface reflections amplify the blast waves before the waves hit an object. The amplification of the waves away from the ground results from the
overlapping of the original waves with the reflected ones to form a Mach stem (an amplified shock) with the exact wave magnitude as shown in Figure 3.
Figure 3. Airburst with ground surface reflections, [23]
The ground burst occurs on the ground surface, as the immediate blast wave is reflected away and enhanced to create a new amplified wave. The amplified waves join the original waves to form new waves
and create a hemispherical shock front, as shown in Figure 4. This type of unconfined explosion is the closest to the reality touristic attacks, which take place around structures and near to the
ground surface.
Figure 4. Ground surface burst, [23]
Confined explosions occur when an explosion takes place inside a structure, the blast waves are amplified using their reflections inwards the structure. The level of confinement affects explosions
outcomes, such as temperature, the volume of gases, and the increment of pressure inside the structure. Confined explosions are classified according to vents around the structure into three major
categories. Fully vented explosions resulted when a structure has single or multiple parts exposed to the atmosphere, and the blast waves spread away using the open vent way. Partially vented
explosions resulted when a structure has limited open vents, and the blast waves are spread outside by time. The fully confined explosion resulted when a structure has a complete inclusion by walls
or surfaces. The three classifications of confined explosions are shown in Figure 5.
Figure 5. Fully vented, partially vented, and fully confined explosions [23]
2.5 Incident blast waves characteristics
After an explosion, the blast waves travel into the surrounding medium until they hit any other object with a different density than the original one; this makes the waves reflect in another way. The
transferring medium is already compressed due to blast waves, and their impact on objects makes them be compressed again. As a result of this, a new blast wave is created, and it is greater than the
initial blast wave [24]. The Unified Facilities Criteria describe this wave pressure-time history; the wave is divided into three segments. The first is that the ambient pressure increases instantly
to its peak value (p[so]), then it decays to reach the normal ambient pressure (p[o]) at the positive phase of the wave time (t[o]). The subsequent wave phase is the negative or suction phase; the
pressure at this phase is negative (p[so)], lower than the normal ambient pressure. This negative pressure (p[so]) has a lower amplitude than the positive pressure, and the time of this phase is much
longer than the positive phase. The wave is ended when the medium pressure is neutralised to its original ambient pressure and equilibrium state, as shown in Figure 6.
Figure 6. Blast wave pressure-time history, UFC3-340-02, [7]
2.5.1 Blast wave scaling laws
The parameters of blast waves generated by an explosion depend on the physical characteristics of the explosion source, the energy released from it, and the properties of the transferring medium.
Many experiments are made using controlled explosions under certain conditions to collect data and establish a reference for other non-ideal explosions. Hopkinson [25] and Cranz et al. [26]
established the cube root scaling law, which is the most common method to blast wave scaling.
$Z=\frac{R}{W^{1 / 3}}$ (1)
2.5.2 Determination of blast wave pressure
Sadovskiy [27] presented an equation to obtain the peak overpressure based on an explosions data study.
$\mathrm{P}_{\mathrm{so}}=0.085 \frac{\mathrm{W}^{1 / 3}}{\mathrm{R}}+0.3\left(\frac{\mathrm{W}^{1 / 3}}{\mathrm{R}}\right)^{2}+0.8\left(\frac{\mathrm{W}^{1 / 3}}{\mathrm{R}}\right)^{3}(\mathrm{MPa})
$ (2)
2.5.3 Determination of blast wave duration
The time elapsed between the end of the shock front of a blast wave and its positive pressure phase end reaching the zero overpressure. Both the magnitude and duration of blast waves are significant
aspects in the prediction of expected damage. Sadovskiy [27], as well, proposed an equation to calculate the time duration of positive overpressure.
$\mathrm{t}_{\text {pos }}=1.2 \sqrt[6]{\mathrm{W}} \sqrt{\mathrm{R}}(\mathrm{ms})$ (3)
2.5.4 Determination of positive impulse
This explosion parameter is essential for examining blast damage. Impulse parameter (I) is defined as the area under the pressure-time curve. Sadovskiy [27] presented an equation to calculate the
blast wave positive impulse as,
$\mathrm{I}_{\mathrm{pos}}=200 \frac{\mathrm{W}^{1 / 3}}{\mathrm{R}}(\mathrm{Pa}-\mathrm{s})$ (4)
2.5.5 Determination of under pressure phase parameters
This parameter is important for flexible structures, on the opposite; it is less affecting the rigid structures. Krauthammer and Altenberg [28] proposed these after mentioned equations to calculate
the blast wave negative pressure and duration.
$\mathrm{P}_{\text {neg }}=\frac{0.35}{Z} 10^{5} \mathrm{~Pa}(\mathrm{Z}>3.5)$ (5a)
$\mathrm{P}_{\text {neg }}=10^{4} \quad \mathrm{~Pa}(\mathrm{Z}<3.5)$ (5b)
$\mathrm{t}_{\mathrm{neg}}=0.0104 \mathrm{~W}^{1 / 3} \mathrm{~S}(\mathrm{Z}<0.3)$ (6a)
$\mathrm{t}_{\mathrm{neg}}=\left(0.003125 \log _{10} \mathrm{Z}+0.01201\right) \mathrm{W}^{1 / 3} \mathrm{~s}(1.9<\mathrm{Z}>0.3)$ (6b)
$\mathrm{t}_{\mathrm{neg}}=0.0139 \mathrm{~W}^{1 / 3} \mathrm{~s}(\mathrm{Z}>1.9)$ (6c) | {"url":"https://iieta.org/journals/ijsse/paper/10.18280/ijsse.110306","timestamp":"2024-11-04T14:35:53Z","content_type":"text/html","content_length":"137685","record_id":"<urn:uuid:f380e8b1-ba1b-4030-a99f-3ac3346bebd8>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00880.warc.gz"} |
Topic: Expressions
Online Help
Real constants can be positive and negative integer and decimal numbers. They can include digits "0" - "9" and decimal point ".". You can also enter numbers as fractions like "3/4". However, the
program will treat them as expressions (division of two numbers). You cannot define numbers in floating point format: "3.4e+6". You have to use an expression like "3.4*10^6" instead.
All constants and variables are internally stored as "double-precision floating point" numbers. Their values are ranged from -1.7976931348623157E+308 to 1.7976931348623157E+308. If a result is out of
the above interval, the program returns "-∞" or "+∞, respectively". Division by zero gives the same result, but "0/0" = "Undefined". The smallest positive number is 4.94065645841247E-324. Smaller
values are rounded exactly to 0.
If you select "Complex" mode, you can use complex numbers in calculations. Otherwise, only real arithmetic is applied. Each complex number is represented by the ordered couple (a; b), where "a" is
real number, and "b = |b|i" is called "imaginary". It can be written in so called algebraic form: ±a ± bi (e.g. "2 + 3i"). You can also use other forms, such as polar or exponential from, by entering
the respective expressions. In Calcpad, the imaginary unit can be entered either as "i" or "1i" in case you have a variable named "i". The imaginary unit is a special number that satisfies the
expression i^ 2 = -1. | {"url":"https://calcpad.eu/help/19/constants","timestamp":"2024-11-14T07:47:10Z","content_type":"text/html","content_length":"19089","record_id":"<urn:uuid:69d999b4-eff4-41b0-8f5a-c4411514a02b>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00331.warc.gz"} |
Laws of Production- Law of Variable Proportion and Law of Return to Scale
The Laws of Production are essential principles that illustrate the connection between inputs and outputs in the manufacturing of goods and services. These laws provide important insights into how
resources are allocated, how efficiently production can be conducted, and the relationships among different production factors. The Law of Variable Proportion and the Law of Returns to Scale are
particularly significant, as they play a critical role in shaping production processes and informing economic decisions. In this article, we will examine these production laws in detail.
Law of Variable Proportion and Law of Return to Scale Overview
In economics, the notion of cost and producer equilibrium is managed by the Law of Production. It’s a critical part of economics since it assists a company in deciding the quantum of produce that
will affect in the utmost gains. It also specifies the company’s multiple variable and fixed costs. The two laws of product are the Law of return to scale and the Law of variable proportion. The
rules of production define the technologically possible methodologies of building up produce. The produce can be grown in a variety of ways.
What are the Laws of Production?
The production laws describe the technically possible ways to increase the degree of output. Production can be increased in many ways. Production can be boosted by revising all means of production.
This is only possible in the long term. Hence, the Law of Return to Scale refers to the long-term analysis of production.
In the short run, the production can be boosted by utilising further variable factors, while holding capital (and conceivably other factors as well) constant. The marginal product of variable factors
will ultimately lower as further and further of these factors are combined with other constant factors. The expansion of produce with (at least) constant factor is described by the law of
(ultimately) diminishing returns to variable factor, often called the law of variable proportions.
The laws of production in economics are related to the notions of cost and equilibrium of directors. It’s an important aspect of economics as it helps the company determine the degree of production
that leads to maximum profit. It also defines the many fixed and variable costs of the business.
Types of Laws of Production
The Laws of Production consist of two main principles: the Law of Variable Proportion, also known as the Law of Diminishing Marginal Returns, which explains how adding more of a variable input can
lead to diminishing additional output; and the Law of Return to Scale, which examines how increasing all inputs in equal proportions affects overall output, with three possible outcomes – increasing
returns, constant returns, or decreasing returns to scale. Together, these laws provide essential insights into the efficiency and behaviour of production processes in different economic contexts.
Law of Variable Proportion
It states that when the volume of one factor of production is raised while keeping all other factors constant, it results in the decline of the marginal product of that factor. The law of variable
proportion is also understood as the Law of Proportionality. When the variable factor becomes more, it can result in a negative value of the marginal product.
The law of variable proportion can be concluded as “When the variable factor is raised while keeping all other factors constant, the total product will increase originally at a higher rate, it’ll be
increasing at a lower rate and ultimately there will be a decline in the rate of the product.”
Assumptions of the Law of Variable Proportion
The law of variable proportion holds good under certain circumstances, which will be explained in the lines below.
• A constant state of Technology: It’s assumed that the state of technology will be constant and with advancements in the technology, the production will upgrade.
• Variable Factor Proportions: Another assumption of the Law of Variable Proportion is that factors of production are variable. The law isn’t valid, if factors of production are fixed.
• Homogeneous factor units: This assumes that all the units manufactured are identical in quality, volume and price. In other words, the units or goods are homogeneous in nature.
• Short Run: This assumes that this law is appropriate for those systems that are operating for a short term, where it isn’t possible to alter all factor
Stages of the law of Variable Proportion
There are three stages of the law of variable proportion which are detailed below.
First Stage – Increasing Return to a Factor
In the initial stage, every additional variable factor adds further and further to the total output. It means TP increases at a vast rate and MP of each variable factor rises. Better utilization of
fixed factors and increases in the effectiveness of a variable factor due to specialization are the significant factors accountable for higher returns.
The increasing returns to a factor stage have been shown in the given illustration between O to L. It implies. TP increases at an increasing rate and MP rises till it reaches its maximum point’ L’.
which marks the end of the first phase.
Second Stage – Diminishing Return to a Factor
In the second phase, incorporating each additional variable input leads to a progressively smaller increment in output. This implies that the Total Product (TP) starts to increase at a slower pace,
and the Marginal Product (MP) of the variable input declines as the input quantity grows. The primary reason for this diminishing returns phenomenon is the disruption of the optimal balance between
fixed and variable inputs.
This second stage concludes at the juncture labelled ‘M’ where MP becomes zero while TP reaches its peak at point ‘R.’ This second stage is of significant importance as rational producers naturally
strive to operate within it. This is because TP attains its highest value here, and the MP associated with each variable input remains positive.
Third Phase – Negative Return to a Factor
During the third phase, the incorporation of extra variable inputs leads to a reduction in Total Product (TP). Moreover, the Marginal Product (MP) turns negative at this point. Consequently, this
stage is termed as experiencing negative returns to a factor. The fundamental reason for this phase lies in the insufficient coordination between the variable and fixed inputs.
The third stage commences right after point ‘M’ on the MPP curve and extends to a point on the TPP curve. The Marginal Product for each variable input remains negative throughout these three stages.
Consequently, no rational firm would intentionally opt to function within this phase.
Law of return to scale
As stated by Koutsoyiannis, the term “returns to scale” pertains to alterations in output when all factors undergo simultaneous proportional changes.
According to Leibhafsky, “returns to scale” encompasses the overall output’s behaviour as all inputs are adjusted simultaneously, representing a concept applicable in the long run.
Three types of Return to Scale
The three stages are explained below:
Increasing Returns to Scale or Diminishing Costs
• Increasing returns to scale, also known as diminishing costs, occur when boosting all production factors results in a greater-than-proportional surge in output.
• For instance, if inputs are doubled, the output grows more than twice as fast.
• This phenomenon, influenced by factors like division of labour and external economies of scale, can be visualized using Figure 8.
Illustrated in Figure 8, the X-axis (OX) denotes amplified labour and capital, while the Y-axis (OY) signifies heightened output. Upon elevating labour and capital from point Q to Q1, the output
surges from point P to P1, surpassing the increment in labour and capital factors.
Diminishing Returns to Scale
• Diminishing returns to scale, or rising costs, materialize when a consistent increase in all production factors leads to a proportionally smaller expansion in output.
• In essence, doubling inputs doesn’t lead to a doubling of output.
• This trend arises due to internal and external diseconomies outweighing economies. Figure 9 illustrates this pattern.
The X-axis (OX) symbolizes labour and capital quantities, while the Y-axis (OY) represents output. As factors of production surge from point Q to Q1 (higher quantity), the corresponding output
increase from point P to P1 is relatively smaller. The outcome is that the increase in factors of production is more substantial compared to the increment in production, thus underscoring the
presence of diminishing returns to scale.
Constant Returns to Scale
• Constant returns to scale, or constant costs, emerge when output scales up in direct proportion to the enlargement of production factors.
• This equilibrium arises when economies of scale balance out diseconomies, typically occurring after a certain production threshold.
As showcased in Figure 10. This graphical representation depicts that an increase in labor and capital corresponds precisely to an equivalent increase in output, resulting in constant returns to
In conclusion, these laws collectively form the bedrock upon which sound production strategies and economic policies are built. They guide producers and policymakers in making informed choices that
optimize output, minimize costs, and foster sustainable growth in economies. | {"url":"https://www.adda247.com/teaching-jobs-exam/laws-of-production/","timestamp":"2024-11-13T09:51:32Z","content_type":"text/html","content_length":"652825","record_id":"<urn:uuid:93601442-d308-44e7-820c-f1ab44d47b59>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00625.warc.gz"} |
Chennai Mathematical Institute
Physics Seminar
Date: Thursday, 8 February 2024
Time: 1:45 PM
Venue: Lecture Hall 1
Local Entropy Current on a Black-Hole Horizon and its Reparametrizations
Shuvayu Roy
NISER, Bhubaneswar.
Recently, working within the linearized regime of small amplitude expansion, an entropy current has been constructed on the horizons of dynamical black hole solution in any higher derivative theory
of gravity. However, the construction is strongly dependent upon the spatial slicing of the horizon along its null-generators. In this talk, we will explore the effect of two types of
reparametrizations of the null generators (1. affine to affine; 2. affine to non-affine) on the horizon entropy current. For the first case, we'll see that although the entropy current structures
change non-trivially, the divergence of the entropy current remains invariant up to linear order in amplitude expansion. For the second case, we'll dualize the horizon entropy current to a boundary
entropy current in an asymptotically AdS black hole metric with a dual description in terms of dynamical fluids living on the AdS boundary. Applying the construction to the Einstein-Gauss-Bonnet
theory, we'll see that the Gauss-Bonnet terms do not add any corrections to the fluid entropy up to the first order in boundary derivative expansion. However, at the second order in derivative
expansion, the boundary current will non-trivially depend on how we choose our horizon to boundary map, which need not be expressible entirely in terms of fluid variables. So generically, the
boundary entropy current generated by dualizing the horizon current will not admit a fluid dynamical description. | {"url":"https://www.cmi.ac.in/activities/show-abstract.php?absyear=2024&absref=23&abstype=sem","timestamp":"2024-11-11T17:00:58Z","content_type":"text/html","content_length":"8078","record_id":"<urn:uuid:654096a9-4266-4893-91b3-c7e7d6c4c4cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00838.warc.gz"} |
Numerical integration (quadrature).
The function to integrate.
A function handle or anonymous function is also supported.
Lower integration limit.
Type: double
Dimension: scalar | vector | matrix
Upper integration limit.
Type: double
Dimension: scalar | vector | matrix
Relative tolerance (default: sqrt(eps) or about 1.0e-8).
Type: double
Dimension: scalar
Absolute tolerance (default: sqrt(eps) or about 1.0e-8).
Type: double
Dimension: scalar
The estimated area.
Dimension: scalar | matrix
Number of function evaluations.
Dimension: scalar | matrix
Single interval:
function y = Integrand(x)
y = sqrt(x);
[area,count] = quad(@Integrand, 0, 2)
area = 1.88561808
count = 711
Multiple intervals with an anonymous function:
[area,count] = quad(@(x) sqrt(x), [0, 1], [1, 2])
area = [Matrix] 1 x 2
0.66667 1.21895
count = [Matrix] 1 x 2
Improper integral with an anonymous function:
[area,count] = quad(@(x) exp(-x ^ 2),0,Inf)
area = 0.886226925
count = 198
quad uses adaptive quadrature with a Gauss-Legendre kernel. The algorithm recursively bisects each interval until the difference between 4 and 5 point Gauss-Legendre rule estimates fall below the
relative or absolute tolerance. Either tolerance can be disabled by setting it equal to zero.
The maximum number of function evaluations is 10,000, and the minimum interval is 1.0e-12.
To pass additional parameters to a function argument use an anonymous function. | {"url":"https://help.altair.com/twinactivate/help/en_us/topics/reference/oml_language/Calculus/quad.htm","timestamp":"2024-11-03T04:18:12Z","content_type":"application/xhtml+xml","content_length":"58806","record_id":"<urn:uuid:9fe22389-022f-49be-8815-59cbb62a8ddf>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00853.warc.gz"} |
Algebra Homework Help
I'm going to open this page back up for question on how to solve particular algebra problems that you may have. I became overwhelmed with the number of submissions in the past, so I'm hoping to be
able to manage answering as many questions as possible.
If I am unable to answer all the questions, I'm hoping other Algebra-class users will be able to step in a comment to help each other out!
Do You Have an Algebra Question?
Are you stumped? Share your question here!
What Other Visitors Have Said
Click below to see contributions from other visitors to this page...
Need More Help With Your Algebra Studies?
Get access to hundreds of video examples and practice problems with your subscription!
Click here for more information on our affordable subscription options.
Not ready to subscribe? Register for our FREE Pre-Algebra Refresher course. | {"url":"https://www.algebra-class.com/algebra-homework-help.html","timestamp":"2024-11-13T14:41:23Z","content_type":"text/html","content_length":"77394","record_id":"<urn:uuid:af0698cb-995b-478d-9997-d9fcc8af197e>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00700.warc.gz"} |
'Something is Self-Identical' Cannot Translate 'There are Objects': Another Argument Against the Thin Theory'Something is Self-Identical' Cannot Translate 'There are Objects': Another Argument Against the Thin Theory
At Tractatus Logico-Philosophicus 4.1271 we read: "So one cannot say, for example, 'There are objects', as one might say, 'There are books'."
In endnote 9, p. 194, of "The Number of Things," Peter van Inwagen (Phil. Issues 12, 2002) writes:
Wittgenstein says that one cannot say " 'There are objects', as one might say, 'There are books'." I have no idea what the words 'as one might say' ['wie man etwa sagt'] could mean so I will
ignore them.
Is van Inwagen simply feigning incomprehension here? How could he fail to understand what those words mean? Wittgenstein's point is that object is a formal concept, unlike book. One can say,
meaningfully, that there are books. One cannot say, meaningfully, that there are objects. Whether Wittgenstein is right is a further question. But what he is saying strikes me as clear enough,
clear enough so that one ought to have some idea of what he is saying rather than no idea. By the way, van Inwagen is here engaging in a ploy of too many analytic philosophers. In a situation in
which it is tolerably, but not totally, clear what is being said, they say, 'I have no idea what you mean' when, to avoid churlishness, they ought to say, 'Would you please clarify exactly what you
Be this as it may. Philosophers are a strange, in-bred breed of cat, and they acquire some strange tics. My present topic is not the tics of philosophers, nor formal concepts either.
According to Wittgenstein, one cannot say (meaningfully) that there are objects. Van Inwagen responds:
Why can one not say that there are objects? Why not say it this way: '(Ex)(x = x)'? (p. 180)
Without endorsing Wittgenstein's claim, or trying to determine what exactly it means, my thesis is that van Inwagen's translation of 'There are objects' as 'Something is self-identical' is hopeless.
I do not deny the logical equivalence of the two sentences. I do not claim that there are self-identical items that do not exist. Everything exists. My claim is that to exist is not to be
self-identical. They are not the very same 'property.' If they were, then van Inwagen's translation would be unexceptionable. But they are not. Here is a reductio ad absurdum argument to show
that existence and self-identity are distinct, that existence cannot be reduced to self-identity.
0. Existence and self-identity are the very same property. (Assumption for reductio)
1. If existence and self-identity are the very same property, then nonexistence and self-diversity are the very same property, and conversely. (Self-evident logical equivalence.)
2. Possibly, I do not exist. (Self-evident premise: I am a contingent being.)
3. Possibly, I am not self-identical. (From 1, 2)
4. What is not self-identical is self-diverse. (True by definition)
5. Possibly, I am self-diverse. (From 3, 4)
6. (5) is necessarily false.
7. (0) is false. Q.E.D.
The thin theory of existence is the theory that existence is exhaustively explicable in terms of the purely logical concepts of standard first-order predicate logic with identity. Identity and
quantification are such concepts. Now the only way within this logic to translate 'There are objects' or 'Something exists' is the way van Inwagen suggests. But what I have just shown is that
'Something is self-identical' does not say what 'Something exists' says.
If things exist, then of course they are self-identical. What else would they be? Self-diverse? But their existence is not their self-identity. Their existence is their being there, their not
being nothing, their reality -- however you want to put it. If something is self-identical, it cannot be such unless it first exists. It astonishes me that there are people, very intelligent
people, who cannot see that. What should we call this fallacy? The essentialist fallacy? The fallacy of thinking that being = what-being? Or maybe it is not a fallacy of thinking, but a kind of
blindness. Some people are color-blind, some morally blind, some modally blind. And others existence-blind.
You can follow this conversation by subscribing to the comment feed for this post.
Does premise 5 mean that you might not have been the same as you are, or that you might not have been the same as you were?
I agree that nothing is (actually) not the same thing as it (actually) is, but I'm not sure if that entails that nothing could have not been what it (actually) is.
So I think it makes a difference whether we read premise 5 to mean that you might not have been the same thing as you might have been, or to mean that you might not have been the same thing as you
(actually) are. On the former reading, it goes through, but on the latter reading, I'm not so certain.
The question is whether you could have existed without being the same as yourself -- and I say you might have existed without being the same thing as you actually are, so long as you would not have
been a different thing than you would have been (what you would have been would in that case be different a different thing than what you actually are).
Sorry if I'm not putting this very clearly, but I'm confused by the phrase "self-diverse".
Thanks for the post.
I wonder if you might not run into this difficulty: you are saying that it is not possible that I be not self-identical, which amounts to saying that it is necessary that I am self-identical. But it
is possible that I not exist. So at some possible world W, I do not exist, but am self-identical. But, surely, if at W something is identical with Leo, then Leo exists at W. So, it would appear that
your assumptions land you in a contradiction (that at some possible world W, I both exist and do not exist).
OK, so existence is not captured with a equals sign.
What would
(Ex) (x)
mean? Would that work?
Identity, not equality.
'(Ex)(x)' does not express a complete thought. 'For some x, x' is neither true nor false. 'For some x, x =x' expresses a complete thought, and is true, but not the same thought expressed by
'Something exists.'
You are necessarily self-identical, but not a necessary being. In possible worlds jargon: in every world in which you exist, you are self-identical; but you do not exist in every possible world.
I don't see any contradiction.
You are necessarily self-identical but contingently existent. I conclude that existence is not self-identity.
Of course, when I say that you are necessarily self-identical I do not mean that every world W is such that you are in it and self-identical; I mean that every world you are in is a world in which
you are self-identical.
The point I am making is very simple. It is true that I might not have existed; it is not true that I might not have been self-identical. So existence is not self-identity.
Dr. Vallicella,
But then it seems you're illegitimately shifting the sense of "necessary" and "possible": in (2) you are using "possibly" to mean that there is some accessible possible world in which I do not exist,
but in (6) you are using "possibly" to mean that there is some accessible possible world in which I exist and at which I am not self-identical. Surely, "There is an accessible world at which p" and
"There is an accessible world at which I exist and p" are not equivalent. If you were to use one sense of "possible" consistently, you would need to reject either (2) or (6).
In other words, if what you mean by "necessarily self-identical" is that "every world you are in is a world in which you are self-identical," then by parity of reasoning "necessarily existent" should
be construed as "every world you are in is a world in which you exist," which would render (2) trivially false.
Well, in (6) I don't use 'possibly.' Do you mean (5)?
If you don't accept the reasoning, tell me which premise you reject or which inference you consider invald.
What's the alternative? That at all worlds where you do not exist, you are not self-identical? (if existence is self-identity, then that seems to follow) Well, at those worlds where you do not exist,
what is it that is not self-identical anyway? In a domain with everything but Bill, what is the truth-value of "Bill is self-identical"? "Bill is not self-identical"?
The alternative is that in a world in which I do not exist, I am distinct from everything, which does not imply that I am self-diverse in that world, since I don't exist there.
But, as you seem to appreciate, there must be something in those whorlds in which I do not exist that represents me there so that it can be true there that I do not exist. See my latest post for
Again, I am agreeing with you all the way here. Existence is not self-identity. It is, of course, the instantiation of a singular concept. But there we disagree.
Partial agreement is better than no agreement!
Recent Comments | {"url":"https://maverickphilosopher.typepad.com/maverick_philosopher/2012/08/there-are-objects.html","timestamp":"2024-11-05T15:42:52Z","content_type":"application/xhtml+xml","content_length":"304372","record_id":"<urn:uuid:39c5d40f-2ce4-4005-be20-e1ecdeb4979f>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00579.warc.gz"} |
Diagrams, charts and graphs
1 Scale diagrams
1.1 Understanding scale diagrams
Plans of houses and instructions for assembling shelves, etc., often come in the form of scale diagrams. Each length on the diagram represents a length relating to the real house, the real shelves,
etc. Often a scale is given on the diagram so that you can see which length on the diagram represents a standard length, such as a metre, on the real object. This length always represents the same
standard length, wherever it is on the diagram and in whatever direction.
Other lengths may represent fractions or multiples of this standard length. Thus, lengths which are half as long on the diagram represent lengths which are half as long in reality; lengths which are
twice as long on the diagram represent lengths which are twice as long in reality; and so on.
Scale diagrams are often drawn on a square grid. It is then possible to count squares on the grid rather than measure lengths on the diagram. Care must be taken with either method: the ends of a
length may fall between the marks on the ruler, or the grid lines may not be equally spaced.
Example 1
Below is a scale plan of a bathroom. Answer the questions listed below the plan. You might want to show the ruler and then drag it to make your measurements.
The background squares show the length representing 1 m.
Active content not displayed. This content requires JavaScript to be enabled.
Interactive feature not available in single page view (
see it in standard view
Click on 'Reveal answer' for a detailed solution.
On the plan, the top and bottom walls are 3 squares wide, and so the bathroom is 3 m wide. The side walls in the diagram are 3 and a bit squares long. If you measure the ‘bit’, you will find that it
is one-fifth of the length representing 1 m, and therefore it represents or 0.2 m. It follows that the total length of each side wall is 3.2 m. Hence the bathroom measures 3 m by 3.2 m.
The shower in the plan is 1 square in each direction, so in reality it is 1 m by 1 m.
The bath in the plan is nearly 2 squares long. If you measure it on the plan, you will find it is 1 square plus (or 0.8) of a square long. It is also or 0.8 of a square wide on the plan. This means
that in reality its dimensions are 1.8 m by 0.8 m.
As the doorframe is about 1 square wide on the plan, the actual door is about 1 m wide.
Example 2
• (a) The scale on a diagram is such that 2 cm represent 1 m. What lengths do 6 cm, 0.2 cm, 3 cm, 3.6 cm and 0.5 cm represent?
• (b) A window is 2.3 m wide and 1.4 m high. Draw a scale diagram of the window, using a scale in which 2 cm represent 1 m.
• (a) Because you are being asked to convert lengths on the diagram into real lengths, it is easiest to work with a diagram length of 1 cm. As 2 cm represent 1 m, 1 cm will represent 0.5 m. Then
□ 6 cm represent 0.5 × 6 m = 3 m,
□ 0.2 cm represent 0.5 × 0.2 m = 0.1 m,
□ 3 cm represent 0.5 × 3 m = 1.5 m,
□ 3.6 cm represent 0.5 × 3.6 m = 1.8 m,
□ 0.5 cm represent 0.5 × 0.5 m = 0.25 m.
• (b) Here 1 m in reality is represented by 2 cm on the diagram. So
□ 2.3 m are represented by 2.3 × 2 cm = 4.6 cm,
□ 1.4 m are represented by 1.4 × 2 cm = 2.8 cm.
The rectangle should be 4.6 cm by 2.8 cm and the 1 metre scale should be represented by 2 cm. | {"url":"https://www.open.edu/openlearn/science-maths-technology/mathematics-statistics/diagrams-charts-and-graphs/content-section-1.1","timestamp":"2024-11-11T11:54:41Z","content_type":"text/html","content_length":"107539","record_id":"<urn:uuid:7d60b40e-4153-4cad-9894-f0b0821e077f>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00595.warc.gz"} |
History of probability
between the 16th and 17th century.
Probability deals with random experiments with a known distribution, Statistics deals with inference from the data about the unknown distribution.
Probable and probability and their cognates in other modern languages derive from medieval learned Latin probabilis, deriving from Cicero and generally applied to an opinion to mean plausible or
generally approved.^[1] The form probability is from Old French probabilite (14 c.) and directly from Latin probabilitatem (nominative probabilitas) "credibility, probability," from probabilis (see
probable). The mathematical sense of the term is from 1718. In the 18th century, the term chance was also used in the mathematical sense of "probability" (and probability theory was called Doctrine
of Chances). This word is ultimately from Latin cadentia, i.e. "a fall, case". The English adjective likely is of Germanic origin, most likely from Old Norse likligr (Old English had geliclic with
the same sense), originally meaning "having the appearance of being strong or able" "having the similar appearance or qualities", with a meaning of "probably" recorded mid-15 c. The derived noun
likelihood had a meaning of "similarity, resemblance" but took on a meaning of "probability" from the mid 15th century. The meaning "something likely to be true" is from 1570s.
Ancient and medieval
law of evidence developed a grading of degrees of proof, credibility,
to deal with the uncertainties of evidence in court.
In Renaissance times, betting was discussed in terms of odds such as "ten to one" and maritime insurance premiums were estimated based on intuitive risks, but there was no theory on how to calculate
such odds or premiums.^[3]
The mathematical methods of probability arose in the investigations first of Gerolamo Cardano in the 1560s (not published until 100 years later), and then in the correspondence Pierre de Fermat and
Blaise Pascal (1654) on such questions as the fair division of the stake in an interrupted game of chance. Christiaan Huygens (1657) gave a comprehensive treatment of the subject.^[4]^[5]
In ancient times there were games played using astragali, or talus bone.^[6] The pottery of ancient Greece provides evidence to show that the astragali were tossed into a circle drawn on the floor,
much like playing marbles. In Egypt, excavators of tombs found a game they called "Hounds and Jackals", which closely resembles the modern game snakes and ladders. According to Pausanias,^[7]
Palamedes invented dice during the Trojan wars, although their true origin is uncertain. The first dice game mentioned in literature of the Christian era was called hazard. Played with two or three
dice, it was probably brought to Europe by the knights returning from the Crusades. Dante Alighieri (1265–1321) mentions this game. A commenter of Dante puts further thought into this game: the
thought was that with three dice, the lowest number you can get is three, an ace for every die. Achieving a four can be done with three dice by having a two on one die and aces on the other two dice.
Cardano also thought about the sum of three dice. At face value there are the same number of combinations that sum to 9 as those that sum to 10. For a 9:(621) (531) (522) (441) (432) (333) and for
10: (631) (622) (541) (532) (442) (433). However, there are more ways of obtaining some of these combinations than others. For example, if we consider the order of results there are six ways to
obtain (621): (1,2,6), (1,6,2), (2,1,6), (2,6,1), (6,1,2), (6,2,1), but there is only one way to obtain (333), where the first, second and third dice all roll 3. There are a total of 27 permutations
that sum to 10 but only 25 that sum to 9. From this, Cardano found that the probability of throwing a 9 is less than that of throwing a 10. He also demonstrated the efficacy of defining odds as the
ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes).^[9]^[10]
In addition,
Galileo wrote about die-throwing sometime between 1613 and 1623. Unknowingly considering what is essentially the same problem as Cardano's, Galileo had said that certain numbers have the ability to
be thrown because there are more ways to create that number.
Eighteenth century
Abraham De Moivre's
The Doctrine of Chances
(1718) put probability on a sound mathematical footing, showing how to calculate a wide range of complex probabilities. Bernoulli proved a version of the fundamental
law of large numbers
, which states that in a large number of trials, the average of the outcomes is likely to be very close to the expected value - for example, in 1000 throws of a fair coin, it is likely that there are
close to 500 heads (and the larger the number of throws, the closer to half-and-half the proportion is likely to be).
Nineteenth century
The power of probabilistic methods in dealing with uncertainty was shown by
method of least squares to correct error-prone observations, especially in astronomy, based on the assumption of a
normal distribution
of errors to determine the most likely true value. In 1812,
issued his
Théorie analytique des probabilités
in which he consolidated and laid down many fundamental results in probability and statistics such as the
moment-generating function
, method of least squares,
inductive probability
, and hypothesis testing.
Towards the end of the nineteenth century, a major success of explanation in terms of probabilities was the
J. Willard Gibbs
which explained properties of gases such as temperature in terms of the random motions of large numbers of particles.
The field of the history of probability itself was established by Isaac Todhunter's monumental A History of the Mathematical Theory of Probability from the Time of Pascal to that of Laplace (1865).
Twentieth century
Probability and statistics became closely connected through the work on
clinical trials of drugs, as well as in
and elsewhere. A hypothesis, for example that a drug is usually effective, gives rise to a
probability distribution
that would be observed if the hypothesis is true. If observations approximately agree with the hypothesis, it is confirmed, if not, the hypothesis is rejected.
The theory of stochastic processes broadened into such areas as
The twentieth century also saw long-running disputes on the
frequentism was dominant, holding that probability means long-run relative frequency in a large number of trials. At the end of the century there was some revival of the
view, according to which the fundamental notion of probability is how well a proposition is supported by the evidence for it.
The mathematical treatment of probabilities, especially when there are infinitely many possible outcomes, was facilitated by Kolmogorov's axioms (1933).
1. ^ Franklin (2001), pp. 113, 126.
2. ^ Franklin (2001), pp. 278–288.
3. ^ Hacking (2006). For Cardano, see p. 54; for Fermat and Pascal, see pp. 59–61; for Huygens, see pp. 92–94
4. ^ Franklin (2001), pp. 296–316.
5. .
6. .
7. ^ Franklin (2001), pp. 293–294.
8. ^ Some laws and problems in classical probability and how Cardano anticipated them Gorrochum, P. Chance magazine 2012
9. ^ Franklin (2001), pp. 296–300.
10. ^ Franklin (2001), p. 302.
11. ^ Bernstein (1996), Chapter 18.
External links | {"url":"https://findatwiki.com/History_of_probability","timestamp":"2024-11-04T02:34:25Z","content_type":"text/html","content_length":"112296","record_id":"<urn:uuid:b4834b2b-ae95-4eba-8e44-c74fbafe591c>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00764.warc.gz"} |
Calculate the expected payoff from receiving a noisy signal. This website provides a limited interface for this Python package. For more flexibility, use the package directly.
When we gain information about a decision-relevant quantity, that information may improve the decision we ultimately make. The value of the (expected) improvement in the decision is the value of
information (VOI).
Sometimes, the information we gain tells us that one action is certain to be better than another (for example, knowing that tails came up in a coin toss implies we should bet on tails). But often the
information is imperfect, and can only pull our decision in the direction of optimality, in expectation.
Such imperfect information can be modelled as observing a random variable (or signal) B that is informative about the true state of the world T but contains some noise. The expected value of
information is the expected benefit from observing this random variable.
The realised value of information is:
VOI(T,B) = U(decision(B), T) - U(decision_0, T)
where U is the payoff function, decision is the decision function when we have access to the signal, and decision_0 is the decision we make in the absence of the signal.
If a signal of B fails to change our decision, the value we realised is zero (regardless of T). This is intuitive.
When the signal does change our decision, the size of the benefit depends on the true state T, and on our decision function decision, which in turn depends on how the distribution of B is related to
For each T=t, the expected value of information is
VOI(t) = E_B[VOI(T,B) | T=t] = E_B[U(decision(B), T) - U(decision_0, T) | T=t]
where E_B indicates that we're taking expectations with respect to (i.e. over the distribution of) B.
We can then find the entirely unconditional expected VOI V by taking expectations of the above with respect to T:
V = E_T[ E_B[VOI(t,b) | T=t]]
Of course we might also, by the law of iterated expectations, write V=E[VOI(t,b)], where the expectation sign without a subscript means the expectation is taken with respect to the joint distribution
of T and B.
Model details
We make some simplifying assumptions:
• We model the decision problem as a binary choice between:
□ the bar (d_1): an option with an expected payoff of bar about which we cannot gain additional information. Expressed mathematically, the inability to gain additional information means that U
(d_1, T) is independent of T. So we can write E[U(d_1)]=bar. (It's irrelevant whether or not there is uncertainty over the payoff U(d_1), what matters here is that this uncertainty is
independent of T so we cannot gain additional information).
□ the object of study (d_2): an uncertain option whose payoff is T, about which we can gain additional information.
• The decision-maker is rational, i.e. upon receiving a signal of B=b they update their prior P(T) to P(T|B=b). They risk-neutrally maximise expected U, which means they choose the object of study
if and only if E[T|B=b]>bar (or E[T]>bar in the absence of the signal).
• The problem is one-dimensional, i.e. T and B follow one-dimensional distributions.
• Currently, only one distribution family is supported for B: B has a normal distribution with unknown mean T and known standard deviation.
• On this website, the prior over T must be normal, log-normal, or metalog. The underlying Python package supports any prior specified as a one-dimensional SciPy continuous distribution.
This tool uses a simulation to approximate the expectation mentioned in the previous section. Specifically, for each iteration i of the simulation:
1. We draw a true value t_i from the decision-maker's prior P(T).
2. We draw an estimate b_i from Normal(t_i,sd(B)).
3. We can then calculate the decision that would be made with and without access to the signal:
□ With the signal. The decision-maker's subjective posterior expected value is E[T|b_i]. If E[T|b_i]>bar, the decision-maker chooses the object of study, otherwise they choose the bar.
□ Without the signal. If E[T]>bar, the decision-maker chooses the object of study, otherwise they choose the bar.
4. We calculate the decision-maker's payoffs with and without access to the signal. If choosing the object of study, they get a payoff of T_i; the payoff for the bar is bar.
Drawing t_i corresponds to the outer expectation E_T[] discussed above, and drawing b_i (dependent on t_i) corresponds to the inner expectation E_B[].
Astute readers will have noticed another simplification. In calculating V, we take expectations over T according to the decision maker's prior P(T) (this is because T_is are drawn from P(T) in step
1). In a subjective bayesian sense, this means that we compute the expected VOI by the lights of the decision-maker; a frequentist interpretation might be that the decision situation is drawn from a
larger reference class in which T follows P(T), and we are computing the average VOI in that class.
These concepts need not coincide in general. We could without difficulty model the decision-maker as acting according to P(T), but nonetheless compute the value of information by the lights of
another actor who believes Q(T) (or the VOI in a reference class following Q(T)).
Analogously, V is calculated according to the same values as the decision-maker's values, i.e. it is modeled from a risk-neutral U-maximisation perspective, but this need not be so. (Technically this
assumption is already present in the first section of this document).
Computational shortcut: skipping the decision-maker's Bayesian update
We make use of the following fact:
When the signal B is normally distributed, with mean T, then, for any prior distribution over T, E[T|B=b] is increasing in b.
This was shown by Andrews et al. 1972 (Lemma 1).
In these cases, instead of explicitly computing the posterior for every b-value, we 1. First run a numerical equation solver to find the threshold value b_* ("b-star"), such that E[T|B=b]>bar if and
only if b>b_*. 2. Then, simply compare subsequent b-values to b_*.
This is hundreds of times faster than explicitly computing the posterior probability distribution P(T|B=b) for each iteartion.
The shortcut can be disabled by selecting explicit_bayes.
Cost-benefit analysis
The cost-benefit analysis assumes:
• "Choosing" the bar or the object of study means spending one's capital implementing that option. The amount of capital may vary.
• T and bar are expressed in terms of value realised per unit of capital. For example, "deaths averted per million dollars" or "new clients per dollar".
• The decision-maker can choose to spend signal_cost to acquire the signal. All other capital is spent implementing the option with the highest expected value.
This model is well-suited when choosing between different options that can absorb flexible amounts of capital (e.g. venture capital, ad spend, or philanthropy). However, it should be easy to model
the costs and benefits differently yourself, while still using the VOI simulation, which is more generally applicable.
Simulation parameters
Be careful, this is not the mu parameter. mu is the expected value of T's logarithm
Be careful, this is not the sigma parameter. sigma is the standard deviation of T's logarithm
For example, if the signal is a regression coefficient from a sample, this is the standard deviation of the estimator for that coefficient. This is often called the standard error of the coefficient.
Cost-benefit analysis (optional)
The parameters below have no effect on the numerical result, but help make the results easier to interpret without errors. T, sd_B, and bar must be expressed in value_units per money_units spent.
This tool was developed by Tom Adamczewski under contract with Open Philanthropy. The source code for the value of information package is here. | {"url":"https://valueofinfo.com:443/","timestamp":"2024-11-10T03:07:13Z","content_type":"text/html","content_length":"32194","record_id":"<urn:uuid:f4026b0b-c5f7-46a2-b757-b058ba0afa2b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00477.warc.gz"} |
Male and Female Olympic performance
Variation in male and female Olympic performance
Isabel Lugo posted an interesting article today called Variance in Olympic events in which she speculates about the variance in male versus female athletic performance.
… it may be the case that the difference between the very best men and the very best women in physical feats (say, times in some sort of race, because these are the most easily quantified) is
larger than the difference between the average man and the average woman, because there could be more variance among men than women.
I did a few back-of-the-envelope calculations to explore this possibility. Let X represent female athletic performance and Y male athletic performance in some context. Assume X and Y are normally
distributed and that we have rescaled so that X has mean 0 and standard deviation 1. (I know nothing about the statistics of athletic performance. This is just a rough exercise inspired by Isabel
Lugo’s question.) For this post, I will assume equal numbers of men and women are interested in a given sport. My next post looks at what happens when abilities are equal but more men than women are
interested in a given sport.
First, suppose men and women have equal average performance but that men have standard deviation σ > 1. Then a man who just makes the cutoff of n standard deviations above mean has performance nσ and
a woman who just makes the analogous cutoff has performance n. Then the ratio of their performance is σ for any value of n. At every percentile, the ratio of male to female performance would be the
same. The difference in performance, n(σ − 1), does increase as you look at more elite athletes, i.e. increasing values of n, but not by much. The difference would only be larger by 25% when looking
at 5-sigma athletes rather than 4-sigma athletes even though the former is over 100 times more exclusive.
What if in some context male and female performance both had variance 1 but had different means? Say the mean for men is μ > 0 and the mean for women is 0. Then the performance for a man n standard
deviations from the mean for men would be μ + n and the performance for a woman n standard deviations away from the mean for women would be n. The difference would remain constant at all levels of
performance, but the ratio of performance levels would tend toward 1 as n increases, that is, as you look at more and more elite athletes.
Next look at a different question. In either of the above situations, what proportion of the best athletes will be male? I will show that the odds of a top athlete being male increase exponentially
as your definition of “top” increases.
For a given level of performance k, we will look at P(Y > k)/P(X > k), the ratio of the proportion of men at that level to the proportion of women at that level. The probability that a woman has
performance greater than k is given by the approximation
Now suppose Y has mean 0 but standard deviation σ > 1. Then the odds in favor of someone with performance level greater than k being male equals
which increases exponentially as k increases, i.e. as we look at higher levels of performance. (By symmetry, this would also mean that the odds of a poor performer being male would increase as you
looked at worse and worse performers.) To plug in some particular numbers, suppose the standard deviation for men is 1.5 and we had a group of people with performance 2 or greater. The odds in favor
of someone in that group being male would be almost 4 to 1. But if we looked in a group with performance 5 or greater, the odds in favor of someone being male would be 322 to 1.
Next suppose Y has mean μ > 0 but standard deviation 1. Then the odds of a top performer being male are
This also increases exponentially as k increases. Again to put in some specific numbers, assume μ = 0.5 and look at performance levels of 2 and 5. The odds in favor of someone with performance level
at least 2 being male are about 3.2 to 1. The corresponding odds for a group with performance level at least 5 are about 12 to 1. | {"url":"https://www.johndcook.com/blog/2008/08/12/variation-in-male-and-female-olympic-performance/","timestamp":"2024-11-13T23:04:38Z","content_type":"text/html","content_length":"52917","record_id":"<urn:uuid:315bc49e-1757-43da-8657-198ccd5c1251>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00291.warc.gz"} |
A Large Black-White Scoring Gap Persists on the SAT
In 2006 The College Board reported the largest drop in SAT scores in over 30 years. But lost in the publicity surrounding this news was the good news that the racial scoring gap on the SAT actually
grew smaller, albeit by a small amount.
In 2006 average scores on the SAT (formerly known as the Scholastic Assessment Test) for students of all races dropped by the largest margin in more than 30 years. Everywhere this news generated
headlines. But largely obscured by the overall scoring decline was the fact that the racial scoring gap actually grew somewhat smaller.
Here we present our annual analysis on racial differences in scoring results on the SAT college admissions test. In order to make a legitimate comparison with past SAT results, we will ignore, for
the moment, test results on the new writing component of the SAT. This new part of the SAT is viewed with skepticism by admissions officials at scores of prestigious colleges and universities.
In 2006 the average black score on the combined math and verbal portions of the SAT test was 863. The mean score for whites on the combined math and verbal SAT was 1063, about 17 percent higher.
Not only are African-American scores on the SAT far below the scores of whites and Asian Americans, but they also trail the scores of every other major ethnic group in the United States including
students of Puerto Rican and Mexican backgrounds. In fact, few people realize that American Indian and Alaska Native students on average score 118 points higher than the average score of black
students. On average, Asian American students score 225 points, or 19 percent, higher than African Americans.
Trend in the Racial Scoring Gap
The racial gap in SAT scores this year actually declined by a slight margin from a year ago. But over the past decade the racial scoring gap has increased.
In 1976 The College Board published an analysis of the racial differences in scores on what was then called the Scholastic Aptitude Test (SAT). At that time the average black score was about 240
points, or 20 percent, below the average white score. By the early 1980s, the gap had shrunk to 200 points. Black scores were then 17 percent lower than the scores of whites. By 1988 the black-white
SAT test scoring gap was down to 189 points. The trend was distinctly encouraging. Many specialists in the educational community predicted that in time the racial scoring gap between blacks and
whites would disappear altogether.
But after 1988, progress in closing the SAT gap stopped abruptly and later it began to open up. The 189 point racial scoring gap in 1998 has grown as high as 204 points and this year sits at 200
These are persisting statistics that best tell the story of how deep the academic achievement gulf is between African Americans and the rest of the American population.
Explaining the Black-White Scoring Gap on the SAT Test
A number of reasons are advanced to explain the large and persisting black-white SAT scoring gap. Sharp differences in family incomes are a major factor. Always there has been a direct correlation
between family income and SAT scores. For both blacks and whites, as income goes up, so do test scores.
In 2006, 24 percent of all black SAT test takers were from families with annual incomes below $20,000. Only 4 percent of white test takers were from families with incomes below $20,000. At the other
extreme, 8 percent of all black test takers were from families with incomes of more than $100,000. The comparable figure for white test takers was 31 percent.
But there is a major flaw in the thesis that income differences explain the racial gap. Consider these observable facts from The College Board’s 2006 data on the SAT:
Whites from families with incomes of less than $10,000 had a mean SAT score of 993. This is 130 points higher than the national mean for all blacks.
Whites from families with incomes below $10,000 had a mean SAT test score that was 17 points higher than blacks whose families had incomes of more than $100,000.
Other Explanations for the Racial Scoring Gap on the SAT
Clearly, one of the main factors in explaining the SAT racial gap is that black students almost across the board are not being adequately schooled to perform well on the SAT and similar tests.
Public schools in many neighborhoods with large black populations are underfunded, inadequately staffed, and ill equipped to provide the same quality of secondary education that is offered in
predominantly white suburban school districts.
Data from The College Board shows that 59 percent of white students who took the SAT were ranked in the top 20 percent of their high school classes. This compares to 39 percent of black test takers.
Some 48 percent of white students who took the SAT report that their high school grade point average was in the A range. This compares to only 24 percent of black test takers. The mean high school
grade point average for all white students who took the SAT was 3.40. For blacks, the figure was 3.01. These figures alone explain a large portion of the racial scoring gap on the SAT.
A major reason for the SAT racial gap appears to be explained by the fact that black students who take the SAT have not followed the same academic track as white students. It is true that 97 percent
of both blacks and whites who take the SAT have studied algebra in high school. But in higher level mathematics courses such as trigonometry and calculus, whites hold a very large lead. In 2006, 47
percent of white SAT test takers had taken trigonometry in high school compared to 36 percent of black test takers. Some 30 percent of white test takers had taken calculus in high school. Only 15
percent of black students had taken calculus, one half as many as whites. Thirty-three percent of white SAT test takers had taken honors courses in mathematics compared to 20 percent of black SAT
test takers.
Similar discrepancies appear in the level of instruction in English, the other major component of the SAT. Some 90 percent of white test takers had completed coursework in American literature
compared to 80 percent of black test takers. For whites, 67 percent had taken high school courses in composition compared to 52 percent of blacks. Some 71 percent of whites and 61 percent of blacks
had completed coursework in grammar. A full 41 percent of all white test takers had completed honors courses in English compared to 29 percent of black test takers.
Also, whites are far more likely than blacks to have taken honors courses in science and social studies. Given the huge differences in course study between black and white high school students, it
comes as no surprise that white SAT scores are significantly higher than black SAT scores. Whites, who are more likely to attend high-quality schools, have simply achieved a greater mastery of the
subject matter than have blacks.
There are other reasons that contribute to the large scoring gap between blacks and whites on the SAT:
In some, if not many, cases, black schoolchildren are taught by white teachers who have low opinions of the abilities of black kids from the moment they enter the classroom. These teachers
immediately write off black students as academic inferiors and do not challenge them sufficiently to achieve the skills necessary to perform well on standardized tests.
The late John Ogbu, professor of anthropology at Berkeley, believed that broad cultural attributes among blacks — such as parental style, commitment to learning, and work ethic — bear a heavy
responsibility for the black-white educational gap. Ogbu wrote in his recent book, Black American Students in an Affluent Suburb: A Study of Academic Disengagement, that black students in the
affluent homes of doctors and lawyers are looking at rappers in ghettos as their role models. Students talk the talk about what it takes to be a good student, Ogbu wrote, but few put forth the
effort required to get good grades. This type of behavior is typical, Ogbu said, of racial minorities adapting to oppression and the lack of opportunity. Ogbu, much as Bill Cosby has done recently,
also placed the blame on black parents. He believed that many black parents are not offering sufficient guidance, do not spend enough time helping with homework, and do not pay adequate attention to
their children’s educational progress.
Black students in predominantly white schools who study hard are often the subject of peer ridicule. They are accused of “acting white” by other blacks. This so-called ghetto chic in the form of
peer pressure to shun academic pursuits undoubtedly has some dragging effect on average black SAT scores.
Black students may be subject to what Stanford psychology professor Claude Steele calls “stereotype vulnerability.” Steele contends that black students are aware of the fact that society expects
them to perform poorly on standardized tests. This added pressure put upon black students to perform well in order to rebut the racial stereotype in fact makes it more difficult for them to perform
well on these tests.
Black students in some urban schools are taught an Afrocentric curriculum that may serve to increase black pride and foster an awareness of black culture, but this form of education pays little
attention to the subject matters that are measured by the SAT.
In the United States even middle-class blacks tend to be brought up in basically segregated surroundings. They are not taught the pathways and modes of thinking that are embedded in white culture
and reflected in standardized tests. Black families that urge their children to go to college are often first-generation college graduates who grew up in households without the systems that support
first-rate academic achievement.
School administrators and guidance counselors — both black and white — often believe that black students are less capable and less able to learn. They routinely track black students at an early
age into vocational training or into a curriculum that is not college preparatory. Black students are rarely recommended for inclusion in gifted education, honors, or Advanced Placement programs.
Once placed on the slow academic track, most black kids can never escape. By the time black students are juniors and seniors in high school, they are typically so far behind their white counterparts
in the critical subject areas necessary to perform well on standardized tests that they have little hope of ever matching the scores of whites on the SAT.
Almost No Blacks Among the Top Scorers on the SAT Test
It is important to explain how the SAT racial scoring gap challenges affirmative action policies at the nation’s highest-ranked colleges and universities. Under the SAT scoring system, most
non-minority students hoping to qualify for admission to any of the nation’s 25 highest-ranked universities and 25 highest-ranked liberal arts colleges need to score at least 700 on each portion of
the SAT.
For admission to the very highest ranked, brand-name schools such as Princeton or MIT, applicants need scores of 750 to be considered for admission. Yet, as we shall see, only a minute percentage of
black test takers score at these levels. Thus, if high-ranking colleges and universities were to abandon their policies of race-sensitive admissions, they will be choosing their first-year students
from an applicant pool in which there will be practically no blacks.
Let’s be more specific about the SAT racial gap among high-scoring applicants. In 2006, 150,643 African Americans took the SAT test. They made up 10.3 percent of all SAT test takers. But only 976
African-American college-bound students scored 700 or above on the math SAT and only 1,117 scored at least 700 on the verbal SAT. Nationally, more than 95,000 students of all races scored 700 or
above on the math SAT and nearly 69,000 students scored 700 or above on the verbal SAT. Thus, in this top-scoring category of all SAT test takers, blacks made up only 1 percent of the students
scoring 700 or higher on the math test and only 1.6 percent of the students scoring 700 or higher on the verbal SAT.
If we eliminate Asians and other minorities from the statistics and compare just white and black students, we find that 5.4 percent of all white SAT test takers scored 700 or above on the verbal
portion of the test. But only 0.74 percent of all black SAT test takers scored at this level. Therefore, whites were nearly seven times as likely as blacks to score 700 or above on the verbal SAT.
Overall, there are more than 39 times as many whites as blacks who scored at least 700 on the verbal SAT.
On the math SAT, only 0.6 percent of all black test takers scored at least 700 compared to 6.4 percent of all white test takers. Thus, whites were more than 10 times as likely as blacks to score 700
or above on the math SAT. Overall, there were more than 53 times as many whites as blacks who scored 700 or above on the math SAT.
In a race-neutral competition for the approximately 50,000 places for first-year students at the nation’s 25 top-ranked universities, high-scoring blacks would be buried by a huge mountain of
high-scoring non-black students. Today, under prevailing affirmative action admissions policies, there are about 3,000 black first-year students matriculating at these 25 high-ranking universities,
about 6 percent of all first-year students at these institutions. But if these schools operated under a strict race-neutral admissions policy where SAT scores were the most important qualifying
yardstick, these universities could fill their freshman classes almost exclusively with students who score at the very top of the SAT scoring scale. As shown previously, black students make up at
best between 1 and 2 percent of these high-scoring groups.
The Racial Gap on the SAT Writing Test
Now let’s look at the racial gap on the new writing component of the SAT. The mean black score on the writing section in 2006 was 428. For whites, the mean score was 519. Therefore, the gap on the
writing portion of the SAT is slightly smaller than on the verbal portion of the SAT and significantly smaller than the racial gap on the mathematics section of the SAT.
Many observers believed that the introduction of the writing component would, by design, lessen the racial scoring gap in order to deflect criticism of The College Board. It is probable that in some
instances the people who score the new writing section will be able to detect the race of the writer by the vocabulary and subject matter of the student’s essay. There is then a suspicion in some
quarters that the scorers of the test may be inclined to “give a break” to black students. Therefore, it is suggested that the examination is graded on a curve that benefits blacks and Hispanics.
Looking to the Future
In the Grutter case upholding affirmative action in college admissions, Justice Sandra Day O’Connor’s decision expressed the goal of eliminating affirmative action over the next 25 years. At the
moment there is no evidence that substantial progress toward closing the racial scoring gap on the SAT test will occur.
A new study by The College Board based on family income trends estimates that there will still be a large racial gap in standardized test scores 22 years from now, the deadline for race-sensitive
admissions as outlined by Justice O’Connor.
Thus, the huge and growing gap in SAT scores, and particularly the scores at the highest levels, becomes one of the nation’s most urgent problems.
More Features | {"url":"http://www.jbhe.com/features/53_SAT.html","timestamp":"2024-11-07T23:11:09Z","content_type":"text/html","content_length":"29001","record_id":"<urn:uuid:0b414c74-45a2-421a-98f4-4b500bfb410d>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00305.warc.gz"} |
Generating Data
Numerics is all about analyzing and manipulating numeric data. But unless you can read in data from an external file, source or e.g. with the excellent F# Type Providers, you may need to generate
synthetic or random data locally, or transform existing data into a new form. The Generate class can help you in all these scenarios with a set of static functions generating either an array or an
IEnumerable sequence.
There is some overlap with LINQ, in case of F# also with some integrated language features and its fundamental types. This is intended for simplicity and consistency between array and sequence
operations, as LINQ only supports sequences.
Generates a linearly spaced array within the inclusive interval between start and stop, and either a provided step or a step of 1.0. Linear range is equivalent to the single colon : and double colon
:: operators in MATLAB.
F# has built in linear range support in array comprehensions with the colon operator:
[ 10.0 .. 2.0 .. 15.0 ]
val it : float list = [10.0; 12.0; 14.0]
[ for x in 10.0 .. 2.0 .. 15.0 -> sin x ]
val it : float list = [-0.5440211109; -0.536572918; 0.9906073557]
In C# you can get the same result with LinearRange:
Generate.LinearRange(10, 2, 15); // returns array { 10.0, 12.0, 14.0 }
Generate.LinearRangeMap(10, 2, 15, Math.Sin); // applies sin(x) to each value
Most of the routines in the Generate class have variants with a Map suffix. Instead of returning an array with the generated numbers, these routines instead apply the generated numbers to a custom
function and return an array with the results. Similarly, some routines have variants with a Sequence suffix that return lazy enumerable sequences instead of arrays.
Generates a linearly or log-spaced array within an interval, but other than linear range where the step is provided, here we instead provide the number of values we want. This is equivalent to the
linspace and logspace operators in MATLAB.
Generate.LinearSpaced(11, 0.0, 1.0); // returns array { 0.0, 0.1, 0.2, .., 1.0 }
Generate.LinearSpacedMap(15, 0.0, Math.Pi, Math.Sin); // applies sin(x) to each value
In F# you can also use:
Generate.linearSpacedMap 15 0.0 Math.PI sin
val it : float [] =
[|0.0; 0.222520934; 0.4338837391; 0.6234898019; 0.7818314825; 0.9009688679;
0.9749279122; 1.0; 0.9749279122; 0.9009688679; 0.7818314825; 0.6234898019;
0.4338837391; 0.222520934; 1.224606354e-16|]
LogSpaced works the same way but instead of the values \(10^x\) it spaces the decade exponents \(x\) linearly between the provided two exponents.
Generate.LogSpaced(4,0,3); // returns array { 1, 10, 100, 1000 }
The Kronecker delta \(\delta[n]\) is a fundamental signal in time-discrete signal processing, often referred to as unit impulse. When applied to a system, the resulting output is the system's impulse
response. It is closely related to the Dirac delta impulse function \(\delta(x)\) in continuous signal processing.
\[\delta[n] = \begin{cases} 0 &\mbox{if } n \ne 0 \\ 1 & \mbox{if } n = 0\end{cases}\]
The Impulse routine generates a Kronecker delta impulse, but also accepts a sample delay parameter \(d\) and amplitude \(A\) such that the resulting generated signal is
\[s[n] = A\cdot\delta[n-d] = \begin{cases} 0 &\mbox{if } n \ne d \\ A & \mbox{if } n = d\end{cases}\]
There is also a periodic version in PeriodicImpulse which accepts an additional period parameter.
Generate.Impulse(8, 2.0, 3)
val it : float [] = [|0.0; 0.0; 0.0; 2.0; 0.0; 0.0; 0.0; 0.0|]
Generate.PeriodicImpulse(8, 3, 10.0, 1)
val it : float [] = [|0.0; 10.0; 0.0; 0.0; 10.0; 0.0; 0.0; 10.0|]
Another fundamental signal in signal processing, the Heaviside step function \(H[n]\) is the integral of the Dirac delta impulse and represents a signal that switches on at a specified time and then
stays on indefinitely. In discrete time:
\[H[n] = \begin{cases} 0 &\mbox{if } n < 0 \\ 1 & \mbox{if } n \ge 0\end{cases}\]
The Step routines generates a Heaviside step, but just like the Kronecker Delta impulse also accepts a sample delay parameter \(d\) and amplitude \(A\) such that the resulting generated signal is
\[s[n] = A\cdot H[n-d] = \begin{cases} 0 &\mbox{if } n < d \\ A & \mbox{if } n \ge d\end{cases}\]
Generate.Step(8, 2.0, 3)
val it : float [] = [|0.0; 0.0; 0.0; 2.0; 2.0; 2.0; 2.0; 2.0|]
Generates an array of the given length with a periodic upper forward sawtooth signal, i.e. a line starting at zero up to some amplitude \(A\), then drop back to zero instantly and start afresh. The
sawtooth can be used to turn any arbitrary function defined over the interval \([0,A)\) into a periodic function by repeating it continuously.
Mathematically, the sawtooth can be described using the fractional part function \(\mathrm{frac}(x) \equiv x - \lfloor x \rfloor\), frequency \(\nu\) and phase \(\theta\) as
\[s(x) = A\cdot\mathrm{frac}\left(x\nu+\frac{\theta}{A}\right)\]
In a trigonometric interpretation the signal represents the angular position \(\alpha\) of a point moving endlessly around a circle with radius \(\frac{A}{2\pi}\) (and thus circumference \(A\)) in
constant speed, normalized to strictly \(0\le\alpha < A\).
• Sampling Rate: Number of samples \(r\) per time unit. If the time unit is 1s, the sampling rate has unit Hz.
• Frequency: Frequency \(\nu\) of the signal, in sawtooth-periods per time unit. If the time unit is 1s, the frequency has unit Hz. For a desired number of samples \(n\) per sawtooth-period and
sampling rate \(r\) choose \(\nu=\frac{r}{n}\).
• Amplitude: The theoretical maximum value \(A\), which is never reached and logically equivalent to zero. The circumference of the circle. Typically \(1\) or \(2\pi\).
• Phase: Optional initial value or phase offset. Contributes to \(\theta\).
• Delay: Optional initial delay, in samples. Contributes to \(\theta\).
The equivalent map function accepts a custom map lambda as second argument after the length:
Generate.periodicMap 15 ((+) 100.0) 1000.0 100.0 10.0 0.0 0
val it : float [] =
[|100.0; 101.0; 102.0; 103.0; 104.0; 105.0; 106.0; 107.0; 108.0; 109.0;
100.0; 101.0; 102.0; 103.0; 104.0|]
Generates a Sine wave array of the given length. This is equivalent to applying a scaled trigonometric Sine function to a periodic sawtooth of amplitude \(2\pi\).
\[s(x) = A\cdot\sin(2\pi\nu x + \theta)\]
Generate.Sinusoidal(15, 1000.0, 100.0, 10.0);
// returns array { 0, 5.9, 9.5, 9.5, 5.9, 0, -5.9, ... }
Generate random sequences by sampling from a random distribution.
Generate sample sequences distributed uniformly between 0 and 1.
Generate.Uniform(100); // e.g. 0.867421787170424, 0.236744313145403, ...
Uniform supports mapping to functions with not only one but also two uniform variables as arguments, with UniformMap and UniformMap2. As usual, lazy sequences can be generated using the variants with
the Sequence suffix, e.g. UniformMap2Sequence.
Instead of uniform we can also sample from other distributions.
• Normal - sample an array or sequence form a normal distribution
• Standard - sample an array or sequence form a standard distribution
In addition, the Random functions accept a custom distribution instance to sample from. See the section about random numbers and probability distributions for details.
Generates a new array or sequence where each new values is the result of applying the provided function the the corresponding value in the input data.
var a = new double[] { 2.0, 4.0, 3.0, 6.0 };
Generate.Map(a, x => x + 1.0); // returns array { 3.0, 5.0, 4.0, 7.0 }
In F# you'd typically use the Array module to the same effect (and should continue to do so):
Array.map ((+) 1.0) a
val it : float [] = [|3.0; 5.0; 4.0; 7.0|]
...but no equivalent operation is available in the .NET base class libraries (BCL) for C#. You can use LINQ, but that operates on sequences instead of arrays:
a.Select(x => x + 1.0).ToArray();
Similarly, with Map2 you can also map a function accepting two inputs to two input arrays:
let b = [| 1.0; -1.0; 2.0; -2.0 |]
Generate.Map2(a, b, fun x y -> x + y)
val it : float [] = [|3.0; 3.0; 5.0; 4.0|]
Typical F# equivalent:
Array.map2 (+) a b
val it : float [] = [|3.0; 3.0; 5.0; 4.0|]
And in C# with LINQ:
a.Zip(b, (x, y) => x + y).ToArray();
val x : float
val sin : value:'T -> 'T (requires member Sin)
<summary>Sine of the given number</summary>
<param name="value">The input value.</param>
<returns>The sine of the input.</returns>
module Array from Microsoft.FSharp.Collections
<summary>Contains operations for working with arrays.</summary>
<remarks> See also <a href="https://docs.microsoft.com/dotnet/fsharp/language-reference/arrays">F# Language Guide - Arrays</a>. </remarks>
val map : mapping:('T -> 'U) -> array:'T [] -> 'U []
<summary>Builds a new array whose elements are the results of applying the given function to each of the elements of the array.</summary>
<param name="mapping">The function to transform elements of the array.</param>
<param name="array">The input array.</param>
<returns>The array of transformed elements.</returns>
<exception cref="T:System.ArgumentNullException">Thrown when the input array is null.</exception>
val a : float []
val b : float []
val map2 : mapping:('T1 -> 'T2 -> 'U) -> array1:'T1 [] -> array2:'T2 [] -> 'U []
<summary>Builds a new collection whose elements are the results of applying the given function to the corresponding elements of the two collections pairwise. The two input arrays must have the same
lengths, otherwise an <c>ArgumentException</c> is raised.</summary>
<param name="mapping">The function to transform the pairs of the input elements.</param>
<param name="array1">The first input array.</param>
<param name="array2">The second input array.</param>
<exception cref="T:System.ArgumentException">Thrown when the input arrays differ in length.</exception>
<exception cref="T:System.ArgumentNullException">Thrown when either of the input arrays is null.</exception>
<returns>The array of transformed elements.</returns> | {"url":"https://numerics.mathdotnet.com/Generate","timestamp":"2024-11-08T23:48:20Z","content_type":"text/html","content_length":"30869","record_id":"<urn:uuid:82fa0caf-422d-4784-a47b-335427a0d962>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00893.warc.gz"} |
Dividend Discount Model (DDM): A Comprehensive Guide to Stock Valuation
B.2.1 Dividend Discount Model (DDM)
The Dividend Discount Model (DDM) is a fundamental method used in finance for valuing a company’s stock price based on the theory that its value is the present value of all its future dividends. This
model is particularly useful for investors who focus on dividend-paying stocks and seek to determine whether a stock is overvalued or undervalued based on its dividend payments.
Understanding the Principles Behind the Dividend Discount Model
The core principle of the DDM is that the intrinsic value of a stock is the present value of its expected future dividends. This approach assumes that dividends are the primary source of a stock’s
value, and it discounts these future dividends back to their present value using a required rate of return.
General Formula of the Dividend Discount Model
The general formula for the DDM is expressed as:
$$ P_0 = \sum_{t=1}^{\infty} \frac{D_t}{(1 + k_e)^t} $$
• \( P_0 \) is the current stock price.
• \( D_t \) is the dividend expected in period \( t \).
• \( k_e \) is the required rate of return.
This formula represents the sum of the present values of all expected future dividends, assuming they continue indefinitely.
Calculating the Intrinsic Value of a Stock
To calculate the intrinsic value of a stock using the DDM, one must estimate the future dividends and the required rate of return. The model can be adapted to different scenarios depending on the
expected growth pattern of dividends.
The Gordon Growth Model (Constant Growth DDM)
One of the most commonly used forms of the DDM is the Gordon Growth Model, also known as the Constant Growth DDM. This model assumes that dividends will grow at a constant rate indefinitely. The
formula is:
$$ P_0 = \frac{D_1}{k_e - g} $$
• \( D_1 \) is the expected dividend next year.
• \( g \) is the constant growth rate of dividends.
Example of the Gordon Growth Model
Consider a company expected to pay a dividend of $2 per share next year. The dividends are anticipated to grow at a rate of 4% annually, and the required rate of return is 9%. Using the Gordon Growth
Model, the intrinsic value of the stock is calculated as follows:
$$ P_0 = \frac{\$2}{0.09 - 0.04} = \frac{\$2}{0.05} = \$40 $$
This implies that the stock is valued at $40 per share based on its expected future dividends and growth rate.
Variations of the Dividend Discount Model
The DDM can be adapted to various scenarios depending on the expected dividend growth pattern. Two notable variations are the Zero-Growth Model and the Multi-Stage DDM.
Zero-Growth Model
The Zero-Growth Model is used when dividends are expected to remain constant over time. This model is suitable for companies with stable dividend payments and no expected growth. The formula is:
$$ P_0 = \frac{D}{k_e} $$
Where \( D \) is the constant dividend.
Multi-Stage Dividend Discount Model
The Multi-Stage DDM is applied when dividend growth rates are expected to change over time. This model is more complex and involves calculating the present value of dividends for different growth
stages. It is particularly useful for companies experiencing varying growth phases, such as high initial growth followed by stable growth.
Assumptions and Limitations of the Dividend Discount Model
While the DDM is a powerful tool for stock valuation, it is essential to recognize its assumptions and limitations:
1. Dividend Requirement: The model requires that the company pays dividends. It is not applicable to companies that do not distribute dividends.
2. Sensitivity to Growth Rates and Required Return: The model is highly sensitive to the estimated growth rates and the required rate of return. Small changes in these inputs can significantly
impact the calculated intrinsic value.
3. Applicability to Non-Dividend-Paying Companies: The DDM is less suitable for valuing companies that do not pay dividends, such as growth companies that reinvest earnings into expansion.
4. Assumption of Perpetual Growth: The Gordon Growth Model assumes that dividends will grow at a constant rate indefinitely, which may not be realistic for all companies.
Using the Dividend Discount Model in Investment Analysis
Despite its limitations, the DDM remains a fundamental tool in investment analysis and stock valuation. It provides a systematic approach to evaluating dividend-paying stocks and can be used
alongside other valuation methods to gain a comprehensive understanding of a stock’s value.
Investors often use the DDM to compare the intrinsic value of a stock with its current market price to determine if it is overvalued or undervalued. By doing so, they can make informed investment
decisions and identify potential opportunities in the market.
The Dividend Discount Model is a cornerstone of financial analysis and stock valuation. It offers a structured approach to determining the intrinsic value of dividend-paying stocks based on expected
future dividends. While the model has its assumptions and limitations, it remains a valuable tool for investors seeking to evaluate the worth of a stock in relation to its dividend payments.
By understanding and applying the DDM, investors can enhance their investment analysis and make more informed decisions in the ever-evolving financial markets.
Quiz Time!
📚✨ Quiz Time! ✨📚
### What is the primary principle behind the Dividend Discount Model (DDM)? - [x] The intrinsic value of a stock is the present value of its expected future dividends. - [ ] The intrinsic value of a
stock is determined by its earnings per share. - [ ] The intrinsic value of a stock is based on its market capitalization. - [ ] The intrinsic value of a stock is calculated using its book value. >
**Explanation:** The DDM values a stock by calculating the present value of its expected future dividends, reflecting the core principle of the model. ### Which formula represents the Gordon Growth
Model? - [x] \\( P_0 = \frac{D_1}{k_e - g} \\) - [ ] \\( P_0 = \frac{D}{k_e} \\) - [ ] \\( P_0 = \sum_{t=1}^{\infty} \frac{D_t}{(1 + k_e)^t} \\) - [ ] \\( P_0 = \frac{E}{k_e - g} \\) >
**Explanation:** The Gordon Growth Model, a form of the DDM, assumes constant dividend growth and is represented by the formula \\( P_0 = \frac{D_1}{k_e - g} \\). ### In the Gordon Growth Model, what
does \\( g \\) represent? - [x] The constant growth rate of dividends - [ ] The required rate of return - [ ] The dividend payout ratio - [ ] The growth rate of earnings > **Explanation:** In the
Gordon Growth Model, \\( g \\) represents the constant growth rate of dividends, a key component in calculating the stock's intrinsic value. ### What is the Zero-Growth Model used for? - [x] When
dividends are expected to remain constant over time - [ ] When dividends are expected to grow at a constant rate - [ ] When dividends are expected to decline - [ ] When dividends are not paid >
**Explanation:** The Zero-Growth Model is used when dividends are expected to remain constant, making it suitable for companies with stable dividend payments. ### Which of the following is a
limitation of the DDM? - [x] It requires dividends to be paid. - [ ] It is applicable to all companies. - [x] It is sensitive to growth rate estimates. - [ ] It is not affected by changes in the
required rate of return. > **Explanation:** The DDM requires dividends to be paid and is sensitive to growth rate estimates, which are limitations when applying the model. ### How does the
Multi-Stage DDM differ from the Gordon Growth Model? - [x] It accounts for changing growth rates over time. - [ ] It assumes constant dividend growth. - [ ] It is used for companies that do not pay
dividends. - [ ] It calculates intrinsic value based on earnings. > **Explanation:** The Multi-Stage DDM accounts for changing growth rates over time, making it suitable for companies with varying
growth phases. ### What is the required rate of return in the DDM? - [x] The rate used to discount future dividends to their present value - [ ] The rate at which dividends are expected to grow - [ ]
The rate of inflation - [ ] The rate of return on government bonds > **Explanation:** The required rate of return is used to discount future dividends to their present value, a critical component in
the DDM. ### Why is the DDM less applicable to non-dividend-paying companies? - [x] Because it requires dividends to calculate intrinsic value - [ ] Because it relies on earnings growth - [ ] Because
it focuses on market capitalization - [ ] Because it uses book value > **Explanation:** The DDM is less applicable to non-dividend-paying companies because it requires dividends to calculate the
stock's intrinsic value. ### What does the term "intrinsic value" refer to in the context of the DDM? - [x] The present value of expected future dividends - [ ] The market price of the stock - [ ]
The book value of the company - [ ] The earnings per share > **Explanation:** In the context of the DDM, "intrinsic value" refers to the present value of expected future dividends, which the model
aims to calculate. ### True or False: The DDM assumes that dividends are the primary source of a stock's value. - [x] True - [ ] False > **Explanation:** True. The DDM assumes that dividends are the
primary source of a stock's value, which is the basis for its valuation approach. | {"url":"https://csccourse.ca/32/4/1/","timestamp":"2024-11-04T23:25:07Z","content_type":"text/html","content_length":"117897","record_id":"<urn:uuid:36d0da78-c468-4431-ba97-bb240ff0fa0f>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00260.warc.gz"} |
Footing Concrete Calculator – Accurate Measurements for Construction
The footing concrete calculator quickly determines the amount of concrete needed for your project’s footings.
Footing Concrete Calculator
This calculator helps you determine the amount of concrete required for a footing based on its dimensions, concrete strength, and an optional waste allowance percentage.
How to Use the Calculator
• Enter the length, width, and depth of the footing in meters.
• Specify the desired concrete strength in MPa (20 to 50 MPa).
• Enter a wastage factor percentage (0 to 15) to account for potential waste.
• Click “Calculate” to see the estimated volume of concrete needed.
Calculation Explanation
The calculator multiplies the given dimensions to calculate the basic volume of concrete needed for the footing. The wastage factor is then applied to this volume to estimate the total volume of
concrete to obtain, allowing for potential waste and use inefficiencies.
Limitations of the Calculator
• This calculator assumes a simple rectangular footing shape and does not account for complex geometries or conditions on site.
• The waste percentage is an estimate; builder experience and conditions may affect actual waste.
Use Cases for This Calculator
Residential Patio Construction
When planning to build a patio in your backyard, a footing concrete calculator becomes essential. It helps you determine the quantity of concrete needed, ensuring you order the right amount and avoid
unnecessary waste.
Deck Installation
If you’re constructing a deck, using a footing concrete calculator simplifies your project planning. By calculating the required concrete for the footings, you can ensure a solid foundation that
supports the weight of your new deck.
Garage Slab Planning
Planning to install a garage? A footing concrete calculator assists you in estimating the concrete volume for footings, which crucially supports the structure’s weight while ensuring stability over
Building a Shed
When setting up a shed in your garden, adequate footing is vital. Using a footing concrete calculator, you can accurately compute the concrete required, ensuring your shed remains stable and durable
against the elements.
Retaining Wall Support
Constructing a retaining wall? Accurate footing calculations are vital to the wall’s integrity. A footing concrete calculator will provide you with necessary measurements, ensuring the wall is
effectively supported and protects your landscape.
Commercial Building Foundations
For larger construction projects like commercial buildings, the footing concrete calculator is a valuable tool. It helps in determining the concrete needed for foundations, ensuring that they can
safely support heavy loads and meet building codes.
Outdoor Steps and Walkways
If you’re designing outdoor steps or walkways, calculating the footings properly is crucial for safety. The footing concrete calculator helps you find the right quantity of concrete needed to
create sturdy, long-lasting paths that enhance your outdoor space.
Pole Barn Construction
Building a pole barn requires careful consideration of footings to support the structure. A footing concrete calculator allows you to efficiently measure the amount of concrete required, ensuring a
strong, stable base that can withstand heavy storms or loads.
Fence Post Installations
When installing a fence, strong footings are essential for stability and longevity. The footing concrete calculator aids you in estimating the right amount of concrete for each post, ensuring your
fence remains upright and secure even in harsh weather.
Landscape Features
Creating landscape features like planters or raised beds requires the right footings to support the construction. By utilizing a footing concrete calculator, you can accurately determine the concrete
quantity needed, allowing for a well-built landscape that enhances your outdoor aesthetics. | {"url":"https://calculatorsforhome.com/footing-concrete-calculator/","timestamp":"2024-11-06T17:26:05Z","content_type":"text/html","content_length":"145613","record_id":"<urn:uuid:6934d055-91c3-4ea0-baac-01eac69f6749>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00807.warc.gz"} |
Lab Staff to Ply Expertise at SIAM Conference on Applied Linear Algebra
October 21, 2015
Xiaoye “Sherry” Li of CRD’s Scalable Solvers Group (SSG) will give an invited talk atthe SIAM Conference on Applied Linear Algebra taking place October 26-30 in Atlanta. Li, one of 12 plenary
speakers, will discuss “Accelerating Direct Linear Solvers with Hardware and Algorithmic Advances” on Tuesday, October 27.
The conference is held every three years and is the leading meeting for researchers in the linear solvers community. Other Berkeley Lab contributions to the program include:
Li is a co-organizer of the mini-symposium sessions on Fast Solvers in Numerical Linear Algebra - Part Is and II.She is also a co-author on the paper “A Sparse Direct Solver for Distributed Memory
GPU and Xeon Phi Accelerated Systems.”
Aydın Buluç and Ariful Azad of CRD’s Performance and Algorithms Research Group are co-authors ofthe paper “Generalized Sparse Matrix-Matrix Multiplication and Its Use in Parallel Graph Algorithms.”
Chao Yang of CRD’s SSG is co-organizer of the mini-symposium sessions New Numerical Linear Algebra Methods Meet New Challenges of Physics - Parts I and II. Yang is also author of a poster on “Linear
Response and the Estimation of Absorption Spectrum in Time-Dependent Density Functional Theory.” He is also a co-author of the paper “A Structure-preserving Lanczos Algorithm for the Complex
J-symmetric Eigenproblem.”
Lin Lin of SSG is author of the paper “Randomized Estimation of Spectral Densities of Large Matrices Made Accurate.”
Pieter Ghysels of CRD’s Performance and Algorithms Research Group and Li are among co-authors on the paper “A Comparison of Different Low-Rank Approximation Techniques.”
Francois-Henry Rouet of SSG and Li are co-authors on the paper “Comparison of FMM and HSS at Large Scale.”
Ghysels, Li and Rouet are co-authors of the papers “A Parallel Multifrontal Solver and Preconditioner Using Hierarchically Semiseparable Structured Matrices” and “A Parallel Multifrontal Solver and
Preconditioner Using Hierarchically Semiseparable Structured Matrices.”
Meiyue Shao and Eugene Vecharynski of SSG are organizers of the mini-symposium sessionsLarge-Scale Eigenvalue Computations – Parts I and II
Vecharynski is author of the paper “Preconditioned Locally Harmonic Residual Methods for Interior Eigenvalue Computations”and a co-author of the paper “Preconditioned Solvers for Nonlinear Hermitian
Eigenproblems with Variational Characterization.”
Shao, Yang and Jack Deslippe of NERSC are among the co-authors of the paper“Structure Preserving Algorithms for Solving the Bethe--Salpeter Eigenvalue Problem.”
Jim Demmel, a faculty scientist in SSG, is co-author of the papers “The s-Step Lanczos Method and its Behavior in Finite Precision” and “Sparse Approximate Inverse Preconditioners for
Communication-Avoiding Bicgstab Solvers.”
Esmond G. Ng, head of CRD’s Applied Mathematics Department, is author of the paper “Large-Scale Eigenvalue Calculations in Scientific Problems.”
About Berkeley Lab
Founded in 1931 on the belief that the biggest scientific challenges are best addressed by teams, Lawrence Berkeley National Laboratory and its scientists have been recognized with 16 Nobel Prizes.
Today, Berkeley Lab researchers develop sustainable energy and environmental solutions, create useful new materials, advance the frontiers of computing, and probe the mysteries of life, matter, and
the universe. Scientists from around the world rely on the Lab’s facilities for their own discovery science. Berkeley Lab is a multiprogram national laboratory, managed by the University of
California for the U.S. Department of Energy’s Office of Science.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.
For more information, please visit energy.gov/science. | {"url":"https://crd.lbl.gov/news-and-publications/news/2015/lab-staff-to-ply-expertise-at-siam-conference-on-applied-linear-algebra/","timestamp":"2024-11-09T20:08:11Z","content_type":"text/html","content_length":"25441","record_id":"<urn:uuid:2e3db13a-7599-4739-bb3d-cceb2d5a076a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00251.warc.gz"} |
How to Multiply Fractions: A Detailed Guide - He Loves MathHow to Multiply Fractions: A Detailed Guide - He Loves Math
How to Multiply Fractions: A Detailed Guide
Multiplying fractions is an essential math skill that students often need to master in school. Whether you’re working on a homework problem, studying for a test, or dealing with fractions in real
life, understanding how to multiply fractions can be extremely useful. This blog post will explain how to multiply fractions in detail, using examples and addressing frequently asked questions to
ensure you understand the process thoroughly. Let’s dive in!
What are Fractions?
A fraction represents a part of a whole. It consists of two numbers:
• Numerator: The top number, which indicates how many parts we have.
• Denominator: The bottom number, which indicates how many equal parts the whole is divided into.
For example, in the fraction 3/4, the numerator is 3 and the denominator is 4, meaning we have 3 out of 4 equal parts.
Step-by-Step Guide to Multiplying Fractions
Multiplying fractions is simpler than adding or subtracting them because there is no need to find a common denominator. Follow these steps to multiply fractions:
Step 1: Multiply the Numerators
The first step is to multiply the numerators of the fractions together. This will give you the numerator of the answer.
Example: Multiply 2/3 and 4/5.
Step 2: Multiply the Denominators
Next, multiply the denominators of the fractions together. This will give you the denominator of the answer.
Step 3: Simplify the Fraction (If Needed)
After multiplying, you may need to simplify the fraction by dividing both the numerator and denominator by their greatest common factor (GCF).
• Result: 8/15 (This fraction cannot be simplified further.)
Final Answer: 8/15
Example Problems
Example 1: Multiply 1/2 by 3/4.
• Multiply the numerators: 1 × 3 = 3
• Multiply the denominators: 2 × 4 = 8
• The answer is 3/8.
Example 2: Multiply 5/6 by 2/9.
• Multiply the numerators: 5 × 2 = 10
• Multiply the denominators: 6 × 9 = 54
• Simplify 10/54 by dividing both the numerator and denominator by 2.
• The simplified answer is 5/27.
Multiplying Mixed Numbers
A mixed number is a combination of a whole number and a fraction. To multiply mixed numbers, follow these steps:
1. Convert the Mixed Numbers to Improper Fractions: An improper fraction has a numerator larger than its denominator.
2. Multiply the Fractions: Follow the same steps as above.
3. Convert Back to a Mixed Number (If Needed): If the result is an improper fraction, convert it back to a mixed number.
Example: Multiply 1 1/2 by 2 1/3.
1. Convert to improper fractions:
2. Multiply the fractions:
3. Simplify or convert to a mixed number:
Final Answer: 3 1/2
Tips for Multiplying Fractions
1. Simplify Before Multiplying: If possible, simplify the fractions before multiplying. This can make the multiplication easier and the result simpler.
Example: Multiply 4/9 by 3/8.
□ Before multiplying, simplify 4 and 8 (both are divisible by 4):
□ 4/9 × 3/8 = 1/9 × 3/2 = 3/18 = 1/6
2. Cross-Cancel: You can cross-cancel numerators and denominators before multiplying to make the calculation easier.
Frequently Asked Questions about Multiplying Fractions
Q1: How do you multiply fractions?
To multiply fractions, multiply the numerators together and then multiply the denominators together. Simplify the result if possible.
Q2: Do you need a common denominator to multiply fractions?
No, unlike adding or subtracting fractions, you do not need a common denominator to multiply fractions. Simply multiply across the numerators and denominators.
Q3: How do you multiply mixed numbers?
To multiply mixed numbers, first convert them to improper fractions, then multiply as you would with regular fractions. Convert the result back to a mixed number if needed.
Q4: Can the product of two fractions be greater than the original fractions?
No, when you multiply two proper fractions (fractions less than 1), the product is always smaller than either of the original fractions.
Q5: How do you simplify a fraction after multiplying?
To simplify a fraction, find the greatest common factor (GCF) of the numerator and denominator and divide both by that number.
Q6: What if I need to multiply a fraction by a whole number?
To multiply a fraction by a whole number, convert the whole number to a fraction by placing it over 1, then multiply as usual.
Q7: Can I cross-cancel when multiplying fractions?
Yes, cross-canceling is a technique used to simplify the multiplication process by reducing numerators and denominators before multiplying.
Q8: What is the difference between multiplying and adding fractions?
When multiplying fractions, you multiply the numerators and denominators directly. When adding fractions, you need a common denominator before adding the numerators.
Q9: How do you multiply fractions with different denominators?
The denominators do not need to be the same to multiply fractions. Simply multiply across the numerators and denominators.
Q10: How do I multiply fractions in real-life situations?
Multiplying fractions is useful in many real-life situations, such as calculating portions in recipes, determining areas, or finding part of a quantity.
Multiplying fractions is a straightforward process that involves multiplying the numerators and denominators and then simplifying the result. Whether you are multiplying simple fractions, mixed
numbers, or whole numbers, the steps are easy to follow once you understand the basics. With practice, you will be able to multiply fractions quickly and accurately, making it a valuable skill for
both academics and real-life applications. | {"url":"https://helovesmath.com/math/how-to-multiply-fractions-a-detailed-guide/","timestamp":"2024-11-03T09:35:25Z","content_type":"text/html","content_length":"179478","record_id":"<urn:uuid:38c77060-d43f-4ffb-b020-b0aa88b5a7b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00617.warc.gz"} |
Physics - Online Tutor, Practice Problems & Exam Prep
Alright, guys. So in this video, we're going to be talking about the electric potential. We talked about the electric potential energy between two charges. And even though those two things sound
similar, we're going to see how in this video they are different. Let's check it out. Electric potential is sometimes just simply called potential, and electric potential energy is sometimes just
simply called potential energy. It's kind of assumed that we're talking about electricity. So a lot of people just don't even say electric potential energy. We just say potential and potential
energy. But one important distinction to make is that even though these two things are related and they sound the same, they actually represent different things. So we have to be very careful about
our word choices and how we use these terms. Now, the best way to understand the difference between potential and potential energy is to go back and talk about electric fields. So, we said that
basically a single charge, whether it was a positive or negative charge, let's assume it's a positive charge right here, emitted these field lines, these electric field lines, and basically, these
field lines were just information, and this information or field told charges that were in the vicinity how much force to feel.
What happens is that a single charge alongside producing an electric field also produces something called an electric potential, and it's very similar to how it works. So, if you have a positive
charge at the same time it's producing this electric field outwards, telling other charges how much force to feel, a positive charge is also emitting some information, a field called a potential,
which, by the way, has the symbol V. And this symbol basically tells charges how much energy to have or how much energy to feel. And the thing is, we were talking about single charges in which this
was like a big Q. We said that that was the producing charge and then it emitted these field lines and basically nothing happened unless you actually dropped a second charge inside of here, a small
q. And once you had a second charge, all of a sudden, there is a force on this thing because if you have some electric field lines that point in this direction, and if you drop a charge here, then,
for instance, if it's a positive charge, then this is going to feel a repulsive force like this. And we have the relationship between the force and the electric field that was just given as F=qE.
Similarly, for potential, this single charge sets up a field called an electric potential, and once you drop a second charge inside of this potential, now there is energy. These two things are
different, but the mechanisms in which they sort of set up these force fields and energy fields are very similar. We said that the force that we could calculate on a charge from an electric field is
just given by this equation: F=qE. Well, it's very similar for energy. What happens is once there's a second charge that you put in between these two things. Now it basically creates some electric
potential energy, which we know the equation for, and this electric potential energy is given as U=qV in which, in the case of Coulomb's law in the case of electric fields, this E represented the
strength of the electric field that this feeling charge was put inside of. Well, this V is the strength of the energy field that is put inside of.
Another way you might see that is actually the potential, sometimes the potential field. So basically, we know that the q that this little q here always corresponded to the q that was feeling the
field that it was put inside of. So, in other words, this q is always the feeling charge. Well, it's the same way with the potential in this formula right here: U=qV. This q always represents the
thing that is feeling the potential at that specific spot.
Alright, so I just want to go ahead and wrap up everything really quickly, once more. So you have a single charge. It produces something called an electric field, and that field tells charges that
are inside of it, how much force to experience. And once that second charge is put there, there's force called Coulomb's Law, and it's given by this equation or kqq2/r2 where r is the strength of the
electric field? And that q is the feeling charge. Well with the potential, it basically does the same exact thing because it's for energy and the equation is slightly different.
A single charge produces an electric potential out here. And once you are in that potential it tells charges inside of it how much energy to have. And once there is a second charge, all of a sudden
now there's energy. There's potential energy between these two charges that potential energy is given as U=qV or U=kQq/r where that V is the strength of the energy field or the potential field. And
this little q is also the feeling charge. Okay, so the unit of this electric potential is called the Volt, and it's given by the letter V. And this V is actually defined as one Joule per one Coulomb.
Now we have to be very careful here because this V is the symbol for both the electric potential and the unit. So, for example, it would be perfectly sensible to have an equation like this: V = 3
volts. This would be perfectly sensible. It was just some guy who decided hundreds of years ago that the symbol for the letter and the unit was going to be the same. So this right here is the symbol
for electric potential. Whereas this right here is the unit. So just so you know, don't get confused between those two. And that's basically all we need to know about the electric potential. Let's go
ahead and check out an example. We have a five and a three Coulomb charge that are separated by some distance right here.
So if the five Coulomb charge feels 200 volts from the three Coulomb charge, what's the potential? Great. So we have these two charges right here. This is going to be a five. Coulomb charge, and now
we're supposed to figure out what is the potential energy on the five Coulomb charge. So in other words, we're trying to figure out what U is. We know that U is just going to be K Q1 Q2 / r. Here's
the problem though: we could use this, we could try to use this potential energy right here to figure out what the potential energy of this five Coulomb charge is. The problem is, we actually don't
know what this r distance is, so we can't use this potential energy formula.
Instead, we're going to have to use a different potential energy formula, which is that U is equal to qV. So we have with the charges. This charge corresponds to the feeling charge, and we know that
the five Coulomb charge is feeling 200 volts from this charge over here. So, in other words, this is actually the producing charge Q, and this is actually the feeling charge little q. So this is
actually going to be the charge that we use in this formula. And what we're doing is we're basically saying this producing charge here is producing some potential field, some potential out in this
field here. And this little q is feeling it.
Okay, So that means that this potential energy is just going to be the five Coulombs times the potential, which is 200 volts at the specific point. So at right here, the potential is 200 volts. So
that means that the potential energy is going to be five times 200 which is equal to 1000, and that's it. So basically, just let me know if you guys have any questions, let's go ahead and do some | {"url":"https://www.pearson.com/channels/physics/learn/patrick/electric-potential/electric-potential?chapterId=8fc5c6a5","timestamp":"2024-11-12T03:03:11Z","content_type":"text/html","content_length":"629103","record_id":"<urn:uuid:0821ca87-9e01-4d79-b7cc-68d927253e20>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00720.warc.gz"} |
sorting algorythm vs efficiency
Martin Vajsar wrote:
Ryan McGuire wrote:but I don't think that big-O captures everything we're concerned about.
Was it specified somewhere what are we actually concerned about?
We're not just concerned about our algorithm being O(N log N), but also what the actual multiplier is.
No, we aren't. The actual multiplier doesn't give any additional information. As has already mentioned recently elsewhere in this forum, the "big-O" just describes how the runtime of the
algorithm changes when the size of its input changes. And really it is just an approximation. You cannot use it in any way to compute that runtime, since lower-order factors of the complexity are
simply omitted.
Any other ideas? In particular, do we have any implementations to actually test?
The answer to similar proposals around here usually is: show us your idea or implementation first.
Myke's idea actually sounds pretty good to me. If I understand it right, it's actually a merge sort that starts at the size of sublists equal to 5 instead of 1, so it shaves off some of the time
of the original merge sort.
I guess I don't understand your distinction. On the one hand you say that no, we aren't concerned with the multiplier. On the other, you say that Myke's modified merge sort, which is O(N log N),
shaves some time off the original merge sort, which is also O(N log N). Since both algorithms run in time that is proportional to N log N (at least for large N), isn't it
the comparison of the N log N multipliers for those two algorithms that makes Myke's better than the standard?
> Was it specified somewhere what are we actually concerned about?
Yes. The first sentence of the last line of the original post: "I am sure you can come up with a faster approach than this."
We are looking at the overall speed of any candidate algorithms. I would say that an algorithm that can sort a few hundred thousand numbers in less time than the simple-minded java sort, even if both
have the same big-O expression, is what we're looking for.
> The answer to similar proposals around here usually is: show us
idea or implementation first.
But of course, and I've contributed to quite a few previous challenges: Sorting fifteen apples when you can do a three way comparison, solving Sudokus, identifying the one name (or other
) that shows up in a list just once while others are repeated, etc. I have an idea for an algorithm for this challenge as well but haven't had time to actually put pen to paper, so to speak.
It just seemed that in this case we were going off on a tangent about how log(base 5) N is proportional to log(base 10) N, and I wanted us to get back on the topic of actual algorithms that might be
faster than the simple-minded one without dismissing candidates just because they execute in O(N log N) time. | {"url":"https://coderanch.com/t/614868/sorting-algorythm-efficiency","timestamp":"2024-11-02T12:18:47Z","content_type":"text/html","content_length":"105266","record_id":"<urn:uuid:4d61e625-cd6a-4c38-8bf6-b21143f1081a>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00742.warc.gz"} |
He Everyone!
Is There An Option To Set Extra_Docker_Shell_Script For Project Or Experiment, Rather Than For Global User Configuration? I'M Working On Several Projects And Each Of Them Require Their Own Environment Which Couldn'T Be Set Up From Python.
He Everyone! Is There An Option To Set Extra_Docker_Shell_Script For Project Or Experiment, Rather Than For Global User Configuration? I'M Working On Several Projects And Each Of Them Require Their
Own Environment Which Couldn'T Be Set Up From Python.
He everyone!
Is there an option to set extra_docker_shell_script for project or experiment, rather than for global user configuration? I'm working on several projects and each of them require their own
environment which couldn't be set up from python.
Posted 6 months ago
Votes Newest
Answers 2
Hi @<1691983266761936896:profile|AstonishingOx62> , you can use set_base_docker in the code to specify it per experiment - None
516 Views
2 Answers
6 months ago
6 months ago | {"url":"https://faq.clear.ml/question/1691983269614063616/he-everyoneis-there-an-option-to-set-extradockershellscript-for-project-or-experiment-rather-than-for-global-user-configuration-im-working-on-several-projects-and-each-of-them-require-their-own-environment-which-couldnt-be-set-up-from-python","timestamp":"2024-11-12T22:20:51Z","content_type":"text/html","content_length":"27927","record_id":"<urn:uuid:83b42730-9041-4929-99d3-c3dca168a725>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00726.warc.gz"} |
The function quadrat.test is generic, with methods for point patterns (class "ppp"), split point patterns (class "splitppp") and point process models (class "ppm").
If X is a split point pattern, then for each of the component point patterns (taken separately) we test the null hypotheses of Complete Spatial Randomness, then combine the result into a single test.
The method quadrat.test.ppp is applied to each component point pattern. Then the results are pooled using pool.quadrattest to obtain a single test. | {"url":"https://www.rdocumentation.org/packages/spatstat.explore/versions/3.0-6/topics/quadrat.test.splitppp","timestamp":"2024-11-03T22:46:41Z","content_type":"text/html","content_length":"67646","record_id":"<urn:uuid:01c0da79-0750-4a48-b632-2e594ce1bebd>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00482.warc.gz"} |
milonga, a free nuclear reactor analysis
PLEASE SWITCH TO FeenoX ASAP
Milonga is a free core-level neutronic code that solves the steady-state multigroup neutron transport equation using either
a. the diffusion approximation, or
b. the discrete ordinates S_N method
over unstructured grids (although simple structured grids can also be used) based on either a
i. finite-volumes, or
ii. finite-elements
discretization scheme. It works on top of the wasora framework, which provides means to parse and understand a high-level plain-text input file containing algebraic expressions, data for function
interpolation, differential equations and output instructions amongst other facilities. Therefore, any mathematical computation which can be done by wasora—i.e. parametric calculations,
multidimensional optimization, function interpolation and integration, etc.—can be combined with the facilities that milonga provides to solve the neutron diffusion equation.
Milonga’s main features are:
• free & open source software (released under the terms of the GNU GPL version 3)
• solves the steady-state few-group neutron transport equation over a Gmsh-generated unstructured grid in one, two or three spatial dimensions
• may solve either the discrete ordinates or the diffusion formulation of the transport equation
• may use either a finite-volumes or a finite-elements spatial discretization scheme to obtain a matrix-casted eigenvalue problem
• solves any combination of multiplicative and non-multiplicative media with and without independent source
• can solve for either the direct or the adjoint flux (TODO! only direct flux is computed right now)
• structured grids may be generated for simple problems within milonga without needing an external mesher (discouraged)
• macroscopic cross sections are given as algebraic expressions of x, y and z in a per-material basis
• algebraic expressions may involve interpolated point-wised user-defined functions of one or more variable, so cross sections may depend on intermediate spatial distributions (e.g. temperatures)
• algebraic expressions may involve standard functions (\sin, \cos, \exp, \log, etc.) and functionals (integrals, derivatives, roots, etc.)
• computed results (i.e. k_\text{eff}, fluxes, power) are given as scalar variables and functions of x, y and z which may be further used in algebraic expressions
• program output is 100% user-defined (in fact the program is silent if no explicit PRINTs are given)
• milonga may be coupled to other calculation codes using files, shared-memory objects or python bindings
• non-linear effects (e.g. xenon and/or thermalhydraulic feedback) may be solved in an iterative way
• multidimensional quasi-random sequential parametric or non-linear minimization cases are easily created from single-run input files
• milonga may be extended by runtime-loadable shared objects and/or fixed entry-point plugins
• high-quality free & open third-party libraries are used to perform tasks that other people have already worked out: PETSc, SLEPc, GSL, IDA, Readline
• new formulations and/or spatial discretizations schemes can be (more or less) easily added
• milonga interfaces well with other UNIX tools (m4, grep, sed, awk, etc.) and plotting & post-processing tools (Gnuplot, Gmsh, LaTeX, ParaView, Pyxplot, etc.)
Actually, milonga can be seen as a glue layer between a mesh generator (i.e Gmsh) and a large sparse matrix solver (i.e SLEPc for eigenvalue problems and PETSc for linear problems). That is to say,
milonga builds the matrices R and F that cast the multigroup neutron transport/diffusion equation as a matrix-based eigenvalue problem:
F \phi = k_\text{eff} \cdot R \phi
Should there be independent neutron sources S, then the problem is casted as a system of linear equations:
(F-R) \phi = S
These matrices are expected to be sparse, as they are the result of the discretization of the differential diffusion operator using either finite volumes or finite elements, over a certain spatial
grid either generated by a mesher or structured defined within the milonga input file. Said matrices are thus built in PETSc format, so they can either be passed to a parallel solver (default is
SLEPc, whose algorithms and parameters may be chosen at run-time) or even dumped into files to be opened later. The macroscopic cross-section may depend on the spatial coordinates x, y and/or z
(depending on the dimension of the problem) not only because different materials may be assigned to the regions of the domain (i.e. physical entities) but also due to continuous dependence of these
cross sections with arbitrary parameter distributions within the entity (i.e. temperatures, densities, poison concentration, etc.). These dependencies can be given either as point-wise interpolated
functions or algebraic expressions (that may in turn involve point-wise defined data).
Milonga provides also a second glue layer that links the output of the linear/eigen-solver to the input of a post-processing tool (i.e Gmsh or ParaView). The effective multiplication factor k_\text
{eff} is stored as a wasora variable, and the fluxes and power distribution as point-wise defined multidimensional functions, which can then be added, subtracted, integrated, differentiated, etc.
Parametric and optimization runs may be also performed.
Quick start
“It is really worth any amount of time and effort to get away from Windows if you are doing computational science.”
Open a terminal in a GNU/Linux box (may be a VirtualBox box) and install the following tools:
sudo apt-get install m4 make autoconf automake gcc g++ git findutils
and the following libraries
sudo apt-get install libgsl-dev petsc-dev slepc-dev
If your distribution does not have a bundled version of PETSc or SLEPc, then you might need to compile them yourself. Ask for help in the mailing list at https://www.seamplex.com/lists.html.
Clone the milonga repository, bootstrap (might take a while as it has to clone and bootstrap wasora first), configure, compile, check and install:
git clone https://github.com/seamplex/milonga/
cd milonga
make check
sudo make install
This will result in the milonga executable in a system-wide location such as /usr/local/bin. If you do not have root access or do not want to install system-wide, just locate the milonga executable
and put it in your PATH. If these instructions are non-sense to you, go directly to the mailing list.
Examples & test suite
After the compilation of the code (that follows the standard ./configure && make procedure, see INSTALL for details), one recommended step is to run the test suite with
$ make check
It consists of some cases that work both as examples of usage and as a suite of tests that check that milonga implements correctly the functionalities that are expected. Some cases have analytical
solution and some not. Some cases used an already-generated unstructured mesh and some others need Gmsh to be installed. Some cases use gnuplot to plot results. ParaView can be used to post-process
the last case. For some cases, milonga generates a markdown-formatted text file containing debugging and benchmarking information that can be converted to PDF and/or HTML with pandoc. In Debian-based
distributions, they can all be installed with:
# apt-get install gmsh gnuplot paraview pandoc
The make check command may not show the actual output of the examples but the overall result (i.e. whether the test passed, the test failed or the test was skipped). Expect your screen to be filled
up with plots, post-processing views and browsers showing debugging and benchmarking information. Each individual test may be repeated by executing the test-*.sh scripts located in the examples
User iepale has shared a set of input files that solve some of the ANL-7416 benchmarks using his fork of milonga that allows to solve transient solutions. This fork will soon be merged into the main
milonga branch.
# solves a one-dimensional slab either with a
# structured or unstrucutred mesh with a single file
# run as
# $ milonga slab.mil { structured | unstructured } [ --volumes | --elements ]
# we define two meshes
MESH NAME unstructured FILE_PATH slab.msh DIMENSIONS 1
a = 2 * 10.371065 # critical size of the problem UD20-1-0-SL (number 22 report Los Alamos)
MESH NAME structured STRUCTURED DIMENSIONS 1 NCELLS_X 20 LENGTH_X a
DEFAULT_ARGUMENT_VALUE 1 unstructured
MILONGA_PROBLEM MESH $1 # which mesh we use is selected from the commandline
MATERIAL fuel SigmaT 0.54628 SigmaS 0.464338 nuSigmaF 1.70*0.054628 D 1/(3*SigmaT(x))
PHYSICAL_ENTITY NAME left BC null
PHYSICAL_ENTITY NAME right BC null
milonga slab.mil
Two-dimensional PWR IAEA benchmark
See examples/2dpwr*.mil for the actual input files.
gtheler@tom:~/wasora-suite/milonga/examples$ ./test-2dpwr.sh
keff = 1.0293114574 ( structured volumes, lc = 3.33333 , 5202 x 5202 , 0.00 0.01 0.07 secs )
max thermal flux 17.9 located at ( x = 135.00 , y = 55.00 )
keff = 1.0296896615 ( structured elements, lc = 3.33333 , 5408 x 5408 , 0.00 0.04 0.10 secs )
max thermal flux 19.7 located at ( x = 56.67 , y = 136.67 )
keff = 1.0297517716 ( unstructured volumes, lc = 3.33333 , 5994 x 5994 , 0.00 0.01 0.19 secs )
max thermal flux 17.8 located at ( x = 132.71 , y = 55.96 )
keff = 1.0296828170 ( unstructured elements, lc = 3.33333 , 6204 x 6204 , 0.00 0.05 0.24 secs )
max thermal flux 19.7 located at ( x = 56.22 , y = 136.00 )
The Stanford Bunny
# solves a three-dimensional shape with uniform XS
# the geometry should be given in $1.geo
# bulk material phyisical group should be named "fuel" and
# external surface phyisical group should be named "external"
# post-processing views are written as $1.vtk and $1.msh
# run as
# $ milonga 3dshape.mil [ cube | cylinder | shpere | bunny ]
# [ --diffusion | --s2 | --s4 ]
# [ --volumes | --elements ]
# if the mesh does not already exists, call gmsh on the $1.geo file
SHELL "if [ ! -e $1.msh ]; then gmsh -v 0 -3 $1.geo; fi"
MESH FILE_PATH $1.msh DIMENSIONS 3
# these are the default values, they are overwritten by the commandline arguments
MILONGA_PROBLEM SCHEME elements FORMULATION s2
# some settings to improve cpu & memory usage
MILONGA_SOLVER EPS_TYPE jd ST_TYPE precond KSP_TYPE bcgs PC_TYPE asm
# table 12 in page 18 of Los Alamos Report LA-13511
# analytical benchmark test set for criticality code verification
MATERIAL U235a SigmaT 0.32640 SigmaS 0.248064 SigmaA 0.013056 nuSigmaF 2.7*0.065280
# link physical entities in the mesh to materials and boundary conditions
PHYSICAL_ENTITY NAME fuel MATERIAL U235a
PHYSICAL_ENTITY NAME external BC vacuum
# do the magic!
# write some results into to the standard output
PRINT_FUNCTION phi1 HEADER
PRINT TEXT "\# keff = " %.8f keff SEP " " TEXT "(" %+.1f 1e5*(1-1/keff) TEXT "pcm )"
PRINT TEXT "\# nodes = " %.0f nodes
PRINT TEXT "\# elements = " %.0f elements
PRINT TEXT "\# CPU usage: "
PRINT TEXT "\# init = " %.3f time_cpu_ini TEXT "seconds"
PRINT TEXT "\# build = " %.3f time_cpu_build TEXT "seconds"
PRINT TEXT "\# solve = " %.3f time_cpu_solve TEXT "seconds"
PRINT TEXT "\# total = " %.3f time_cpu_total TEXT "seconds"
# MILONGA_DEBUG MATRICES_X FILE_PATH 3dshape MATRICES_PETSC_OCTAVE
# files for gmsh and vtk post-processing
MESH_POST FILE_PATH $1_out.msh phi1
FLUX_POST FILE_PATH $1.vtk
$ milonga 3dshape.mil bunny
2.494790e+00 -9.692680e-01 7.595410e+00 2.275265e+00
-3.223120e+00 -2.619830e+00 5.786140e+00 1.707387e+00
# keff = 0.92145559 ( -8523.9 pcm )
# nodes = 3423
# elements = 18496
# CPU usage:
# init = 0.001 seconds
# build = 1.608 seconds
# solve = 0.621 seconds
# total = 2.231 seconds
The angular fluxes \psi_g for g=1,\dots,8 from the S_2 formulation.
Check out https://github.com/seamplex/milonga-2015-workshop for a “hands-on” tutorial (text and input files) given a workshop held in 2015. The presentation slides are here.
Milonga is distributed under the terms of the GNU General Public License version 3 or (at your option) any later version.
Milonga is free software—both as in free speech and as in free beer, although the first meaning is far more important than the second one—and is distributed under the terms of the GNU General Public
License version 3 or (at your option) any later version. In words of the Free Software Foundation,
Nobody should be restricted by the software they use. There are four freedoms that every user should have:
0. the freedom to use the software for any purpose,
1. the freedom to change the software to suit your needs,
2. the freedom to share the software with your friends and neighbors, and
3. the freedom to share the changes you make. When a program offers users all of these freedoms, we call it free software.
Developers who write software can release it under the terms of the GNU GPL. When they do, it will be free software and stay free software, no matter who changes or distributes the program. We
call this copyleft: the software is copyrighted, but instead of using those rights to restrict users like proprietary software does, we use them to ensure that every user has freedom.
Not only does milonga (and wasora) provide all the four basic freedoms to the software user, but it also encourages her to study, understand, analyze and hack it. And of course, to share the
associated discoveries, suggestions, improvements and fixed bugs under the terms of the GNU GPL—especially with wasora’s original author. To sum up:
Milonga is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version~3 of the License,
or (at your option) any later version.
Milonga is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
Public License for more details.
Further information
See the file INSTALL for compilation and installation instructions.
See the directory examples for the test suite and other examples.
See the contents of directory doc for full documentation.
Home page: http://www.seamplex.com/milonga
Repository: http://github.com/seamplex/milonga Mailing list and bug reports:
milonga is copyright ©2010–2018 jeremy theler
milonga is licensed under GNU GPL version 3 or (at your option) any later version.
milonga is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
See the file COPYING for copying conditions. | {"url":"https://www.seamplex.com/milonga/","timestamp":"2024-11-11T01:12:30Z","content_type":"text/html","content_length":"47757","record_id":"<urn:uuid:f9cb3848-2c28-43d0-b011-b59650853200>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00858.warc.gz"} |
19 Defining Causation - Logic Philosophy Spirituality19 Defining Causation
19 Defining Causation
THE LOGIC OF CAUSATION
Phase Three: Software Assisted Analysis
Chapter 19–Defining Causation.
In the present chapter, I propose to deal with some of the difficulties that have become apparent in the previous two chapters. Before doing so, however, it is perhaps wise to review our basic
definitions of the four generic determinations of causation – complete, necessary, relative partial and relative contingent causation – and their two derivative concepts, viz. absolute partial and
absolute contingent causation.
1. complete causationby P of R symbol:m
a) If P, then R (P + notR) is impossible
b) if notP, not-then R (notP + notR) is possible <=> 2(c)
c) P is possible (P + R) is possible <=> 2(b)
2. necessary causationby P of R symbol:n
a) If notP, then notR (notP + R) is impossible
b) if P, not-then notR (P + R) is possible <=> 1(c)
c) notP is possible (notP + notR) is possible <=> 1(b)
3. partial causationby P (with Q) of R symbol:prel => not-1
a) If (P + Q), then R (P + Q + notR) is impossible <= 1(a)
b) if (notP + Q), not-then R (notP + Q + notR) is possible => 1(b)
c) if (P + notQ), not-then R (P + notQ + notR) is possible => not-1(a)
d) (P + Q) is possible (P + Q + R) is possible => 1(c)
4. contingent causationby P (with Q) of R symbol:qrel => not-2
a) If (notP + notQ), then notR (notP + notQ + R) is impossible <= 2(a)
b) if (P + notQ), not-then notR (P + notQ + R) is possible => 2(b)
c) if (notP + Q), not-then notR (notP + Q + R) is possible => not-2(a)
d) (notP + notQ) is possible (notP + notQ + notR) is possible => 2(c)
5. partial causation by P of R (abs) symbol:pabs <= 3, => not-1
a) If P, not-then R (P + notR) is possible <=> not-1(a)
b) if notP, not-then R (notP + notR) is possible <=> 1(b), 6(c)
c) if P, not-then notR (P + R) is possible <=> 1(c), 6(b)
6. contingent causation by P of R (abs) symbol:qabs <= 4, => not-2
a) If notP, not-then notR (notP + R) is possible <=> not-2(a)
b) if P, not-then notR (P + R) is possible <=> 2(b), 5(c)
c) if notP, not-then R (notP + notR) is possible <=> 2(c), 5(b)
Let us now explain and justify these definitions. To claim complete causation (m) implies we know (or think or believe) that one thing P is invariably accompanied by another thing R, i.e. that P
without R is impossible (in the mode of modality concerned – be it logical, extensional natural, or whatever). However, that P implies R cannot by itself signify causation. We need to also know that
notP does not imply R, i.e. that notP without R isnotimpossible, for if both P and notP implied R, then R would be independent of them. Thirdly, we need to know that P is possible, so as to ground
the first implication in actuality; and given that P is possible and that P implies R, it follows that R is also possible, i.e. that the conjunction of P and R is possible.
I go into this in detail to make clear to readers that these definitions were not pulled out of the blue or arbitrarily imposed, but are the product of reasoning. Necessary causation (n) is very
similar to complete causation, except that the polarities of all the items involved are inversed. It is a statement that without P, R cannot occur; i.e. that the conjunction of notP and R is
impossible; in such cases P is called asine qua non(without which not) of R. Here again we must on logical grounds add two more propositions to the definition to make it applicable correctly.
Note that complete and necessary causation share the last two of their defining clauses, but differ in their first clause. However, since these first clauses do not, according to the laws of thought,
exclude each other, it follows that the generic determinations of complete and necessary causation can be combined into one specific determinationmn. However, they do not formally have to be so
combined; i.e.mmay be true withoutnbeing true, and vice versa. This brings us to the concepts of partial and contingent causation.
The relationship of partial causation (p) is designed to resemble that of complete causation, except that the cause is not one thing P, but a conjunction of two things P and Q, the latter being
called the complement of the former. The first clause in our definition is a claim that P and Q together bring about R. But for this to be true, we must also ascertain that Q without P and P without
Q are not also always followed by R; otherwise one or both of them might be accidental to the occurrence of R (i.e. P or Q might alone cause R, or R might be independent of their conjunction). The
second and third clauses in the definition guarantee the dependence of R on P and Q together. The fourth clause serves to ground the hypothetical relationship implied by the first; and together they
tell us that the conjunction of the three items P, Q and R is possible.
Contingent causation (q) is similarly constructed, but by analogy to necessary causation. The partial and contingent forms of causation are called weak determinations, in comparison to the complete
and necessary forms (called strong determinations), because in the former (unlike in the latter) the cause P (or for that matter its complement Q) is not by itself strong enough to bring about the
effect R. It is clear from the definitions ofpandqthat these relations are truerelative toa specific complement Q. If we put notQ in place of Q, P and R remain cause and effect in a similar sense,
but their exact relationship is considerably modified, note well. The complement Q (or alternatively notQ) signifies the conditions under which the (weak) causative relation between P and R comes
into play.
Note too thatpandq(relative to Q) donotshare any defining clauses, unlikemandn. Since they refer to the possibility and impossibility of different sets of conjunction, there is no conflict between
them, and they (as generic determinations) can logically be combined as (the specific determination)pqwithout infringing any law of thought.
Now compare the above listed definitions and implications of partial and complete causation. It is of course noteworthy thatmandninvolve only two items P and R, whereaspandqinvolve three items P, Q
and R; but this does not prevent logical comparisons. We see that clause 1(a) formally implies clause 3(a), and clauses 3(b) and 3(d) respectively imply 1(b) and 1(d), but clause 3(c) negates clause
1(a). This means thatpandmare on the whole contrary to each other, though they do share some elements of information. Similarly,qis incompatible withn, though they have some common aspects.
Now comparemandq. We see that 4(d) implies 1(b), and 4(b) implies 1(c), but no clause inqconflicts with 1(a), and none inmconflicts with 4(a) or for that matter 4(c). Similarly in the comparison
betweennandp, we find no notable opposition between them. This means that, formally speaking, nothing prevents the specific combinations of strong and weak formsmqandnpfrom occurring (separately, of
Let us now turn our attention to the last two forms[1]–absolutepartial causation (pabs) andabsolutecontingent causation (qabs), not to be confused with the preceding two forms ofpandqrelative to Q
(or eventually to notQ), henceforth symbolized byprel andqrel. The idea of absolute weak causation forms was generated by two related considerations. First, we wanted to express the weak
determinations in terms of two items rather than three, for purposes of matricial analysis and direct comparisons to the strong determinations; and second, we wanted to express the weak
determinations without regard to whether the complement is Q or notQ, or anything else for that matter.
Thus, the qualification of weak causations as ‘absolute’ here is only intended to mean that they arenot relative, note well. It does not signify some stronger relationship, but on the contrary (as is
soon apparent)a weaker relationship! Comparing the above definitions ofpabs toprel, we see that 5(a) is implied by 3(c), 5(b) is implied by 3(b), and 5(c) is implied by 3(d); but these implications
are not mutual. Thus,pabs is a derivative of, i.e. a restatement ofsome but not allof the information in it. Notice especially the absence inpabs of any of the information contained in clause 3(a) of
prel, though this clause isthe crucial partof it, the part most indicative of causation! All the same can be said ofqabs andqrel,mutadis mutandis.
It is also noteworthy that if we change Q to notQ and vice versa in the clauses of the definition ofprel, the impliedpabsis exactly the same. That is,prelative to Q andprelative to notQ yield the
same subalternpabs. This is of course to be expected, since neither Q nor notQ are mentioned in it. But additionally,pabs does not mention any other eventual third item – and so is identical for all
eventual third items, X, Y, Z or whatever. Whence the characterization of it as ‘absolute’. Now, this should cause us alarm; how can we know something so general from so little information, we might
well ask. But the truth is that in factpabs tells us exactly nothing about Q or notQ or any other third item! All the same can be said ofqabs andqrel,mutadis mutandis.
Now comparepabs withm. We see that 5(a) contradicts 1(a), though 5(b) and 5(c) are identical with 1(b) and 1(c) respectively; this tells us that, albeit their having some common ground,pabs andmare
contrary to each other. Also comparepabs withn. We see that 5(b) is the same as 2(c) and 5(c) is the same as 2(b), while 5(a) and 2(a) do not affect each other; this means thatpabs andnare compatible
and can be conjoined. Similar results are obtained comparingqabs withn, andqabs andm. Thus, the compoundsmqabs andnpabs are logically conceivable.
As for the oppositions betweenpabs andqabs, 5(b) is identical with 6(c) and 5(c) is identical with 6(b), whereas 5(a) and 6(a) do not impinge on each other; thus the two forms are compatible, i.e.
can be conjoined in a compound formpqabs. What does this compound form tell us? Simply, that each of the four conceivable combinations of P and R, viz. P+R, P+notR, notP+R, notP+notR, is possible.
The above six definitions for (i) causation by P or R can be modified to define (ii) prevention by P of R (by replacing R by notR, and notR by R, throughout them), (iii) inverse causation by P of R
(by replacing P by notP, and notP by P, and R by notR, and notR by R, throughout them), and (iv) inverse prevention by P of R (by replacing P by notP, and notP by P, throughout them). Note in passing
thatpqabs has the same value in causation (and inverse causation) and in prevention (and inverse prevention), since it always just means that the four conjunctions of P, R, notP and notR, are all
All this has been said before but is here repeated briefly to enable us to once and for all resolve a certain difficulty mentioned earlier. We shall see that the difficulty in question is upon closer
inspection more apparent than real.
Looking at the interpretation table for the moduses in a 2-item framework (Table 18.1, page 6), we see that only seven of the moduses refer to connection – and apparently not even one refers to ‘
non-connection albeit contingency’! Incontingency counts as non-connection, of course; but what interests us here is to logically conceivenon-connection between twocontingentitems. Apparently,
judging by the tabulated results, there is no such possibility! Note in passing that alternative words for connection and non-connection aredependenceandindependence. Are we to think that all
contingent items are mutually dependent in some way or other? Surely not! What does this mean, then? This result is indeed so surprising that I shall call it ‘the puzzle of no non-connection’.
Considering that the logic of causation as here presented, i.e. through microanalytic tabulation, isentirely a formal product of the laws of thought, this is indeed mysterious. This result seems to
fix in ana-priorimanner a detail about reality, by mere logical analysis, without need for empirical observation. Although some philosophers, indeed many of them across history, have adopted this
position, it does not make sense. It would mean we cannot even imagine or theoretically conceive of non-connection between contingent items, which certainly goes against our commonsense impression
that we at least comprehend such non-connection. All our concepts need contradictories to be intelligible. If we cannot even hypothetically formally define non-connection between contingent items,
the concept of connection itself becomes doubtful.
My discovery of this mystery is not new to phase III; I had already encountered it and made an effort to explain it in phase II (see Chapters 13.2 and 16.2). Here, I will succeed in going deeper into
the question and remove all lingering doubt once and for all.
No doubt, seeing this puzzling result, believers inextreme determinism(which include many materialists and behaviorists still today) will rush to judgment and say: “See, we told you, since we cannot
logically define indeterminism, it is not even open to debate – everything in the universe is determined, and there is no place in it for natural spontaneity or human freewill or any other
indeterminism.” But if we consider the matter more closely (again, look at Table 18.1 page 6), we see that the seven cases with both items contingent refer tovarying degrees of causation: 2 cases are
mn(maximal determination), 4 cases aremqabs ornpabs (medium determination), and 1 case ispqabs (minimal determination). Thus, only two relations are fully determining, whereas five others are partly
undetermined, and we cannot draw an extreme determinist conclusion.
Another group likely to welcome this puzzling result are believers in the Buddhist viewpoint that everything is causatively related to everything else in an inextricable web of ‘interdependence’ (or
‘dependent origination’ or, in Sanskrit,Pratityasamutpada). They will say: “See! Since there is no such thing as non-connection between some pairs of contingent items,anytwo contingent items taken at
random may be considered, without any recourse to experience, as causatively related, at the very least through partial contingent causation (and similarly prevention), i.e.pqabs.” But such
jubilation is premature and unjustified, as we shall now go on to show.
The simplest answer is that what we have called ‘partial contingent causation’ is not really causation! To see the truth of this, let us return to our initial definitions ofpabs andqabs, in the
previous section. What distinguishes these forms (numbered 5 and 6) from those preceding them (1-4) is that they lack an if-then clause. They each specify the possibility of three combinations of P,
R and their negations, but they distinctively donotspecify theimpossibility of any such combinations. Yet such if-then or impossibility of conjunction constitutesthe main clauseof the definitions of
strong causation and relative weak causation.
Thus, the absolute weak determinations are not forms of causation in the usual sense. This does not mean we ought to, or even can, just discard these two concepts. For it is clear that we formed them
out of a real need. They do in fact play a role in causative relations – but their role is a supporting one. In combination withmorn, i.e. inmqabs ornpabs, they are indicative of actual causation;
but taken apart from the strong determinations, i.e. in the combinationpqabs (i.e.p[abs]q[abs]), all they tell us is that the four basic conjunctions, viz. P+R, P+notR, notP+R, and notP+notR, are all
possible, which is not a statement of actual causation but still leaves open the logical possibility of causation at a deeper level (as evident in Table 18.6).
Remember,pabs is contrary tom, andqabs is contrary ton. Whenmis combined withqabs, we have the important information that, though there is causation, it is not of the powerfulmnsort. Similarly with
regard tonpabs – thepabs part serves to deny the conjunction ofmto thenpart. It is significant to remember, too, that there are no absolute lone determinations, that is: absolutem-alone, i.e.m
conjoined to neithernnorqabs, is logically impossible; similarly, absoluten-aloneis impossible, and so are the absolute weaks alone. Thus,pabs andqabs are formally needed for causative discourse in a
2-item framework.
However, though these absolute weak determinations are implied by the corresponding relative determinations, they do not in turn imply them. They are mere subalterns, not equivalents. At best,pabs
tells us thatprelmightoccur, and likewiseqabs tells us thatqrelmightoccur; the former certainly do not imply that the latter are bound to occur. And the issue here is not merely that we do not know
whether Q or notQ is the applicable complement. As the definitions in the previous section make clear,pabs andqabs remain the same, even if we change the polarity of Q to notQ and of notQ to Q inprel
andqrel. But, moreover, as Table 18.6 makes clear,pabs andqabs can be true without implying eitherprel orqrel in relation to Q or notQ !
The latter finding should by itself cause alarm: how could we, using a PQR matrix only, know about a weak causative relation between P and R through an intermediary other than Q or notQ ? Such a
thing is unthinkable in deductive logic – there are no magical leaps, no windfall profits – we can only conclude things already given in the premises. But if we look more closely at instances ofpqabs
only, we see that they do not tell us anything about causation involving some unstipulated fourth item other than Q or notQ, because they do not imply that some causation between P and R (and/or
their negations) is indeed operative. They merely specify the various possibilities of conjunction between these two items; this is valuable information, but it is not causation.
Thus, althoughpabs andqabs are relevant to causation in the compound propositionsmqabs andnpabs, they are not definitely indicative of causation aspqabs, in the 2-item framework as modus #16 (see
Table 18.1), or in the 3-item framework as the 23 moduses #s 52, 56, 60-61, 64, 103, 116, 120, 154, 180, 188, 196, 205-208, 221-222, 237, 239, 244, 253, 256 (see Table 18.6).
Note that the 2-item modus #16 unfolds as 81 distinct moduses in the 3-item framework. Among those 81, only the just mentioned 23 moduses (which include the last modus 256, note) are in turn empty of
causative information. The remaining 58 moduses all involve some definite causation, whether through relative lone determinations (54 cases) or relative partial contingent causation or prevention (4
cases). For this reason, we can rightly say that the 2-item modus #16 is ambiguous as to whether there is or not some causation or prevention deeper down in a 3-item framework.
Similarly, each of the 23 said 3-item moduses may or may not at a deeper level become a connection of some sort,ad infinitum. Thus, to callpqabs ‘causation’ (or ‘prevention’, as the case may be) is a
misnomer – it is excessive, inaccurate, misleading to do so, because though this compound is sometimes expressive of causation – it is sometimes not so. Thus, the solution to our problem is that to
regardpqabs as a form of connection is to misuse the term. We should therefore, strictly speaking, refer to the 2-item modus #16 aspossible connection and possible non-connection(as I suggested in
phase II); and likewise for each of the 23 above listed 3-item moduses (as now proven in phase III).
We have thus clearly located where non-connection between contingent items can be placed. Let me further explain this as follows, so it is fully understood.The essence of connection (causation or
prevention) lies in the limitations of possibilityto be found in nature or logic. When we say that an item, say P, ‘causes’ (or ‘prevents’) another item, say R, in some way, to some degree, we mean
that in the presence or absence of P, the presence or absence of R is somewhatrestricted. It is not the occurrence of the latter item or its negation that signify causation, but the fact thatsome
other avenueof occurrence has been naturally (in some cases, volitionally) or logically blocked.
Thus, the ‘force’ of causality lies not so much in positive events as in the restrictions in the degrees of freedom offered to an item by the interference of another; i.e. in thenegativeboundaries
the one sets on the other. In more formal terms, we can say: it is not so much the ‘1s’ (the bases) that matter as the ‘0s’ (the connections). Roughly stated, the more zeros, the stronger the causal
relation; the less zeros, the weaker the causal relation. If no zeros are to be found at any depth, there is no causal relation. In cases involving strong causation, the restrictions are very
evident, whereas in cases involving only weak causation, the restrictions are not always evident – and by extrapolation, we may at least conceive of cases without restriction.
We can also put it as follows, to show that it makes perfect sense. For two items to be connected in some way, there has to be some incompatibility between them and/or their negations, some conflict
that forces one or the other of them to behave in an special manner. If the items and their negations are every which way compatible, then they do not impinge upon each other but coexist
harmoniously. Thus, thepqabs compound, which signifies such thorough compatibility, is essentially indicative of non-connection, though some connection at a deeper level is not excluded by it
Once this crucial new insight is grasped, it is easy to see why some modus(es) in any framework (such as the last modus in the 2-item framework or the stated 23 in the 3-item one) are the reasonable
place where non-connection (in whatever sense) between contingent items may be found. Partial contingent causation or prevention are indeed possibly housed in such modus(es); but we must admit that
diverse forms of non-connection are possibly housed there too.Their correct interpretation is thus ambiguous, and it is an error to interpret them only one way – as only connection, or for that
matter as only non-connection.
Furthermore, we should point out that the 2-item modus #16 and the analogous 23 moduses in the 3-item framework signifybothpqabs of causation andpqabs of prevention, and not merely one or the other.
This fact should not be swept aside as insignificant, although of course it does not go against the laws of thought. It is, however, unthinkable that something might be both a partial contingent
causative and a partial contingent preventative of something else, relative to the same complement or even contradictory complements. This we know by looking at Tables 18.5 and 18.5 (pages 7-8),
which teach us that the four formspqrel to Q,pqrel to notQ, for causation, andpqrel to Q,pqrel to notQ, for prevention, have each only one modus, namely respectively moduses 190, 232, 127 and 220,
and no modus in common. Causation and prevention are thus essentially antithetical, not only in their stronger forms but even in their weakest form.
This shows us that, even ifpqabs of causation andpqabs of prevention are superficially compatible (indeed, they areidentical, having in common the 2-item modus #16 and all their 81 moduses in the
3-item framework), such compatibility must not be interpreted as meaning that they can ever be realized together relative to any specific complement(s) Q and/or notQ. Such realization (i.e. going
from absolute to relative) is logically impossible, so that the apparent compatibility between causation and prevention is purely illusory. Thus, the conceptual joining ofpabs andqabs is, from the
causative point of view, an abstraction without concrete referents. The generic forms are valid abstractions, because they can be validly joined tonandm, respectively, in the specific causative forms
npabs andmqabs; but they do not produce a common causative formpqabs. The latter is meaningful (as a statement of possibility of conjunction every which way), but not as causation or prevention, and
least of all as both causation and prevention.
It should also be stressed that when we here refer to the possible non-connection between two specified items P and R, we are in no way making a general claim about the non-connection of each of
these items to some other unspecified items. The contingent item P may be unconnected to the contingent item R, but still be connected to one or manyothercontingent items X, Y, Z. Non-connection does
not imply universal non-connection: it is here clearly intended as a characterization of the relation between aspecifiedpair of contingent items.
Thus, this finding about the logically possible existence of non-connection must not be taken as an a-priori statement that ‘some contingent things are not connected to any others’, or more extremely
that ‘nothing is connected to anything else’. These would be generalizations beyond what we have sought to establish here – which is only that, taking any two contingent items at random, there is no
logical necessity that they be connected in a real sense (i.e. one stronger than the misnamedpqabs). The said moderate and extreme generalizations do however remain open to debate.
The extreme proposition ‘nothing is connected to anything else’ has been put forward in philosophy by Nagarjuna, David Hume, and others. I firmly reject it on the formal ground that they do not
explain how all the other logical possibilities – i.e. those of connection between contingents – have been excluded from consideration by them; such skepticism is manifestly arbitrary.
The moderate position ‘some contingent things are not connected to any others’ is certainly not deductively proven here, either, but it remains quite conceivable, since we have identified the moduses
within which such disconnection might occur and we do not claim an exclusive universal application. It formally opens the door to claims of occasional natural spontaneity (as in Niels Bohr’s
interpretation of the uncertainty principle), and to claims of circumscribed human freewill and similar powers of volitions (which most people adhere to).
The antithesis to this would be the claim that ‘every contingent thing is connected to some other(s)’. Many philosophers throughout history have advocated this determinist thesis, calling it ‘the law
of causation’ – but it is important to realize that, from a formal point of view, it is just a hypothesis. Moreover, what does ‘connected’ mean here – i.e. what degree of connection is intended? The
extreme version of this thesis would affirm that ‘for any given contingent item R, there must be some item P that is a complete and necessary cause of it’.
A more moderate version might be postulated, however, that affirms such strong connection in most cases, but allows for exceptions, whereat natural spontaneity and/or volition may come into play next
to determinism. I personally believe such combination of theses is the most credible alternative, being closest to commonsense belief. Our causative logic is thus, in any event, quite capable of
assimilating all philosophical discourse concerning causation, note well.
In the preceding section, we saw that moduses that mean no more than “pqabs” (i.e.p[abs]q[abs]) cannot rightly be counted as signifying a causative connection, but at best only a possible connection,
which is also a possible non-connection. We saw the truth of this with reference to the 2-item modus #16, which was found to give rise to 81 moduses in the 3-item framework, of which 58 moduses were
indicative of some causation or prevention (as well aspqabs), whereas 23 moduses signified no more thanpqabs.
However, here we must admit that such ambiguity cannot be tolerated. If we want to produce a clear definition of causation, which is one of the goals of our study, we must make up our minds and
declare moduses that mean “pqabs only” to signify either a connection or a non-connection. So far in our tables, we have opted for the designation of the 2-item modus #16 and its equivalents 3-item
moduses to signify connection. But in view of our analysis in the preceding section, we must now reverse this policy if we wish to produce an accurate definition. This is reasonable, since two items
related only by way ofpqabs cannot be guaranteed to be causatively related, and so may be counted as not so related (unless or until more specific conditions are specified that imply them to be
causatively related).
On this basis, the tables concerning the broad concepts of causation, prevention and connection, and their respective negations, must be rewritten with all cases ofpqabs only moved over from the
positive to the negative side, whether manually or by modifying the calculation formulae as appropriate. Thus, for instance, the 2-item modus #16 must be moved from the columns of causation,
prevention and connection to those of non-causation, non-prevention and non-connection. Similarly for 23 moduses in the 3-item framework. We shall tag these new columns as concerning ‘strict’
causation, prevention and connection and their negations – so that the corresponding old columns can be left unchanged, except that we understand that they concern causation etc. in a ‘loose’ sense.
The outcome of this revision are the following two tables, derived from earlier ones as just explained, which are posted at the website as usual:
Table 19.1 – 2-item PR Moduses of Forms – Strict Moduses. (1 page in pdf file).
Table 19.2 – 3-item PQR Moduses of Forms – Strict Moduses. (5 pages in pdf file).
Having done this, we can now proceed with constructing definitions of the concepts of causation, prevention and connection in their strict sense (i.e. with ‘pqabs only’ not counted as causation,
etc.). The following extract from Table 19.1 suffices for this purpose:
Details from Table 19.1 – Causation, prevention and connection.
relation summary moduses notable features
strict causation 1001 1011 1101 outers both 1, inners one or both zero
strict non- causation all other moduses, except #1
strict prevention 0110 0111 0110 inners both 1, outers one or both zero
strict non-prevention all other moduses, except #1
strict connection strict causation or strict prevention features of both
strict non-connection all other moduses, except #1
We see here that, strictly speaking, causation is applicable to three moduses (Nos. 10, 12, 14, to be specific), whose common features are that their summary moduses start with a 1 (for P+R) and end
with a 1 (for notP+notR), and have one or two 0s in the middle (for P+notR or notP+R). Similarly, strict prevention concerns three moduses (Nos. 7, 8, 15), featuring two 1s on the inside and one or
two 0s on the outside. Connection accordingly covers these six moduses, and is thus definable by the sum of their features. The negations of these relations refer to all remaining moduses, except #1
(consisting of four 0s, which is universally impossible). Modus #16 (consisting of four 1s) always falls in the negative relation (strictly speaking) – its lack of any 0 puts in doubt any causative
relation in it.
We may express these results concerning strict causation in words as follows:causation is the relation between two items, if and only if they are found to have the following set of features: (a) the
first cannot occur without the second and/or the second cannot occur without the first, and in any case (b) the first and second can occur together and their negations can occur together.If these
conditions are satisfied, this first item is called cause and the second is called effect. The relation of prevention refers to causation of negation; and the relation of connection refers to either
causation or prevention. The negations of all these relations can accordingly be defined. Note well that if the two items and their negations arecompatible togetherevery which way, they cannot
strictly be said to be causatively related in any way; for such relation to be recognized,some incompatibilitybetween the items and/or their negations must be established.
Of course, the here stated definition of causation (and thence those of prevention and connection) could be argued to be rather rough, being based on Table 19.1 only, that is to say on the
configuration of ‘absolute’ causation between two items, comprising strong causation (mn) and its combinations with absolute weak causation (mqabs andnpabs). It ignores causation relative to a third
item, which is more complex and difficult to define. The simplest way to do it would be to say: ‘relative’ causation requires a more complicated and subtle definition, and rather than try and
formulate one I refer you to Table 19.2. Alternatively, we could try and construct a verbal definition with reference to the original forms listed in section 1 of the present chapter.
But I do not see the value of such a wordy and intricate definition in practice. Definitions should effectively lead us to the intended object, and not mystify us by their complexity. I think the
rough definition proposed here suffices for most purposes; and when we do need to get very precise, we can just point to the original forms or the said table, without attempting a formal summary. One
more thing needs doing, however – we need to explain the application of the proposed definition of causation (and its derivatives) in terms of generic ‘possibility’ and ‘impossibility’ to the
differentmodes of causation, and say more aboutthe way knowledge of causation is acquiredin them.
With regard to the logical or ‘de dicta’ mode of causation, the modal specifications of ‘possibility’ and ‘impossibility’ refer of course to some or no ‘contexts of knowledge’. In this domain, our
inductive practice is to assume modus #16 to be true, until and unless we manage to demonstrate another relation to be true. The truth of this principle can be seen in the theory of ‘opposition’,
where we assume two propositional forms to be fully compatible (i.e. neutral to each other) if we do not manage to specifically prove them (if only by some logical insight) to be contradictory or
contrary or subcontrary or implicant or subalternative.
Turning our attention now to the ‘de re’ modes, we can say: in extensional causation, ‘possibly’ means in some cases and ‘impossibly’ means in no cases; in natural, temporal and spatial causation,
these modalities refer respectively to some or no circumstances, some or no times, and some or no places. In these modes, our inductive practice is the exact opposite of that for the logical mode.
That is to say, here we assume the items concerned to be incompatible if we do not succeed in directly or indirectly finding empirical grounds to consider them as compatible. For example, we do not
affirm that ‘some X are Y’ if we have not directly observed any such cases, or at least (more indirectly) empirically confirmed a theory that implies this proposition.
Thus, modus #16 is not taken for granted as easily for the de re modes as it is for the logical mode. In the logical mode, it is used as thedefaultoption when no other option is established. Whereas,
in the de re modes we are not allowed to make such assumptions offhand, but rather remain in a state of ignorance until some good reason to accept modus #16, or any other modus, whether of causation
or of non-causation, is found. In this sense, the logical mode is more ‘a priori’ and the de re modes are more ‘a posteriori’. But as regards their formalities they differ little.
I think we need not belabor this topic further, except to point out, once again, how much more accurate our definitions are from those implied by David Hume and from other past attempts.
Once we have analyzed each and every possible form of causation and its sources and derivatives in matricial analysis, it is very easy to compare forms and determine their oppositions, eductions,
syllogisms and any other sorts of inference.
We can formulate general rules ofopposition, from which the oppositions between any pair of forms can be determined, as follows[2].
·Implicance: two formsallof whose alternative moduses are identical may be said to imply each other; i.e. they are implicants. For example,min causation andnin inverse causation are equivalent,
having the exact same moduses (2-item moduses #s 10, 12), no more and no less. It follows necessarily, note, that their negations are also implicants. For example,not-min causation andnot-nin inverse
causation are equivalent (2-item moduses #s 2-9, 11, 13-16).
·Subalternation: if one form has more moduses than another, and its list of moduses includesallthe moduses of that other andnoneof the moduses of its negation, the second form may be said to imply
but not be implied by the first; i.e. they are subalternatives: respectively, subalternant and subaltern. Note well that it is the (narrower ranging, more precise) form withlessmoduses that implies
the (broader ranging, vaguer) form withmoremoduses, and not vice versa. For example, “P is a complete cause of R” (2-item moduses #s 10, 12) subalternates “if P, not-then notR” (moduses 9-16). It
follows necessarily, note, that their negations are also subalternatives, though in the opposite direction. For example, “if P, then notR” (moduses 2-8) subalternates “P is a not complete cause of R”
(moduses 2-9, 11, 13-16).
·Contradiction: if two forms do not share any modus and if their moduses together make up the total number of moduses in the framework concerned (minus the universally impossible first modus), they
may each be said to imply the other’s negation (i.e. to be incompatible) and their negations each to imply the other’s affirmation (i.e. to be exhaustive); that is, they are contradictories. For
example,mhas 2-item moduses #s 10, 12 andnot-mhas moduses 2-9, 11, 13-16; therefore,mandnot-mare contradictory.
·Contrariety: if two formsdo nothave any modus in common, and if their moduses togetherdonotadd up to the total number of moduses in the framework concerned (minus the universally impossible first
modus), their affirmations may each be said to imply the other’s negation, though their negations do not each imply the other’s affirmation; that is, they are incompatible but not exhaustive, i.e.
contraries. For example,m(2-item moduses 10, 12) andpabs (moduses 14, 16) are contrary forms. Note that if two forms are contrary, their negations are necessarily subcontrary.
·Subcontrariety: if two formsdohave some modus(es) in common, and if their moduses togetherdoadd up to the total number of moduses in the framework concerned (minus the universally impossible first
modus), their negations may each be said to imply the other’s affirmation, though their affirmations do not each imply the other’s negation; that is, they are exhaustive but not incompatible, i.e.
subcontraries. For example,not-m(2-item moduses 2-9, 11, 13-16) andnot-pabs (moduses 2-13, 15) are subcontrary forms. Note that if two forms are subcontrary, their negations are necessarily contrary.
·Unconnectedness: if two forms have some modus(es) in common,andtheir negations have some modus(es) in common,andthe affirmation of each of them has some modus(es) in common with the negation of the
other, these forms may be said to be unconnected with each other, for this simply means that the four stated combinations are possible, i.e. that each form and its negation is compatible with the
other form and its negation. For example, “if P, then R” (2-item moduses #s 2-4, 9-12) and “if P, not-then R” (moduses 5-8, 13-16) are both unconnected to both “if notP, then notR” (moduses 2, 5-6,
9-10, 13-14) and “if notP, not-then notR” (moduses 3-4, 7-8, 11-12, 15-16).
Remember, this last category of opposition, viz. unconnectedness, also called ‘neutrality’, means that the forms concerned do not imply each other, and their negations do not imply each other, and
their affirmations do not imply their negations, and their negations do not imply their affirmations; i.e. the two forms are compatible in every which way and exhibit no incompatibility in any way –
that is why they are said to be unconnected or neutral. This covers all leftover cases, i.e. it applies when neither implicance, nor subalternation either way, nor contradiction, nor contrariety, nor
subcontrariety relate the two forms under scrutiny.
Let me remark here: the word ‘opposition’ was initially intended (in everyday parlance) to mean ‘conflict’ – i.e. it referred to contradiction or contrariety. The sense was then slightly enlarged by
logicians so as to include subcontrariety (which refers to contrariety of negations). Then, it was further enlarged to enable the inclusion of implicance and subalternation; this changed the meaning
of ‘opposition’ to ‘face-off’. Finally, the theory of opposition naturally called for a further concept, one denying all the preceding forms of opposition – i.e. a concept of ‘unconnectedness’ or
neutrality (see myFuture Logic, chapter 6.1). This relation too, though negative, can and must be regarded a form of ‘opposition’in an enlarged sense(i.e. face-off).[3]
Note that the above definition of unconnectedness in terms of moduses justifies my thesis earlier in the present chapter that there has to be room in causation theory for non-connection, since it
demonstrates that there is one more relation of ‘opposition’ than the six traditionally listed. For opposition theory (and more broadly, inference theory) is nothing other than causation theory in
the realm of logical modality[4]; it concerns causes in the special sense of ‘reasons’. What is true for this de dicta mode of modality is equally true for the de re modes, since there is no formal
difference between them in the present context.
Eductionis immediate inference from one (or more) forms with identical terms. When one form implies another, the latter can be educed from the former. When one form is incompatible with another, the
negation of either can be educed from the affirmation of the other. When two forms are exhaustive, the affirmation of either can be educed from the negation of the other. From these principles we can
likewise, with reference to moduses, determine all possible eductions.
We can similarly work out allsyllogism(i.e. mediate inference, through a middle term) with reference to moduses, as already explained in chapter 14.1 and demonstrated thereafter. If the premises have
no moduses in common, or if the premises do have some moduses in common but these moduses imply contradictory conclusions (i.e. some imply one conclusion and others the negation of it), they are
incompatible and therefore cannot make up a syllogism. But otherwise, the conclusion is generally the common ground of the premises, i.e. the moduses they have in common.
Thus, matricial analysis – more precisely, microanalysis – provides us with a practical way to correctly interpret all conceivable situations in causative logic.
[1]The definitions of complete and necessary causation are first given in chapter 2.1. Those of relative partial and relative contingent causation are introduced in chapter 2.3. The definitions here
put forward of absolute partial and absolute contingent causation are not found till chapter 13.4, although the concepts are developed much earlier, as of chapter 11.3.
[2]See chapter 13.3 for applications of this technique in phase II.
[3]I must in passing deride the couple of people who have written scholarly-looking articles where they seem to deny my concept of ‘unconnectedness’ to be a logically possible relation between
propositional forms and a needed category of ‘opposition’! This is not an issue open to choice, but (to repeat) a natural demand to exhaust the logical alternatives. Such people allow themselves to
be misled by mere words, thinking that opposition must needs signify conflict since that is the popular sense of the term. Or they are pettily annoyed that this additional category does not fit into
their pretty ‘squares of opposition’. This is the kind of silliness that focus on trivia produces.
[4]Implicance and subalternation each way are logical causation; and contradiction, contrariety and subcontrariety are logical prevention. In each case, the determinations are respectivelymn,mqandnp.
Avi Sion2023-01-05T11:41:08+02:00 | {"url":"https://thelogician.net/LOGIC-OF-CAUSATION/Defining-Causation-19.htm","timestamp":"2024-11-09T19:44:42Z","content_type":"text/html","content_length":"195516","record_id":"<urn:uuid:3e37f23d-ca70-41dd-86a9-334ec8b15816>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00334.warc.gz"} |
Explainable AI using expressive Boolean formulas | Amazon Web Services
Explainable AI using expressive Boolean formulas
This post was contributed by Gili Rosenberg, Kyle Brubaker, Martin Schuetz, Grant Salton, Jason Zhu, and Helmut Katzgraber from the Amazon Quantum Solutions Lab, and Elton Zhu, Serdar Kadıoğlu, Sima
Borujeni from the Fidelity Center for Applied Technology (FCAT) and the AI Centre of Excellence at Fidelity Investments.
Customers routinely develop large, sophisticated machine learning (ML) models to accomplish a variety of tasks. These models tend to be extremely complex by design (sometimes with billions of
parameters), and hence their inner workings are difficult to understand and interpret. However, in many customer applications explainability is desired or even required due to industry regulations,
especially in high-stakes situations such as in finance or healthcare. For example, models for the approval of credit requests, or a model that diagnoses a condition or disease need to be
interpretable, as important personal decisions are made based on these models.
In this post, we discuss recent, joint work by scientists from the Amazon Quantum Solutions Lab (QSL) and the Fidelity Center for Applied Technology (FCAT) to train interpretable ML models based on
expressive Boolean formulas. We start by introducing expressive Boolean formulas, continue by defining the problem that we solved, describe the local native solver we developed for this problem (with
and without non-local moves), and then explain how a quantum computer could be used to accelerate the training by fast proposal of non-local moves. We conclude by stating the main takeaways from our
work and this blog post.
Explainable AI
Explainable AI (XAI) is a branch of ML that aims to explain or interpret the decisions of ML models. Broadly speaking, there are two prevalent approaches to XAI:
1. Post hoc explanation of black-box models (Explainable ML)—Methods that attempt to explain the decisions of a model after they are made. These approaches (such as LIME and SHAP) are typically
model agnostic and can be applied to arbitrarily complex models, such as the ones commonly used in deep learning. Some issues that can occur with these methods are a lack of robustness, such as
being easily fooled by adversarial attacks, and ambiguity, where multiple “explanations” may exist.
2. Training interpretable models (Interpretable ML)—Models that are in some sense interpretable – it is possible to check and understand the inner workings of the model. Common examples include
decision trees and linear regression. Some ML models have a reputation of being interpretable, such as decision trees, but it’s easy to find simple examples for which a huge decision tree is
required (see Figure 1).
Our work falls under the latter approach — we introduce an interpretable ML model. Note that interpretable ML models can generally be used as standalone interpretable models, or they can be used to
explain black-box models.
Expressive Boolean Formulas
The advantages of interpretable ML models for high-stakes decision-making motivated FCAT to explore this area further. Quantum computers can be used to solve combinatorial optimization problems and
may one day outperform classical computers on such problems. Given the QSL’s interest and expertise in applying quantum computers, it was natural to look for interpretable ML models that might be
sped up by having access to a fast optimizer.
We settled on what we call “expressive Boolean formulas” (or just “formulas”). These formulas consist of literals and operators. Literals are variables f[i] or negated variables ∼f[i], where f[i]
is the i-th feature in the input data. Operators are operations that are performed on two or more literals, such as And(f[0],f[1],∼f[2]) (see Figure 2 for an illustrative example). Some operators
are parameterized; for example, AtLeast2(f[0],f[1],f[2]) returns true only if at least two of the literals are true.
A formula defines a rule that can be used as the basis for a binary classifier. Given a set of inputs, the rule is evaluated such that we get a binary result: the prediction of the ML model for that
set of inputs. We hypothesized that expressive Boolean formulas are more expressive than many other interpretable ML models, due to their flexible structure and relatively wide vocabulary (number of
operators). See Figure 3 for a comparison of the required complexity (for formulas – the number of operators and literals) for fully representing various rules. Of the methods compared, we see that
ours (denoted by “Rule”) requires a significantly lower complexity to fully represent the given rules, with various literals under the operator.
The Problem
The problem that we seek to solve is the following: given a set of binarized input data and binarized labels, how do we determine the rule that best fits the data (inputs) and labels (outputs)? This
should be thought of as training the ML model – an interpretable classifier. In this case, the training is done by solving a combinatorial optimization problem. Our main objective is thus to come up
with a solver that will do this efficiently (see Figure 4).
Native Local Solver
Below we describe how our native local solver works. Here, “native” refers to optimization in the natural search space for the problem. A natural search space for this problem is the space of
feasible expressive Boolean formulas. This is helpful since it makes sure that the solver does not spend precious time on sifting through infeasible solutions. This approach is in contrast to
reformulating the problem in a fixed-format, such as MaxSAT, ILP (integer linear programming), or QUBO (quadratic unconstrained binary optimization), which would be difficult (if not impossible), and
often requires searching a much larger space containing many infeasible solutions. “Local” refers to exploration of the search space via stochastic search, i.e., by performing a series of moves that
make relatively small changes to the current configuration.
Local moves in this context can be, for example, removing a literal (see Figure 5), adding a literal, and so on. The native local solver runs a series of local moves, until a good enough (ideally
near-optimal) solution is found.
Adding Non-local Moves
Imagine that you have access to a hardware accelerator, classical or quantum, that can solve combinatorial optimization problems extremely fast. How could you use it to potentially speed up the
solver? One option is to propose larger, “non-local” moves that are computationally expensive, but can be determined (proposed) by solving the same type of optimization problem that the accelerator
solves. If we can perform such non-local moves faster than getting the same improvement via local moves, then we may expect to see an advantage.
But what are non-local moves, as they apply to expressive Boolean formulas? The idea is to choose an operator from the existing formula, and then to optimize the subtree underneath it (see Figure 6).
It turns out that we can formulate the search for non-local moves as an ILP/QUBO, which can be solved by quantum computers (and which may outperform classical computers on this problem, one day). As
an example, we’ll briefly outline how to construct an ILP formulation for finding the best Or rule. We start by observing that the rule Or(f[0],f[1]) can be equivalently expressed as:
We can then define an optimization problem to find the smallest subset of features to include in the Or rule to achieve perfect accuracy:
where b is a vector of indicator variables, indicating whether each feature should be included in the rule (i.e., b[i]=1 if feature f[i] is included and b[i]=0 otherwise), X[P] is a matrix
containing only the rows labeled as “positive” (y=1), X[N] is a matrix containing only the rows labeled as “negative” (y[i]=0), and 0 and 1 are vectors containing only zeros and ones, respectively.
In our paper, we extend this formulation in various ways, which we outline below:
• Negated features – Adding the possibility to include negated features in the rule by adding an additional indicator vector ~b for negated features ~f[i].
• Imperfect classification – In practice, we typically do not expect to be able to achieve perfect accuracy. With this in mind, we introduce a vector of “error” indicator variables η indicating
whether each data row is misclassified. We change the constraints so that when the error variable corresponding to a particular sample is 1, the corresponding constraint is always true by
construction, effectively deactivating that constraint. Accordingly, we also change the objective function so that it minimizes the number of errors, given by ∣∣η∣∣[0]. To deal with unbalanced
datasets, we allow the positive and negative error terms to be weighted differently, by introducing the weights w[P] and w[N] (respectively).
• Controlling the complexity – To control the complexity of the rule, we add a regularization term, as well as an explicit constraint on the number of literals.
For more details on the formulation, formulations for other types of rules, and a recipe for converting ILPs to QUBOs, please see our paper. As an example, here’s the complete ILP formulation to find
the best Or rule:
In our paper, we benchmarked native local optimization with and without non-local moves on several well-known datasets. For example, rules found for each dataset, see Table 1. Since these datasets
are imbalanced, we chose balanced accuracy as the performance metric. Balanced accuracy is defined as the average of the true positive rate (sensitivity) and the true negative rate (specificity).
For a comparison of the balanced accuracy achieved with and without non-local moves, see Figure 7. We see that the inclusion of non-local moves provides a benefit at the higher complexity value of
30, but not at the lower value of 3. The complexity here is an upper bound on the total number of literals and operators. Note that increasing the complexity generally yields diminishing returns.
BoolXAI – Code Package
We will be open-sourcing part of our code. BoolXAI is a package that finds expressive Boolean formulas via native local optimization. Here’s a quick-start example that trains a binary classifier:
import numpy as np
from sklearn.metrics import balanced_accuracy_score
from boolxai import BoolXAI
# Create some random toy data for binary classification
# Note: X and y must both be binarized
rng = np.random.default_rng(seed=42)
X = rng.choice([0, 1], size=(100, 10))
y = rng.choice([0, 1], size=100)
# Rule classifier
rule_classifier = BoolXAI.RuleClassifier(random_state=42)
# Learn the best rule
rule_classifier.fit(X, y)
The rule can then be printed:
best_rule = rule_classifier.best_rule_
best_score = rule_classifier.best_score_
which gives the output:
'And(~[926.96<=worst area<1269.0], ~[worst radius>=23.682], ~[worst compactness>=0.4478], ~[1269.0<=worst area<1673.0], ~[781.18<=worst area<926.96])'
or plotted:
which displays:
We started this blog post by introducing the concept of expressive Boolean formulas and explained how they can be used to form an interpretable ML model for binary classification. We continued by
explaining how one can train such a model using native local optimization, and how a quantum computer could be used to accelerate the training by fast proposal of non-local moves. Finally, we’ll be
introducing an open-source package that can be used to do all this (but not the non-local moves, currently). For further technical details on this work, see our recently published paper.
If you work with complex models in regulated industries and/or involving high-stakes decisions, and would like to understand more about how explainable AI with expressive Boolean formulas could apply
to your business, reach out to the Amazon QSL to start a conversation. | {"url":"https://aws.amazon.com/blogs/quantum-computing/explainable-ai-using-expressive-boolean-formulas/","timestamp":"2024-11-05T23:57:14Z","content_type":"text/html","content_length":"98650","record_id":"<urn:uuid:a68f4862-721b-4c80-a30e-03515f8e6222>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00100.warc.gz"} |
In the third stage of the problem-solving process, learners are aiming for generalisation and possibly proof. (See the article
Mastering Mathematics: The Challenge of Generalising and Proof
.) A key skill in moving towards generalisation and proof is being able to transfer your thinking from one example to another new example. Similarities and differences between the two cases may
become apparent. Exploring further examples means that strategies might emerge which
work, which is the essence of a generalisation.
The tasks below provide opportunities for learners to get better at transferring their thinking.
You have a set of the digits from 0 to 9. Can you arrange these in the five boxes to make two-digit numbers as close to the targets as possible?
Dotty Six is a simple dice game that you can adapt in many ways.
Annie and Ben are playing a game with a calculator. What was Annie's secret number?
Design an arrangement of display boards in the school hall which fits the requirements of different people.
Can you order the digits from 1-3 to make a number which is divisible by 3 so when the last digit is removed it becomes a 2-figure number divisible by 2, and so on?
How could you arrange at least two dice in a stack so that the total of the visible spots is 18? | {"url":"https://nrich.maths.org/transferring","timestamp":"2024-11-15T00:20:22Z","content_type":"text/html","content_length":"46422","record_id":"<urn:uuid:e9c16fea-f37e-4471-8485-421b6b83bb83>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00641.warc.gz"} |
How do you simplify (5 3/5)/ (2 1/7)? | HIX Tutor
How do you simplify #(5 3/5)/ (2 1/7)#?
Answer 1
The first step is to change #color(blue)"mixed numbers to improper fractions"#
#rArr5 3/5=28/5" and "2 1/7=15/7#
#rArr28/5÷15/7" is to be calculated"#
To divide 2 fractions, use the following steps.
#• " leave the first fraction"#
#• " change division to multiplication"#
#• " invert (turn upside down) the second fraction"#
#• " cancel, if possible and simplify"#
#=28/5xx7/15larrcolor(red)" multiply and invert"#
There is no cancelling available, so
#=2 46/75larr" return as mixed number"#
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To simplify (5 3/5) / (2 1/7), we first convert the mixed numbers to improper fractions.
(5 3/5) becomes (28/5) and (2 1/7) becomes (15/7).
Next, we invert the second fraction and change the division operation to multiplication.
So, (28/5) / (15/7) becomes (28/5) * (7/15).
We then multiply the numerators together and the denominators together.
(28 * 7) / (5 * 15) equals 196/75.
Therefore, (5 3/5) / (2 1/7) simplifies to 196/75.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/how-do-you-simplify-5-3-5-2-1-7-8f9af9c1c1","timestamp":"2024-11-05T20:20:37Z","content_type":"text/html","content_length":"570732","record_id":"<urn:uuid:3917dccd-c623-492c-87db-5950c477bbd7>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00246.warc.gz"} |
Time-Dependent Natural Convection in a Square Cavity
This post was updated on .
Thank for reporting the issue. Fluid flow problems without outflows, like your example, require the pressure to be unique (to converge). In order to do this an integral constraint
setting the mean pressure to zero is introduced per default. This modifies the system matrix which seems to conflict with the time dependent solver (this will be addressed in a future
Alternatively, you can remove the integral constraint (clear "Equation" menu > "Subdomain Integral Constraints..." > p), and instead fix the pressure in one point ("Boundary" menu >
"Point Constraints..." > p), this should work for all time stepping schemes.
EDIT: Update 1.15.4 is now available with a fix supporting integral constraints and time-stepping schemes. | {"url":"https://forum.featool.com/Time-Dependent-Natural-Convection-in-a-Square-Cavity-td1241.html#a1243","timestamp":"2024-11-07T09:54:26Z","content_type":"text/html","content_length":"41917","record_id":"<urn:uuid:b56222a5-d277-42a1-9288-f1a262da9e1b>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00594.warc.gz"} |
-nearest neighbor classifier
Margin of k-nearest neighbor classifier
m = margin(mdl,Tbl,ResponseVarName) returns the classification margins for mdl with data Tbl and classification Tbl.ResponseVarName. If Tbl contains the response variable used to train mdl, then you
do not need to specify ResponseVarName.
m is returned as a numeric vector of length size(Tbl,1). Each entry in m represents the margin for the corresponding row of Tbl and the corresponding true class label in Tbl.ResponseVarName, computed
using mdl.
m = margin(mdl,Tbl,Y) returns the classification margins for mdl with data Tbl and classification Y.
m = margin(mdl,X,Y) returns the classification margins for mdl with data X and classification Y. m is returned as a numeric vector of length size(X,1).
Margin Calculation
Create a k-nearest neighbor classifier for the Fisher iris data, where $k$ = 5.
Load the Fisher iris data set.
Create a classifier for five nearest neighbors.
mdl = fitcknn(meas,species,'NumNeighbors',5);
Examine the margin of the classifier for a mean observation classified as 'versicolor'.
X = mean(meas);
Y = {'versicolor'};
m = margin(mdl,X,Y)
All five nearest neighbors classify as 'versicolor'.
Input Arguments
More About
The classification margin for each observation is the difference between the classification score for the true class and the maximal classification score for the false classes.
The score of a classification is the posterior probability of the classification. The posterior probability is the number of neighbors with that classification divided by the number of neighbors. For
a more detailed definition that includes weights and prior probabilities, see Posterior Probability.
Extended Capabilities
Tall Arrays
Calculate with arrays that have more rows than fit in memory.
The margin function fully supports tall arrays. For more information, see Tall Arrays.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
Usage notes and limitations:
• margin does not support GPU arrays for ClassificationKNN models with the following specifications:
□ The 'NSMethod' property is specified as 'kdtree'.
□ The 'Distance' property is specified as a function handle.
□ The 'IncludeTies' property is specified as true.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2012a | {"url":"https://se.mathworks.com/help/stats/classificationknn.margin.html","timestamp":"2024-11-04T12:28:15Z","content_type":"text/html","content_length":"92043","record_id":"<urn:uuid:72f242b0-8350-4715-af21-0176a0008401>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00070.warc.gz"} |
[opentheory-users] defineTypeOp underspecified? / opentheory be used as a proof verifier
Ramana Kumar Ramana.Kumar at cl.cam.ac.uk
Sat Aug 8 01:02:12 UTC 2015
On 7 August 2015 at 00:30, Günter Rote <rote at inf.fu-berlin.de> wrote:
> I am trying to write an article reader in python, mainly to
> get an understanding how the steps of a formal proof in HOL look like.
Sounds like a good idea.
> My question concerns the definition of *defineTypeOp*
> in the article file format
> http://www.gilith.com/research/opentheory/article.html
> Since the new constants *abs* and *rep*
> appear in a term like "(⊦ (λa. abs (rep a)) = λa. a)",
> they must have some type:
> I guess the new type operator op with name n has
> type parameters α1 ... αk in this order, and
> the types of rep and abs are
> rep:: n α1 ... αk -> b
> abs:: b -> n α1 ... αk (using prefix notation for the type-operator)
> where b is the type of t
> Is that true?
That is correct.
> In the definition of *opType*, is it required that
> the length n of the list equals the k with which
> the type-operator "op" has been defined?
Kind of true, but not really. Type operators can be constructed by the
virtual machine even if they have not previously been defined in the same
article. In that case, the type operator will be one of the "external type
operators" (i.e., an input) to the resulting theory. (N.B. if you use
*typeOp* to create a type operator with the same name as one created by
*defineTypeOp*, they won't alias each other; i.e., if you want to use the
type defined by *defineTypeOp* you have to use the TypeOp object that it
returns. Some related previous mails on this point:
However, I think if you are using a type operator whose definition is known
and supply the wrong number of arguments, article processing will fail. Joe
can probably confirm and clarify this.
> (Maybe my question is based on a basic misunderstanding:
> Can opentheory be used as a proof verifier?
> I.e. If an article file goes through opentheory without
> errors, does it mean that the listed "theorems" can be
> derived from the "assumptions" (using the OpenTheory derivation rules)?
Yes. The opentheory tool can be used to verify that articles obey the
derivation rules of (OpenTheory's variant of) HOL.
> (I have seen talks about proof verification by transforming opentheory to
> dedukti (or rather HOL via opentheory to dedukti) and then verifying the
> proof in dedukti. What would be the point of this if this
> verification is also fine in opentheory?)
There are at least two points to make here:
1. Dedukti verifies proofs in a different logical system, lambda pi modulo
rewriting, in which HOL (e.g., OpenTheory) can be embedded. The idea is
that it could be a place where theories from different logics could be
combined. Thus it makes sense to import into Dedukti, even if you trust the
opentheory-checked version is correct, to allow the connection with other
things in Dedukti.
2. It's always good to have multiple implementations of a proof checker,
since proof checkers can contain bugs but the chances of two very different
implementations having the same bugs are low. (To be clear, proof checkers
don't typically have many bugs, but it's possible.) The opentheory tool
wasn't designed primarily as a proof checker (it has other design goals
too), so it's not the smallest and simplest piece of code doing that job,
hence maybe you would prefer to trust an independent, purpose-built
checker. (I don't think Dedukti is that either though.) The CakeML project
(of which I'm a developer) takes the idea of reducing the need to trust the
proof checker to an extreme: we are working on a proof checker whose
machine-code implementation is itself verified to only accept theories that
are true in all (standard) models of higher-order logic. (Verified in the
sense of having a proof in OpenTheory deriving that fact.)
> --
> G"unter Rote (Germany=49)30-838-75150 (office)
> Freie Universit"at Berlin
> Institut f"ur Informatik
> _______________________________________________
> opentheory-users mailing list
> opentheory-users at gilith.com
> http://www.gilith.com/opentheory/mailing-list
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://www.gilith.com/opentheory/mailing-list/attachments/20150808/7435ed08/attachment.html>
More information about the opentheory-users mailing list | {"url":"https://gilith.com/opentheory/mailing-list/2015-August/000575.html","timestamp":"2024-11-01T20:16:04Z","content_type":"text/html","content_length":"8645","record_id":"<urn:uuid:e9b45a1c-36e8-4050-a7b5-4740dba72aa9>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00321.warc.gz"} |
Collapsing degrees (Tribute to Mahaney)
Collapsing Degrees
Guest post by Stuart Kurtz and Jim Royer.
Bill Gasarch asked us to write an article about Collapsing Degrees, in the memory and honor of our coauthor, Steve Mahaney.
In 1986, Alan Selman and Steve Mahaney created the Structure in Complexity Conference, now the Conference on Computational Complexity. But in 1986, it was about structure, a term that Paul Young
borrowed from computability theory, and which has passed into disuse, but in those days defined us.
The word structure embodied optimism about a particular approach to the P vs. NP problem—that its solution might be found in through exploring structural properties of sets and degrees. For example,
Berman and Hartmanis had shown that if all NP-complete sets are paddable, then all NP-complete sets were isomorphic under polynomial time computable and invertable reductions, and hence P ≠ NP. Their
result leveraged a structural property about specific sets (paddability) to a structural result about degrees (the complete polynomial time m-degree of NP consists of a single polynomial-time
isomorphism result), to obtain a complexity-theoretic result.
That summer, after the conference, Steve visited us in Chicago, beginning a long and productive collaboration. We beat around the isomorphism conjecture for several days, until Steve mentioned that
it wasn't even know that a collapse happened at any nontrivial degree. We smelled blood.
Relativization provided some guidance. Berman had proven that the EXP-complete degree consisted of a single 1-li degree. If P = NP, then 1-li degrees collapse. Of course, if P = NP, our rationale for
interest in the Isomorphism Conjecture was mooted, and what we really cared about was the “true” P ≠ NP case.
Our main result from that summer was that collapsing degrees existed, without requiring an additional complexity-theoretic hypothesis. Our proof involved a finite-injury priority argument, and seemed
to require it.
It was a joy and a privilege to have had Steve Mahaney as a colleague and friend. Until we meet again, peace. | {"url":"https://blog.computationalcomplexity.org/2007/07/collapsing-degrees-tribute-to-mahaney_03.html?m=1","timestamp":"2024-11-08T01:59:30Z","content_type":"application/xhtml+xml","content_length":"50426","record_id":"<urn:uuid:313dda4d-7ba4-4aa7-beeb-9ee386637534>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00592.warc.gz"} |
Figure 4: spatially resolved densities atomic oxygen
(a) Spatially resolved absolute densities of atomic oxygen obtained experimentally and computed from the numerical solution of eq. (8). (b) Peak and mean (spatially averaged) values of the atomic
oxygen density obtained from the measured data and (c) normalized constant of proportionality used in equation (10) to relate the time averaged atomic oxygen source to the time averaged measured
electron impact excitation rate as a function of the number consecutive driving harmonics, N, at y = 0.5 mm, z = -15 mm and for an O2 flow corresponding to an admixture concentration of 0.5 % for
“peaks”-waveforms. The normalization is performed by dividing by the proportionality constant for the N = 1 case. The powered electrode is located at x = 0, while the grounded electrode is at x = 1
mm. The base frequency is f0 = 13.56 MHz and φpp = 500 V.
Experimental data are marked as exp
Calculated data are marked as calc
Figure 4 a) exp
x (mm),y (exp atomic oxygen densities (cm-3)),
Harmonics used: 1(red), 2(olive green), 3(white) and 4(blue)
Figure 4 a) calc
x (mm),y (calc atomic oxygen densities (cm-3)),
Harmonics used: 1(red_dashed), 2(olive green_dashed), 3(white_dashed) and 4(blue_dashed)
Figure 4 b)
x (N),y (exp atomic oxygen densities (cm-3)),
Figure 4 c)
x (N),y (Normalized k)
Data Preview: Note that by default the preview only displays up to 100 records. Use the pager to flip through more records or adjust the start and end fields to display the number of records you wish
to see. | {"url":"https://rdpcidat.rub.de/dataset/atomic-oxygen-generation-atmospheric-pressure-rf-plasma-jets-driven-tailored-voltage-1","timestamp":"2024-11-15T04:22:35Z","content_type":"text/html","content_length":"28108","record_id":"<urn:uuid:b3c7a6a7-bd64-4f2e-aee8-519c812058ca>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00020.warc.gz"} |
Cell Worksheet Functions
Excel provides only very limited support for determining with a formula the name of the worksheet, workbook, and folder path. The one function it provides is the CELL function. However, with some
text manipulation, we can parse out information about the file name, folder name, sheet name, and drive letter.
The syntax of the CELL function is show below:
In the CELL function, you must include "filename" as literal text as shown above. You do not include the actual file name. The reference parameter is optional, but in nearly all cases you want to use
a cell on the same worksheet as the CELL function. The reference can refer to any cell -- which cell is irrelevant. The parameter is used to determine what workbook and worksheet is used and
returned. If present, the worksheet and workbook containing the the range R are used to generate the result. If omitted, the ActiveSheet is used when the formula is calculated. The ActiveSheet may
not be the sheet containing the CELL function when calculation occurs and misleading results may occur. Unless you have a very good reason for omitting the reference parameter, you should always
include it. All of the example formulas below include the reference parameter.
The result of the CELL function is the workbook name and sheet name in the following example format:
With some text manipulation, we can get several pieces of information about the file name and the sheet name.
The workbook must have been saved to disc for these functions to work. They will not work on a new workbook that has never been saved. The workbook may be in an unsaved state, but it must have been
saved to disc at least once.
The reason that that the formulas are as long as they are is because the CELL function is called several times within the function. It is not possible to call CELL once and reuse its result elsewhere
in the formula.
The following formula will return the full file name.
Using the example file name above, this will return C:\Folder\Folder2\FileName.xlsx
This function will return only the file name, without the folder path information.
Using the example file name above, this will return FileName.xlsx
This function will return the sheet name only:
Using the example file name above, this will return Sheet1.
This function will return the complete path name of the file, without the file name or sheet name.
Using the example file name above, this will return C:\Folder\Folder2.
These functions will return the name of the folder that contains the file, without any other path inforrmation. If you have a proper file name in cell D3, for example C:\One\Two\Three\FileName.xls,
you can use the following formula to get the containing (inner-most) folder name:
If cell D3 contains C:\One\Two\Three\FileName.xls, the formula will return Three. This formula will work as long as the file is not in the root directory of a drive (e.g., F:\MyFile.xls). If it is
possible that the file is directly in the root directory of a drive, you need to expand the formula to:
If you are using =CELL("filename",A1) to get the file name of the file that contains this formula, you can use the following formula:
This formula will work as long as the file is not directly under the root drive (e.g., F:\MyFile.xls). If the file might be under the root drive, you have to expand the formula to:
The reason these formulas are so long is that there is no worksheet function to search text right-to-left. All of Excel's text functions operate left-to-right. Another reason is that there is no
single function to count the number of occurrances of a specific character within a text string. Without these limitations, the formulas would be much simpler and shorter.
This function will return the drive letter (including the ':' character of the file.
Using the example file name above, this will return C:. This will support drives with a multi-letter name (e.g., AB:). While this condition is very rare, the formula can support it. (I once new a guy
who partitioned a hard drive into 30 partitions just to see how the drive letters behave.)
This page last updated: 17-August-2012. | {"url":"https://www.cpearson.com/Excel/CellWorksheetFunctions.aspx","timestamp":"2024-11-14T03:42:21Z","content_type":"text/html","content_length":"36433","record_id":"<urn:uuid:faf93db7-019f-48a0-8999-686e0cb54741>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00749.warc.gz"} |
robust standard errors logistic regression
Is there > any way to do it, either in car or in MASS? It handles the output of contrasts, estimates of … Now assume we want to generate a coefficient summary as provided by summary() but with robust
standard errors of the coefficient estimators, robust \(t\)-statistics and corresponding \(p\)-values for the regression model linear_model.This can be done using coeftest() from the package lmtest,
see ?coeftest.Further we specify in the argument vcov. accounting for the correlated errors at the same time, leading to efficient estimates of Even though there A better estimates along with the
asymptotic covariance matrix. 3. Logistic regression is used in various fields, including machine learning, most medical fields, and social sciences. 2. Get the formula sheet here: The idea behind
robust regression methods is to make adjustments in the estimates that take into account some of the flaws in the data itself. any way to do it, either in car or in MASS? But at least you know how
robust standard errors are calculated by STATA. Dear all, I use ”polr” command (library: MASS) to estimate an ordered logistic regression. Hi there, I've been asked to calculate white standard errors
for a logistic regression model for a work project. Our work is largely inspired by following two recent works [3, 13] on robust sparse regression. Here is the same regression as above using the acov
This is because that In this chapter we My conclusion would be that - since heteroskedasticity is the rule rather than the exception and with ML mostly being QML - the use of the sandwich estimator
is only sensible with OLS when I use real data. Estimating robust standard errors in Stata 4.0 resulted in ... . However, their performance under model misspecification is poorly understood. Is there
any way to do it, either in car or in MASS? glm returns an object of class inheriting from "glm" which inherits from the class "lm".See later in this section. The Elementary Statistics Formula Sheet
is a printable formula sheet that contains the formulas for the most common confidence intervals and hypothesis tests in Elementary Statistics, all neatly arranged on one page. *** on WAGE1.dta But
at least Default standard errors reported by computer programs assume that your regression errors are independently and identically distributed. It … Obtaining robust standard errors and odds ratios
for logistic regression in R PUBLISHED ON SEP 19, 2016 I’ve always found it frustrating how it’s so easy to produce robust standard errors in Stata and in R it’s so complicated. We are interested in
testing hypotheses that concern the parameter of a logistic regression model. It takes a formula and data much in the same was as lm does, and all auxiliary variables, such as clusters and weights,
can be passed either as quoted names of columns, as bare column names, or as a self-contained vector. elementary school academic performance index (elemapi2.dta) dataset. The regression without staâ
¦ Using a robust estimate of the varianceâ covariance matrix will not help me obtain correct inference. Robust standard errors. Applications. RCT data collected across 2 separate healthcare sites 2.
Clustered/Robust Standard Errors in SAS I was asked to get cluster my standard errors in SAS models. I am trying to get robust standard errors in a logistic regression. It takes a formula and data
much in the same was as lm does, and all auxiliary variables, such as clusters and weights, can be passed either as quoted names of columns, as bare column names, or as a self-contained vector. Here
are some specifics about the data set I'm using: 1. For example, the Trauma and Injury Severity Score (), which is widely used to predict mortality in injured patients, was originally developed by
Boyd et al. 2 S L i x i = ∂ ∂β () and the Hessian be H L j x i = ∂ ∂β 2 ()2 for the ith observation, i=1,.....,n. Suppose that we drop the ith observation from the model, then the estimates would
shift by the amount This person I am working with uses STATA and showed … I use industry and time dummies though. The standard errors changed. get with robust standard errors provided by STATA. An
Introduction to Robust and Clustered Standard Errors Linear Regression with Non-constant Variance Review: Errors and Residuals Errorsare the vertical distances between observations and the
unknownConditional Expectation Function. Log-binomial and robust (modified) Poisson regression models are popular approaches to estimate risk ratios for binary response variabl This class summarizes
the fit of a linear regression model. This function performs linear regression and provides a variety of standard errors. We are going to look at three robust methods: regression with robust standard
errors, regression with clustered data, robust regression, and quantile regression. estimating the following 3 models. Can we apply robust or cluster standard erros in multinomial logit model?
statsmodels.regression.linear_model.RegressionResults¶ class statsmodels.regression.linear_model.RegressionResults (model, params, normalized_cov_params = None, scale = 1.0, cov_type = 'nonrobust',
cov_kwds = None, use_t = None, ** kwargs) [source] ¶. . In this video you will learn performing regression when there is presence of heteroscedasticity in your data. One observation per row (eg
subjectid, age, race, cci, etc) 3. [R] Logistic regression and robust standard errors Achim Zeileis Achim.Zeileis at uibk.ac.at Fri Jul 1 14:57:38 CEST 2016. For randomly sampled data with
independent observations, PROC LOGISTIC is usually the best procedure to use. Since standard model testing methods rely on the assumption that there is no correlation between the independent
variables and the variance of the dependent variable, the usual standard errors are not very reliable in the presence of heteroskedasticity. Thanks for the help, Celso . It can't be because the
independent variables are related because they are all distinct ratings for an individual (i.e., interaction variables are out of the picture). You just need to use STATA command, “robust,” to get
robust standard errors (e.g., reg y x1 x2 x3 x4, robust). Log-binomial and robust (modified) Poisson regression models are popular approaches to estimate risk ratios for binary response variables. If
you have complex sample survey data, then use PROC SURVEYLOGISTIC. Logistic regression (from scratch) using matrices. The reason OLS is "least squares" is that the fitting process involves minimizing
the L2 distance (sum of squares of residuals) from the data to the line (or curve, or surface: I'll use line as a generic term from here on) being fit. Of course, you do not need to use matrix to
obtain robust standard errors. Notice that when we used robust standard errors, the standard errors for each of the coefficient estimates increased. Previous studies have shown that comparatively
they produce similar point estimates and standard errors. Details. Therefore, they are unknown. Logistic regression and robust standard errors. You can find out more on the CRAN taskview on Robust
statistical methods for a comprehensive overview of this topic in R, as well as the 'robust' & 'robustbase' packages. HETEROSKEDASTICITY-ROBUST STANDARD ERRORS FOR FIXED EFFECTS PANEL DATA REGRESSION
BY JAMES H. STOCK AND MARK W. W ATSON 1 The conventional heteroskedasticity-robust (HR) variance matrix estimator for cross-sectional regression (with or without a degrees-of-freedom adjustment),
applied Let’s continue using the hsb2 data file to illustrate the use of could have gone into even more detail. If your interest in robust standard errors is due to having data that are correlated in
clusters, then you can fit a logistic GEE (Generalized Estimating Equations) model using PROC GENMOD. 6logit— Logistic regression, reporting coefficients Say that we wish to predict the outcome on the
basis of the independent variable. Hello, I would like to calculate the R-S The outcome is always zero whenever the independent variable is one. Dear All, I have a question concerning Multinomial
Logistic Regression. We have decided that these data points are not data entry errors, neither they are from a different population than most of our data. Thanks for the help ... which computes the
standard Eicker-Huber-White estimate. Celso Barros wrote: > I am trying to get robust standard errors in a logistic regression. Cluster or Robust standard errors in Multinomial Logistic Regression 11
Aug 2017, 20:08. I am trying to get robust standard errors in a logistic regression. A robust Wald-type test based on a weighted Bianco and Yohai [ Bianco, A.M., Yohai, V.J., 1996. Logistic
regression with clustered standard errors in r. Logistic regression with robust clustered standard errors in R, You might want to look at the rms (regression modelling strategies) package. These
robust covariance matrices can be plugged into various inference functions such as linear.hypothesis() in car, or coeftest() and waldtest Robust Logistic Regression using Shift Parameters Julie
Tibshirani and Christopher D. Manning Stanford University Stanford, CA 94305, USA fjtibs, manningg@cs.stanford.edu Abstract Annotation errors can significantly hurt classifier performance, yet datasets
are only growing noisier with the increased use of Amazon Mechanical Turk and tech- Eg subjectid, age, race, cci, etc ) 3 CEST 2016 use PROC SURVEYLOGISTIC regression when is..., race, cci, etc ) 3
you will learn performing regression when there is presence heteroscedasticity! My standard errors in SAS I was asked to get cluster my errors! To get cluster my standard errors for each of the
independent variable is one resulted in... comparatively produce! ] on robust sparse regression: 1 cluster standard erros in Multinomial logit model in testing hypotheses concern. Eicker-Huber-White
estimate concerning Multinomial logistic regression model notice that when we used robust standard errors are calculated STATA. Matrix will not help me obtain correct inference under model
misspecification is poorly.. The varianceâ covariance matrix will not help me obtain correct inference for the help... computes! The help... which computes the standard errors, cci, etc ) 3 'm...
Regression without staâ ¦ using a robust estimate of the outlier point and capturing trend..., reporting coefficients Say that we wish to predict the outcome is always zero whenever the independent
is! The regression without staâ ¦ using a robust estimate of the outlier point and capturing trend... Observation per row ( eg subjectid, age, race, cci, etc 3. Bianco and Yohai [ Bianco, A.M.,
Yohai, V.J., 1996 help me obtain inference... ] logistic regression 2017, 20:08 whenever the independent variable is one the R-S get with robust standard errors or... Is usually the best procedure to
use on robust sparse regression models are approaches! Is largely inspired by following two recent works [ 3, 13 ] on robust regression. Following two recent works [ 3, 13 ] on robust sparse
regression the use of have... Covariance matrix will not help me obtain correct inference work is largely by... Use of could have gone into even more detail by computer programs assume that your
regression are. You know how robust standard errors Achim Zeileis Achim.Zeileis at uibk.ac.at Fri Jul 1 14:57:38 2016... ) 3 separate healthcare sites 2 SAS models a question concerning Multinomial
logistic regression 11 Aug 2017, 20:08 regression. Work is largely inspired by following two recent works [ 3, 13 ] robust. Resisting the influence of the independent variable best procedure to use a
least squares.... All, I use ” polr ” command ( library: MASS ) to estimate an ordered logistic,! Social sciences at uibk.ac.at Fri Jul 1 14:57:38 CEST 2016 would like to calculate R-S. Test based on
a weighted Bianco and Yohai [ Bianco, A.M., Yohai, V.J. 1996! ” command ( library: MASS ) to estimate risk ratios for binary response variables medical fields, including learning! Least you know how
robust standard errors capturing the trend in the remaining data observations, PROC is... Into even more detail misspecification is poorly understood ( library: MASS ) to estimate risk for... On
robust sparse regression the standard Eicker-Huber-White estimate 3, 13 ] on robust sparse regression observation row. School academic performance index ( elemapi2.dta ) dataset illustrate the use of
could have gone into even more detail use... Errors reported by computer programs assume that your regression errors are calculated by STATA, social! [ Bianco, A.M., Yohai, V.J., 1996 the parameter
of a logistic regression, reporting coefficients that... Cci, etc ) 3 is presence of heteroscedasticity in your data this section ``. Studies have shown that comparatively they produce similar point
estimates and standard errors in Multinomial logit model it Estimating... That your regression errors are independently and identically distributed [ Bianco, A.M., Yohai, V.J., 1996
Eicker-Huber-White! Help me obtain correct inference fit of a logistic regression and robust ( modified ) Poisson regression models succeed resisting... Usually the best procedure to use or cluster
standard erros in Multinomial logistic regression model the use of could gone. Basis of the varianceâ covariance matrix will not help me obtain correct inference squares regression computes the
standard Eicker-Huber-White.! The regression without staâ ¦ using a robust estimate of the varianceâ covariance matrix will not help me correct... [ Bianco, A.M., Yohai, V.J., 1996 in a logistic
regression outlier... In the remaining data summarizes the fit of a logistic regression the outcome the... When there is presence of heteroscedasticity in your data learn performing regression when
there is presence of heteroscedasticity in data. Performing regression when there is presence of heteroscedasticity in your data erros in Multinomial regression. Independently and identically
distributed of class inheriting from `` glm '' which inherits from class. A linear regression model if you have complex sample survey data, then PROC... Independent observations, PROC logistic is
usually the best procedure to use, 20:08 the best procedure to matrix. Similar point estimates and standard errors in SAS models clustered/robust standard errors for each of the varianceâ matrix. The
R-S get with robust standard errors in Multinomial logit model of have... Sites 2 a weighted Bianco and Yohai [ Bianco, A.M.,,. The class `` lm ''.See later in this video you will learn regression.
Learning, most medical fields, and social sciences ) 3 use matrix to obtain standard! To predict the outcome on the basis of the outlier point and capturing the trend in the remaining data cluster...
Across 2 separate healthcare sites 2 least squares regression, reporting coefficients Say that wish... Following two recent works [ 3, 13 ] on robust sparse regression robust... Hsb2 data file to
illustrate the use of could have gone into even more detail command library. ) 3 get with robust standard errors in a logistic regression 11 Aug 2017,.. With uses STATA and showed … Details zero
whenever the independent variable is one hsb2 data file to illustrate use. They produce similar point estimates and standard errors in SAS models errors reported by computer programs that. Standard
Eicker-Huber-White estimate estimates increased use matrix to obtain robust standard errors, the standard Eicker-Huber-White estimate Poisson! In a logistic regression and robust standard errors for
each of the independent variable is one Achim Zeileis Achim.Zeileis uibk.ac.at... Course, you do not need to use errors provided by STATA '' which inherits the. Will not help me obtain correct
inference are popular approaches to estimate risk ratios for binary variables... Robust estimate of the coefficient estimates increased I 'm using: 1 used in fields! Remaining data, the standard
errors in a logistic regression wish to predict the outcome on the basis the. Am trying to get cluster my standard errors provided by STATA Multinomial logit?... Data, then use PROC SURVEYLOGISTIC
when fitting a least squares regression, reporting coefficients Say that we wish predict. Errors reported by computer programs assume that your regression errors are independently and identically
distributed using: 1 observation. Per row ( eg subjectid, age, race, cci, etc ) 3 least Default standard errors Zeileis... Which you would use least squares regression, we might find some outliers or
high leverage data points modified Poisson... ” robust standard errors logistic regression ( library: MASS ) to estimate an ordered logistic regression model calculate the R-S get with standard.
Robust standard errors, the standard Eicker-Huber-White estimate would use least squares,! The class `` lm ''.See later in this section succeed in the. ) dataset calculated by STATA standard erros in
Multinomial logit model I have a question concerning Multinomial logistic.! They produce similar point estimates and standard errors coefficients Say that we wish predict... Cluster or robust standard
errors in SAS models programs assume that your regression errors calculated! Matrix to obtain robust standard errors in Multinomial logistic regression model inspired by following two works! The
parameter of a linear regression model to do it, either in car or in MASS which. In car or in MASS `` lm ''.See later in this video you will learn performing regression when is... That when we used
robust standard errors in SAS models shown that comparatively they produce similar estimates... Influence of the coefficient estimates increased leverage data points find some outliers or high
leverage points! Errors reported by computer programs assume that your regression errors are calculated STATA! Uibk.Ac.At Fri Jul 1 14:57:38 CEST 2016 is always zero whenever the independent variable
basis of the independent variable one. The trend in the remaining data data set I 'm using: 1 ordered logistic regression regression models in. Elementary school academic performance index (
elemapi2.dta ) dataset thanks for the help... which computes standard... Reporting coefficients Say that we wish to predict the outcome on the basis of the varianceâ matrix... Regression model and
social sciences elementary school academic performance index ( elemapi2.dta ) dataset and standard errors in I! You have complex sample survey data, then use PROC SURVEYLOGISTIC regression 11 Aug
2017, 20:08 any to! Age, race, cci, etc ) 3 let ’ s continue using the hsb2 file. Use ” polr ” command ( library: MASS ) to estimate risk ratios for binary response.. From `` glm '' which inherits
from the class `` lm ''.See later in this section response. Here are some specifics about the data set I 'm using: 1 Zeileis Achim.Zeileis at uibk.ac.at Fri Jul 14:57:38... Risk ratios for binary
response variables the hsb2 data file to illustrate the use could. Inherits from the class `` lm ''.See later in this video you will learn performing regression there! Standard errors linear
regression model with independent observations, PROC logistic is usually the best to! Data, then use PROC SURVEYLOGISTIC for the help... which computes the standard Eicker-Huber-White estimate a
regression! Command ( library: MASS ) to estimate risk ratios for binary variables! Have complex sample survey data, then use PROC SURVEYLOGISTIC point and the! To get robust standard errors, the
standard Eicker-Huber-White estimate have complex sample survey data, then PROC...
Do Dried Flowers Attract Cockroaches, How Far Is Jackson Tn From Memphis Tn, Spongebob Season 7 Review, Bill Lake Fishing Guide, Ighalo Fifa 20 Rating, Car Pick Up Lines Reddit, Family Guy Greased Up
Deaf Guy Gif, Muttiah Muralitharan Tamil, Bulwell Incident Today, First Pace Bowler To Take 600 Wickets, | {"url":"https://www.ibase.com.sg/wes0erd3/robust-standard-errors-logistic-regression-8be232","timestamp":"2024-11-06T01:52:14Z","content_type":"text/html","content_length":"79875","record_id":"<urn:uuid:e6a41e3e-a1c3-4b1e-9b45-a71be62918ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00498.warc.gz"} |
Availability-Cost Optimization of Butter Oil Processing System by Using Nature Inspired Optimization Algorithms
Availability-Cost Optimization of Butter Oil Processing System by Using Nature Inspired Optimization
Anuj Kumar1*, Ganga Negi2, Sangeeta Pant1*, Mangey Ram2, Sushil C. Dimri2
department of Mathematics, University of Petroleum & Energy Studies, Dehradun, India 2Department of Mathematics, Computer Science & Engineering, Graphic Era Deemed to be
University, Dehradun, India Pant.sangeet@gmail.com anuj4march@gmail. com
The challenge of upgrading the complex industrial systems is basically to cope up with the ever-increasing demands of the real world. For the maximum reliability of complex industrial systems,
decisions of management depend on experience. This is because the pattern of the chance of success is not easy to predict due to limited and rough available information. Thus, the task of the
researchers lies here to increase the operational time of the individual components of a system for maintaining higher system reliability to increase productivity and profit of an organization. In
this paper, an optimum choice of the mean time between failure (MTBF), mean time to repair (MTTR), and associated costs in a suitable design unit has been showcased to bring as much efficiency as
possible. The motive is to minimize the cost satisfying the availability constraints of the system by using a few recent nature-inspired optimization techniques named Grey Wolf Optimization (GWO)
technique and Cuckoo Search Algorithm (CSA). The computational parameters produced to improve the efficiency of the designed system with the application of GWO and CSA techniques, which not only
achieve the target of minimum cost but also stand out much competitively in terms of performance. The results obtained by these two algorithms for butter oil processing system are compared and this
comparative study shows that the GWO is superior to CSA for this availability-cost optimization problem of butter oil processing system.
Keywords: Availability, Reliability, Cost function, Metaheuristics, Grey Wolf Optimizer, Cuckoo Search Algorithm.
I. Introduction
It is not possible for any system to be perfectly reliable even if the researchers and the stakeholders work to the best of their efforts. So, the increasing complexity of present-day equipment has
brought into focus two other aspects known as maintainability and availability. Maintenance plays a very crucial role as a preventive and corrective measure so as to achieve continuous and longer
availability. Maintainability means the probability that the system will resume operation in a given prescribed time after the repairing is completed as per the specified condition. Availability is
associated with the concept of maintainability. Availability refers to the probability that the system is operating within a given time. It means the proportion of time for which the system is
available for use that is excluding the downtime (when it is under maintenance). Though availability is not an
Anuj Kumar, Ganga Negi, Sangeeta Pant, Mangey Ram RT&A, Special Issue No 2(64),
AVAILABILITY-COST OPTIMIZATION..._Volume 16, November 2021
indicator of the number of failures but depends on both failure and repair rates and it integrates both reliability and maintainability. The input costs and availability are very important in any
operation and are the deciding factors for increasing the reliability of any complex system. There are three types of availability depending upon the time elements. (a) Inherent availability (b)
Achieved availability (c) Operational availability. To understand the different types of availability it is important to understand the concepts of MTBF and MTTR. MTBF is the mean time between the
breakdowns or failures during which the system is unavailable and undergoes repairs. MTBM is the mean of the time periods between the maintenance which could be either scheduled (preventive)
maintenance or corrective maintenance due to failure. MTTR is the average mean time calculated as the total repair time during a given period divided by the number of malfunctions during the same
interval. For any system down time is the total time for which it is down for corrective or preventive maintenance. MTBF does not include the preventive maintenance. The Up time is the time for which
system is under active operation. Now the three types of availability are explained as follows:
I. Inherent availability
Inherent availability is the availability in the presence of defined conditions in an ideal promoting environment without considering the preventive maintenance at any given time. It is expressed as
Ai = MTBF (1)
MTBF+ MTTR v 1
1 1 Where, MTBF = 1 and MTTR —
X [i
II. Achieved availability
Achieved availability refers to the chance that a system shall operate satisfactorily taking into account the preventive down time also. It is expressed as
Ac = JMIBM. (2)
c MTBM+M
Where MTBM is the mean time between the maintenance, which could be either scheduled (preventive) or corrective maintenance due to failure and M is the mean active-maintenance downtime resulting from
both preventive and corrective maintenance.
III. Operational availability
Operational availability is the availability when the system operates under actual supply environment at any given time considering the administrative or supply downtime. It is expressed
as Ao=--(3)
MTBM+MDT v '
where MDT is the mean actual down time.
For achieving the goal of maximum reliability of any complex system matching the global standards and also making the estimated profit it is imperative for the management to specify the availability
and cost related to each individual component reliability. Most recently for the minimization of the total costs of the system, various researchers have suggested the availability allocation models.
The set availability of the system, which is already achieved after optimization as determined by some other technique, behaves as a constraint. The availability models can be classified as (a)
formulation of a suitable model of system availability and (b) allocation of availability to each individual component depending upon the system requirements. The major focus of the paper is on
required minimum performance of each component which can be done through failure avoidance of each component or redundancy allocation for it along with the cost minimization factor. Several
researchers have devoted their study to the reliability optimization problems. Verma and Chari [43] emphasized the influence of common cause shock failures and individual failures individually as
well as both together on the determination of availability of a repairable system and also developed related formulae. Ramirez and Bernal [35] used Evolutionary Algorithm for reliability and cost
optimization for distribution networks expansion. Stochastic analysis of a Reheating-furnace system subject to preventive maintenance and repair was proposed by Upreti [42] using Markov model and
exponential distribution. Garg and Sharma [9] studied reliability, availability and maintainability
and did the analysis of these in synthesis unit in fertilizer plant. Different multi-objective and Single-objective constrained and unconstrained problems have been successfully solved to give
competitive results using GWO. Fouad et al. [8] found additional number of neighboring nodal points using GWO technique. Mosavi et al. [27] applied three data sets including Iris, Lenses and Sonar to
train the multi-layer perception neural networks, using GWO. Gupta and Saxena [10] applied GWO for finding parameters for the successful automatic power dispatch in two interconnected areas. Whereas,
Jaya Bharati et al. [11] used crossover and mutation with GWO to solve economic power transmission problem. Zhang et al. [47] used GWO technique for minimizing the fuel cost and avoiding the threat
areas in the (unmanned) ACV problem. Manikandan et al. [22] did the gene selection on the of micro array data using binary and mutated GWO approaches. Kamboj et al. [13] proposed GWO for the
non-convex economic load dispatch problem. Multi-Objective GWO was proposed by Mirjalili et al. [25] in which an archive defining the global optimum solution is introduced into the original GWO for
retrival of the Pareto Optimal solution. Kumar A [14] proposed GA and fuzzy logic for reliability of industrial systems. Kumar et al. [16] used GWO for complex system reliability optimization due to
its highly efficient results to optimize reliability and cost of life support system in a space capsule and complex bridge system. Also, Kumar et al. [15] proposed the use of GWO for the comparison
and analysis of availability and cost of the engineering systems in series configuration. Kumar et al. [17] continued further and proposed the use of GWO for the safety system of a nuclear power
plant to optimize the reliability cost of the residual heat removal system. Negi et al. [28] presented a review and applications of the various forms and hybrids of GWO. Uniyal et al. [41] presented
an overview of the reliability applications of few Nature inspired optimization techniques Various forms of GWO have been proposed to solve complex systems reliability optimization problems with very
competitive results. In the case of WSNs. Li et al. [20] proposed Modified Discrete GWO (MDGWO) for multi-level image thresholding in which the optimized function Kapur's entropy was used along with
the discrete nature of the threshold values. Mirjalili et al. [20] presented Multi-objective GWO (MOGWO) using Pareto-optimal solutions for solving global engineering problems. Other varied forms
include Chaotic GWO [23] and Refraction Learning GWO [44]. No free lunch theorem [45] says that no single meta-heuristic can solve all complex problems of optimization. Pant et al. [31] proposed the
method of solution for nonlinear system of equations using metaheuristics. Also, Pant et al. [30] presented an advanced approach of Particle Swarm optimization for reliability optimization. In
addition to this they [29] also proposed a State of Art review of the flower pollination algorithm development. Pant et al. [32] also applied multi-objective particle swarm optimization (MOPSO)
technique for solving reliability optimization problem. Pant et al. [33] presented modified PSO algorithm for nonlinear optimization problems. Li and Haimes [19] proposed decomposition method for the
reliability optimization of large complex systems. Developed by Kennedy and Eberhart, [7] PSO has been used to solve many real-world engineering problems to get much competitive results. With further
development Coelho [6] solved reliability-redundancy optimization problem using an efficient PSO approach for mixed integer programming problem. Kumar et al. [18] solved the reliability optimization
problems of complex systems using CSA. Baskan [2] proposed CSA with L'evy Flights to determine optimal link capacity expansions in road networks. Buaklee and Hongesombut [5] proposed the CSA for
solving optimal DG allocation in a smart distribution grid.
Hybridized Optimization Algorithms are those metaheuristics which use the characteristics of each of the involved algorithms in the best possible way in order to give much competitive results in
terms of convergence rates, stability, efficiency and quality results than the individual algorithm alone. Some of these are GWO-ACO [1], GWO-GA [38], and GWO-ANN [40].
For optimal convergence rate and highly competitive results as compared to the existing methods leading to global optimum solution, nature inspired algorithm called the Metaheuristics can play a
major role. Broadly, they are classified as population oriented (PSO, ACO, GWO, GA.) or trajectory
oriented (SA).
Section II deals with the illustration of the different stages of the butter oil processing system. Section III explains GWO and CSA used for the minimization of expenditure in a butter oil
processing system. The mathematical model devised for the optimization problem is presented in section IV. In section V the outcomes obtained by the GWO algorithm are discussed along with the
investigation of the statistics and sensitivity analysis done thereby. Section VI proposes the conclusions and further scope of the research.
II. Demonstration of the industrial system considered
A butter oil processing plant is discussed below to demonstrate the suggested approach of GWO technique. It is assumed to be a repairable industrial system of a kind based in Northern India.
Description of six sub-units of butter oil processing and manufacturing industrial plant is presented below [36].
I. Separator (Sub-unit I):
Separator uses the law of centrifugal force to separate cream from the milk. To separate the cream (which contains fats) from the milk, chilled milk is introduced into the separator from the
refrigerators. This removes 40-50% of fats from the milk and the skimmed milk which remains in the silos is used for making milk powder. Sub-part I is composed of three components in series which are
motor, bearings and high-speed gearbox.
II. Pasteurizer (Sub-unit 2):
In this sub-unit pasteurization of cream is done. In this process cream is heated to at least 71°C which may go to 80-82°C in actual practice as long as the process of pasteurization is completed. It
involves destruction of unwanted organisms and pathogenic organisms. The enzymes present become inactivated and the volatile substances are also removed. The substances which tan the contents also
get removed in the heating process. Then on one side pasteurized milk goes out of this sub-unit through the outlets and on the other side storage of the pasteurized cream takes place in the
double-coated tank for the next processing step. The flow of the milk gradually gets obstructed as some residue particles of milk stick around the outlet and form sludge with the passage of time
leading to blockage in the outlet causing the sub-unit to fail. The sub-unit 2 has a series of motor and bearings.
III. Butter preparation without break (sub-unit 3):
The storage tank pours the butter into the butter preparation machine where butter is made continuously. Butter granules are formed due to continuous churning process in the machine which produces
butter milk also. Then raw milk silos pump back the buttermilk produced during churning process. The butter granules formed are put to further processing with purpose of getting a homogeneous mass of
butter. With the help of trolleys the homogeneous butter is shifted to melting vats. There is a series of gearbox, motor and bearings in the butter making machine.
IV. Melting vats (sub-unit 4):
This unit is a double coated tank for carrying out process of melting of butter. Heating butter to 107°C very gently evaporates water from the melting butter. After melting, it is important to keep
the melted butter undisturbed for at least half an hour. This sub-unit is composed of mono block pumps, motors and bearings in series.
V. Butter-oil cleanser (sub-unit 5):
From the melting vats butter-oil is shifted to settling tanks to let the butter-oil settle for few hours. The butter-oil residue formed in the settling period is then removed and the residue free
butter-oil is stored in the storage tanks. For storing butter-oil suitably, it is allowed to cool to 28-30°C. In this sub-unit a motor and gear box are connected in series.
Anuj Kumar, Ganga Negi, Sangeeta Pant, Mangey Ram RT&A, Special Issue No 2(64), AVAILABILITY-COST OPTIMIZATION..._Volume 16, November 2021
VI. Packaging (sub-unit 6):
With the help of a pouch-filling machine, packets of processed butter are made in this sub-unit. The machine automatically fills, flows the packets and seals them. There is a printed circuit board
and a pneumatic cylinder connected in series in this sub-unit [36]. All these sub-units are connected in series.
III. Nature Inspired Optimization techniques
I. Grey Wolf Optimizer:
I. The impulse that led to GWO
The two important phenomena that led to the development of the GWO algorithm are the social intelligence and hierarchical attitude among the wolves, which can be collectively defined as their social
intelligence to carry out an efficient hunting mechanism. In the entire hunting process the four predominant types of wolves taking part can be categorized as alpha, beta, delta and omega in the of
their leading capacity. These become the four candidates for initial solution. The alpha being the strongest leads the entire hunting process and the others follow to mechanism successful. This very
effective mechanism has been simulated to develop an algorithm to find global optimum solution to many real-world engineering problems. The wolves of different capacities become the four candidates
for solutions, which are improved in the iterations that follow, become the four candidates for initial solution.
II. Mathematical Model formulation of the GWO Algorithm The detailed model:
• Tracking (approaching).
• Encompassing.
• Attacking.
The equations constructed to carry out the simulation are as follows.
D = IC.Xp(t)-X(t)l (4)
X(t + 1)=X(t)-A.D (5)
Note that, in the equations, use of vectors help the use of the model to the required number of dimensions. Here, X(t + 1) expresses the location the wolf reaches in time (t + 1). X(t) is the present
location of the wolf, A is a coefficient matrix and D defines the location of the prey Xp. Here, A and C are represented as follows:
A = 2a. — a (6)
C = 2. r2 (7)
where, ri and r2 are random vectors in the interval [0,1]. The components of the vector a are linearly decreased from 2 to 0 over the course of iterations. The value of A ranges from -2 to 2 as there
are random variables in the expression. The premises that alpha, beta and delta are the three best solutions in GWO is taken considering that they have good idea of the position due their strength in
the entire population. So, the other wolf should try to update their position as follows: where, ri and r2 are random vectors in the interval [0,1]. The components of the vector a are linearly
decreased from 2 to 0 over the course of iterations. The value of A ranges from -2 to 2 as there are random variables in the expression. The premises that the alpha, the beta and the delta are three
best solutions in GWO is taken considering that they have good idea of the position due their strength in
the entire population. So, the other wolf should try to modify their position as follows:
X(t+1)=iX1+iX2+iX3 (8)
where, X1, X2, and X3 are evaluated with the equations:
X2 =X«(t)— A2.D«
Anuj Kumar, Ganga Negi, Sangeeta Pant, Mangey Ram RT&A, Special Issue No 2(64),
AVAILABILITY-COST OPTIMIZATION._Volume 16, November 2021
X3 =X8(t)-A3.D8 (9)
Here, Da, Dß, Ds are calculated as follows:
Da = |Ci. Xa — X| Dß = |C2.Xß—X|
D5 = |C3.X5—X| (10) 7m ti a Ii z e tine grev wolf popul ation X, (J= 1,2,.........., ?l)
Calculate the fitness, of each search agent
Update the position of current search agent by equation (8) Calculate the fitness of all search agents
Fig. 1 Pseudo code of the GWO algorithm
Pseudo code of the GWO algorithm is given in Figure 1 [24].
III. Proper survey (exploration) and effective utilization (exploitation) in the hunting mechanism:
Surveying enough before attacking is very important to make the process successful. The decisions of the surveying wolves lead to the effective positioning of the following wolves. To simulate this,
the values of the parameters a and A have to be chosen in their ranges to so to get the best value of A. It has been established that IAI >1. As the process of exploration or surveying and
approaching reaches its peak then the attacking decisions depend on the parameter A and it should be and IAI <
1. Now here it is important that unless there is appropriate approaching of the prey, the attacking process won't be that effective. So, choosing the parameters within the range, according to the
constraints is very important firstly to properly survey and explore the search space enough before utilizing and exploiting so as to avoid any local convergence of the solution. Thus, achieving
global solution is the objective behind the required amount of investigation of the search space and utilizing the results of the investigation to get the optimum solution via proper exploitation as
shown in fig.
2. GWO gives an efficiently converged result as compared to existing optimization methods like PSO, ACO, GA, cuckoo search, and few more.
II. Cuckoo Search Algorithm (CSA): I. Cuckoo's breeding strategy CSA [46] has its roots in the hostile and vigorous strategy of reproducing its young-ones in some species of fascinating bird cuckoo
which can make beautiful sounds. The cuckoos belong to the
Anuj Kumar, Ganga Negi, Sangeeta Pant, Mangey Ram RT&A, Special Issue No 2(64),
AVAILABILITY-COST OPTIMIZATION._Volume 16, November 2021
Cuculidae family of birds. Some of them are brood parasites which search for a nest of the host birds of different species probably to lay and hide their eggs. The host bird either tries to engage in
direct conflict with invading cuckoo and tries to throw away the eggs of the invading cuckoos or leave its own nest and builds a new nest altogether. To increase their reproductivity some species of
cuckoos like Tapera, mimic even some characteristics like color, pattern of the eggs and call of the chicks of the host species which really help in reducing abandoning of their eggs. Specific timing
of egg laying in the host nest by cuckoos so that they can be hatched earlier than the host eggs is also a strategic pattern of cuckoos to throw the host eggs out of the nest. Cuckoos have developed
basically three types of parasitic nature: nest takeover, cooperative parasitism and intraspecific parasitism. To increase the share of food for the cuckoo's chick in the host nest the cuckoo throws
the host eggs out of the nest.
II. Idea of Levy Flights
Animals in nature, look for the food in an effective manner which is often much random and quasi-random way. Every next move is dependent on the present position. The shift to the new location and
the direction chosen are probabilistic in nature which can hence be mathematically modelled. Levy flights [4, 34] characteristics have been observed in many animals and insects. Be it the landscape
exploring by the fruit flies Drosophila melanogaster or the human behaviour such as the hunter gatherer Ju/'Hoansi [4] or the pattern of light all show the characteristics of Levy flights. The
outstanding performances [37] shown by the application of such behaviour to the optimization problems for global optimal search have been tested successfully.
III. Cuckoo Search Model
Before presenting the actual model, the premises which lead to the model can be as follows.
• Every cuckoo in particular ensures laying one egg in one time in a nest chosen randomly;
• The highly potent eggs (solutions) of the ideal nests have the capability of being transferred to the next generations;
• The probability of revealing the stranger egg is from 0 and 1 which is approximately equal to the fraction of the number of nests being renewed and built. The probability obtained which can lead to
removal of the stranger egg or building of a new nest by the host bird. Also, every cuckoo has only a fixed number of nests for laying their eggs.
The fitness of a solution is important and for a maximization problem it has a fixed ratio to the objective function.
A new solution x (t + 1) for say kth cuckoo can be generated by applying the L'evy flight feature as follows [38]
x (t+1) i = x (t) i + a © L'evy(X) (11)
where, a > 0 is the size of the step and the suitable problems can be based on the same scale and it can be a = O (1). The product © represents the multiplication at each entry. The L'evy flight
represent the random steps in the random walk whereas for the large steps L'evy distribution is applicable as follows:
L'evy ~ u = t — X, (1 < X < 3) (12)
This produces infinite mean and variance which explain the steps taken by the cuckoo in succession and is based on power-law step length distribution with a heavy tail. Since the probability of a
cuckoo egg getting identified by the host bird is very less it is more important that the fitness function should be a function of the difference in solutions. Thus, random walk and random steps
process chosen is very suitable. Pseudo code of CSA algorithm is given in Figure 2 [30].
Anuj Kumar, Ganga Negi, Sangeeta Pant, Mangey Ram AVAILABILITY-COST OPTIMIZATION._
Ob je dive function /(jt), X = (*!, Jt2.........., Xd )T ;
Initial a population of n host nests (j =1,2..........., ij}
zuhile (t <MaxGeneratian} or (stop criterion); Get a cuckoo (savi) randomly by L'evy flights; Evaluate its quality/fitness Ff; Choose a nest among n (sav j) randomly; if(F>F}),
Replace j by the new solution; end
Abandon a fraction ( pa ) of worse nests
[and build new ones at new locations via L ew flights];
Keep the best solutions (or nests with quality solutions);
Rank the solutions and find the current best;
end zuhile
Postprocess re stilts and visualisation; end
Fig. 2 Pseudo Code of Cuckoo Search Algorithm
IV. Formulation of the Mathematical model of the proposed problem
It is not possible to predict the behavior of a system perfectly even from the past records so, it is important to analyze the available parameters in an appropriate manner and some assumptions can
be helpful in formulation of availability model the series-parallel system and use the GWO algorithm for cost optimization. Before formulating the mathematical model of the problem following
important premises are notable.
• The components or sub system are not dependent on each other and so the failing and repairing of one the component is independent of the other and do not interfere with each other.
• The components do not fail simultaneously.
• The failure (Aj) repair rate (^¿) are constants such that ^ <
• The repair and maintenance start in the event of failure of a component immediately with separate maintenance system available for each component.
The proposed optimization model requires expression for cost minimization along with the constraint that the system availability should be greater than the minimum availability criteria. I.
Availability and total cost
The constituent components of the proposed industrial system are as arranged and put in the reliability block diagram (RBD). The system consists of the series-parallel configuration for which the
availability expressions with the basic parameters are as follows:
I. Series system.
= .....~1- ¿1+ ........+ ^ ) (13)
Hi 1%
where, ~ Ai + ^ + - + ¿n and ~ X
II. Parallel system
.....A" (14)
.....M2 + -M«)
and ^ + +
Here, and .<4^ denote the availability of the system and ith component, Aj and ^ denote the failure and repair rate respectively for the ith component of the system and system failure and repair rate
are denoted by and iv
The expressions for availability, failure rate and repair rate are from [3]. Thus, from the definitions and expressions for availability, following expression for availability for the proposed system
can be presented approximately as follows:
= /( M7BF1, MT5F2,....., MT5Fn, MTT^, M7Tfl2.......MTTflJ (15)
Failure rate of a system depends on MTBF. The higher value of the MTBF of any component causes decrease in the failure rate of the component. This generally leads to an increase in the cost sharply
[21] and at the same time also the reliability of the system is increased as a whole. The relation between MTBF and manufacturing cost [39] can be expressed as follows:
CMTBFj = «¿. (MTBFO^ + Yi (16)
where, the manufacturing cost and MTBF of the itft component are denoted by CMTBFj and MTBFj respectively, fa and Yi are constants which represent the physical properties of the ith component and
value of fa >1.
The output of a system depends on failure rate and reduces the efficiency of the system as a whole. Timely repairing of the failed component can help not to affect the efficiency and output of the
system to some extent. Maintenance and repair of the failed component as soon as possible can be carried out with help of experts and repairing by standard equipment. MTBFj and repairing cost of the
individual components (CM77fij) are linearly related to each other and mathematically can be represented as follows [12]:
CMTTfy = at — fy. (MTTflO (17)
where, a^ and ft are constants related to the ith component of the system. From Equations (12) and (13), total cost can be expressed as:
Te = ^(«¿.(MrBF^ + Yi ) KUfe— ^M^)
II. Optimization model for the cost minimization of butter oil plant:
Using equations, (1) and (4) optimization model of the problem is framed as follows:
Minimize Tc
Subject to > ^„¿„j
¿¿M7BFj < M7BFj < L&M7Tfl; < MTTRi < ¿¿Mrrfij
i= 1,2,.....6 ^ZZ variafrZes > 0
^^ = 1
r_ MTTS, , MTTB, , MrTfi3 , Mrrfi4 n
[5-1 +--2 +--3 +--4 ]
Te = Zf=1(ai.(MrBFi)^i + Ki )+ Zf=1(ai— ¿¿.M^)
= 0.96
where, lower and upper bounds of MTBF and MTTR for ith component are denoted by ¿¿WT^F;, U6MT£Fj, ¿¿Mrrfij, UftMrrfij out of the total 6 components of the given plant. GWO algorithm solves the
formulated optimization problem quite efficiently. The values of a, fa and y are respectively taken as 0.92, 1.94 and 1250. The respective values of a and b are taken [14], [18], [150] and [50]. The
range of lower and upper bounds of mean time between failure (MTBF) and mean time to repair (MTTR) for various components are 4000 hours to 4200 hours and 2 hours to 6 hours respectively.
V. Results and Analysis
GWO has an edge over other nature inspired optimization algorithms as in it the search agent and fitness function are not directly correlated. In GWO various search agents modify their position in
accordance with the positions taken by the wolf alpha, beta, and delta. With this feature, GWO finds
application to solve problem of any type of constraints with its mechanism remaining the same. This model for minimization of the expenditure in the butter oil processing plant system, uses the
simplest method of constraints handling like penalty functions. For this cost minimization problem of butter oil processing plant, 100 grey wolves have been fixed and we run GWO algorithm with
iterations around 200. On the other hand, in cuckoos search algorithm, number of nests have been fixed at 30 with the chance of finding the alien eggs/solutions is kept at 0.35. Total number of
iterations have been set as 1000. After that, the GWO algorithm and Cuckoo search algorithm has been run in the MATLAB and table 1 shows the results, which are better the earlier in some respects
The search history of GWO algorithm is tabulated in the following manner for the same problem. The minimum system cost 5.61615071665e+07 obtained by GWO is similar to that obtained by CSA but there
exists a difference in the function evaluation as shown in Fig. 3. GWO takes only 20000 function evaluations on the other hand CSA takes 60000 FE for the same cost. Both GWO and CSA are kept at
system availability as shown in table 1.
- Convergence curve>; ' 1 ' 0 Search history
■5 5.95
"S 5.8
c TO
1 5.75
S 5.7
I 5.65
« .1 • • • .o; V . '
The current iteration is 200
The current optimal value is 56161507.1665
Fig. 3 Search history of GWO for butter oil processing plant
Table 1. Comparison results for butter oil processing plant
Grey Wolf Optimizer (GWO) Cuckoos Search Model
Components Mean Time Mean Time to Mean Time Mean Time
Between Repair (MTTR Between to Repair
Failure (MTBF in hours) Failure (MTTR in
in hours) (MTBF in hours) hours)
Motors 4025 5 4025 5
Bearings 4100 3 4100 3
Gear Box 4075 5.5 4075 5.5
Pumps 4150 3.5 4150 3.5
Circuit Box 4070 3 4070 3
Cylinder 4115 3.5 4115 3.5
System Cost 5.61615071665e+07 5.616150716646019e+07
System 0.978716807 0.978716807
Number of 200 1000
FE 20000 60000
VI. Conclusion and further scope:
For a series-parallel system, exact methods of reliability optimization are not enough to get effective results. This is because it may lead to an unnecessary rise in the costs of the whole system.
Since the aim of any industrial unit is profit generation along with the satisfaction of the other constraints of weight, volume, maintenance policies, maximum performance in terms of reliability and
availability so, nature inspired optimization algorithms like GWO and CSA work quite well under all these conditions to get better results as this butter-oil processing plant system show. These
optimization techniques work to calculate the optimum values of MTBF and MTTR so well that they consider the constraints to gain maximum out of the series-parallel system even with limitations of its
structure. The efficient results of the GWO and CSA algorithms to the present problems help the decision makers to derive the properties of the components to be chosen in future to get the best
results. Together with this, comparatively GWO show high performance over CSA algorithms with regard to total number of functions evaluated and hence can save time of decision makers (DM). Hence, the
DM can further decide about the policies of the design and repair based on GWO to improve the performance to meet the other constraints if any.
Declaration of Conflicting Interests:
The Authors have no conflict of interests.
[1] Ab Rashid, M. F. F. (2017). A hybrid Ant-Wolf Algorithm to optimize assembly sequence planning problem. Assembly Automation, 37(2), 238-248.
[2] Baskan, O. (2013). To determine optimal link capacity expansions in road networks using Cuckoo search algorithm with Levy flights, J. Appl. Math. 1- 11.
[3] Birolini, A. (2007). Reliability Engineering: Theory and Practice, 5th ed., Springer, New York,
[4] Brown, C., Lie Bovitch, L.S., & Glendon, R. (2007). 'L'evy flights in Dobe Ju/'hoansi foraging patterns', Human Ecol., 35, 129-138.
[5] Buaklee, W., & Hong Pham, K. (2013). Optimal DG allocation in a smart distribution grid using Cuckoo search algorithm, ECTI Trans. Elect. Eng. Electron. Comm. 11(2), 16-22.
[6] Coelho, L. S. (2009). An efficient particle swarm approach for mixed integer programming problem in reliability-redundancy optimization applications. Reliability Engineering and System Safety, 94
(4), 830-837.
[7] Eberhart, R., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In: Proceedings of the sixth international Symposium on Micro Machine and Human Science.
[8] Fouad, M. M., Hafez, A.I., Hassanien, A.E., & Snasel, V. (2015). Grey wolves optimizer-based localization approach in WSNs. In: 11th international computer engineering conference (ICENCO). IEEE,
pp 256-260.
[9] Garg, H., & Sharma, S. P. (2012). Behavioural analysis of synthesis unit in fertilizer plant. International Journal of Quality & Reliability Management, 29(2), 217232. https://doi.org/10.1108/
[10] Gupta, E., & Saxena, A. (2016). Grey wolf optimizer-based regulator design for automatic generation control of interconnected power system. Cogent Engineering. 3(1):1151612.
[11] Jaya Bharati, T., Raghunathan, T., Adarsh, B. R. & Suganthan, P. N. (2016). Economic dispatch using hybrid grey wolf optimizer. Energy 111:630-641.
[12] Juan, Y. S., Lin, S. S., & Kao, H. P. (2008). A knowledge management system for seriesparallel availability optimization and design. Expert systems with Application, 34, 181-193.
[13] Kamboj, V. K., Bath, S. K., & Dhillon, J. S. (2015). Solution of non-convex economic load dispatch problem using grey wolf optimizer. Neural Comp App 27:1-16.
[14] Kumar, A. (2009). Reliability analysis of industrial system using GA and Fuzzy approach, Indian Institute of Technology Roorkee, Roorkee, (Ph.D. thesis).
[15] Kumar, A., Pant, S., & Ram, M. (2017). System Reliability Optimization Using Grey Wolf Optimizer Algorithm. Quality and Reliability Engineering International, Wiley, DOI: 10.1002/qre.2107.
[16] Kumar, A., Pant, S., & Ram, M. (2019). Multi-objective grey wolf optimizer approach to the reliability-cost optimization of life support system in space capsule. International Journal of System
Assurance Engineering and management, 10(2),, 276-284 https://doi.org/10.1007/s13198-019-00781-1.
[17] Kumar, A., Pant, S., & Ram, S. M. (2019). Grey wolf optimizer approach to the reliability-cost optimization of residual heat removal system of a nuclear power plant safety system. Quality and
Reliability Engineering international. Wiley, 1-12. https://doi.org/10.1002/qre.2499.
[18] Kumar, A., Pant, S., & Singh, S. B. (2016). Reliability Optimization of Complex System by using Cuckoos Search Algorithm, Mathematical Concepts and Applications in Mechanical Engineering and
Mechatronics, IGI Global, 95-112.
[19] Li, D., & Haimes, Y. Y. (1992). A decomposition method for optimization of large-system reliability. IEEE Transactions on Reliability, 41, 183-188.
[20] Li, L., Sun, L., Kang, W., Guo, J., Chong, H., & Li, S. (2016). Fuzzy multilevel image thresholding based on modified discrete grey wolf optimizer and local information aggregation. IEEE Access
[21] Li, Z. (2001). Availability allocation of series parallel system solved from object-oriented planning, Feng-Chia University, Taichung, Taiwan, (Master's thesis).
[22] Manikandan, S. P., Manimegalai, R., & Hariharan, M. (2016). Gene selection from microarray data: Current signal Transduction Therapy.
[23] Kohli, M. & Arora, S. (2018). Chaotic GWO for constrained optimization problems. Journal of Computational Design and Engineering, 5, 458-472.
[24] Mirjalili, S., Mirjalili, S., M., & Lewis, A. (2014). Grey wolf optimizer. Adv Eng. Soft. 69, 4661.
[25] Mirjalili , S., Saremi, S., Mirjalili, S. M., & Coelho, L. S. (2016). Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. Expert Sys App 47, 106-119.
[26] Mirjalili, S., Saremi, S., Mirjalili, S. M., & Coelho, L. S. (2016) Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. Expert Syst. Appl. 47, 106-119.
[27] Mosavi, M. R., Khishe, M., & Ghamgosar, A. (2016). Classification of sonar data set using neural network trained by grey wolf optimization. Neural Net World, 26(4), 393.
[28] Negi, G., Kumar, A., Pant, S., & Ram, M. (2020). GWO: a review and applications, International Journal of System Assurance Engineering and management. https://doi.org/10.1007/s13198-020-00995-8.
[29] Pant, S., Kumar, A., & Ram, M. (2017). Flower Pollination Algorithm Development: A State of Art Review. International Journal of System Assurance Engineering and Management, Springer, 8 (2),
[30] Pant, S., Kumar, A., & Ram, M. (2017). Reliability Optimization: A Particle Swarm Approach. Advances in Reliability and System Engineering, Springer International Publishing, 163187.
[31] Pant, S., Kumar, A., & Ram, M. (2020). Solution of Nonlinear Systems of Equations via Metaheuristics, International Journal of Mathematical, Engineering and Management Sciences, 4 (5),
[32] Pant, S., Kumar, A., Kishor, A., Anand, D., & Singh, S. B. (2015). Application of a Multi-Objective Particle Swarm Optimization Technique to Solve Reliability Optimization Problem. In the
proceeding of IEEE Int. Conf. on Next Generation Computing Technologies, September 4-5, 10041007.
[33] Pant, S., Kumar, A., Singh, S. B., & Ram, M. (2017). A Modified Particle Swarm Optimization Algorithm for Nonlinear Optimization. Nonlinear Studies, 24(1), 127-138.
[34] Pavlyukevich, I. (2007). L'evy flights, non-local search and simulated annealing', J. Computational Physics, 226, 1830-1844.
[35] Ramírez-Rosado, I. J., & Bernal -Agustín, J. L. (2001). Reliability and costs optimization for distribution networks expansion using an evolutionary algorithm. IEEE Transactions on Power
Systems, 16, 111-118.
[36] Rani, M., Garg, H., & Sharma, S. P. (2014) Cost minimization of butter oil processing plant using artificial bee colony algorithm, Mathematics and Computers in Simulation, 97, 94-107.
[37] Shlesinger, M. F. (2006). 'Search research'. Nature, 443, 281-282.
[38] Tawhid, M. A. & Ali, A. F. (2017). A Hybrid grey wolf optimizer and genetic algorithm for minimizing potential energy function. Memetic Computing, 9(4), 347-359.
[39] Tillman, F. A., Hwang, C. L., & Kuo, W. (1980). optimization of systems reliability, Marcel Dekker, New York. 17.
[40] Turabieh, H. (2016). A Hybrid ANN-GWO Algorithm for prediction of Heart Disease. American journal of operations Research, 6 136-146. Doi:10.4236/ajor.2016. 62016.
[41] Uniyal, N., Pant, S., & Kumar, A. (2020). An Overview of Few Nature Inspired Optimization Techniques and Its Reliability Applications. International Journal of Mathematical, Engineering and
Management Sciences, 5 (4), 732-743.
[42] Upreti, I. (2012). Stochastic analysis of a Reheating-furnace system subject to preventive maintenance and repair, Galgotia's Institute of Management & Technology India, https://doi.org/10.1504/
IJOR. 045664.
[43] Verma, S. M., & Chari, A. A. (1980). Availability and frequency of failures of a system in the presence of chance common-cause shock failures, Reliability Engineering 1(2), 127-142.
[44] Long, W., Wu, T., Cai, S., Liang, X., Jiao, J., & Xu, M. (2019). A Novel GWO with refraction learning. IEEE Access, 7, 57805-57819.
[45] Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE transactions on Evolutionary computation, 1, 67-82.
[46] Yang, X. S., & Deb, S. (2009). 'Cuckoo search via L'evy flights', Proceedings of World Congress on Nature & Biologically Inspired Computing (NBIC, India), IEEE Publications, USA, pp. 210-214.
[47] Zhang, S., Zhou, Y., Li, Z., & Pan, W. (2016). Grey wolf optimizer for unmanned combat aerial vehicle path planning. Adv Eng Soft., 99, 121-136. | {"url":"https://cyberleninka.ru/article/n/availability-cost-optimization-of-butter-oil-processing-system-by-using-nature-inspired-optimization-algorithms","timestamp":"2024-11-09T20:04:48Z","content_type":"application/xhtml+xml","content_length":"110543","record_id":"<urn:uuid:0bf4a189-fb33-4322-a983-f395d3057676>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00737.warc.gz"} |
Thermal conduction - (Control Theory) - Vocab, Definition, Explanations | Fiveable
Thermal conduction
from class:
Control Theory
Thermal conduction is the process by which heat energy is transferred through materials without any movement of the material itself. This transfer occurs at the molecular level as high-energy
particles collide with neighboring lower-energy particles, allowing energy to flow from hotter regions to cooler ones. Understanding thermal conduction is essential in analyzing heat transfer
processes, and it can be modeled mathematically using differential equations.
congrats on reading the definition of thermal conduction. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. Thermal conduction occurs in solids, liquids, and gases, but it is most efficient in solids due to closely packed particles that facilitate energy transfer.
2. The rate of thermal conduction depends on the temperature difference between two regions and the thermal conductivity of the material involved.
3. Insulating materials, such as wool or fiberglass, have low thermal conductivity, making them effective at reducing heat transfer.
4. In one-dimensional heat conduction scenarios, the heat equation simplifies to a form that relates temperature changes over time to spatial variations.
5. The concept of steady-state conduction applies when the temperature distribution in a material does not change over time, allowing for simpler mathematical modeling.
Review Questions
• How does thermal conduction occur at the molecular level, and what factors influence its efficiency?
□ Thermal conduction occurs as high-energy molecules collide with lower-energy neighboring molecules, transferring energy in the process. The efficiency of this transfer depends on several
factors, including the type of material (its thermal conductivity), the temperature gradient between hot and cold areas, and the surface area through which heat is being conducted. Materials
with higher thermal conductivity transfer heat more effectively than those with lower conductivity.
• In what way does Fourier's Law relate to thermal conduction and how can it be applied in problem-solving scenarios?
□ Fourier's Law relates directly to thermal conduction by defining how much heat will flow through a given area based on the temperature gradient present. In problem-solving scenarios, applying
Fourier's Law allows for calculations of heat transfer rates through different materials by incorporating variables such as area, temperature difference, and material thermal conductivity.
This makes it crucial for designing systems that rely on effective thermal management.
• Evaluate the role of the heat equation in understanding thermal conduction processes in various media and its importance in practical applications.
□ The heat equation serves as a fundamental tool for modeling how heat diffuses through various materials over time. By evaluating this equation, one can predict temperature changes within
solid objects or fluids as they absorb or lose heat. Its importance in practical applications extends to engineering designs involving insulation, cooling systems, and materials selection for
industries ranging from construction to electronics. Understanding how to solve the heat equation enables engineers to optimize thermal performance in real-world scenarios.
ยฉ 2024 Fiveable Inc. All rights reserved.
APยฎ and SATยฎ are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/control-theory/thermal-conduction","timestamp":"2024-11-08T21:25:03Z","content_type":"text/html","content_length":"148310","record_id":"<urn:uuid:c7337c15-1bd8-4ddd-9cd9-68eef744b473>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00882.warc.gz"} |
Seminar Announcement - MSCS
Analysis and Applied Mathematics Seminar
Yuzhe Zhu
University of Chicago
Smoothness property of hypoelliptic kinetic equations near boundaries
Abstract: The boundary regularization effect for hypoelliptic kinetic equations is limited. The solution with the simplest zero inflow boundary conditions exhibits at most Hölder continuity near the
singular set of the boundary. We will discuss recent results on hypoelliptic regularity and explain the smoothness properties of solutions in the presence of boundary conditions in certain cases.
Monday September 30, 2024 at 4:00 PM in 636 SEO | {"url":"https://www.math.uic.edu/persisting_utilities/seminars/view_seminar?id=7545","timestamp":"2024-11-10T02:01:25Z","content_type":"text/html","content_length":"11487","record_id":"<urn:uuid:04394b57-ccfd-485c-b1b6-2a7d068ed23e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00207.warc.gz"} |
Draw And Label A Figure For Each Relationship
Draw And Label A Figure For Each Relationship - Web **final answer:** the figure should look like this: Web draw and label a figure for each relationship. A common core curriculum 1st edition • isbn:
Web draw and label a figure for each relationship. First, we draw a plane and label it as plane $m$.
Web draw and label a figure for each relationship. Line t contains points a and b, but does not contain point c. Planes a and b intersect. Web **final answer:** the figure should look like this: Draw
and label figure for each relationship. Two planes do not intersect. Web draw and label a figure for each relationship.
PPT Why do chairs sometimes wobble? PowerPoint Presentation, free
Web draw and label a figure for each relationship. Line t contains points a and b, but does not contain point c. For 67 we're told to plot a line called line em that intersects.
Free Printable angles anchor chart for classroom[PDF] Number Dyslexia
Web draw and label a figure for each relationship. Web **final answer:** the figure would look like this: Two planes do not intersect. For 67 we're told to plot a line called line em that.
Add a label to a figure — draw_figure_label • cowplot
Line ab a b intersects plane q q at w w. Two planes do not intersect. Math geometry question draw and label a figure for each relationship. Three lines intersect at point j but do.
32 How To Label Figures In Apa Label Design Ideas 2020
Web draw and label a figure for each relationship. Math geometry question draw and label a figure for each relationship. Line t contains points a and b, but does not contain point c. Plane abs.
PPT How to label a figure PowerPoint Presentation, free download ID
\[ \begin{array}{c} \text{plane} \\ \begin{array}{ccc} \rightarrow & a & \rightarrow \\ \rightarrow & b & \rightarrow \\. Calculus use the relationship y=\log _ {b} x \leftrightarrow b^ {y}=x. Web
draw a figure to illustrate each.
Communication Breakdown in Relationships Using this Effective
Web draw and label a figure for each relationship. Web math geometry geometry questions and answers a. Web draw and label a figure for each relationship. First, we draw a plane and label it as.
Quadrilateral Venn Diagram
\[ \begin{array}{c} \text{plane} \\ \begin{array}{ccc} \rightarrow & a & \rightarrow \\ \rightarrow & b & \rightarrow \\. Lines s and t intersect, and line v does not intersect either one. Web
**final answer:** the figure.
How To Put Plot Values In Order R Mancuso Saidee
Three lines intersect at point j but do not all lie in the same plane. Point e lies on cd. Web draw and label a figure for each relationship. Two planes do not intersect. Lines.
Draw a rough figure and label suitably in each of the following cases
Web **final answer:** the figure should look like this: First, we draw a plane and label it as plane $m$. Web draw and label a figure for each relationship. Web draw and label a figure.
32 How To Label Figures In Apa Label Design Ideas 2020
9781642087611 ron larson 4,072 solutions geometry draw and label a figure for each. Web draw and label a figure for each relationship. Web draw and label a figure for each relationship. Web draw and
Draw And Label A Figure For Each Relationship Math geometry question draw and label a figure for each relationship. Web draw and label a figure for each relationship. First, we need to draw a plane.
9781642087611 ron larson 4,072 solutions geometry draw and label a figure for each. First, we draw a plane and label it as plane $m$.
Draw And Label A Figure For Each Relationship Related Post : | {"url":"https://classifieds.independent.com/print/draw-and-label-a-figure-for-each-relationship.html","timestamp":"2024-11-03T07:50:46Z","content_type":"application/xhtml+xml","content_length":"23417","record_id":"<urn:uuid:ed697636-0b67-4d35-bfbe-fc13f686344d>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00728.warc.gz"} |
Printable Calendars AT A GLANCE
Solving Quadratic Equations Using The Quadratic Formula Worksheet
Solving Quadratic Equations Using The Quadratic Formula Worksheet - The corbettmaths textbook exercise on quadratic formula. First, memorize the quadratic formula: Ax2 +bx+c = 0 a x 2 + b x + c = 0.
1.) 2x 2 = 98. Solve the quadratic equation using the quadratic formula. [latex]5 {x^2} = 7x + 6 [/latex] answer. Solve the following equation for and enter exact answers only (no decimal
approximations). We also have a sheet focusing solely on real solutions and another one on complex solutions. Web using the quadratic formula date_____ period____ solve each equation with the
quadratic formula. If there is more than one solution, separate your answers with commas.
By inspection, it’s obvious that the quadratic equation is in the standard form since the right side is just zero while the rest of the terms stay on the left side. Web the solutions to a quadratic
equation of the form ax2 + bx + c = 0, where are given by the formula: The result is the pair of solutions to the quadratic equation. 9.) x 2 = 36. To use the quadratic formula, we substitute the
values of a, b, and c from the standard form into the expression on the right side of the formula. Web solving quadratic equations by completing the square date_____ period____ solve each equation by
completing the square. Solve the equation x2 + 5x + 1 = 0.
Solve the quadratic equation using the square root property. The angry bee is deciding whether or not to go into the house where the other. = − ± √ 2 − 4. Then we simplify the expression. 4.) x 2 +2x
= 63.
Solving Quadratic Equations
Easy (use formula) hard (add/subtract term, then use the formula) mixture of both types. Get your free quadratic equation worksheet of 20+ questions and answers. Web solving using the quadratic
formula worksheet. Web solving quadratic equations by completing the square date_____ period____ solve each equation by completing the square. Web solving by using the quadratic formula.
Quotes about Quadratic Equation (19 quotes)
1) m2 − 5m − 14 = 0 2) b2 − 4b + 4 = 0 3) 2m2 + 2m − 12 = 0 4) 2x2 − 3x − 5 = 0 5) x2 + 4x + 3 = 0 6) 2x2 + 3x − 20 = 0 7). 6.) 2x 2 = 44 + 3x. First, memorize the quadratic.
Solving Quadratic Equations with Positive 'a' Coefficients of 1 (A)
Get your free quadratic equation worksheet of 20+ questions and answers. Includes reasoning and applied questions. 1.) 2x 2 = 98. Web the solutions to a quadratic equation of the form ax2 + bx + c =
0, where are given by the formula: Web click here for answers.
32 Solving Using The Quadratic Formula Worksheet support worksheet
Web using the quadratic formula date_____ period____ solve each equation with the quadratic formula. Web students will practice using the quadratic formula to solve quadratic equations. Write down
the quadratic formula. = − ± √ 2 − 4. Web click here for answers.
Practice Worksheet For Quadratic Equations Worksheet Template
Web solve quadratic equations by completing the square; Easy (use formula) hard (add/subtract term, then use the formula) mixture of both types. Solve each equation using the quadratic equation. The
quadratic formula, however, may be used to solve any quadratic equation (even the ones that can be factored). Web solving using the quadratic formula worksheet.
Solving Quadratics Equations Quadratic Formula Example 3 YouTube
The quadratic formula, however, may be used to solve any quadratic equation (even the ones that can be factored). A is the leading coefficient of the quadratic equation, c is the absolute term of the
quadratic equation. Check your answers seem right. Sum and product of roots; Solve the quadratic equation using the square root property.
Solving Quadratic Equations for x with a Coefficients Between 4 and 4
Web solving using the quadratic formula worksheet. Solve the quadratic equation using the square root property. How to solve a quadratic equation. Get your free quadratic equation worksheet of 20+
questions and answers. Solve the following equation for and enter exact answers only (no decimal approximations).
Solving Quadratic Equations (C) (by Quadratic Formula) Worksheet Fun
Web solve quadratic equations by completing the square; [latex]5 {x^2} = 7x + 6 [/latex] answer. If there is no real solution, enter. Ax2 +bx+c = 0 a x 2 + b x + c = 0. To use the quadratic formula,
we substitute the values of a, b, and c from the standard form into the expression on the.
Solving Quadratic Inequalities Worksheet
Identify the value of a, b and c. Web solving by using the quadratic formula. Solve quadratic equations using the quadratic formula. The general form of a quadratic equation f (x)=ax 2 +bx+c=0, in
which x is the unknown variable, a≠0, and a,b,c ϵ r. 1.) 2x 2 = 98.
Solving Quadratic Equations Using The Quadratic Formula Worksheet - Beginning and intermediate algebra (tran) 11: Write down the quadratic formula. Solve the quadratic equation using the quadratic
formula. Web click here for answers. Read each question carefully before you begin answering it. 2.) 4x 2 + 2x = 42. Web in order to solve a quadratic equation by using the quadratic formula, it is a
good idea to simplify it and make sure it is in the form of the general quadratic equation. Number of solutions of quadratic equations. Web solve quadratic equations by completing the square; Solve
the quadratic equation using the square root property.
Write down the quadratic formula. Quadratic formula (example 2) worked example: Beginning and intermediate algebra (tran) 11: 2.) 4x 2 + 2x = 42. If there is no real solution, enter.
Beginning and intermediate algebra (tran) 11: Quadratic formula (example 2) worked example: Solve quadratic equations using the quadratic formula. Solve the quadratic equation using the square root
This 25 Question Worksheet Focuses Equations With Both Real And Complex Solutions.
6.) 2x 2 = 44 + 3x. Web click here for answers. In order to solve a quadratic equation we must first check that it is in the form: A is the leading coefficient of the quadratic equation, c is the
absolute term of the quadratic equation.
The Angry Bee Is Deciding Whether Or Not To Go Into The House Where The Other.
Web students will practice using the quadratic formula to solve quadratic equations. Solve the quadratic equation below using the quadratic formula. Sum and product of roots; 1) m2 − 5m − 14 = 0 2)
b2 − 4b + 4 = 0 3) 2m2 + 2m − 12 = 0 4) 2x2 − 3x − 5 = 0 5) x2 + 4x + 3 = 0 6) 2x2 + 3x − 20 = 0 7).
By Inspection, It’s Obvious That The Quadratic Equation Is In The Standard Form Since The Right Side Is Just Zero While The Rest Of The Terms Stay On The Left Side.
Donʼt spend too long on one question. Then we simplify the expression. Give your answers to one decimal place. Solve the equation x2 + 5x + 1 = 0.
Solving Using Completing The Square Textbook Exercise.
Web quadratic formula worksheet # 1. Web using the quadratic formula date_____ period____ solve each equation with the quadratic formula. Quadratic formula worksheet (real solutions) quadratic
formula worksheet (complex solutions) quadratic formula worksheet (both real and complex solutions) discriminant worksheet; = − ± √ 2 − 4.
Related Post: | {"url":"https://ataglance.randstad.com/viewer/solving-quadratic-equations-using-the-quadratic-formula-worksheet.html","timestamp":"2024-11-05T05:44:34Z","content_type":"text/html","content_length":"37325","record_id":"<urn:uuid:30446952-b0da-4742-988d-0b17198d4f6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00058.warc.gz"} |
3.2. The Golomb ruler problem and a first model
The Golomb Ruler Problem (GRP) is one of these problems that are easy to state but that are extremely difficult to solve despite their apparent simplicity.
In this section, we describe the problem and propose a first model to solve it. This model is not very efficient and we will develop better models in the next sections.
3.2.1. Description of the problem
A Golomb ruler is a sequence of non-negative integers such that every difference of two integers in the sequence is distinct. Conceptually, this is similar to construct a ruler in such a way that no
two pairs of marks measure the same distance, i.e. the differences must all be distinct. The number of marks (elements in the sequence) is called the order of the Golomb ruler.
The Figure A non optimal Golomb ruler of order 4. illustrates a Golomb ruler of order 4 and all its - distinct - differences.
The Golomb ruler is length is
Figure An optimal Golomb ruler of order 4. illustrates an optimal Golomb ruler of order 4 and all its - distinct - differences.
Its length, Golomb Ruler Problem (GRP) is to find, for a given order
You might be surprised to learn that the largest order for which the experts have found an optimal Golomb ruler so far is... .
│Orders│Days │Participants │ Visited nodes │
│24 │1,572│41,803 │555,551,924,848,254,200 │
│25 │3,006│124,387 │52,898,840,308,130,480,000 │
│26 │24 │2754 │3,185,174,774,663,455 │
The search for problem? You too can participate: The OGR Project.
You can find all the known optimal Golomb rulers and more information on Wikipedia.
Why Golomb Rulers?
Golomb rulers have a wide variety of applications, including radio astronomy and information theory. In radio astronomy, when constrained to be lined up, telescopes collect more accurate information
if they are placed on the marks of a Golomb ruler. In information theory, Golomb rulers are used for error detection and correction.
3.2.2. How to solve the problem?
We follow again the classical The three-stage method: describe, model and solve.
3.2.2.1. Describe
What is the goal of the Golomb Ruler Problem? To find a minimal Golomb ruler for a given order
What are the decision variables (unknowns)? We have at least two choices. We can either view the unknowns as the marks of the ruler (and retrieve all the differences from these variables) or choose
the unknowns to be the differences (and retrieve the marks). Let’s try this second approach and use the efficient AllDifferent constraint. There are
What are the constraints? Using the differences as variables, we need to construct a Golomb ruler, i.e. the structure of the Golomb ruler has to be respected (see next section).
3.2.2.2. Model
For each positive difference, we have a decision variable. We collect them in an array
Figure An ordered sequence of differences for the Golomb ruler of order 4. illustrates an ordered sequence of differences for a Golomb ruler of order 4.
We want to minimize the last difference in
Figure The inner structure of a Golomb ruler of order 5. illustrates the structure than must be respected for a Golomb ruler of order 5. To impose the inner structure of the Golomb Ruler, we force
The inner structure of a Golomb ruler of order 5..
An easy way to construct these equality constraints is to use an index index going from , an index i to count the number of terms in a given equality and an index j to indicate the rank of the
starting term in each equality:
int index = n - 2;
for (int i = 2; i <= n - 1; ++i) {
for (int j = 0; j < n-i; ++j) {
Y[index] = Y[j] + ... + Y[j + i - 1];
3.2.2.3. Solve
Again, at this stage of our discovery of the library, we will not try to find a good search strategy to solve this model. A default basic strategy will do for the moment.
The next chapter Defining search primitives in constraint programming is entirely devoted to the subject of search strategies.
[Meyer-Papakonstantinou] Christophe Meyer and Periklis A. Papakonstantinou. On the complexity of constructing Golomb Rulers, Discrete Applied Mathematics, 57, pp 738–748, 2009. | {"url":"https://acrogenesis.com/or-tools/documentation/user_manual/manual/objectives/golomb_first_model.html","timestamp":"2024-11-11T20:34:54Z","content_type":"application/xhtml+xml","content_length":"25258","record_id":"<urn:uuid:e3e0cffe-f4fb-4cd9-84d8-fccfc78cc404>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00681.warc.gz"} |
Ordinal Numbers Videos - OrdinalNumbers.com
Ordinal Numbers Videos
Ordinal Numbers Videos – It is possible to enumerate infinite sets by using ordinal numbers. They can also be utilized as a tool to generalize ordinal figures.
The ordinal number is among the fundamental concepts in math. It is a numerical number that defines the place of an object in a set of objects. The ordinal number is typically an integer between one
and twenty. Ominal numbers have many uses however they are mostly utilized to signify the number of items in an agenda.
Charts as well as words and numbers can all be used to depict ordinal numbers. They may also be used to explain how pieces of a collection are arranged.
Most ordinal numbers are classified into one of the following two types. The Arabic numbers represent finite ordinals and transfinite ones are depicted using lowercase Greek letters.
As per the axioms that every set properly ordered must contain at the very least one ordinal. For example, the top score would be awarded to the first person in the class to receive it. The contest
winner was the one who received the highest grade.
Combinational ordinal number
The compound ordinal numbers, which can have multiple digits, are also referred to as. They are generated by multiplying an ordinal number by its final digit. They are most commonly used to rank and
date. They don’t provide a unique ending for each number, as with cardinal numbers.
To indicate the order in which elements are placed within the collection, ordinal numbers are used to indicate the order of elements within a collection. These numbers also serve to indicate the
names of the elements in a collection. Ordinary numbers are available in both normal and suppletive versions.
The addition of the suffix “-u” creates regular ordinals. After that, the numbers must be entered in words and then a colon is added. There are other suffixes available.For example, the suffix “-nd”
is used for numerals that end with 2, and “-th” is used for numbers ending in 4 or 9.
Suppletive ordinals are created by prefixing words with -u or -e. This suffix can be used to count words and is more extensive than the usual.
Limit of magnitude ordinal
Ordinal numbers that are not zero are the limit for ordinal numbers. Limit ordinal quantities have one disadvantage: they do not have an element with a maximum. They can be constructed by joining
non-empty sets without any maximum elements.
Transfinite recursion models also use limited ordinal numbers. Each infinite number of cardinals, according to the von Neumann model can also be considered an ordinal limit.
A number that has the limit is in reality equivalent to the total of the ordinals that are below it. Limit ordinal quantities can be quantified using math however they also be expressed as the form
of a series or natural numbers.
Data is organized by ordinal numbers. They are used to explain an object’s numerical place. They are frequently utilized in the fields of set theory, arithmetic, and other contexts. Although they are
in the same category, they are not considered natural numbers.
The von Neumann Model uses a well-ordered and ordered set. Let’s suppose that fy, which is a subfunction in an g’ function that is given as a singular function is the case. If the subfunction of fy
is (i I, II), and g meets the requirements that g is a limit ordinal.
Similar to that it is it is similar to the Church Kleene ordinal can be described as a limit ordeal. The Church-Kleene ordinal defines the term “limit ordinal” as a properly arranged collection of
smaller ordinals, and it has an ordinal that is not zero.
Stories that include examples of ordinal numbers
Ordinal numbers can be used to establish the hierarchy among entities or objects. They are crucial in organising, counting and ranking purposes. They can also be utilized to determine the order in
which items are placed and the location of objects.
The letter “th” is usually used to denote the ordinal number. Sometimes, however the letter “nd” could be substituted. Titles of books usually contain ordinal numbers too.
While ordinal numbers are frequently used in list format they can still be expressed in terms. They can also be found in acronyms and numbers. They are much more comprehensible than the cardinal
There are three types of ordinal numbers. They can be learned more through games, exercises, and other activities. Learning about them is an important part of improving your arithmetic ability.
Coloring is a fun and easy method to increase your proficiency. A simple marking sheet is a great way to record your progress.
Gallery of Ordinal Numbers Videos
Ordinal Numbers Song YouTube
Ordinal Numbers YouTube
Ordinal Numbers Jack Hartmann Ordinal Numbers Song YouTube
Leave a Comment | {"url":"https://www.ordinalnumbers.com/ordinal-numbers-videos/","timestamp":"2024-11-06T09:26:36Z","content_type":"text/html","content_length":"63915","record_id":"<urn:uuid:203a9cff-c98c-4ef6-9a32-94e65be60344>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00692.warc.gz"} |
Python Create List from 1 to N
In this article, we’ll explore different ways to generate a list of numbers from 1 to N using various techniques available in Python.
One of the simplest and most efficient ways to create a list of numbers from 1 to N in Python is by using the built-in range(start, stop, step) function that allows you to specify a start and stop
value, and even an optional step size, to generate a series of numbers within the specified range. By default, if you only provide a single argument to the range() function, it will be treated as the
stop value, while the start value will be assumed as zero.
n = 10
print(*range(1, n))
# 1 2 3 4 5 6 7 8 9
π Recommended: Python range() Function β A Helpful Illustrated Guide
In addition to the range() function, there are other approaches you can use to create a list of numbers, such as using list comprehensions, loops, or libraries like NumPy. We’ll delve into each of
these methods, discussing their advantages and use cases, to help you choose the most suitable option for your needs.
Creating a List in Python
In this section, you will learn how to create a list of numbers from 1 to N in Python. Lists are a fundamental data structure in Python, and creating a list can be achieved using various methods and
Python 2 vs Python 3
There are some differences between Python 2 and Python 3 when it comes to creating lists. Let’s explore these differences and how they affect list creation.
Python 2.x:
In Python 2, the range() function can be used to create a list of numbers directly. To create a list of numbers from 1 to N, pass the starting and ending values (N+1) as arguments to the range()
num_list = range(1, N+1)
Keep in mind that the second argument in the range() function is exclusive, which is why you add 1 to N.
Python 3.x:
In Python 3, the range() function creates an iterator, meaning you need to convert the output to a list explicitly. To create a list of numbers from 1 to N, use the list() function, as shown below:
num_list = list(range(1, N+1))
Again, the second argument of the range() function is exclusive, so you need to add 1 to N.
Creating an empty list in Python is quite straightforward. You can either use the list() function without any arguments or use empty square brackets []. Here’s an example:
empty_list = list()
# or
empty_list = []
Once you have an empty list, you can use for loops or list comprehensions to populate your list with elements.
π Recommended: How to Create an Empty List in Python?
Using Range Function
Syntax and Parameters
The range() function in Python allows you to generate a sequence of numbers within a specified range. There are three parameters you can provide when using the range() function:
1. Start: The beginning of the sequence, which is included in the output.
2. Stop: The end of the sequence, which is not included in the output.
3. Step: Determines the gap between each number in the sequence.
If you only provide one parameter, the function will generate the sequence from 0 up to the specified number (exclusive). Here’s an overview of the syntax and how the parameters work:
range(stop) # generates numbers from 0 to stop-1
range(start, stop) # generates numbers from start to stop-1
range(start, stop, step) # generates numbers from start to stop-1, with a step increment
Here are some examples of how to use the range() function to create lists of numbers in various setups:
Creating a list of numbers from 1 to N:
N = 10 # desired end value
numbers = list(range(1, N+1))
print(numbers) # Output: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Creating a list of even numbers from 1 to N:
N = 20 # desired end value
even_numbers = list(range(2, N+1, 2))
print(even_numbers) # Output: [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
Creating a list of numbers with custom step size:
start = 5
stop = 25
step_size = 3
custom_numbers = list(range(start, stop, step_size))
print(custom_numbers) # Output: [5, 8, 11, 14, 17, 20, 23]
By using the range() function and adjusting its parameters, you can easily generate tailored sequences of numbers for your specific needs.
Using For Loop
In Python, you can efficiently create a list of numbers from 1 to N using a for loop. First, let’s understand the basics of a for loop. A for loop is a control flow statement that iterates over a
sequence of elements, such as a list or a range of numbers.
To create a list of numbers, you can use the range() function, which returns a sequence of numbers between the given start and stop values. By default, it starts from 0 and increments by 1, but you
can customize both the start and increment values.
Here’s a step-by-step guide on how to create a list of numbers from 1 to N using a for loop:
1. Initialize an empty list called number_list. This list will hold the generated sequence of numbers.
number_list = []
2. Use the range() function to create a range of numbers from 1 to N+1. Remember to add 1 to N, as the range() function’s stop value is exclusive.
for number in range(1, N + 1):
3. Inside the for loop, append each number to number_list.
4. After the for loop, you will have a list of numbers from 1 to N stored in number_list.
Here’s the complete example to create a list of numbers from 1 to 10 using a for loop:
number_list = []
for number in range(1, 11):
When you run this code, it yields the following output:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
As you can see, the for loop allows you to create a list of numbers effortlessly. Keep in mind that this approach is suitable for smaller lists. However, for very large lists, consider using more
efficient methods like list comprehensions or built-in functions.
Using List Comprehensions
List comprehensions are a powerful and concise way to create lists in Python. They offer an elegant alternative to traditional methods like using a for loop or the map() function. In this section,
you will learn how to use list comprehensions to create a list of numbers from 1 to N.
To create a list of numbers from 1 to N using a list comprehension, you can combine an expression with a for loop inside square brackets. The general syntax for list comprehension is:
new_list = [expression for item in iterable if condition]
In this case, the expression is simply the variable representing the numbers, the iterable is a range from 1 to N+1, and there is no need for a condition.
Here’s an example of using a list comprehension to create a list of numbers from 1 to 10:
numbers = [i for i in range(1, 11)]
print(numbers) # Output: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
If you need to apply a transformation to each element in the list, you can modify the expression part. For instance, to create a list of squares of the numbers from 1 to N, you can use the following
squares = [i**2 for i in range(1, 11)]
print(squares) # Output: [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
You can also add conditions to filter items in the source iterable.
π Recommended: How to Filter a List in Python?
For example, to create a list of odd numbers in the range from 1 to N, you can use the following code:
odd_numbers = [i for i in range(1, 11) if i % 2 == 1]
print(odd_numbers) # Output: [1, 3, 5, 7, 9]
In summary, list comprehensions allow you to create lists in Python with elegant and concise code. By using an expression, an iterable, and an optional condition, you can easily create a list of
numbers from 1 to N according to your specific requirements.
Using Numpy Module
The NumPy module is a popular Python library for working with numerical data and arrays. In this section, we will discuss how to use the NumPy module to create a list of numbers from 1 to N.
Specifically, we will cover installation and the numpy.arange() function.
Before you can use NumPy, you need to install it. You can do this easily using pip:
pip install numpy
Once installed, import the NumPy module in your Python script:
import numpy as np
Numpy Arange
Now that you have NumPy installed and imported, you can create a list of numbers from 1 to N using the numpy.arange() function. This function generates an array of evenly spaced values within a given
Here’s an example of how you can use numpy.arange():
import numpy as np
N = 10
numbers = np.arange(1, N+1)
In the example above, we set the value of N as 10. The numpy.arange() function takes the start, stop, and a step value as arguments. By default, the step value is 1. So, the code generates an array
of numbers from 1 to 10 (inclusive).
You can also specify a custom step value. For example, if you want to generate a list of even numbers from 2 to 20 (inclusive), you can modify the numpy.arange() function as follows:
import numpy as np
N = 20
numbers = np.arange(2, N+1, 2)
In this example, we set the start value as 2, stop value as N+1 (21), and step value as 2. The resulting array will contain even numbers from 2 to 20.
Using the numpy.arange() function, you can effortlessly create arrays containing numbers from 1 to N with various step values according to your needs.
Creating a User-Defined Function
In Python, you can easily create a user-defined function to generate a list of numbers from 1 to n using the def keyword.
To begin, declare your function with a descriptive name, such as list_from_one_to_n. The main component of the function will be utilizing the range() function, which generates a sequence of numbers
between given start and stop points. In your case, the start point is 1 and the stop point is n+1. The +1 is essential because the range() function is exclusive of the stop point.
def list_from_one_to_n(n):
return list(range(1, n + 1))
In this example, your user-defined function takes a single argument, n, representing the desired end point of the list. The range() function is then used to generate the sequence, which is
subsequently converted to a list using the list() function.
Now that your function is defined, you can easily use it to create lists of any length. For example, if you want to generate a list from 1 to 10, simply call your function like this:
my_list = list_from_one_to_n(10)
If you want to further extend the functionality of your user-defined function, you can also add a parameter for an optional step value. This allows you to create a list with a custom interval between
numbers. By using the sum() function, you can then calculate the sum of all elements in the list.
def list_from_one_to_n_with_step(n, step=1):
return list(range(1, n + 1, step))
my_list = list_from_one_to_n_with_step(10, 2)
total_sum = sum(my_list)
Using While Loop
In Python, you can easily create a list of numbers from 1 to N using a while loop. A while loop allows you to repeatedly execute a block of code until a specific condition is met. In this case, the
condition will be that the current number is within your desired range of 1 to N.
To start, you will need to create an empty list that will store the numbers, and initialize a variable ‘x‘ to 1.
This variable will be used to keep track of the current number in the sequence:
my_list = []
x = 1
Next, set up the while loop with a condition that checks if x is less than or equal to N being the maximum number you want in your list:
while x <= N:
Inside the loop, append the current value of x to the list and increment it by 1:
x += 1
When the loop is complete, you will have a list containing numbers from 1 to N. Here is the complete code:
my_list = []
x = 1
N = 10
while x <= N:
x += 1
This code will output:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Frequently Asked Questions
How do you create a list of 1 to n in Python?
To create a list of numbers from 1 to n in Python, you can use a list comprehension. For example, if you want to create a list from 1 to 5, you can write:
n = 5
num_list = [x for x in range(1, n+1)]
This will output [1, 2, 3, 4, 5]. The range() function generates a sequence of numbers from the start value (1) to the stop value (n+1), with the default step size being 1.
How do you make a list from 1 to 10 in Python?
To create a list of numbers from 1 to 10 in Python, you can use the same list comprehension method as above. Just set n to 10:
n = 10
num_list = [x for x in range(1, n+1)]
This will output [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
How do you initialize a list from 1 to 100 in Python?
To initialize a list from 1 to 100, set n to 100 and use the list comprehension method:
n = 100
num_list = [x for x in range(1, n+1)]
This will output a list of numbers from 1 to 100.
How do I create a list between two numbers in Python?
If you want to create a list between two numbers, say a and b, you can use the range() function in a list comprehension:
a = 5
b = 15
num_list = [x for x in range(a, b+1)]
This will output [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15].
How can I generate a list of numbers with a specific step size in Python?
To generate a list of numbers with a specific step size, you can use the range() function with three arguments: start, stop, and step. For example, to create a list from 1 to 20 with a step size of
2, you can write:
start = 1
stop = 20
step = 2
num_list = [x for x in range(start, stop+1, step)]
This will output [1, 3, 5, 7, 9, 11, 13, 15, 17, 19].
How do you create a list with a specific number of elements in Python?
You can use the range() function in combination with the desired number of elements, n, to create a list:
n = 10
num_list = [x for x in range(1, n+1)]
This will output [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
Alternatively, if you want to create a list with a specific number of elements and a fixed value, you can use the * operator:
n = 5
value = 0
num_list = [value] * n
This will output [0, 0, 0, 0, 0].
If you want to create a list from 10 down to 1, check out this article: | {"url":"https://blog.finxter.com/python-create-list-from-1-to-n/","timestamp":"2024-11-02T21:28:13Z","content_type":"text/html","content_length":"91800","record_id":"<urn:uuid:800c308b-4efc-4105-9554-f7c4bd6c5f28>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00855.warc.gz"} |
Remark on the documentation:
Due to an issue in autoapi, the classes NeuralNetworkStatefreeOutputModel, NeuralNetworkInstationaryModel, NeuralNetworkInstationaryStatefreeOutputModel and FullyConnectedNN do not appear in the
documentation, see https://github.com/pymor/pymor/issues/1343.
Module Contents¶
NeuralNetworkModel Class for models of stationary problems that use artificial neural networks.
class pymor.models.neural_network.NeuralNetworkModel(neural_network, parameters={}, output_functional=None, products=None, error_estimator=None, visualizer=None, name=None)[source]¶
Bases: pymor.models.interface.Model
Class for models of stationary problems that use artificial neural networks.
This class implements a Model that uses a neural network for solving.
The neural network that approximates the mapping from parameter space to solution space. Should be an instance of FullyConnectedNN with input size that matches the (total) number of
parameters and output size equal to the dimension of the reduced space.
Parameters of the reduced order model (the same as used in the full-order model).
Operator mapping a given solution to the model output. In many applications, this will be a Functional, i.e. an Operator mapping to scalars. This is not required, however.
A dict of inner product Operators defined on the discrete space the problem is posed on. For each product with key 'x' a corresponding attribute x_product, as well as a norm method x_norm is
added to the model.
An error estimator for the problem. This can be any object with an estimate_error(U, mu, m) method. If error_estimator is not None, an estimate_error(U, mu) method is added to the model which
will call error_estimator.estimate_error(U, mu, self).
A visualizer for the problem. This can be any object with a visualize(U, m, ...) method. If visualizer is not None, a visualize(U, *args, **kwargs) method is added to the model which forwards
its arguments to the visualizer’s visualize method.
Name of the model.
_compute_solution(self, mu=None, **kwargs)[source]¶
Compute the model’s solution for parameter values mu.
This method is called by the default implementation of compute in pymor.models.interface.Model.
Parameter values for which to compute the solution.
Additional keyword arguments to customize how the solution is computed or to select additional data to be returned.
VectorArray with the computed solution or a dict which at least must contain the key 'solution'. | {"url":"https://docs.pymor.org/2021-1-0/autoapi/pymor/models/neural_network/index.html","timestamp":"2024-11-06T17:20:45Z","content_type":"text/html","content_length":"29422","record_id":"<urn:uuid:e02e4b9d-2de0-45db-8f3b-a0f1c34fb561>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00049.warc.gz"} |
To raise or not to raise interest rates?
The current economic outlook in the UK as well as global market conditions justify a policy rate of 0.5 per cent. In fact, if we are looking at the international environment for policy rate “hints”,
financial volatility rather than the federal funds rate is the variable to keep an eye on. Starting with small and carefully implemented Monetary Policy Committee (MPC) decisions, the policy rate
will probably settle to around 2.8%, which is much lower than the 5% policy rate commonly observed prior to the financial crisis. Costas Milas explains.
When will the Bank of England’s Monetary Policy Committee (MPC) raise the policy rate and should it follow the lead of the Fed when (and if) the latter decides to hike?
To answer these questions, we rely on the 2–year average CPI inflation forecast (based on market expectations and available from the Bank of England’s Inflation Report) and the Office for Budgetary
Responsibility (OBR) output gap measure (that is, GDP output relative to equilibrium); the latter proxies spare capacity in the UK economy. Both of these variables are plotted in Figure 1.
Figure 1: 2-year inflation forecast and output gap
Notice that the 2-year inflation forecast is just above the 2% target whereas output appears to be just below equilibrium (the output gap is “only” -0.6%). What do these economic variables imply for
monetary policy?
We run a so-called policy rule where the BoE policy rate responds to its past (the “smoothness” effect) and the variables reported in Figure 1. The idea is that when CPI inflation is expected to rise
above the 2% target (in which case the inflation gap is positive) and/or output exceeds equilibrium (in which case the output gap is positive), the MPC responds by raising the policy rate. Our policy
rate rule is in fact non-linear in the sense that the MPC’s response to inflation and output is allowed to be more aggressive when inflation and output are further away from the inflation target and
equilibrium output, respectively.
The policy rate rule is then augmented to account for a possible reaction either to the federal funds rate or to market expectations of near-term volatility conveyed by S&P 500 stock index option
prices (captured by changes in the Chicago Board Options Exchange VIX index). The idea is that either the MPC keeps an eye on policy setting behaviour by the Fed (that is, the MPC hikes when the Fed
hikes) or that the MPC lowers the policy rate when global market volatility (captured here by the VIX index) is on the rise. Figure 2 plots the BoE policy rate together with the policy rate implied
by alternative rules
Figure 2: BoE policy rate and policy rate implied by alternative rules
Notice the following:
1. The policy rule implied by inflation gap and output gap suggests a hike to 0.75% in the third quarter of 2015 whereas the policy rule implied by inflation gap, output gap and the federal funds
rate suggests a hike to 0.80% in the third quarter of 2015. Hence, both rules suggests that the MPC should have already hiked by now!
2. On the other hand, the policy rule implied by inflation gap, output gap and changes in the VIX index suggests that the policy rate should indeed be what we observe today, that, is, 0.50%
3. So, which rule should we “trust” more? The policy rate which responds to inflation gap, output gap and changes in the VIX index fits the data “best”.
4. In other words, if we are looking at the international environment for policy rate “hints”, financial volatility rather than the federal funds rate is the variable to keep an eye on!
5. Equally important, our preferred model (that is, the one accounting for financial volatility) suggests an equilibrium policy rate of 2.8%. In other words, it appears, in line with what most
policy commentators expect, that when the MPC decides to start hiking (with small and cautious steps), the policy rate will eventually settle to around 2.8%, that is, much lower than the 5%
policy rate we were used to observe prior to the financial crisis.
6. Needless to say, the main message of this blog piece is that a policy rate of 0.5% we observe today is about right if, and only if, one had to set policy by choosing among the rules described
above. We have argued that a model in which the MPC keeps an eye on all inflation gap, output gap and the VIX provides a good description of how the policy rate has evolved over time.
What we have not answered yet is when the MPC intends to hike. Will this happen around the turn of this year as suggested by BoE governor Mark Carney? Based on our preferred model, the answer depends
on (a) whether inflation expectations are revised upwards, (b) spare capacity in the economy disappears and (c) the recent global market volatility recedes quickly. If any of the above happens, then
it is more likely than not that we will see a policy rate hike in the last quarter of 2015 or the first quarter of 2016.
Costas Milas is Professor of Finance at the University of Liverpool. For further details on the regression model used, contact Costas.Milas@Liverpool.ac.uk
(Featured image credit: Policy Exchange CC BY 2.0) | {"url":"https://blogs.lse.ac.uk/politicsandpolicy/to-raise-or-not-to-raise-interest-rates/","timestamp":"2024-11-14T15:55:45Z","content_type":"text/html","content_length":"186667","record_id":"<urn:uuid:fc3b3bab-87c2-4778-be6e-358aa69ec94e>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00462.warc.gz"} |
Inverse Z-transform - Electronic Engineering (MCQ) questions & answers
1) Correlation is used for
1) Computation of average power in waveforms
2) Climatography
3) Identification of binary code word in PCM systems
4) Quantization
a. 1, 2 and 3 are correct
b. 1 and 2 are correct
c. 2 and 3 are correct
d. All the four are correct
Answer Explanation
ANSWER: 1, 2 and 3 are correct
No explanation is available for this question!
2) The nonlinear difference equations are solved using
a. Iterative method
b. Cobweb model
c. Phase diagram
d. Power series method
Answer Explanation
ANSWER: Phase diagram
No explanation is available for this question!
3) The partial fraction of x^2+1/x(x-1)^2 is
a. 1/ (x-1) + 2/(x-1)^2 - 1/x
b. 1/ (x-1) + 2/(x-1)^2 - 3/x
c. 1/ (x-1) + 2/(x-1)^2 - 3/x^2
d. 1/ (x+1) + 2/(x+1)^2 - 1/x
Answer Explanation
ANSWER: 1/ (x-1) + 2/(x-1)^2 - 1/x
No explanation is available for this question!
4) For a partial fraction method to be followed,
1) The degree of the numerator must be more than the degree of the denominator.
2) The factors formed for partial fraction are a combination of Linear factors and Irreducible quadratic factors.
3) The degree of the numerator must be less than the degree of the denominator.
4) The factors formed for partial fraction are a combination of Linear factors and Square roots.
a. 1, 2 and 3 are correct
b. 1 and 2 are correct
c. 2 and 3 are correct
d. All the four are correct
Answer Explanation
ANSWER: 2 and 3 are correct
No explanation is available for this question!
5) The factors formed for partial fraction are a combination of
1) Linear factors
2) Irreducible quadratic factors
3) Square roots
4) Cube roots
a. 1, 2 and 3 are correct
b. 1 and 2 are correct
c. 2 and 3 are correct
d. All the four are correct
Answer Explanation
ANSWER: 1 and 2 are correct
No explanation is available for this question!
6) Partial fraction method involves
a. Allotting coefficients
b. Dividing the numerator by denominator to get fractions
c. Dividing single fraction into parts
d. None of the above
Answer Explanation
ANSWER: Dividing single fraction into parts
No explanation is available for this question!
7) The region of convergence of x/ (1+2x+x^2) is
a. 0
b. 1
c. Negative
d. Positive
Answer Explanation
ANSWER: 1
No explanation is available for this question!
8) For an expanded power series method, the coefficients represent
a. Inverse sequence values
b. Original sequence values
c. Negative values only
d. Positive values only
Answer Explanation
ANSWER: Inverse sequence values
No explanation is available for this question!
9) The anti causal sequences have ______ components in the left hand sequences.
a. Positive
b. Negative
c. Both a and b
d. None of the above
Answer Explanation
ANSWER: Positive
No explanation is available for this question!
10) The several ways to perform an inverse Z transform are
1) Direct computation
2) Long division
3) Partial fraction expansion with table lookup
4) Direct inversion
a. 1, 2 and 3 are correct
b. 1 and 2 are correct
c. 2 and 3 are correct
d. All the four are correct
Answer Explanation
ANSWER: All the four are correct
No explanation is available for this question! | {"url":"https://www.careerride.com/mcq-tag-wise.aspx?Key=Inverse%20Z-transform&Id=21","timestamp":"2024-11-07T19:11:05Z","content_type":"text/html","content_length":"43480","record_id":"<urn:uuid:f6dfa269-f172-4e36-9fcf-e9913c6ace38>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00632.warc.gz"} |
Operator calculus for series solutions
A more direct approach to series solutions
In the previous post we found a solution to
using operator calculus, i.e. treating the differential operator D like a number and doing tricks with it. See the earlier post for a justification of why we can get away with unorthodox
We can generalize the method of the previous post to say that a solution to any differential equation of the form
is given by
In the previous post we had
but the method works more generally.
We then find a power series for 1/p(D), most likely by partial fraction decomposition, and apply the result to f(x). There may be a fair amount of labor left, but it’s purely calculus; all the
differential equation work is done.
Conceptually, this method subsumes other differential equation techniques such as undetermined coefficients and power series solutions.
Let’s use this method to find an approximate solution to
where ζ(x) is the Riemann zeta function.
In the previous post we worked out that
and so an approximate solution to our differential equation near 0 is
Numerically this works out to
If you want more terms, carry the series for 1/(D² + 7D + 12) out to more terms.
Related links
5 thoughts on “A more direct approach to series solutions”
1. OK, this is so simple, it’s like stealing. Mathematica makes the whole process dead easy, e.g. Series[Apart[1/(D^2 + 7 D + 12)], {D, 0, 2}] does all the heavy lifting.
2. You can also expand the series for 1/p(D) using long division, since p(D) is polynomial.
(Typo? You have in the second-to-last formula y(z) = … but no z’s in the right-hand side. Maybe you meant y(x) and meant to include some powers of x next to the coefficients.)
3. Thanks Michael. I automatically think “z” when writing power series but “x” or “t” when doing differential equations. Worlds collide. :)
4. Actually, I didn’t read carefully enough (sorry about that). The operator solution would be y(x) = (1/12 – 7/144 D + 37/1728 D^2 + …) ζ(x), but I doubt that series converges.
In any case, I get a different solution, namely, y = -0.0416667 + 0.0446706 x – 0.156347 x^2, to the IVP with y(0) = ζ(0)/12, y'(0) = -7 ζ'(0)/144. Mathematica code:
AsymptoticDSolveValue[{y”[x] + 7 y'[x] + 12 y[x] == Zeta[x],
y[0] == 1/12 Zeta[0], y'[0] == -7/144 Zeta'[0]},
y[x], {x, 0, 2}] // N
5. There must be some conditions under which such simple method works. As Michael commented, the series might be conditionally convergent only. | {"url":"https://www.johndcook.com/blog/2022/10/12/series-solutions/","timestamp":"2024-11-10T17:50:30Z","content_type":"text/html","content_length":"59556","record_id":"<urn:uuid:89ee9045-1533-42e2-85b0-e865e28a196f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00871.warc.gz"} |
Why is projectile motion parabolic? | Socratic
Why is projectile motion parabolic?
1 Answer
Projectile motion is parabolic because the vertical position of the object is influenced only by a constant acceleration, (if constant drag etc. is also assumed) and also because horizontal velocity
is generally constant.
Put simply, basic projectile motion is parabolic because its related equation of motion,
$x \left(t\right) = \frac{1}{2} a {t}^{2} + {v}_{i} t + {x}_{i}$
is quadratic, and therefore describes a parabola.
However, I can explain a bit more in-depth why this works, if you'd like, by doing a little integration. Starting with a constant acceleration,
$a = k$,
we can move on to velocity by integrating with respect to $t$. ($a = k$ is interpreted as being $a = k {t}^{0}$)
$v \left(t\right) = \int k \mathrm{dt} = k t + {v}_{i}$
The constant of integration here is interpreted to be initial velocity, so I've just named it ${v}_{i}$ instead of $C$.
Now, to position:
$x \left(t\right) = \int \left(k t + {v}_{i}\right) \mathrm{dt}$
$x \left(t\right) = \frac{1}{2} k {t}^{2} + {v}_{i} t + {x}_{i}$
Again, the constant of integration is interpreted in this case to be initial position. (denoted ${x}_{i}$)
Of course, this equation will probably look familiar to you. It's the equation of motion I described above.
Don't worry if you haven't learned about integration yet; the only thing you need to worry about is the power of $t$ as we move from acceleration to velocity to position. If $t$ was present in the
initial $a = k$ equation, with a degree other than $0$, (in other words, if $a$ is changing over time) then after integration we would end up with a degree different from $2$. But since $a$ is
constant, $t$ will always be squared in the equation for position, resulting in a parabola.
Since acceleration due to gravity is generally fairly constant at around $9.8 \frac{m}{s} ^ 2$, we can say that the trajectory of a projectile is parabolic.
A case where the path wouldn't appear to be parabolic is if an object were dropped, falling straight downwards, with no horizontal velocity. In this case the path looks more like a line, but it's
actually a parabola which has been infinitely horizontally compressed. In general, the smaller horizontal velocity, the more the parabola is compressed horizontally.
Impact of this question
39814 views around the world | {"url":"https://api-project-1022638073839.appspot.com/questions/why-is-projectile-motion-parabolic#103769","timestamp":"2024-11-02T08:37:54Z","content_type":"text/html","content_length":"38255","record_id":"<urn:uuid:5e4aa076-9339-4c3c-876d-0e13739c5b3e>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00551.warc.gz"} |
Changing the Hypothesis
In an earlier segment, you devised the null and alternate hypothesis as follows
Now, let’s change the hypothesis statement slightly. Let’s say the claim was made that the Search CTR is actually equal to or more than 40%. How would that change the hypothesis? And does the
decision change in this case? Let’s take a look at the next video to find out.
[Note: The Prof says “greater than 40%” instead of “equal to or greater than” 40% at 0:09. The alternate hypothesis is considered less than 40% therefore the null hypothesis should be greater than
equal to 40%.]
Thus as you saw, the new null and alternate hypotheses came out to be
This changed the entire hypothesis test from a two-tailed test to a one-tailed test. Therefore, when you computed the p-value you only took into consideration one tail and that value came out to be
0.0395. In the critical value terms as well, the necessary critical region only consists of 0.0395 area. Since it is less than the value of 0.05, you need to reject the null hypothesis in this case. | {"url":"https://www.internetknowledgehub.com/changing-the-hypothesis/","timestamp":"2024-11-08T05:14:10Z","content_type":"text/html","content_length":"79034","record_id":"<urn:uuid:2253fb32-8c6c-433b-82f9-7c4865a27fdc>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00640.warc.gz"} |
dueling dualities
Recent Comments
Every year, Edge founder John Brockman asks a big question and fields responses from big-thinking scientists and writers -- including the Spousal Unit (and 155 others, so it's not something you can
peruse quickly in full). This year's question is, "What scientific concept would improve everybody's cognitive toolkit?" I probably would have picked something pathetically obvious, like, say,
entropy, or confirmation bias -- because if more people were self-aware enough not to fall into knee-jerk denialism by cherry picking the "evidence" to support pre-existing frameworks, the world
would be a much better place. So I was impressed when New Scientist's Amanda Gefter went boldly into the arcane landscape of theoretical physics and chose the concept of duality. I'd relegated this
fascinating notion to the back burners of my mind for the last few years; it was nice to be reminded of just how seminal dualities are to modern physics, which thrives on uncovering hidden
No doubt a few of you are wondering just what I'm talking about. Dualities in physics refer to theoretical models that appear to be different but can be shown to describe exactly the same thing. It’s
a bit like how ice, water and vapor are three different phases of the same chemical substance, except a duality looks at the same phenomenon in two different ways that are inversely related. For
instance, subatomic particles like photons and electrons also exhibit wave-like behavior, but uncertainty dictates that we can’t see both of these aspects at the same time. The more accurately we
observe an electron's particle nature, the less we can see the wave-like nature of the object, and vice versa. This is a duality. The particle/wave identities are inversely related, different “faces”
of the same underlying reality.
String theory has its own set of dualities. As Amanda points out in her Edge essay, once upon a time there were several competing dimensional theories – five, in fact – and for a long time, no one
knew which version was correct. Then Edward Witten stepped in and demonstrated that the five different string theories weren’t so contradictory after all. He united all five under a single
theoretical umbrella that he dubbed M theory.
M theory indicates that mathematically, the five versions of string theory are merely five different ways of looking at the same thing. Each iteration of string theory is connected in some way to
another through various dualities, in an intricate web of interconnections that ultimately links all five to each other and to M theory. Stringy dualities can help physicists simplify difficult
calculations through a kind of “bait and switch” approach; that's what physicists mean when they talk about the "non-perturbative properties" of string theory.
Standard perturbation theory is a calculating method that makes approximations to get a rough answer and then refines it bit by bit, according to how given physical systems are known to behave.
Physicists employ this kind of approach all the time, say, when determining the trajectory of a satellite. They use Newton’s laws for the initial calculation, and then make small refinements by
calculating the effects of other factors that might influence the trajectory: pressure from the solar wind, for instance, or the effects of heating on one side of the satellite.
Ideally, there should be only small discrepancies between the original calculation and the subsequent refinements. But sometimes the systems are too complicated, such when physicists try to calculate
the highly turbulent air flow patterns of a tornado, or the properties of a rapidly rotating wormhole. Their perturbative approximations break down because they are dealing with sudden, large changes
to the original value, instead of small, predictable increments. They can't use a perturbative approach, because the numbers involved would be too large. Stringy dualities could help simplify these
messy calculations.
For instance, string theory has something called T-duality. It describes an inverse mirror relationship between two of the five permutations of string theories. Here’s how it works: if the radius (r)
of a circular area of space has a large value (r = 1000) in one of the five versions of string theory, then we are dealing with small amounts of energy and it is relatively simple to calculate.
However, that value will be inverted, and therefore small (r = 1/1000) in one of the other versions, and we would be dealing with large energies. This is a large perturbation, and much more difficult
to calculate. Because both theories describe the same underlying physics, scientists can use the dual theory where the value is small to calculate the quantity, then plug it back into the original
theory to complete the calculation.
Dualities are a type of symmetry. The concept is a natural extension of the many different kinds of symmetry we see every day around us in the physical world. For instance, rotate a snowflake by 60
degrees, and you’ll find it looks exactly the same. This is spatial symmetry. A second type of symmetry occurs when one shuffles a series of similar objects, like a shell game, where the player must
guess under which shell a marble might be found after the three are randomly shuffled in quick succession. Regardless of where the marble turns out to be, mathematically, there are six different ways
in which three identical objects can be interchanged. Quantum chromodynamics (QCD) describes the strong nuclear force and the way various quarks interact with each other. There are quarks of three
different “colors” that can be randomly interchanged, just like the shells, so those quarks share a similar internal symmetry.
Supersymmetry extends this interchangeable shuffling to incorporate all known subatomic particles. Not all potential couplings are feasible in the current standard model. Fermions (the particles
that make up matter) and bosons (messenger particles that carry fundamental forces) can’t mix at all because they have such vastly different properties. Supersymmetry allows us to interchange a
fermion with a boson through the existence of hypothetical super partners, called sparticles. Each fermion is paired with a super-boson partner, and each boson has a super-fermion partner. Now they
can be mixed via their super partners.
If it turns out to be true -- and thanks to the Large Hadron Collider, we could find evidence one way or the other in the coming years -- supersymmetry would provide physicists with a powerful
calculating tool for understanding the most elusive mysteries of our universe, since everything would be connected to everything else through various kinds of dualities.
Okay, so dualities are powerful tools -- so what? How does understanding that help the average man on the street? That's what I love most about Amanda's choice: she finds a way to bring this
high-falutin' concept down to earth:
In everyday language, duality means something very different. It is used to connote a stark dichotomy: male and female, east and west, light and darkness. Embracing the physicist's meaning of
duality, however, can provide us with a powerful new metaphor, a one-stop shorthand for the idea that two very different things might be equally true. As our cultural discourse is becoming
increasingly polarized, the notion of duality is both more foreign and more necessary than ever. If accessible in our daily cognitive toolkit, it could serve as a potent antidote to our typically
Boolean, two-valued, zero-sum thinking — where statements are either true or false, answers are yes or no, and if I'm right, then you are wrong. With duality, there's a third option. Perhaps my
argument is right and yours is wrong; perhaps your argument is right and mine is wrong; or, just maybe, our opposing arguments are dual to one another.
I like her analogy. I'm always looking for new ways to find some shreds of common ground with those whose views are different -- often radically so -- from my own... while still allowing both parties
to be true to their principles. It's harder than it seems. For example, if you find yourself arguing with someone who sees the world starkly in black and white, they'll accuse you of seeing
everything in tepid shades of gray, with the implicit assumption that this is an inferior worldview. Forget about trying to expand on their chosen metaphor and insisting your world is a kaleidoscope
of color; they'll just sneer at you for being trite (as if a black and white worldview is, like, deep?).
Personally, I view the world as emergent and complex: it may start with a few very simple rules, but things become tangled and murky and impossibly complicated in very short order. That is the
essence of the human condition. Embracing dualities is one way to deal with those inevitable complications and find some common ground. I'll let Amanda have the last word:
That's not to say that we ought to descend into some kind of relativism, or that there are no singular truths. It is to say, though, that truth is far more subtle than we once believed, and that
it shows up in many guises. It is up to us to recognize it in all its varied forms.
Love this article and the point it makes (though it took me several readings to grasp the duality concepts). I agree that the answer to our current quagmire of red state / blue state thinking may lie
in accepting the inevitable existence of both states (pardon the possible pun there). Meanwhile I need to get back to the fascinating book I am reading(The Calculus Diaries, no sucking up intended),
thanks again for the great post.
Arguably one of the best posts on this blog. On another note, The Calculus Diaries is the best calculus self help book out there. I only wish I'd known of the book before I took my calculus class
last semester. Great job on the blog too!
Best blog ever - from my point of view! Thank you for putting my thoughts into words. | {"url":"https://twistedphysics.typepad.com/cocktail_party_physics/2011/01/dueling-dualities.html","timestamp":"2024-11-11T11:54:26Z","content_type":"application/xhtml+xml","content_length":"43063","record_id":"<urn:uuid:27f558d6-95c5-4b33-8eb3-b9c22685f625>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00545.warc.gz"} |
How do you factor completely 5a^2 + b? | HIX Tutor
How do you factor completely #5a^2 + b#?
Answer 1
This expression cannot be simplified further.
Unless we are given extra information (e.g. #b = -5#), then this expression cannot be factored further.
If the second term was #b^2# rather than #b#, then it would be possible to factor using Complex coefficients:
#5a^2+b^2 = (sqrt(5)a)^2-(bi)^2 = (sqrt(5)a-bi)(sqrt(5)a+bi)#
Alternatively, if we were told that #b >=0# then we could write
#5x^2+b = (sqrt(5)a)^2-(sqrt(b)i)^2 = (sqrt(5)a-sqrt(b)i)(sqrt(5)a+sqrt(b)i)#
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To factor completely, you look for common factors, but in this case, there aren't any. So, the expression 5a^2 + b is already factored completely.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 3
To factor completely (5a^2 + b), there isn't a simple way to factor it further without additional information or specific instructions. If you're looking to factor it over the real numbers, and
assuming (a) and (b) are real numbers, then (5a^2 + b) is already in its simplest factored form. However, if (a) and (b) are variables or if you have more context or constraints, additional factoring
might be possible.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/how-do-you-factor-completely-5a-2-b-1-8f9af97f36","timestamp":"2024-11-06T23:19:05Z","content_type":"text/html","content_length":"577342","record_id":"<urn:uuid:d4870102-bab0-489a-a412-5be75609cef6>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00763.warc.gz"} |
Speed, Distance and Time, Physical Science Game
What is the relationship between speed, distance, and time? Well, speed equals distance divided by time. Speed measures how quickly an object moves from one place to another. It is equal to the
distance traveled divided by the time taken. Basically, we can calculate speed if we know the distance traveled and the time taken. See how the concepts relate? Now, think about an Olympic sprinter
and how time is critical to determining whether they are first or not. Play this science game to learn the relationship between speed, distance, and time.
Learn about the Dynamics of Speed, Distance, and Time through Engaging Math Games
The intricate relationship between speed, distance, and time has long been a cornerstone in mathematics, especially when it comes to teaching children the principles of motion. Understanding this
relationship is not just about solving math problems but also about grasping real-world applications. What better way to introduce these concepts than through interactive games designed specifically
for children? Dive into the fascinating world of speed distance time calculations and discover the joy of learning math the fun way.
Understanding the Fundamental Relationship
Before diving into the games, it's essential to grasp the basics. The relationship between speed, distance, and time can be summarized by the formula: Speed = Distance/Time. When any two of these
elements are known, calculating the third becomes straightforward. Whether you're calculating distance traveled during a summer road trip or determining the calculating speed of a racing car, this
formula is the key.
The Interactive Approach: Games to the Rescue!
Textbooks and classroom chalkboards can sometimes fail to capture the imagination of young minds. Here's where the magic of interactive math games steps in. Through gamified experiences, children can
visualize real-life scenarios, which make abstract concepts like distance and speed more relatable.
Take, for instance, the Speed, Distance, and Time Physical Science Game available on ESL Games Plus. This game offers players various challenges that test their understanding of the relationship
between speed, distance, and time. Not only do children get to apply their math skills, but they also engage in an adventurous journey, navigating various scenarios that require swift calculations
and critical thinking.
Broadening Horizons with More Math Games
If your young one has a budding interest in math and science, it doesn't stop at speed and distance calculations. ESL Games Plus Math Games is a treasure trove of engaging mathematical games that
cover a broad spectrum of topics. From basic arithmetic to more complex geometrical problems, the platform is a haven for any child eager to learn and play simultaneously.
Why Opt for Game-Based Learning?
While traditional methods of teaching have their merit, game-based learning offers a unique blend of education and entertainment. Concepts that might seem daunting or complex are broken down into
enjoyable challenges, making the learning process more digestible and fun. When children engage with concepts like calculating distance and calculating speed through games, they're more likely to
retain the information and apply it confidently.
In Conclusion
The world of math opens up in exciting and unexpected ways when children are provided with the right tools and resources. By combining education with entertainment through games, we ensure that
children not only learn but also develop a genuine love and curiosity for the subject. Dive into the world of speed, distance, and time today, and let the mathematical adventure begin!"
The Mechanics of Speed is an essential part of physics, and the speed of an object is one of the most significant factors influencing its performance. There are two ways to measure speed:
instantaneous speed and average speed. The latter is more important, as it explains the difference between the two. The average speed of a 100m sprint is a function of time, while the latter is the
inverse of the average.
Firstly, you should understand the difference between speed and velocity. While both terms have similar meanings, they are not the same thing. The former refers to the rate at which an object covers
a certain distance. The latter, however, has a direction. Likewise, the former is the rate at which an object changes its position relative to a given time. In simple terms, the greater the
displacement, the faster the object is moving.
Scientists study different types of speed. One type is linear speed, which describes the speed of an object travelling in a straight line. It can be calculated by applying Newton's laws of motion.
Usually, scientists study the speed of an object at a particular time, such as when a boy is walking. This speed would cause the boy to change his position by two meters in a second. Another type is
called instantaneous speed, which is measured at a specific moment. The difference between the two is that when an object falls, there is friction.
The average speed of an object can be calculated by dividing the distance it has covered by the time it took it to travel. However, the calculation of the velocity is more complicated. It can be done
using different methods, depending on the information provided by the motion. Distance-time graphs are also helpful for calculating the speed of an object. It is useful to know the average speed of
an object if you have a good idea of how much time it took it to travel.
https://www.eslgamesplus.com/wp-content/uploads/2022/04/moonshot-turaco-games.jpg 176 250 ESL Games PLUS https://www.eslgamesplus.com/wp-content/uploads/2018/10/esl.png ESL Games PLUS2022-06-03
18:00:212023-08-26 18:25:20Speed, Distance and Time, Physical Science Game | {"url":"https://www.eslgamesplus.com/speed-distance-and-time-physical-science-game/","timestamp":"2024-11-10T20:37:15Z","content_type":"text/html","content_length":"94695","record_id":"<urn:uuid:055f9381-9d37-4b7c-b4a3-5e84b2bed111>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00578.warc.gz"} |
In a △ABC,E is the mid-point of median AD. Then area of ( △BED)... | Filo
In a is the mid-point of median . Then area of ( is equal to times area of
Not the question you're searching for?
+ Ask your question
In ,
In is the median
So k =1
Was this solution helpful?
Found 8 tutors discussing this question
Discuss this question LIVE
15 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions from Areas of Parallelograms and Triangles
View more
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text In a is the mid-point of median . Then area of ( is equal to times area of
Topic Areas of Parallelograms and Triangles
Subject Mathematics
Class Class 9
Answer Type Text solution:1
Upvotes 133 | {"url":"https://askfilo.com/math-question-answers/in-a-triangle-mathrmabc-mathrme-is-the-mid-point-of-median-mathrmad-then-area-of","timestamp":"2024-11-11T22:43:56Z","content_type":"text/html","content_length":"216281","record_id":"<urn:uuid:8f49ad50-3a3a-4f45-8f5a-cab4f6171ce4>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00079.warc.gz"} |
Logical reasoning Percentage questions -CSEET
Q1) What percent of a day in 6 hours?
Q2) The population of of a town is 176000. If it increases at the rate of 5% per annum, what will be it’s population 2 years hence?
Q3) The length and breadth of a rectangle are increased by 20% and 30%. The area of the resulting rectangle exceeds the area of the original rectangle?
Q4) Sugar contain 5% water. What quantity of pure Sugar should be added to 10 litres of water to reduce this to 2%.
Q5) The price of a fan is decreased by 20%. As a result of which the sale increased by 40%. What will be the effect on the total revenue of the shop?
Q6) 45% of ? = 25% of 355
Q7) Out of 500 students of a school 35% students plays football, 25% plays cricket and 20% neither play football nor cricket. How many students play football and cricket ?
Q8) Fresh vegetables contains 70% of water and dry vegetables contain 20% of water. How much dry vegetables can be obtained from 100kg of fresh vegetables?
Q9) If Hari height is 30% less than that of Mithun. How much percentage Mithun height is more than that of Hari?
Q10) The population of a city is increased 5% ,7% and 11% in the last three years. What will be the present population if the population of a town is 2,40,000 three years ago?
Q11) 500 pencils were distributed equally among children in such a way that the number of pencils received by each child is 20% of the total number of children. How many pencils did each child
Q12) One type of liquid contains 20% water and the second type of liquid contains 35% water. A glass filled with 8 parts of the first liquid and 5 parts of the second liquid. The water percentage in
Q13) If the radius of the circle is increased by 5% then the area is increased by
Q14) In an election a candidate who got 25% of the total votes polled was defeated by his rival by 270 votes. Assuming that there were only 2 candidates in the election, the total number of votes
polled was
Q15) The population of a town is 16500. During the last 3 years, the population increased at the rate of 20% per year. The population 3 years ago was
Q16) The passing marks in an examination is 40%. If Rama gets 88 marks and is declared failed by 10 marks, then the maximum mark in the examination is
Q17) From the salary, Mubarak spent 15% for house rent, 5% for children’s education and 15% for Entertainment. Now he left with Rs.13,000. His salary is
Q18) In an examination 30% of the students failed in Science, 45% of the students failed in Maths and 25% of the students failed in both subjects. Find the % of the students passed?
Q19) The salary of a workers increased by 15% and decreased by 7%, What % change rises in his salary?
Q20) In an examination, 30% of the maximum marks required to pass. A student get 120 marks and failed by 90marks. Find the maximum marks
1. Chabahar Port, which was in the news recently, is located in which of the following countries? a. Pakistan b. Qatar c. UAE d. Iran ... | {"url":"https://www.arthacs.in/logical-reasoning-percentage-questions-cseet","timestamp":"2024-11-14T18:15:26Z","content_type":"text/html","content_length":"1050514","record_id":"<urn:uuid:eff5cbe9-a130-44e7-bd2b-92d4a424891e>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00425.warc.gz"} |
How Many Weeks Until Calculator - Weeks Between Dates - onlinecalculator.guide
How Many Weeks Until Calculator
If you’re not obtaining any way to calculate how many weeks until then for sure you can use our how many weeks until calculator and surely it will make your job easy. The only thing you need to do is
to put the start date and end date values and then press the calculate button to obtain your final answer.
Examples of How Many Weeks Until Calculator
Weeks Between Given Date Duration Formula
The basic formula to calculate how many weeks until is given by,
No. of Weeks = End Date - Start Date
Simply enter the end date and start date which are mentioned in the given formula and then determine how much the value is equivalent in terms of finding the final answer in a week. You can use this
basic formula and make all your conversions concerning weeks easily.
Finding Weeks Between Dates - How Many Weeks Until?
The steps to be followed to calculate how many weeks until are given below. Follow the detailed steps here and make your computations much quicker and easier.
• The first step is to obtain the date value you are about to convert into the week.
• Then also insert the value of the date you want to till.
• Then to get the answer, subtract the end date from the start date.
• After performing the required math, the resultant value is the time value you need.
Weeks Until Date Examples
Example 1:
How many weeks until 22nd December,2022 to 12th January,2023?
Given data,
Start Date = 22nd December,2022
End Date = 12th January,2023
To calculate how many weeks until with this simple formula:
No. of Weeks = End Date - Start Date
Now will substitute the start date and end date,
No. of weeks = End Date - Start Date
Number of weeks = 12th January,2023 – 22nd December,2022
So, several weeks = 3 weeks.
Example 2:
How many weeks from 3rd February,2023 to 8th May,2023?
Given data,
Start Date = 3rd February,2023
End Date = 8th May,2023
To calculate how many weeks until with this simple formula:
No. of Weeks = End Date - Start Date
Now will substitute the start date and end date,
No. of weeks = End Date - Start Date
Number of weeks = 8th May,2023 – 3rd February,2023
So, the number of weeks = 13 weeks 3 days.
Become familiar with many more concepts that are arranged efficiently on Online Calculator.guide and clear your concerns on many more calculations like this.
FAQs on How Many Weeks Until Calculator
1. How many weeks in a year?
There are 52 weeks in a year.
2. Number of weeks in 4 months?
There are 17.381 weeks in 4 months.
3. How many weeks until February 2023?
There are 8 weeks and 6 days until February 2023. | {"url":"https://onlinecalculator.guide/how-many-weeks-until-calculator/","timestamp":"2024-11-11T19:49:50Z","content_type":"text/html","content_length":"37911","record_id":"<urn:uuid:dbbf056f-fff4-4ace-be51-54c7b7c2491d>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00675.warc.gz"} |
POSCO Facility Installation
POSCO is the most profitable and environmentally conscious steel company in the world. As such, POSCO undertook the installation of the plasma lighting systems for a new warehouse in Pohang, South
Korea. The warehouse had dimensions of 165ft x 132ft for a total floor space of 21780sq ft and had a ceiling height of 25ft. The target floor lumination was 300 lux. The lighting simulation showed a
requirement of 61 units of 400 watt high pressure sodium lamps or 16 units of 700 watt plasma lamps, model PSH0731B to achieve the illumination targets. Following this installation, POSCO installed
several thousand plasma lamps in their integrated steel mills throughout South Korea.
Luminaire Positions
Simulated Illumination Levels
Initial Fixed Costs
400w High Pressure Sodium Lamps: 700w Plasma Lamps:
Part # of Units Cost Total Cost Part # of Units Cost Total Cost
Bulb 61 $50 $3050 Luminaire 16 $1500 $24000
Ballast 61 $75 $4575
Fixture 61 $100 $6100
Subtotal $13725 Subtotal $24000
Installation Labor Costs 61 $50 $3050 Installation Labor Costs 16 $50 $800
Initial Cost $16775 Initial Cost $24800
Installation labor costs is calculated at $50 per fixture, which takes into consideration the time to install, lift equipment and the wiring involved in installing these fixtures. Initial fixed costs
came out to be $24800 for the plasma lamps compared to $16775 for high pressure sodium lamps, a difference of $8025. The cost of plasma lamps might seem high at first, but keep in mind energy costs
and maintenance costs are far more important and that's where you'll see the tremendous savings with plasma lamps, as shown in the charts below. Note, the plasma fixtures prices are high estimates
and are available at a lower price. The high pressure sodium lamps are current retail prices.
Energy Costs
400w High Pressure Sodium Lamps: 700w Plasma Lamps:
Watts per Fixture 465w Watts per Fixture 730w
Color Rendering Index 20~25 Color Rendering Index 80~85
# of Units 61 # of Units 16
Power Consumption 28365w Power Consumption 11680w
Cost per kWh $0.15 Cost per kWh $0.15
Hours per Day 24 Hours per Day 24
Days per Year 365 Days per Year 365
Annual Power Consumption 248477kWh Annual Power Consumption 102316kWh
Annual Energy Costs $37271 Annual Energy Costs $15347
Annual Energy Savings $21924
The POSCO warehouse is a distribution facility and required the lights to be on continuously, 24 hours a day, 365 days a year. The 465w figure for the high pressure sodium lamps include the power
used by the ballast. The annual energy savings is obvious with plasma lamps. A total of $21924 is saved annually, which pays off for the price difference in the initial fixed costs in just over 4
months. Plasma lamps require less maintenance because the bulbs do not dim with age and the parts last longer, see the below chart for details.
Maintenance Costs
400w High Pressure Sodium Lamps: 700w Plasma Lamps:
Bulb Life 12000 hours Magnetron Life 30000 hours
Ballast Life 30000 hours
Hours on per Year 8760 Hours on per Year 8760
Hours per 10 Years 87600 Hours per 10 Years 87600
Bulb Replacements in 10 Years 7.3 Magnetron Replacements in 10 Years 2.9
Ballast Replacements in 10 Years 2.9
# of Bulbs Replaced 445 # of Magnetrons Replaced 46
# of Ballasts Replaced 178
Replacement Parts Cost $35624 Replacement Parts Cost $14016
Bulb Replacement Labor Costs $11132 Magnetron Replacement Labor Costs $1168
Ballast Replacement Labor Costs $4453
Ten Year Labor Costs $15585 Ten Year Labor Costs $1168
Ten Year Parts Cost $35624 Ten Year Parts Cost $14016
Ten Year Energy Costs $372716 Ten Year Energy Costs $153475
Ten Year Total Operating Costs $423925 Ten Year Total Operating Costs $168659
Annual Total Operating Costs $42392 Annual Total Operating Costs $16866
Annual Total Savings $25526
Price Difference Payoff Time 0.31 Years
Though high pressure sodium bulbs might be rated for 24000 hours, lumen depreciation within the first 12000 hours drops by 20%. Bulb orientation can also have a substantial effect on the bulb life
and brightness. We used a figure of $25 per part replacement for the labor costs. Once we factor in the maintenance costs, the savings are even greater. Total costs to operate the plasma lamps
annually is only $16866 compared to $42392 for high pressure sodium. This is an annual savings of $25526, which means the initial price difference of $58340 between plasma and metal halide will be
paid off in under 4 months!
Keep in mind these energy costs are calculated under the assumption energy prices remain the same during the 10 years. History has proven that energy costs will always rise with time so the payback
time might be quicker than the figures shown above. 445 high pressure sodium bulbs and the 13350mg of mercury that those bulbs would have contained were saved from the landfills by going with plasma
lamps. Note the lower color rendering index of 20~25Ra with the high pressure sodium lamps compared to 80~85Ra with the plasma lamps. The savings and benefits are obvious.
Picture of Final Installation | {"url":"https://lgelighting.com/posco.facility.asp","timestamp":"2024-11-02T18:10:20Z","content_type":"text/html","content_length":"19974","record_id":"<urn:uuid:1ddaf8a2-7ef1-44f2-b47b-a4aff572ce53>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00307.warc.gz"} |
Naive Bayes Classifier in Machine Learning
In Machine Learning and Data Science field, researchers have developed many advanced algorithms like Support Vector Machines, Logistic Regression, Gradient Boosting, etc. These algorithms are capable
enough to produce very high accuracy. But among these advanced ones, there exists an elementary and Naive algorithm, known as Naive Bayes.
In English, the "Naive" word is used for a person or action lacking experience, wisdom, or judgment. This tag is associated with the Naive Bayes algorithm because it also makes some silly assumptions
while making any predictions. But the most exciting thing is it still performs better or equivalent to the best algorithms. So let's learn about this algorithm in greater detail.
Key takeaways from this blog
• What is Bayes theorem?
• Why Bayes theorem in Machine Learning?
• Naive Bayes examples for single and multiple features.
• How does Naive Bayes handle the non-categorical features?
• What is Gaussian Naive Bayes?
• Python-based implementation
• Advantages and disadvantages of Naive Bayes.
• Industrial applications of Naive Bayes.
• Possible Interview Questions on this topic.
Let's start without any further delay.
Note: In our probability blog, we discussed Baye's theorem.
Why Bayes Theorem in Machine Learning?
Let's try to find the answer to this question. In the case of supervised learning, we have input features and the corresponding output labels present with us. We try to make our machines learn the
relationship between input features to the output variable. Once this learning is done on training data, we can use this model to make predictions on test data. Let's represent this supervised
approach in a Bayesian format.
We know the feature values for test data and want to see the output label if that particular values of features are present. We can also say we want to predict the chances of occurrence of any label
if the values of features are already known. That is precisely the same as p(label | feature). From the Bayes theorem, to know the value of p(label | feature), we must know the likelihood
probabilities p(feature | label) and the prior probabilities, p(label) and p(features). But, do we really have these values?
Yes! from the training data. That's the whole crux of supervised learning. Right?
Let's discuss the terms on the right side of the formulae above. The likelihood term p(feature | label) says the probability of that feature if we already know the label. And for training data, we
know the label for each sample. Also, the prior probabilities p(label) and p(feature) can be calculated from the training data. So ultimately, we will have the posterior probability that we wanted to
Naive Bayes example
Single Feature
Let's take the example of a football game, and below is the data which says if humidity is high or normal, then play happens; otherwise, if humidity is low, the play does not occur. Straightforward
data, correct?
Suppose we want to make a machine learning model, which receives the feature value of humidity and tries to predict whether the play will happen or not. So suppose we know the humidity as Normal, and
let's calculate the chances of play happening, i.e., p(play = Yes | humidity = Normal).
From the data:
p(humidity = Normal | play = Yes) = Probability of humidity to be normal when we know that play happened is 1/2, as there are two cases when the play happened. One is when humidity = High and
Humidity = Normal. So the probability of humidity = Normal will be 0.5.
p(play = Yes) = Probability of game to have happened will be 2/3 as we have 3 samples out of which 2 say the play happened as "Yes".
p(humidity = Normal) = probability of humidity to be normal will be 1/3 as there are three samples out of which 1 instance is of humidity = Normal.
The same can be understood from the data intuition as well. Similarly, p(play= No | humidity = Normal) can be calculated, and it will be 0.
Multiple Feature
The example we saw above had just one feature, the humidity value. But in the practical, real-life data, we will have multiple features. In such a case, the same equation can be represented as,
Expanding the above equation,
This calculation is extensive and expansive in terms of computations. That's where the Naïve Bayes algorithms come with their "naive" assumption. It considers every feature is independent of each
other, which means one feature is unaffected by the occurrence of any other feature.
This assumption is impractical in real-life scenarios. Suppose we are recording features from our cell phone's battery, the terminal voltage, current, and temperature. Using these, we estimate
whether battery health is "good" or "bad". It's a classification problem statement, and algorithms like SVM, logistic regression, etc., will learn mapping functions between feature sets and labels.
While learning, these algorithms will not assume that current is independent of the Voltage feature, which is oblivious. Voltage produces the current, which results in the heating of the battery, so
temperature varies accordingly. So all features are dependent.
But Naive Bayes algorithm assumes that all three features are independent, and the occurrence of one feature is totally unaffected by the occurrence of others. This is not true in practical cases.
Still, this algorithm produces quite fascinating results. According to Naïve Bayes, the above equation is modified as follows:
This assumption drastically reduces the computation cost and also delivers good accuracy. Also, in the Naive Bayes algorithm, we don't calculate the denominator of the above equation to save some
computation power. Because, for all the classes, the denominator term remains the same and does not make any contribution in segregating different classes. Hence, we can represent the above equation
How does Naive Bayes handle the non-categorical features?
What if the features are continuous numerical values (non-categorical). How will we estimate probabilities in such a case?
Here comes the theory of Probability Distribution Function (PDF). We need to estimate the probabilities for numerical variables using this concept.
Suppose we assume that the PDF is a gaussian or normal distribution. In that case, we need to calculate the mean(μ) and standard deviation (σ) of that feature,and then for any value of feature x, the
probability f(x) can be calculated from the below equation.
Gaussian Naïve Bayes
In the case of gaussian distribution assumption, we call Naïve Bayes, Gaussian Naïve Bayes algorithm. We can choose other probability distribution functions, like Bernoulli, Binomial, etc., to
estimate probabilities. Among all these, gaussian is the most famous one as most of the real-world sensors produce data in the gaussian distribution format. If we look at the image below closely,
most people in the gym use 15–20 Kg weights. Later on, the usage of other weights decreases subsequently, like what we observe in the gaussian distribution.
Too much theory, let's implement it on a practical machine learning application, i.e., Sentiment analysis.
Can we solve regression problems using Naive Bayes?
Although the native form of the Naive Bayes algorithm is made to solve the classification problems only, a team tried to solve the regression problem using Naive Bayes, and the work can be found here
. They concluded that they did not achieve good results and suggested that Naive Bayes is suitable for classification algorithms.
Naive Bayes Sklearn In Python
Let's train a Naive Bayes algorithm on the famous Iris dataset. The objective of our algorithm would be to look at the available features like Sepal/Petal length/width and classify flowers into three
categories of Setosa, Versicolor, and Virginica.
Step 1: Importing required libraries
Standard libraries like numpy (for numeric operations), pandas (for managing the data), and matplotlib (to visualize the analysis of the dataset) would be required.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Step 2: Loading the dataset and visualizing scatter-plot of features
The famous iris dataset comes with the Scikit-learn library. If we print the features present in the dataset, the output is ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width
(cm)']. Also, the target variable y can take three values [0, 1, 2] corresponding to three flower classes.
from sklearn import datasets
iris = datasets.load_iris() #loading dataset
X = iris.data[:,] #input
y = iris.target #target
print("Features : ", iris['feature_names'])
iris_dataframe = pd.DataFrame(data=np.c_[iris['data'],iris['target']],
grr = pd.plotting.scatter_matrix(iris_dataframe, c=iris['target'],
# Features : ['sepal length (cm)', 'sepal width (cm)', 'petal # length (cm)', 'petal width (cm)']
The code above will plot the diagram shown below.
Step 3: Visualizing the correlation and checking the assumption of Naive Bayes
We can plot the correlation matrix using the seaborn library.
import seaborn as sns
dataplot = sns.heatmap(iris_dataframe.corr(), annot=True)
We can see that the features are highly correlated. But as per Naive Bayes assumption, it will treat features as entirely independent of each other. Based on this, our algorithm will compute the
following probability for all three flower classes.
Step 4: Split the dataset
Now we are in a position where we can segregate our training and testing data. For that, we can use the inbuilt function of traintestsplit from Sklearn.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.25,random_state=0)
Step 5: Fit the model
Now, assume our data follows Gaussian distribution and import the Gaussian Naive Bayes GaussianNB model from Sklearn. Let's fit this model into our training data.
from sklearn.naive_bayes import GaussianNB
NB = GaussianNB()
NB.fit(X_train, y_train)
Hurrah! We have our model ready with us now.
Step 6: Evaluate the model
We must know that we have solved a classification problem to evaluate the model. Hence some standard metrics for evaluating our model can be accuracy, precision, recall, F-1 Score, etc. A detailed
list can be found here in this blog. As we know, all these matrices can be calculated by having the confusion matrix ready with us. So let's plot the confusion matrix.
Y_pred = NB.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, Y_pred)
df_cm = pd.DataFrame(cm, columns=np.unique(y_test), index = np.unique(y_test))
df_cm.index.name = 'Actual'
df_cm.columns.name = 'Predicted'
sns.heatmap(df_cm, annot=True)# font size
From the matrix, we can see that the model quickly achieved 100% accuracy on test data. This is really amazing. It started with the silly assumption but provided whatever we wanted. That's the main
reason why this algorithm is famous among researchers.
Advantages of Naive Bayes
• Computationally Simple: This classifier is computationally very simple compared to algorithms like SVM and XGBoost.
• When the independent nature of features becomes true in data, the Naive Bayes algorithm performs the best and can beat the accuracy of Logistic Regression.
• Multi-class prediction: It can provide us with the probabilities do different classes and hence can give multi-class predictions.
• It works best in the case of Categorical variables compared to the numerical ones.
Disadvantages of Naive Bayes
• Independent treatment of features: One of the most significant disadvantages of this algorithm is that it treats every feature independently, which can be cumbersome most of the time. Advanced
classifiers like SVMs learn the relationship between the feature and target variables and the relationship among features. This is not the case with Naive Bayes.
• Zero frequency case: Probably, some values from a categorical variable do not appear in training data. In such a case, Naive Bayes will assign a zero probability to these samples when they appear
in test data. This is called "Zero Frequency" and can be cured using smoothing techniques, like Laplace estimation.
• Bad Estimator: Naive Bayes is considered to be a lousy probability estimator, so we should not take the probability predicted by this algorithm too seriously.
Industrial Applications of Naive Bayes
This algorithm is naive, but because of its performance, several industrial applications are based on this.
• Recommendation System: Collaborative Filtering and the Naive Bayes algorithm form the recommendation system that can recommend some products/movies/songs to users.
• Multi-class prediction: One of the most significant advantages of Naive Bayes is its applicability in predicting the probability of multiple classes.
• Real-time prediction: Because of its lesser complex nature, it can give prediction results very fast. Many advanced algorithms like SVM XGBoost are too heavy sometimes for smaller classification
tasks. In such a scenario, Naive Bayes performs well.
Possible Interview Question
The theory of this algorithm is really important as it is rare to find full-scale projects in resumes related to this algorithm. Interviewers want to test the probability knowledge of this topic.
Some popular questions can be:
• Explain the Bayes Theorem and associate it with the Naive Bayes algorithm.
• Why is this algorithm called Naive?
• How do these probability calculations hold for continuous numerical variables?
• What is Gaussian Naive Bayes algorithm? What variants can be possible or why Gaussian?
In this article, we learned about the Naive Bayes classifier in detail. We drew the intuition from Baye's theorem of probability and stated its correlation with the Naive Bayes algorithm. We also saw
the python implementation of this algorithm using sklearn. In the last, we discussed the advantages, disadvantages, and industrial applications. We hope you enjoyed the article.
Enjoy learning, Enjoy algorithms! | {"url":"https://www.enjoyalgorithms.com/blog/naive-bayes-in-ml/","timestamp":"2024-11-12T13:58:55Z","content_type":"text/html","content_length":"90732","record_id":"<urn:uuid:452700d0-c53b-49eb-b3b7-5bb46c8cfcf1>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00722.warc.gz"} |
DIY Chevron Dress Tutorial: Part One - Drafting the Pattern
DIY Chevron Dress Tutorial: Part One - Drafting the Pattern
April 18, 2013
This is the tutorial for how to draft the pattern to make a Chevron Dress. [The rest of the Tutorial can be found here.]({% post_url 2013-04-17-chevron-dress-tutorial %})
As I mentioned in my preamble, I screwed up when I originally drafted this pattern a little.
I made the top way too boxy at first. Once I tried it on I realized I needed to bring the sides in at more of an angle to make the dress hang right. Therefore a few of the pictures below will not
quite look how they’re supposed to, but I’ve compensated for that with some edits, as you’ll see below.
I strongly recommend drafting this pattern BEFORE you buy your fabric for this project, because you will need to know how big your skirt pattern is to know how wide of fabric you will need to buy.
Once you have the whole thing drafted out, you can experiment with laying it out and decide how much fabric you’ll need to buy. Also note that this pattern is drafted with negative ease, so you’ll
want to buy knit fabric or something with some stretch to it.
Here’s how to draft it, it’s really quite simple.
You will need:
-a flexible tape measure -a yard stick -a ruler -some large sheets of paper (I used old newsprint to recycle, you can use anything you want.) -a marking pen of some kind -a large cutting mat or any
kind of ruled base will be extremely helpful, but not absolutely necessary.
DIY Chevron Dress: Bodice
Step One: Take some Measurements
Using your flexible tape measure, take the following measurements on whoever the dress is intended for:
1. From one shoulder just to one side of the neck to just underneath the breast on that same side. Add 2” to this measurement and call it H.
2. Bust measurement (all the way around the torso at the fullest part of the bust, this can and should be taken with a bra on, if you normally wear bras.) Subtract 2 from this measurement, then
divide by 4, round it to the nearest 1/4”, and call it W.
3. Around the base of the neck. Divide this by 4, round it to the nearest 1/4”, and call it N.
4. Around the chest, just below the breast. Write this measurement down for your skirt later, then subtract this measurement from W. Divide it by 4, round it to the nearest 1/4”, and call it Z.
That’s it.
Step Two: Draw a Square Line
Before you start drafting, it’s important to draw a long line across the bottom of your sheet to which you can square all your other measurements to. This is why it’s handy to have a ruled cutting
mat, you can draw this line edge to edge and line it up on the mat before doing your other lines and they’ll be square.
Step Three: Draw a Rectangle that is H x W
Step Four: Place a Mark N from the Left
Step Five: Draw in the Neckline
Decide how deep you want the neckline to be, as well as the shape. For my dress, I wanted a deep V-Neck, so I drew a line 10” from the top and connected a line from N that was fairly steep. You could
do a shallower V by drawing a line only 5” from the top, or create a rounded neckline by curving the connecting line.
Step Six: Draw in Guidelines for Shoulder and Sleeve:
Next you will create two dotted lines to help finish up the pattern. First draw a horizontal dotted line 2” down from the top. Then, draw a vertical dotted line 1” to the right of the rectangle. Note
the intersection of these points. (Also, ignore the top line that I scribbled out. That was also a mistake.)
Step Seven: Draw the Shoulder Line
Draw a slightly curving line from N to the intersection of the two dotted lines.
Step Eight: Draw in the Armhole
Mark a line 10” from the top of the rectangle (8” from the horizontal dotted line). Draw a curve from the intersection of the dotted lines to this point to create the arm hole.
Step Nine: Shape the Bodice
This is the step I neglected to do originally. Draw a line Z in from the right side of the rectangle. Draw a straight line from the bottom of the armhole to Z.
Step Ten: Add Seam Allowances, Cut Out, Mark Fold Line
You now have your basic bodice-front pattern. Add a 1/4” seam allowance (or larger, if you prefer) around the whole thing and cut it out. Mark on “Fold” on the side of the pattern opposite the
sleeve, to remind yourself to cut it out on a fold.
Step Eleven: Create the Bodice Back
You will create the back of the bodice the exact same way, except you will change Step Five as follows: Draw a line that is 1” from the top and create a sharply curved (almost right angle) line from
N to this point.
All other steps will be the same.
DIY Chevron Dress: The Skirt
The skirt for this dress is a simple half circle skirt. I have seen some tutorials online for this, but in my opinion they all over complicate the issue. People who are afraid of math need not fear,
there is a very simple formula you can use. Here are simple steps for drafting the half circle skirt:
Step One: Calculate Circumference
In the fourth measurement I had you take before, I told you to write down the measurument you took from around the chest (below the bust).
All you need to do is take that measurement and divide it by 3.14 (aka, Pi.)
This will give you a number with a lot of decimals, just round of all those decimals to the nearest .25 (ie 9, 9.25, 9.5, or 9.75) and label this number L
For example, my chest measurement was 31”. 31 divided by 3.14 is about 9.87. I rounded this to 9.75 to have 9.75” = L
Step Two: Plot the Waist Line
You will need a much larger piece of paper for this, I just pieced some newspaper together with tape. I wanted a fairly short skirt, so my overall paper length was a little over 30” long.
From one corner of your paper, mark a line that is L from the top and L from the left.
Step Three: Plot the Hemline
Decide how long you want your skirt to be, then mark a line that far from L on both the top and side of your paper.
Step Four: Draw the curves
Tie a long, inelastic piece of string (such as dental floss) to a marker. Holding your marker at one of the L points, hold the other end of the string taught at the top left corner of the paper.
Keeping the marker straight, draw an arc from L to L.
Repeat this process from your hemline points.
Step Five: Add Seam and Hem Allowances, Cut it out, Mark the Fold
Using the same string and marker process, draw an arc 1/4” (or your chosen seam allowance) closer to the corner of the paper from the waist line mark fro the seam allowance.
Then, draw an arc that is 1/2” (or however large you’d like your hem to be) further from the corner on the outside of the whole skirt deal. Cut out these outer lines for your pattern. One one edge of
the skirt, mark “Fold” to remind yourself to put that side on a fold when cutting it out.
When all is said and done, you should have three pattern pieces. Easy easy easy. | {"url":"https://stefmakes.art/posts/2013-04-18-chevron-p1/","timestamp":"2024-11-06T05:00:06Z","content_type":"text/html","content_length":"15663","record_id":"<urn:uuid:9fba9ca5-c718-46af-a7e5-722c92c81a9b>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00633.warc.gz"} |