What is the cross sum?
The cross sum of a number is the sum of its digits. For example, the cross sum of 12 is 1 + 2 = 3.
What does sum mean in algebra?
: the aggregate of two or more numbers or quantities taken with regard to their signs (as + or −) according to the rules of addition in algebra the algebraic sum of −2, 8, and −1 is 5 — compare
arithmetical sum.
What does digit sum mean?
The digit sum of a number, say 152, is just the sum of the digits, 1+5+2=8. The digit sum is the end result of repeatedly computing the sum of the digits until a single digit answer is obtained. The
digit sum of a number n is denoted as DigitSum(n).
What is digit sum used for?
The digit sum – add the digits of the representation of a number in a given base. For example, considering 84001 in base 10 the digit sum would be 8 + 4 + 0 + 0 + 1 = 13. The digital root –
repeatedly apply the digit sum operation to the representation of a number in a given base until the outcome is a single digit.
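The two definitions above translate directly into code. This is a small Python sketch (the function names are mine, not a standard library API):

```python
def digit_sum(n, base=10):
    """Sum of the digits of n in the given base, e.g. 84001 -> 8+4+0+0+1 = 13."""
    total = 0
    n = abs(n)
    while n:
        total += n % base
        n //= base
    return total

def digital_root(n, base=10):
    """Repeatedly apply digit_sum until a single digit remains."""
    while n >= base:
        n = digit_sum(n, base)
    return n
```

For example, `digit_sum(84001)` gives 13, and `digital_root(152)` collapses 152 to 8 in one step.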
How do you find the sum of a number?
The formula to calculate the sum of integers is given as, S = n(a + l)/2, where, S is sum of the consecutive integers n is number of integers, a is first term and l is last term.
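The formula S = n(a + l)/2 is easy to sanity-check in code against a brute-force sum (the function name here is illustrative):

```python
def sum_consecutive(a, l):
    """Sum of the consecutive integers a, a+1, ..., l via S = n(a + l) / 2."""
    n = l - a + 1            # number of terms
    return n * (a + l) // 2  # n(a + l) is always even, so // is exact
```

For instance, summing 1 through 100 with the formula gives the same 5050 as adding the hundred terms one by one.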
What is the digital sum of 6?
The addition of 7 to 8 results in a sum of digits of 6, and so on down to a sum of digits of 2.
Number Repeating Cycle of Sum of Digits of Multiples
4 {4,8,3,7,2,6,1,5,9}
5 {5,1,6,2,7,3,8,4,9}
6 {6,3,9,6,3,9,6,3,9}
7 {7,5,3,1,8,6,4,2,9}
What is algebraic sum with example?
Algebraic sum, a summation of quantities that takes into account their signs; e.g. the algebraic sum of 4, 3, and -8 is -1.
What is the sum of number?
The sum of two numbers is the answer you get when you add them both together. So the sum of 5 and 4 is 9.
What is the sum of 7?
Number Repeating Cycle of Sum of Digits of Multiples
6 {6,3,9,6,3,9,6,3,9}
7 {7,5,3,1,8,6,4,2,9}
8 {8,7,6,5,4,3,2,1,9}
9 {9,9,9,9,9,9,9,9,9}
Which is an example of a numbers cross sum?
Here’s an example: the cross sum of 10 is 1 + 0 = 1; the cross sum of 275 is 2 + 7 + 5 = 14 (this happens to be the answer to the question above). Note that what is called the “cross sum” here is more commonly known as the “digit sum”.
How is the cross product of A and B calculated?
We can calculate the Cross Product this way: a × b = |a| |b| sin(θ) n. |a| is the magnitude (length) of vector a. |b| is the magnitude (length) of vector b. θ is the angle between a and b. n is the
unit vector at right angles to both a and b.
What is the cross product of two vectors?
The Cross Product a × b of two vectors is another vector that is at right angles to both: And it all happens in 3 dimensions! The magnitude (length) of the cross product equals the area of a parallelogram with vectors a and b for sides.
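The component formula for the cross product can be sketched in a few lines of plain Python (no vector library assumed; function names are mine):

```python
import math

def cross(a, b):
    """Cross product of two 3-dimensional vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    """Euclidean length of a vector."""
    return math.sqrt(sum(c * c for c in v))
```

As the text says, the length of `cross(a, b)` equals the area |a||b|sin(θ) of the parallelogram: for a = (2, 0, 0) and b = (0, 3, 0) the cross product is (0, 0, 6), with length 6.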
What kind of answer does the cross product give?
The Cross Product gives a vector answer, and is sometimes called the vector product. But there is also the Dot Product which gives a scalar (ordinary number) answer, and is sometimes called the scalar product. Question: What do you get when you cross an elephant with a banana? Answer: |elephant| |banana| sin(θ) n
Class 17: RSA day
Overview: RSA is the most famous modern encryption scheme. It has a great brand. I want to take a moment and demonstrate that key lengths must be dealt with more carefully in the public-key world.
We've just set up El Gamal and Diffie-Hellman where we can see that the secrecy entirely depends on the difficulty of a general solution to the discrete log problem.
Our next public-key encryption scheme is also the most famous: RSA. Its secrecy depends on the difficulty of integer factorization. It is easy to multiply two large numbers (can be done in \(\mathcal{O}(\log{n})\) time) but expensive to reconstruct those factors given just the product (\(\mathcal{O}(n)\) time, actually a bit better than this; here \(n\) is the size of the numbers, NOT the number of digits).
Now a linear-time algorithm sounds fast to me. But we're talking about giant numbers, loop through every number from 1 to \(10^{600}\) and it'll eat up some CPU time. But something like AES is a far
more complex process to start unraveling than just finding a factor.
I wanted to say this just to demonstrate that the security factor (block-length in AES and log of the modulus in RSA/DH) must be significantly larger when deploying public-key crypto vs symmetric
crypto. Just remember this: 128-bits of security for AES is equivalent to 3,248-bits of security in RSA/DH.
Overview: Let's set up the math that we need to really pull off the RSA encryption scheme.
The world's most popular public-key encryption scheme is the RSA algorithm (named after the three authors which means that the NSA probably calls it something else in-house).
The idea is the following:
• Generate two large primes (not just any will do): \(p, q\)
• Compute \(n = p \cdot q\)
• Compute \(\Phi(n) = (p-1) \cdot (q-1)\)
• Select an \(e \in \{2, \cdots, \Phi(n)-1\}\) such that GCD(\(e,\Phi(n)\))=1
• Compute private key \(d\) such that \(e \cdot d \equiv 1 \mod{\Phi(n)}\)
• Publish Public key \((n, e)\)
Now to encrypt, use the public key \((n,e)\): for a secret message \(m\), the ciphertext is \(C := m^{e} \mod{n}\).
The decryption of \(C\) is just \(C^{d} \mod{n} \equiv m^{e \cdot d} \mod{n} \equiv m \mod{n} \). This is because the size of the cyclic subgroup is \( \Phi(n) \) and \(e \cdot d = 1 + \Phi(n) \cdot
k\) for some \(k\).
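The recipe above can be walked through end to end with artificially small textbook primes (the values here are only for illustration and far too small for real use):

```python
from math import gcd

# Toy parameters -- real keys use primes hundreds of digits long.
p, q = 61, 53
n = p * q                 # modulus: 3233
phi = (p - 1) * (q - 1)   # Phi(n): 3120
e = 17
assert gcd(e, phi) == 1   # e must be invertible mod Phi(n)
d = pow(e, -1, phi)       # private exponent: e*d = 1 (mod Phi(n)), Python 3.8+

m = 65                    # message encoded as an integer < n
C = pow(m, e, n)          # encrypt with the public key (n, e)
recovered = pow(C, d, n)  # decrypt with the private key d
assert recovered == m
```

The two `assert` lines mirror the math in the notes: decryption works precisely because \(e \cdot d = 1 + \Phi(n) \cdot k\).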
Trench Generation: Generate an RSA key pair using the command ssh-keygen -t rsa (it will ask you questions; the first is where to save it, so pick something in your current folder like rsa_key).
Read the keys: Now use the command openssl rsa -text -noout -in rsa_key to see the various parameters in the private key file. There are 8 numbers stored there: the modulus \(n\), the public exponent
\(e\), the private exponent \(d\), \(p\), \(q\), \(d \mod{p-1}\), \(d \mod{q-1}\), and \(q^{-1} \mod{p}\). Those last three are there to make the exponentiation faster by computing mod p-1, q-1 and
using the Chinese Remainder Theorem to reconstruct the plaintext.
Confirm: confirm that \(n = p \cdot q\), \(e \cdot d \equiv 1 \mod{(p-1)\cdot (q-1)}\), and that \(p\) and \(q\) are prime.
You can now see the hard problem that an attacker must solve: given \(n, e, C\), find \(d\) or \(m\). The reason this is a factoring problem is that knowing \( \Phi(n) \) is enough to take an inverse for \(e\), and knowing \(\Phi(n)\) is equivalent to knowing \(p\) and \(q\) just given \(n\).
HEY PYTHON READ THIS FOR ME:
Here is the fastest way to consume an RSA PEM key that I found. I was able to import rsa directly, but some of you will probably have to pip install rsa or something like that.
Overview: Finally we'll both encrypt with RSA and break it (when the problem is small enough). This helps you really understand all of the mechanics and begin to appreciate some subtlety that we'll
explore in the next module.
In public key encryption anyone that wants to receive a message must publish a public-key. Anyone that wants to send that person a message uses it to encrypt a message. RSA is a bit different than DH
in this sense because the sender of the message doesn't need to create a public/private key-pair in order to participate.
Mini-Bletchley Contest
Let's have a three-stage contest. Form 4-person teams: team A, team B, team C, etc. On each team we will have two message senders and two message crackers: SenderA1, SenderA2, CrackerA1, CrackerA2, SenderB1, etc.
For this competition we will have a three stage race.
• Stage 1: Each sender must generate an RSA key-pair with modulus-length 240 bits. The crackers should prepare some utility functions / plan to help them decrypt when the time comes.
• Stage 2: Each sender publishes their public key \(N, e\). The CrackerA1 begins work factoring SenderB1's public modulus (use SAGE or Wolfram Alpha) and CrackerB1 tries to break the public key of
SenderA1. Meanwhile the senders use their partner's public key to encrypt a message of no more than 26 characters and broadcast it. SenderA1 generates a ciphertext for SenderA2, and SenderA2
generates a ciphertext for SenderA1.
• Stage 3: The first team to successfully read all four messages wins.
Some details you'll want:
• Generating weak RSA keys: openssl genrsa -out rsa240.pem 240
• Reading RSA keys: openssl rsa -text -noout -in rsa240.pem
• From a factorization to a private key: if you calculate \(N = p \cdot q\) and know \(e\) then you have to calculate \(d = e^{-1} \mod{(p-1)\cdot(q-1)}\).
• From message to integer (SAGE): Integer(msg.encode('hex'), 16)
• From integer to message (SAGE): format(intmessage, 'x').decode('hex') (if you get odd length, inspect hex, and pad with a 0)
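The "factorization to private key" bullet, plus the message/integer conversions, translate to Python 3 like this (`pow(e, -1, m)` computes a modular inverse on Python 3.8+; the function names are mine, and the byte-level conversion is the Python 3 analogue of the SAGE hex tricks above):

```python
def private_key_from_factors(p, q, e):
    """Given a factorization N = p*q and public exponent e, recover d."""
    return pow(e, -1, (p - 1) * (q - 1))

def msg_to_int(msg):
    """ASCII message -> integer (big-endian bytes)."""
    return int.from_bytes(msg.encode(), "big")

def int_to_msg(m):
    """Integer -> ASCII message; to_bytes handles the odd-length padding."""
    return m.to_bytes((m.bit_length() + 7) // 8, "big").decode()
```

For the toy factors 61 and 53 with e = 17 this recovers d = 2753, and the two conversion helpers round-trip any short ASCII message.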
Some "Common" RSA Attacks
• Common Modulus (using the same \(N\) but issuing several private keys) leads to an easy factoring method
• Moduli that share factors by accident. It turns out that a surprising fraction of real-world RSA moduli share factors because of laziness/restrictions in random-number generation. If you GCD a bunch of moduli you might just find factors.
• Small values of \(d\) lead to the Wiener Attack
• Related messages: If \(M_1 = f(M_2)\) for a known polynomial \(f\) and you know \(C_1, C_2\), then take the GCD of \(g_1(x) = (f(x))^e - C_1\) and \(g_2(x) = x^e - C_2\); you have a good chance of finding the common factor \(x - M_2\)
• Timing attacks if you can get them to decrypt many messages, you can deduce the timing of the repeated squaring involved in computing pow(ct, d, n)
• Cyclic Group Check: when setting up, it's possible that \(e^k \equiv 1 \mod{\Phi(n)}\) for a small \(k\), in which case repeated encryption of the message can just break it.
• Related Primes, if you know something about the relationship of the two prime factors to each other attacks exist (Fermat Factoring).
• COPPERSMITH ATTACKS!!! So there is a whole series of attacks related to small values of \(d\), small values of \(e\), knowing some bits of \(d\), some bits of \(p\), some bits of \(q\), or some
bits of \(M\) (the message). These are important enough that I'm going to take a couple of lectures to teach you lattices, lattice reduction, and the flavors of Coppersmith.
• Blind Signature Forgeries (next set of notes)
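Several of these attacks are easy to demonstrate in miniature. Here is a sketch of the shared-factor GCD attack using toy primes (values chosen only for illustration):

```python
from math import gcd

# Three toy moduli; n1 and n2 were (carelessly) built from the same prime p.
p, q1, q2 = 101, 103, 107
n1, n2, n3 = p * q1, p * q2, 109 * 113

# Pairwise GCDs expose the shared factor with no factoring effort at all:
assert gcd(n1, n2) == p   # shared prime falls out immediately
assert gcd(n1, n3) == 1   # unrelated moduli reveal nothing
```

Given the shared factor, each modulus factors completely (q1 = n1 // p), and the earlier "factorization to private key" step finishes the break.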
Symmetry in Plants: Phyllotaxis
1. Observing Spiral Patterns > 2. Spiral Applet > 3. Dynamical Model Applet > 4. Cylindrical Spirals > 5. Cylindrical Spirals Applet > 6. The Farey Tree and the Golden Mean
The Dynamical Model Applet
In the previous section, you explored all the Logarithmic spirals that mathematics can create. You have seen that, of these spirals, only very specific ones are usually exhibited by plants. Two
fundamental questions arise:
• Why do plants favor spiral configurations at all?
• Why, among all possible logarithmic spirals, do plants favor those whose parastichy numbers are successive Fibonacci numbers?
Even though this phenomenon has been observed for hundreds of years and studied by many botanists, mathematicians and crystallographers, only recently has there been the beginning of an answer to these two questions. My collaborators Pau Atela, Scott Hotton and I have a partial answer to these questions. The goal of this tutorial is to give you an idea of this answer.
In our research we have been looking at a model by the French physicists Stéphane Douady and Yves Couder. Incidentally, Douady came to the field of phyllotaxis one day when, coming back from the market with a cauliflower, he was intrigued by the fractal nature of this plant. But after some time looking at it, his attention turned to the magic of parastichies on each of the little flowerets. He and Couder came up with a simple model for the formation of these spiral patterns, which they implemented both physically and on the computer. This model, based on assumptions made by the botanist Hofmeister, spontaneously generates the Fibonacci spiral patterns. The Dynamical Model applet is our version of Douady and Couder's model. The three basic principles of Hofmeister on which this model is based are the following:
• A new dot is formed periodically in the place around the central disk where it is least crowded by the other dots.
• Once they form, the dots move radially away from the center.
• As time increases, the rate at which new dots move away decreases
The dots represent the centers of microscopic bulges of cells (called primordia in botany) that occur at the growing tip (called the apical meristem in botany) of the plant. These bulges eventually differentiate to become the leaves, petals, sepals, flowerets or scales of the plant.
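Hofmeister's three principles translate into a very crude simulation. The sketch below is my own simplification, not Douady and Couder's actual model: the growth factor and angular resolution are made-up parameters, and "least crowded" is approximated by scanning candidate angles on the rim.

```python
import math

def simulate(steps=100, drift=1.02, candidates=360):
    """Toy phyllotaxis model following Hofmeister's principles:
    each step, place a new dot on the unit circle at the angle farthest
    from all existing dots, then push every dot radially outward."""
    dots = []  # list of (angle, radius) pairs
    for _ in range(steps):
        if not dots:
            theta = 0.0
        else:
            best, best_d = 0.0, -1.0
            for k in range(candidates):
                t = 2 * math.pi * k / candidates
                x, y = math.cos(t), math.sin(t)
                # squared distance from this rim candidate to the nearest dot
                d = min((x - r * math.cos(a)) ** 2 + (y - r * math.sin(a)) ** 2
                        for a, r in dots)
                if d > best_d:
                    best, best_d = t, d
            theta = best
        dots = [(a, r * drift) for a, r in dots]  # principle 2: radial drift
        dots.append((theta, 1.0))                 # principle 1: new dot at rim
    return dots
```

Running it and plotting the dots shows spiral parastichies emerging, with the angle between successive dots settling toward a fixed divergence angle, which is exactly the behavior the applet tasks below ask you to observe.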
Task 15: Turn on the Dynamical Model applet and let it run for a while. Describe what you see, using if need be the running counter of generations on the upper left corner. Do you notice any change
in the divergence angle? You can stop the program (without quitting it) by clicking in its window. Click again to restart it.
Task 16: How do you see the three basic principles transpire in the model as it is running? For the first principle, you can play my daughter's favourite game: try guessing where the next primordium
is born. Count 1 point each time you get it right...
Task 17: Speculate on what the three principles of Hofmeister tell us about the way plants develop. Do these assumptions seem realistic?
Task 18: Running the Dynamical Model applet program once again if necessary, stop it (by clicking the mouse on it) each time you recognize some new spiral pattern. Click on "draw". This allows you to connect the dots with the cursor (keep the mouse button down). To change the color (to draw another set of parastichies), click on "color". Reproduce these drawings in your notebook, writing the generation numbers (appearing in the upper left corner) at which you stopped to draw for each one. To restart the program, click on draw again, and then click once more anywhere on the body of the window. Running up to 200 generations, what different parastichy configurations did the program show you? It might take you two or three runs to get all the different configurations: they are not very obvious at first. You may use the Spiral applet in parallel with the Dynamical Model applet to help you recognize some of the patterns.
Task 19: Running the Dynamical Model applet again, click on the "show graph" button. The top graph shows how the divergence angle a varies with time, starting at 180°. The bottom graph shows how the "growth rate", which is just our old friend r from the previous sections, varies. Letting the program run to generation 200, describe what happens to the divergence angle a and to the growth rate r. How does this correspond to your statements in Tasks 15 and 16?
Maths Tutors | Maths Tutoring in Australia
Get Online Maths Tutoring with Our Expert Maths Tutor
Are you struggling with Maths concepts? Finding yourself stuck in this complex field? Wish you could get help from an experienced and qualified Maths tutor?
We offer 1-to-1 private Maths tutoring online lessons starting at A$35/hour.
Our expert online Maths tutors will help you excel in Maths.
What sets Wiingy apart
Expert verified tutors
Free Trial Lesson
No subscriptions
Sign up with 1 lesson
Transparent refunds
No questions asked
Starting at $28/hr
Affordable 1-on-1 Learning
Top Maths tutors available online in Australia
2055 Maths tutors available
Responds in 10 min
Message Now
Star Tutor
Math Tutor
1+ years experience
Expert private online math tutor with 1 year of tutoring experience. Helps students learn a new concept, homework help, and test prep. Also provides test-taking strategies and boosts confidence.
Responds in 7 min
Message Now
Star Tutor
Math Tutor
2+ years experience
Passionate 1st grade math tutor for students in IN, US, AU, CA, and UK. MS Math, with 2+ years of tutoring experience. Provides excellent personalized tutoring, exam prep, and homework help.
Responds in 1 min
Message Now
Star Tutor
Math Tutor
2+ years experience
Mathematics graduate and online Math tutor with 2+ years of tutoring experience. Provides customized lessons, test prep, and assignment help in Algebra, Calculus, Geometry, Statistics and more.
How Wiingy works
Start learning with a Wiingy Maths tutor in 3 simple steps
• Tell us your need
New to a topic or struggling with one? Falling behind in class or looking to ace your exams? Tell us what you need.
• Book a free trial
We will find the perfect tutor for your need and set up your first free trial lesson. With our Perfect Match Guarantee you can be assured you will have the right tutor for your need.
• Sign up for lesson
Like the tutor, sign up for your lessons. Pay only for the time you need. Renew when you want.
Try our affordable private lessons risk-free
• Our free trial lets you experience a real session with an expert tutor.
• We find the perfect tutor for you based on your learning needs.
• Sign up for as few or as many lessons as you want. No minimum commitment or subscriptions.
In case you are not satisfied with the tutor after your first session, let us know, and we will replace the tutor for free under our Perfect Match Guarantee program.
The word "mathematics" derives from the ancient Greek word "mathema". It is an area of study that includes numbers, quantities, shapes, formulas, and more. Topics in the maths subject include algebra, calculus, trigonometry, geometry, and many others.
Learning maths is useful not only in school but also in everyday life. Performing basic mathematical calculations is very important in day-to-day life. Maths for elementary school is an introduction
to the basic concepts of math. Then, from Year 1 to Year 5, maths becomes a bit more progressive and fun with concepts like subtraction, addition, multiplication, etc. Then, from Year 6 to Year 12,
mathematics becomes more and more complex and interesting with concepts like trigonometry, geometry, etc.
What Is the Role of Maths Tutor?
Wiingy’s math tutor will help your child to understand different math topics, from the basics to more complex concepts and theories. A math tutor becomes a learning centre for your child. They will
conduct one-on-one sessions with your child to understand their weaknesses and strengths and build on them. Getting the perfect maths tutor is now just a few clicks away. Find specialist maths tutors
for primary school students as well as high school students.
Why Is It Important to Work With a Maths Tutor?
Maths can become an interesting subject or someone’s worst nightmare, if not taught and guided properly. Unlike traditional maths teachers, who often follow a rigid curriculum, our tutors provide a
personalized approach to teaching that is different from classroom teaching.
So, getting your child started with math tutoring is an intelligent move as it provides a personalised approach to teaching that is different from classroom teaching. Wiingy’s maths tutor will help
your child, from Year 1 to 10, to understand the basic and more complex topics and theories of the maths subject. A maths tutor understands your child’s weaknesses and strong points and helps them
excel in the subject.
You can opt for either offline tutoring or online tutoring for maths, depending on your preference. Your child’s math tutor also offers homework help. They plan math lesson plans suitable for the
children and are comfortable with their understanding speed.
Why Do Students Need a Maths Tutor?
Having a maths tutor helps your child to understand both the primary and complex theories and problems of maths. Wiingy’s maths tutoring offers individualized attention and tailored learning
experiences, ensuring that students receive the support they need to overcome learning obstacles and achieve their academic goals.
The best math tutor is not only involved in teaching mathematics but also helping with homework and school projects too. An online maths tutor is becoming more popular nowadays as it is more
convenient for both parents and children to learn mathematics from the comfort of their home. You can also opt for offline tutoring services by looking for a “Maths tutor near me” and finding the
best maths tutor for your child in your locality.
How Do I Find a Good Maths Tutor?
In order to find a good maths tutor for your child, first decide if you want an online maths tutor in Australia or an offline maths tutor. If you are opting for an offline maths tutor, you can simply
look up “best maths tutor near me” and find the top maths tutor in your area. You can contact the math tutoring services and sign up for a maths demo class to get to know the tutor and their teaching
style before signing up for full-time maths tutoring.
If you want to opt for an online maths tutor, you can search for “best online maths tutor” or “best maths tutor online”. Contact these services and sign up for a demo class before signing up for
full-time math tutoring.
Sign up for Wiingy’s maths classes today and get your child the best maths tutor to help them with their school work and teach maths in a simple and easy way. Wiingy’s maths classes teach math to
students from Year 1 to 10 and also help with homework and school projects. Wiingy’s maths tutor will do a thorough analysis of your child’s weak and strong points and will guide them accordingly.
What Are the Advantages of Studying With Wiingy’s Math Tutor?
With Wiingy’s math tutor, your child will have interactive sessions and a more personalised approach to math. Wiingy’s maths tutor will not only help your child with understanding complex maths
theories and problems, but will also help them with finishing school homework and projects. With Wiingy’s Math tutoring, you can rest assured that your child will have a better understanding of math
along with completing schoolwork.
FAQs on Private Maths Tutors in Australia
What is the math tutoring fee?
The cost of a math tutor is determined by the number of sessions you want, the year level you want to enrol your child in, and the subjects you want them to study. Also, the price for maths tutoring
for competitive exams will differ from the price for maths tutoring for high school exam preparation.
Sign up for Wiingy’s maths demo class today and get your child the best maths tutor to help them with their schoolwork and teach them the subjects and topics of middle school in a simple and easy
way. A demo class will also give you information about the tutor and the pricing. Wiingy’s math tutoring classes teach math to students from Year 1 to 10 and also help with homework and school
projects. A Wiingy’s math tutor will do a thorough analysis of your child’s weaknesses and strong points and will guide them accordingly.
Why should I hire a math tutor?
Hiring a maths tutor for your child is a wise and smart decision as a maths tutor will help understand your child’s weaknesses and strong points and help them understand the basics and fundamentals
of mathematics in a friendly environment. You can also opt for online math tutoring as it is becoming more popular nowadays. Private tutoring allows you to learn maths at a more relaxed pace and
within the comfort of your home or place of choice.
How to improve your math skills?
Practice, practice, and practice! If you want to excel in mathematics, then practising is your only option. But before that, it is also important to have a thorough understanding of all the theories
and concepts of mathematics. Once your fundamentals are clear, you can understand all the complex theorems. Now, all that is left to do is practice. Try to do maths every day for at least an hour or
two, and never skip this.
Sign up for Wiingy’s maths classes and practise with Wiingy’s best maths tutor. Wiingy’s math tutor will understand where you are lagging and what your strong point is and will build on that. With
Wiingy’s math tutor, you can have a personalised approach to studying math.
What are the best maths classes near me?
Wiingy’s math classes are the help you can give to your child today. With Wiingy’s online maths tutor, students from Year 1 to 10 can get help to understand the basic and more complex topics and
theories of the maths subject. A maths tutor understands your child’s weaknesses and strong points and helps them excel in the subject.
Your child’s math tutor also offers homework help. They plan math lesson plans in line with the pace of understanding of your child.
Why should I choose Wiingy math tutors?
When it comes to maths tuition, no one does better than Wiingy! Match with a private maths tutor for 1-on-1 learning sessions at your preferred time and pace. The best thing about working with an
experienced tutor is that they can pay individualized attention to specific areas of the subject that you’re struggling with. This could include maths methods, mathematical concepts, and more.
Explore tutoring for related subjects
svd(x: array, /, *, full_matrices: bool = True) → Tuple[array, array, array]
Returns a singular value decomposition (SVD) of a matrix (or a stack of matrices) x.
If x is real-valued, let \(\mathbb{K}\) be the set of real numbers \(\mathbb{R}\), and, if x is complex-valued, let \(\mathbb{K}\) be the set of complex numbers \(\mathbb{C}\).
The full singular value decomposition of an \(m \times n\) matrix \(x \in\ \mathbb{K}^{m \times n}\) is a factorization of the form
\[x = U \Sigma V^H\]
where \(U \in \mathbb{K}^{m \times m}\), \(\Sigma \in \mathbb{K}^{m \times n}\), \(\operatorname{diag}(\Sigma) \in \mathbb{R}^{k}\) with \(k = \operatorname{min}(m, n)\), \(V^H \in \mathbb{K}^{n \times n}\), and where \(V^H\) is the conjugate transpose when \(V\) is complex and the transpose when \(V\) is real-valued. When x is real-valued, \(U\), \(V\) (and thus \(V^H\)) are orthogonal, and, when x is complex, \(U\), \(V\) (and thus \(V^H\)) are unitary.
When \(m \gt n\) (tall matrix), we can drop the last \(m - n\) columns of \(U\) to form the reduced SVD
\[x = U \Sigma V^H\]
where \(U \in \mathbb{K}^{m \times k}\), \(\Sigma \in \mathbb{K}^{k \times k}\), \(\operatorname{diag}(\Sigma) \in \mathbb{R}^{k}\), and \(V^H \in \mathbb{K}^{k \times n}\). In this case, \(U\) and \(V\) have orthonormal columns.
Similarly, when \(n \gt m\) (wide matrix), we can drop the last \(n - m\) columns of \(V\) to also form a reduced SVD.
This function returns the decomposition \(U\), \(S\), and \(V^H\), where \(S = \operatorname{diag}(\Sigma)\).
When x is a stack of matrices, the function must compute the singular value decomposition for each matrix in the stack.
The returned arrays \(U\) and \(V\) are neither unique nor continuous with respect to x. Because \(U\) and \(V\) are not unique, different hardware and software may compute different singular vectors. Non-uniqueness stems from the fact that multiplying any pair of singular vectors \(u_k\), \(v_k\) by \(-1\) when x is real-valued and by \(e^{\phi j}\) (\(\phi \in \mathbb{R}\)) when x is complex produces another two valid pairs of singular vectors of the matrix.
• x (array) – input array having shape (..., M, N) and whose innermost two dimensions form matrices on which to perform singular value decomposition. Should have a floating-point data type.
• full_matrices (bool) – If True, compute full-sized U and Vh, such that U has shape (..., M, M) and Vh has shape (..., N, N). If False, compute on the leading K singular vectors, such that U has shape (..., M, K) and Vh has shape (..., K, N), where K = min(M, N). Default: True.
out (Tuple[array, array, array]) – a namedtuple (U, S, Vh) whose
• first element must have the field name U and must be an array whose shape depends on the value of full_matrices and contain matrices with orthonormal columns (i.e., the columns are left singular vectors). If full_matrices is True, the array must have shape (..., M, M). If full_matrices is False, the array must have shape (..., M, K), where K = min(M, N). The first x.ndim-2 dimensions must have the same shape as those of the input x. Must have the same data type as x.
• second element must have the field name S and must be an array with shape (..., K) that contains the vector(s) of singular values of length K, where K = min(M, N). For each vector, the singular values must be sorted in descending order by magnitude, such that s[..., 0] is the largest value, s[..., 1] is the second largest value, et cetera. The first x.ndim-2 dimensions must have the same shape as those of the input x. Must have a real-valued floating-point data type having the same precision as x (e.g., if x is complex64, S must have a float32 data type).
• third element must have the field name Vh and must be an array whose shape depends on the value of full_matrices and contain orthonormal rows (i.e., the rows are the right singular vectors and the array is the adjoint). If full_matrices is True, the array must have shape (..., N, N). If full_matrices is False, the array must have shape (..., K, N), where K = min(M, N). The first x.ndim-2 dimensions must have the same shape as those of the input x. Must have the same data type as x.
Changed in version 2022.12: Added complex data type support.
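NumPy implements this API, so the shape contract above can be checked directly (the array values here are arbitrary random data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 6, 3))   # a stack of four 6x3 matrices: M=6, N=3, K=3

# Full SVD: U is (..., M, M), S is (..., K), Vh is (..., N, N)
U, S, Vh = np.linalg.svd(x, full_matrices=True)

# Reduced SVD: U is (..., M, K), Vh is (..., K, N)
Ur, Sr, Vhr = np.linalg.svd(x, full_matrices=False)

# The reduced factors reconstruct x: U @ diag(S) @ Vh for each matrix in the stack
reconstructed = Ur @ (Sr[..., None] * Vhr)
```

The singular values come back sorted in descending order, matching the contract on the second element of the returned tuple.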
How do you code a Gauss-Seidel method in Matlab?
Use x1 = x2 = x3 = 0 as the starting solution. The program should prompt the user to input the convergence criterion, the number of equations, and the maximum number of iterations allowed, and should output the solution along with the number of iterations it took for the solution to converge to the user-specified tolerance.
How do I know if my Gauss-Seidel converges?
The convergence properties of the Gauss–Seidel method depend on the matrix A. Namely, the procedure is known to converge if either A is symmetric positive-definite, or A is strictly or irreducibly diagonally dominant.
What is the condition for convergence of Gauss-Seidel method?
The Gauss-Seidel method converges if all the roots of the characteristic equation of its iteration matrix lie inside the unit circle, i.e., if the number of roots inside the unit circle equals the order of the iteration matrix (equivalently, the spectral radius of the iteration matrix is less than 1).
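The strict diagonal dominance condition mentioned above is straightforward to test before iterating; here is a small Python helper (the name is illustrative):

```python
def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i for every row.

    This is a sufficient (not necessary) condition for Gauss-Seidel
    to converge from any starting guess."""
    return all(abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
               for i, row in enumerate(A))
```

For example, [[4,1,1],[1,5,2],[1,2,6]] passes the test, while [[1,2],[3,1]] fails it, so convergence is not guaranteed for the latter.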
How do you code bisection in Matlab?
function c = bisectionMethod(f, a, b, tol)
% Example: f = @(x) x^2 - 3; a = 1; b = 2; tol = 1e-4
% (ensure f changes sign between a and b)
c = (a + b)/2;
while abs(f(c)) > tol
    if sign(f(c)) == sign(f(a))   % root lies in [c, b]
        a = c;
    else                          % root lies in [a, c]
        b = c;
    end
    c = (a + b)/2;                % recompute the midpoint
end
end
Why does Gauss-Seidel not converge?
However, this method is not without its pitfalls. The Gauss-Seidel method is an iterative technique whose solution may or may not converge. Convergence is only ensured if the coefficient matrix, [A]nxn, is diagonally dominant; otherwise the method may or may not converge.
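The diagonal-dominance condition mentioned above is easy to test programmatically; a small sketch (in Python rather than MATLAB, for illustration):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    A = np.asarray(A, dtype=float)
    diag = np.abs(np.diag(A))
    off_diag = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off_diag))

print(is_strictly_diagonally_dominant([[4, 1, 1],
                                       [1, 5, 2],
                                       [0, 2, 6]]))  # True: convergence guaranteed
print(is_strictly_diagonally_dominant([[1, 2],
                                       [3, 1]]))     # False: no guarantee
```

A matrix passing this check guarantees Gauss-Seidel convergence; failing it does not necessarily mean divergence.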
What is tolerance in Gauss-Seidel method?
For iterative methods such as Gauss-Seidel, start with an initial guess of zero for all the unknowns and use a convergence tolerance of 5 psi (for hand calculations).
How do you code Newton’s method in Matlab?
Newton’s Method in Matlab
g(x) = sin(x) + x cos(x). Since
g'(x) = 2cos(x) - x sin(x), Newton's iteration scheme,
x_{n+1} = x_n - g(x_n)/g'(x_n), takes the form
x_{n+1} = x_n - [sin(x_n) + x_n cos(x_n)] / [2cos(x_n) - x_n sin(x_n)]. To check in which range the root lies, we first plot g(x) over the range 0 ≤ x ≤ 2.5.
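The iteration above can be sketched as follows (Python for illustration; the starting guess x0 = 2 is an assumption chosen near the sign change of g on (0, 2.5)):

```python
import math

def newton(g, dg, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - g(x_n)/g'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            break
    return x

g  = lambda x: math.sin(x) + x * math.cos(x)
dg = lambda x: 2 * math.cos(x) - x * math.sin(x)
root = newton(g, dg, 2.0)   # x0 = 2 sits near the sign change in (0, 2.5)
```

Starting from x0 = 2, the iteration converges to a root of g near 2.03.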
What does EPS do in Matlab?
eps (MATLAB Functions): eps returns the distance from 1.0 to the next largest floating-point number. The value eps is the default tolerance for pinv and rank, as well as several other MATLAB functions. eps = 2^(-52), which is roughly 2.22e-16.
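For comparison, the analogous machine-epsilon value is available in other environments, for instance in Python:

```python
import sys

eps = sys.float_info.epsilon   # distance from 1.0 to the next larger float
print(eps == 2.0 ** -52)       # True for IEEE 754 double precision
print(1.0 + eps > 1.0)         # True
```

Both MATLAB and Python use IEEE 754 double precision, so the value is the same 2^(-52).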
Why we use Gauss Seidel method?
The Gauss-Seidel method is used to solve linear systems of equations. It is named after the German scientists Carl Friedrich Gauss and Philipp Ludwig von Seidel. It is an iterative method for solving n linear equations in n unknowns. The method is simple and well suited to digital computation.
What is the difference between Gaussian and Gauss-Jordan Elimination?
Difference between gaussian elimination and gauss jordan elimination. The difference between Gaussian elimination and the Gaussian Jordan elimination is that one produces a matrix in row echelon form
while the other produces a matrix in row reduced echelon form.
What is limitation of Gauss-Seidel method?
What is the limitation of the Gauss-Seidel method? Explanation: It does not guarantee convergence for each and every matrix. Convergence is only guaranteed if the matrix is diagonally dominant or symmetric positive definite.
What is convergence tolerance?
In our example a "convergence tolerance" for Δu or Δf(t) (or both) can be defined such that, when the change resulting from an iteration is smaller than one or both tolerances, the iterations are stopped.
Why does Gauss-Seidel method work?
The reason the Gauss–Seidel method is commonly known as the successive displacement method is because the second unknown is determined from the first unknown in the current iteration, the third
unknown is determined from the first and second unknowns, etc.
How do you write exp in MATLAB?
In MATLAB the function exp(x) gives the value of the exponential function e^x. To find the value of e itself: e = e^1 = exp(1).
What is Gauss Seidel method in MATLAB?
Gauss-Seidel Method MATLAB Program. The Gauss-Seidel method is a popular iterative method for solving linear systems of algebraic equations. It is applicable to any converging matrix with non-zero elements on the diagonal. The method is named after two German mathematicians: Carl Friedrich Gauss and Philipp Ludwig von Seidel.
How do you solve linear equations using Gauss-Seidel method?
Further, the system of linear equations can be expressed as: In the Gauss-Seidel method, equation (a) is solved iteratively by computing each unknown x on the left-hand side and then using the previously found values of x on the right-hand side. Mathematically, the iteration process in the Gauss-Seidel method can be expressed as:
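The iteration just described can be sketched as follows (Python for illustration; the 3x3 system is a made-up, strictly diagonally dominant example, so convergence is guaranteed):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    """Iteratively solve Ax = b, reusing freshly updated components of x."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b)
    for it in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(len(b)):
            # newest values for x[0..i-1], previous iterate for x[i+1..]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, it
    return x, max_iter

# a made-up, strictly diagonally dominant system
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
b = [6.0, 8.0, 8.0]
x, iters = gauss_seidel(A, b)
```

Because each updated x[i] is used immediately within the same sweep, Gauss-Seidel typically converges faster than the Jacobi method on the same system.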
How do you find the maximum number of iterations in Gauss Seidel?
% Gauss-Seidel method
n = input('Enter number of equations, n: ');
A = zeros(n, n+1);        % augmented coefficient matrix
x1 = zeros(n, 1);         % initial guesses (column vector)
tol = input('Enter the tolerance, tol: ');
m = input('Enter maximum number of iterations, m: ');
How to use the Gauß-Seidel and Jacobi methods correctly?
The Gauß-Seidel and Jacobi methods only apply to diagonally dominant matrices, not generic random ones. So to get correct test examples, you need to actually constructively ensure that condition. Otherwise the method will diverge towards infinity in some or all components.
Welford: A Welford accumulator for sample mean and variance in statnet.common: Common R Scripts and Utilities Used by the Statnet Project Software
A simple class for keeping track of the running mean and the sum of squared deviations from the mean for a vector.
Welford(dn, means, vars)

## S3 method for class 'Welford'
update(object, newdata, ...)
dn, means, vars: initialization of the Welford object: if means and vars are given, they are treated as the running means and variances, and dn is their associated sample size; if not, dn is the dimension of the vector (with sample size 0).
object: a Welford object.
newdata: either a numeric vector of length d, a numeric matrix with d columns for a group update, or another Welford object with the same d.
...: additional arguments to methods.
an object of type Welford: a list with four elements:
SSDs: Running sum of squared deviations from the mean for each variable
X <- matrix(rnorm(200), 20, 10)
w0 <- Welford(10)
w <- update(w0, X)
stopifnot(isTRUE(all.equal(w$means, colMeans(X))))
stopifnot(isTRUE(all.equal(w$vars, apply(X, 2, var))))
w <- update(w0, X[1:12,])
w <- update(w, X[13:20,])
stopifnot(isTRUE(all.equal(w$means, colMeans(X))))
stopifnot(isTRUE(all.equal(w$vars, apply(X, 2, var))))
w <- Welford(12, colMeans(X[1:12,]), apply(X[1:12,], 2, var))
w <- update(w, X[13:20,])
stopifnot(isTRUE(all.equal(w$means, colMeans(X))))
stopifnot(isTRUE(all.equal(w$vars, apply(X, 2, var))))
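The same running-update idea in minimal scalar form (a Python sketch; the R object above tracks whole vectors and supports merging, which this sketch omits):

```python
import statistics

def welford_update(n, mean, ssd, x):
    """One step of Welford's online update for the mean and the sum of squared deviations."""
    n += 1
    delta = x - mean
    mean += delta / n
    ssd += delta * (x - mean)   # uses the *updated* mean, the key to Welford's trick
    return n, mean, ssd

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n, mean, ssd = 0, 0.0, 0.0
for x in data:
    n, mean, ssd = welford_update(n, mean, ssd, x)
variance = ssd / (n - 1)   # sample variance, analogous to the object's vars
```

The one-pass update avoids the catastrophic cancellation that the naive sum-of-squares formula suffers for large means.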
How Much Power Does An Inverter Draw With No Load? - Energy Theory
Inverters are responsible for converting direct current into alternating current for appliances to use. Solar inverters or power inverters both have the same function with one slight difference.
Solar inverters take direct current from solar panels and transfer the converted current to solar batteries. Whereas normal inverters take current from batteries and transfer the alternating current
to the connected appliances. But do you know inverters consume power even when not in use? How much power does an inverter draw with no load? Do Inverters Use Power When Turned Off? Come find out.
What is No Load Current? Why Does it Matter?
Every inverter is featured with a no-load consumption facility. The amount of electricity consumed by a battery charger (inverter) when it is plugged into the socket is known as idle consumption.
During this time, the batteries are not connected to the socket. Another function is standby consumption, which means the inverter absorbs power from the battery even in standby mode.
It is important to understand no-load current because you do not want to waste energy. Here are the two main reasons to do so-
• To get an idea of whether the inverter is consuming more power than the load that is connected to it. For example, sometimes a radio connected to the inverter uses just 5 watts but the inverter
itself consumes 10 watts. This is a complete waste of money and energy.
• Secondly, the no-load consumption accumulates: imagine your inverter is left on 24/7. With a 10-watt no-load draw, almost 1.7 kWh per week (10 W × 24 h × 7 days) is wasted.
How Much Power Does an Inverter Draw with No Load?
You can find No Load Current mentioned on the specification sheet as no load current draw (amps) or as no-load power (watts). Now to determine how much power your inverter is drawing without any
load, multiply the battery voltage by the inverter no load current draw rating.
For example:
Inverter rating = 1000 watts
System (battery) voltage = 24V
No-load current draw = 0.4 amps
Power drawn = 24 V × 0.4 A = 9.6 watts
This formula and calculation are applicable to all inverters irrespective of their size. 12V or 24V is the only thing that will make the difference in the power consumed. Remember, the higher the
voltage is the greater the no-load current will be.
In some configurations, a standard inverter may draw between 0.416 amps and 2.83 amps of current in idle mode. But this amount may vary depending on the type of battery bank used and the types of loads connected to the inverter. Typically, at no load, the energy drawn by the inverter is only 2 to 10 watt-hours per hour, i.e., a continuous draw of 2 to 10 watts.
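Since the no-load figure is just voltage times current, the calculation reduces to a one-line helper (function name is illustrative):

```python
def no_load_power(battery_voltage, no_load_current):
    """Idle draw in watts: system voltage (V) times no-load current draw (A)."""
    return battery_voltage * no_load_current

print(no_load_power(24, 0.4))   # about 9.6 W, matching the example above
```

The same function applies to any inverter size; only the voltage and the rated no-load current change.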
What Amount of Power is Wasted by Inverter?
Do not confuse the inverter’s no-load current with the efficiency rating of the inverter. Efficiency means the amount of power the inverter can convert. The amount of energy preserved during the
process is the efficiency rating of the inverter. For example, an inverter with an 85% efficiency rating means that the remaining 15% of energy will not be used and is wasted. However, new inverters
have a 90% to 95% efficiency rating that considerably reduces the amount of power wasted, but there are no inverters with a 100% efficiency rating.
In other words, more power is wasted with lower efficiency ratings. And when you sum up this loss with no load current it can be a lot. This is why you should buy an inverter with the highest
possible efficiency ratings. This fact is an important consideration in determining how much power does an inverter draw with no load.
So, if the inverter is on the power consumed by it from the no-load current cannot be avoided. However, it can become negligible if connected to a large load. Suppose you are using a 5000 watts
inverter and run it at almost full load then 0.4 no-load currents can be ignored. Now, let’s see does an inverter draw power when not in use.
Does an Inverter Draw Power When Not in Use?
Yes, an inverter that is turned on but not in use will still draw power. The current drawn can range from 0.2 amps to 2.0 amps depending on the size of the unit and the design of its standby systems. So, the answer to whether an inverter draws power when not in use is: yes, it does.
Do Inverters Use Power When Turned Off? Should You Switch Off the Inverter When Not in Use?
Yes. As long as it is turned on, an inverter is drawing power no matter which brand it is. This is common in most inverters. To calculate the amount of power being drawn by the inverter, you should know its maximum and minimum draw. The former is the highest amount of electricity the inverter uses at a time, and the latter is the lowest.
Yes, it is a good idea to do so if you are using the inverter for over a long period of time and also if many appliances are connected to it. Once the batteries are fully charged and the inverter is
no longer in use, you can turn it off. This will also help to save on your electricity bill.
Do Inverters Consume Power When Fully Charged?
In case the inverters are fully charged, they consume barely 0.99% of their capacity, so there is little to no impact on power bills. Even so, it is better to switch off the inverter once it is fully charged and no longer in use.
But if using it at full speed while it is plugged in it will consume more power. This sounds tricky, but this activity can be monitored with an energy monitor. Now, you must also be curious to know
how much power does an inverter draw from a battery.
How Much Power Does an Inverter Draw from a Battery?
After learning about how much power does an inverter draw with no load, it is time to know about the amount of power drawn from the batteries. Yes, inverters drain batteries if not in use and the
amount of power drained depends on the design and size of the inverter. Generally, it is said that modern inverters save more power than traditional ones. And if an inverter is left connected to the
batteries without any load, then it will drain the battery completely over time, drawing around 1 amp-hour per hour, about 24 Ah per day, and around 168 Ah per week.
How Many Amps Does a 2000 Watt Inverter Draw with No Load?
A 2000-watt inverter typically draws approximately 1.5 amps with no load, depending on its efficiency. A 2000-watt 24V inverter can draw approximately 83 amps of continuous current at full load. It is also capable of drawing a surge current of about 186 amps for a fraction of a second, roughly twice its continuous current. This usually happens when the inverter is connected to large inductive loads like large refrigerators or motors.
Inverter rating (Watts) Battery current (A) Output current (A) Inverter output (Watts)
100 – 500 8.33 – 41.67 0.33 – 1.67 80 – 400
550 – 900 45.83 – 75 1.83 – 3 440 – 720
950 – 1100 80 – 91.67 3.17 – 3.67 760 – 880
1200 – 1400 100 – 116.67 4 – 4.33 960 – 1120
1500 – 1700 125 – 141.67 5 – 5.67 1200 – 1360
1800 – 1900 150 – 158.33 6 – 6.33 1440 – 1520
2000 – 3000 166.67 – 250 6.67 – 10 1600 – 2400
Note: Figures mentioned above are subject to change. Check the technical specification section of the inverter model. Here is a table of inverters with different ratings and amounts of power consumed
in idle mode.
How to Prevent Inverters from Wasting Power?
Start with looking for an inverter with a very low no-load current and if the system has an on/off switch then it is better. Also, a pure sine inverter is a good choice in this case. And after
learning about how much power does an inverter draw with no load, here are a few more things to consider preventing power wastage.
1. Always Check System Efficiency
Here the entire system is considered: solar panels, batteries, charge controller, and other related components. So, for the system to work smoothly with the least possible power wastage, it is
necessary for the efficiency rating of all these components to match.
2. Higher Volts Mean Lower Amps
Low system voltages draw more current than higher voltages for the same load. Let us take an example here.

Watt load = 230 watts
Inverter = 12V
Current drawn: 230 / 12 ≈ 19.2 amps

Watt load = 230 watts
Inverter = 24V
Current drawn: 230 / 24 ≈ 9.6 amps

So, with a 24V inverter you can see there is a considerable reduction in the current drawn from the battery. If you are sizing deep-cycle batteries, also halve the usable capacity to account for their recommended depth of discharge.
3. Inverter Watt Rating vs. Power Consumption
How much power an inverter uses is not determined by its watt rating alone. To estimate the power consumption, divide the power used by the load by the inverter efficiency. For example, an inverter with a 90% efficiency rating supplying a 200-watt load draws about 200 / 0.9 ≈ 222 watts, i.e., roughly the load plus 10% to make up for the inefficiency.
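That rule of thumb is a one-line calculation (function name is illustrative):

```python
def input_power(load_watts, efficiency):
    """Power drawn from the battery to supply a given AC load."""
    return load_watts / efficiency

print(round(input_power(200, 0.90), 1))   # 222.2
```

Dividing by efficiency (rather than adding a fixed percentage) is the exact form; the two agree only approximately for high efficiencies.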
4. Use the Right Inverter System
Different applications require different types of inverters. For an RV a 12V inverter should be enough and for an off-grid cabin or mobile home a 24V inverter is ideal. But for more powered or large
applications, a 48V inverter system is preferred.
Well, today you learned how much power does an inverter draw with no load? Depending on the no load current, you can determine the amount of power your inverter is drawing with and without load. To
calculate it you should know about battery and inverter voltage, along with no load current rating mentioned on the specification sheet of the inverter. The answer can be calculated by multiplying
battery voltage by no-load current. How many amps does a 2000 watt inverter draw with no load? The answer is approximately 1.5 amps, but it can differ on the basis of inverter size and efficiency.
Could gravity vary with time?
Work by Dyson and Alex Shlyakhter on the fine-structure constant
Freeman Dyson Scientist
The question was raised by Dirac, I think in 1936 or thereabouts: Could gravity be varying with time as the universe evolves? And the motivation for Dirac was he didn't like the fact that
gravitational interaction is so weak as compared with other kinds of interactions, so if you take a dimensionless ratio which is Gm^2/hc, where G is the gravitational constant of Newton, m is the
mass of a proton, h is Planck's constant, and c is the velocity of light - that's a dimensionless number; it happens to have the value 10^-39, and Dirac considered that to be ugly; that in the laws
of physics there's this enormously small quantity which appears to be just arbitrary and put in by God into the laws of physics, and he said any self-respecting god wouldn't have done that, so that
there must be some reason for this very small number appearing. So Dirac's argument was that if you assume that gravity goes down with time, like 1/T from the beginning of the universe, and you
measure time in units of the proton Compton wave length, which is sort of the natural unit of time - no, not the Compton wavelength but the Compton frequency, the Compton wave length divided by
velocity of light - then the unit of time is about 10^-22 seconds, and the universe has existed for about 10^17 seconds, so the ratio between the present age of the universe and the natural unit of
time is 10^39. So that's an interesting fact. So Dirac's hypothesis was that - so this small number merely is indicating the particular age at which we live in the history of the universe; in the
natural units we are 10^39 units from the beginning of time. So if you assume that gravity goes like 1/T, then you don't need to write this small number into the laws of physics. Well that was a very
attractive notion to Dirac. He had this very strong belief in the power of aesthetics to divine the laws of nature, but then it's a question whether that's experimentally true. Well after that,
then... Dirac's hypothesis remained a hypothesis for 40 years. Nobody had good enough observational data either to confirm it or to contradict it. So it remained quite possible that Dirac was right.
In the meantime I think it was Edward Teller who proposed that the same thing might be true for the fine-structure constant, since that's also a rather small number, not as small as the gravitational
coupling constant, but it's still... it's e^2/hc, that's 1/137, and that looks like a logarithm. If you take the logarithm of Dirac's number, the natural logarithm of 10^39 is about a 100, so it's
about a 100 powers of e, so you might imagine that 137 is the logarithm of the time. And so Teller proposed the hypothesis that the electromagnetic interaction is also weakening with time, but going
like 1 over logarithm. So that was also a very interesting question and that... Teller proposed that, I think - I don't remember exactly when, around 1950 or so - I mean it was some time after Dirac.
And that was clearly much easier to test because we have much more accurate information about the electromagnetic interaction than we do about gravity. So... attention then was immediately
concentrated on the fine-structure constant rather than on gravitation. And the first response to Teller, I think, came from Denys Wilkinson and he showed that in fact Teller couldn't be right, and
he did that by looking simply at the decay rate of uranium in ancient rocks. That if you observe isotopes of uranium and isotopes of lead into which they decay in ancient rocks you can... by - it's a
fairly circular argument, but you can in fact more or less prove by looking at these different kinds of rocks that the decay rates have remained pretty constant over the last 10^9 years or so, within
10%, something like that. I mean, there hasn't been a huge variation in the decay rate. Well, if you take the rate of the outer decay of uranium 238, it's actually extremely sensitive to the
fine-structure constant because it... the alpha particle has to come out of the nucleus over a very high Gamow barrier, and the Gamow formula for the lifetime has an exponential with the
fine-structure constant in it, since the fine-structure constant determines the Coulomb interaction between the alpha particle and the rest of the nucleus. So you... the lifetime goes like the
exponential of something proportional to the fine-structure constant with a big coefficient. And so if you change the fine-structure constant by a small fraction, you change the lifetime by the 500th
power of the fine-structure constant. So it's actually a very sensitive test for variation of the fine-structure constant, and so that by itself was enough to demolish Teller.
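The order-of-magnitude coincidence Dyson describes is easy to check numerically. This sketch uses rounded physical constants (assumed values, with the reduced Planck constant):

```python
# Rounded physical constants (SI units), assumed for illustration
G    = 6.674e-11    # Newton's gravitational constant, m^3 kg^-1 s^-2
m_p  = 1.6726e-27   # proton mass, kg
hbar = 1.0546e-34   # reduced Planck constant, J s
c    = 2.9979e8     # speed of light, m/s

alpha_g = G * m_p ** 2 / (hbar * c)   # Dirac's dimensionless gravitational coupling
# alpha_g comes out on the order of 10^-39, the small number Dirac found so ugly
```

With these values the ratio is about 5.9e-39, matching the 10^-39 Dyson quotes to within the order of magnitude.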
Freeman Dyson (1923-2020), who was born in England, moved to Cornell University after graduating from Cambridge University with a BA in Mathematics. He subsequently became a professor and worked on
nuclear reactors, solid state physics, ferromagnetism, astrophysics and biology. He published several books and, among other honours, was awarded the Heineman Prize and the Royal Society's Hughes Medal.
Title: Could gravity vary with time?
Listeners: Sam Schweber
Silvan Sam Schweber is the Koret Professor of the History of Ideas and Professor of Physics at Brandeis University, and a Faculty Associate in the Department of the History of Science at Harvard University. He is the author of a history of the development of quantum electrodynamics, "QED and the Men Who Made It", and has recently completed a biography of Hans Bethe and the history of nuclear weapons development, "In the Shadow of the Bomb: Oppenheimer, Bethe, and the Moral Responsibility of the Scientist" (Princeton University Press, 2000).
Tags: Planck's constant, Compton frequency, fine-structure constant, Paul Dirac, Edward Teller, Denys Wilkinson
Duration: 6 minutes, 10 seconds
Date story recorded: June 1998
Date story went live: 24 January 2008
A wavelet-based approach to streamflow event identification and modeled timing error evaluation
Articles | Volume 25, issue 5
© Author(s) 2021. This work is distributed under the Creative Commons Attribution 4.0 License.
A wavelet-based approach to streamflow event identification and modeled timing error evaluation
Streamflow timing errors (in the units of time) are rarely explicitly evaluated but are useful for model evaluation and development. Wavelet-based approaches have been shown to reliably quantify
timing errors in streamflow simulations but have not been applied in a systematic way that is suitable for model evaluation. This paper provides a step-by-step methodology that objectively identifies
events, and then estimates timing errors for those events, in a way that can be applied to large-sample, high-resolution predictions. Step 1 applies the wavelet transform to the observations and uses
statistical significance to identify observed events. Step 2 utilizes the cross-wavelet transform to calculate the timing errors for the events identified in step 1; this includes the diagnostic of
model event hits, and timing errors are only assessed for hits. The methodology is illustrated using real and simulated stream discharge data from several locations to highlight key method features.
The method groups event timing errors by dominant timescales, which can be used to identify the potential processes contributing to the timing errors and the associated model development needs. For
instance, timing errors that are associated with the diurnal melt cycle are identified. The method is also useful for documenting and evaluating model performance in terms of defined standards. This
is illustrated by showing the version-over-version performance of the National Water Model (NWM) in terms of timing errors.
Received: 26 Jun 2020 – Discussion started: 22 Sep 2020 – Revised: 24 Feb 2021 – Accepted: 07 Apr 2021 – Published: 19 May 2021
Common verification metrics used to evaluate streamflow simulations are typically aggregated measures of model performance, e.g., the Nash–Sutcliffe Efficiency (NSE) and the related root mean square
error (RMSE). Although typically used to assess errors in amplitude, these statistical metrics include contributions from errors in both amplitude and timing (Ehret and Zehe, 2011), making them
difficult to use for diagnostic model evaluation (Gupta et al., 2008). Furthermore, common verification metrics are calculated using the entire time series, whereas timing errors require a comparison
of localized features or events in the data. This paper focuses explicitly on event timing error estimation, which is not routinely evaluated despite its potential benefit for model diagnostics
(Gupta et al., 2008) and practical forecast guidance (Liu et al., 2011).
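For reference, the aggregated metrics mentioned above (NSE and RMSE) are simple to compute; a minimal sketch:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 means no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

obs = [1.0, 3.0, 2.0, 5.0, 4.0]        # made-up discharge values
biased = [x + 1.0 for x in obs]        # a pure amplitude (bias) error
```

Note that a pure timing shift degrades NSE and RMSE just as an amplitude error does, which is exactly why these aggregate scores cannot separate the two error sources.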
The fundamental challenge with evaluating timing errors is identifying what constitutes an event in the two time series being compared. Identifying events is typically subjective, time consuming, and
not practical for large-sample hydrological applications (Gupta et al., 2014). A variety of baseflow separation methods, ranging from physically based to empirical, have been developed to identify
hydrologic events (see Mei and Anagnostou, 2015, for a summary), though many of these approaches require some manual inspection of the hydrographs. Merz et al. (2006) put forth an automated approach,
but it requires a calibrated hydrologic model, which is a limitation in data-poor regions. Koskelo et al. (2012) developed a simple, empirical approach that only requires rainfall and runoff time
series, but it is limited to small watersheds and daily data. Mei and Anagnostou (2015) introduced an automated, physically based approach which is demonstrated for hourly data, though one caveat is
that basin events need to have a clearly detectable recession period. Additional methods have focused on identifying flooding events using peak-over-threshold methods. The thresholds used for such
analyses are often either based on historical percentiles (e.g., the 95th percentile) or on local impact levels (river stage), such as the National Weather Service (NWS) flood categories (NOAA
National Weather Service, 2012). Timing error metrics are often calculated from the peaks of these identified events. For example, the peak time error, or its derivative of the mean absolute peak
time error, requires matching observed and simulated event peaks and calculating their offset (Ehret and Zehe, 2011). While this may be straightforward visually, it can be difficult to automate; some
of the reasons for this are discussed below.
Difficulties arise when using thresholds for event identification. For example, exceedances can cluster if a hydrograph vacillates above and below a threshold, leading to the following questions: is
it one or multiple events? Which peak should be used for the assessment? In the statistics of extremes, declustering approaches can be applied to extract independent peaks (e.g., Coles, 2001), but
this reductionist approach may miss relevant features. For instance, if background flows are elevated for a longer period of time before and after the occurrence of these events, the threshold-based
analysis identifies features of the flow separately from the primary hydrologic process responsible for the event. If one focuses just on peak timing differences in this example, then that timing
error may only apply to some small fraction of the total flow of the larger event which happens mainly below the threshold. Furthermore, for overall model diagnosis that focuses on model performance
for all events, not just flood events, variable thresholds would be needed to account for different kinds of events (e.g., a daily melt event versus a convective precipitation event).
Using a threshold approach to identify events and timing error assessment, Ehret and Zehe (2011) develop an intuitive assessment of hydrograph similarity, i.e., the series distance. This algorithm is
later improved upon by Seibert et al. (2016). The procedure matches observed and simulated segments (rise or recession) of an event and then calculates the amplitude and timing errors and the
frequency of the event agreement. The series distance requires smoothing the time series, identifying an event threshold, and selecting a time range in which to consider the matching of two segments.
Liu et al. (2011) developed a wavelet-based method for estimating model timing errors. Although wavelets have been applied in many hydrologic applications, such as model analysis (e.g., Lane, 2007;
Weedon et al., 2015; Schaefli and Zehe, 2009; Rathinasamy et al., 2014) and post-processing (Bogner and Kalas, 2008; Bogner and Pappenberger, 2011), Liu et al. (2011) were the first to use it for
timing error estimation. Liu et al. (2011) apply a cross-wavelet transform technique to streamflow time series for 11 headwater basins in Texas. Timing errors are estimated for medium- to high-flow
events that are determined a priori by threshold exceedance. They use synthetic and real streamflow simulations to test the utility of the approach. They show that the technique can reliably estimate
timing errors, though they conclude that it is less reliable for multi-peak or consecutive events (defined qualitatively). ElSaadani and Krajewski (2017) followed the cross-wavelet approach used by
Liu et al. (2011) to provide similar analysis and further investigate the effect of the choice of mother wavelet on the timing error analysis. Ultimately, they recommended that, in the situation of multiple adjoining flow peaks, the improved time localization of the Paul wavelet might justify its poorer frequency localization compared to the Morlet wavelet.
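The cross-wavelet timing-error idea can be illustrated with a self-contained sketch. This is not the authors' implementation: it is a single-scale complex Morlet transform written from scratch, applied to synthetic cosine "hydrographs" with a known 3-hour offset, and the common w0 = 6 wavelet parameter is an assumption.

```python
import numpy as np

def morlet_cwt(x, scale, dt=1.0, w0=6.0):
    """Wavelet transform of x at a single scale using a complex Morlet wavelet."""
    tau = np.arange(-4 * scale, 4 * scale + dt, dt)
    psi = np.pi ** -0.25 * np.exp(1j * w0 * tau / scale - 0.5 * (tau / scale) ** 2)
    # convolving with psi realizes sum_u x(u) * conj(psi((u - t)/s))
    return np.convolve(x, psi, mode="same") * dt / np.sqrt(scale)

dt = 1.0                                    # hourly samples
t = np.arange(0, 480, dt)
period = 24.0                               # a diurnal-cycle event timescale
true_lag = 3.0                              # "simulation" lags "observation" by 3 h
obs = np.cos(2 * np.pi * t / period)
sim = np.cos(2 * np.pi * (t - true_lag) / period)

w0 = 6.0
scale = period * (w0 + np.sqrt(2 + w0 ** 2)) / (4 * np.pi)  # Morlet period-to-scale
Wx = morlet_cwt(obs, scale, dt, w0)
Wy = morlet_cwt(sim, scale, dt, w0)
Wxy = Wx * np.conj(Wy)                      # cross-wavelet transform
phase = np.angle(Wxy[len(t) // 2])          # relative phase, away from the edges
timing_error = phase * period / (2 * np.pi) # phase converted to hours (about 3)
```

The recovered timing_error comes out close to the imposed 3-hour lag; in the papers discussed here, this phase-to-lag conversion is restricted to statistically significant observed events rather than applied pointwise.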
Liu et al. (2011) provide a starting point for the work in this paper in which we develop the following two new bases for their method: (1) objective event identification for timing error evaluation
and (2) the use of observed events as the basis for the model timing error calculations. The latter is important for model benchmarking, i.e., the practice of evaluating models in terms of defined
standards (e.g., Luo et al., 2012; Newman et al., 2017). Here, the use of observed events provides a baseline by which to evaluate changes and to compare multiple versions or experimental designs.
This paper provides a methodology for using wavelet analysis to quantify timing errors in hydrologic simulations. Our contribution is a systematic approach that integrates (1) statistical
significance to identify events with (2) a basis for timing error calculations independent of model simulations (i.e., benchmarking). We apply our method to a timing error evaluation of
high-resolution streamflow prediction. The paper is organized as follows: Sect. 2 describes the observational and simulated data used. Section 3 provides the detailed methodology of using wavelets to
identify events and estimate timing errors in a synthetic example. In Sect. 4, we demonstrate the method using real and simulated streamflow data for several use cases and then illustrate the
application of the method for version-over-version comparisons. Section 5 is the discussion and conclusions, including how specific methodological choices may vary by application.
The application of the methodology is illustrated using real and simulated stream discharge (streamflow in cubic meters per second) data at three US Geological Survey (USGS) stream gauge locations in
different geographic regions, i.e., Onion Creek at US Highway 183, Austin, Texas, for the South Central region (Onion Creek, TX; USGS site no. 08159000), Taylor River at Taylor Park, Colorado, for
the Intermountain West (Taylor River, CO; USGS site no. 09107000), and Pemigewasset River at Woodstock, New Hampshire, for New England (Pemigewasset River, NH; USGS site no. 01075000). We use the
USGS instantaneous observations averaged on an hourly basis.
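Hourly averaging of instantaneous observations can be sketched as follows. This is an illustrative example only, with made-up values and variable names; it is not the retrieval code used for the USGS service.

```python
# Sketch: averaging instantaneous (e.g., 15 min) gauge observations to an
# hourly series. Values and names are hypothetical.
import pandas as pd

# Hypothetical 15 min instantaneous discharge record (m^3 s^-1)
idx = pd.date_range("2015-06-01", periods=8, freq="15min")
inst = pd.Series([1.0, 1.2, 1.4, 1.6, 2.0, 2.2, 2.4, 2.6], index=idx)

hourly = inst.resample("1h").mean()  # label each hour by its start time
print(hourly.values)
```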
NOAA's National Water Model (NWM; https://www.nco.ncep.noaa.gov/pmb/products/nwm/, last access: 8 May 2021) is an operational model that produces hydrologic analyses and forecasts over the
continental United States (CONUS) and Hawaii (as of version 2.0). The model is forced by downscaled atmospheric states and fluxes from NOAA's operational weather models. Next, the Noah-MP
(Noah-multiparameterization; Niu et al., 2011) land surface model calculates energy and water states and fluxes. Water fluxes propagate down the model chain through overland and subsurface (soil and
aquifer representations) water routing schemes to reach a stream channel model. The NWM applies the three-parameter Muskingum–Cunge river routing scheme to a modified version of the National
Hydrography Dataset Plus (NHDPlus) version 2 (McKay et al., 2012) river network representation (Gochis et al., 2020).
In this study, NWM simulations are taken from each version's retrospective runs (https://docs.opendata.aws/nwm-archive/readme.html, last access: 8 May 2021). These are continuous simulations (not
cycles) run for the period from October 2010 to November 2016 and forced by the National Land Data Assimilation System (NLDAS)-2 product as atmospheric conditions. The nudging data assimilation was
not applied in these runs. We use NWM discharge simulations from versions V1.0, V1.1, and V1.2 (not all versions may be publicly available).
The methodology developed in this paper is implemented in the R language and is made publicly available, as detailed in the code availability section at the end of the paper.
This section provides the description of the methodology using wavelets to identify events and estimate timing errors. The steps can be seen in the accompanying flowchart (Fig. 1) and nomenclature
(Table 1), which define the key terms of the approach. To facilitate understanding, the steps are illustrated by an application of the methodology to an observed time series of an isolated peak in
Onion Creek, TX (Fig. 2a), and the synthetic modeled time series, which is identical to the observation time series but shifted 5h into the future (Fig. 3a; note the log scale).
3.1Step 1 – identify observed events
The first step is to identify a set of observed events for which the timing error should be calculated. We break this step into the following three substeps: 1a – apply the wavelet transform to
observations; 1b – determine all observed events using significance testing; and 1c – sample observed events to an event set relevant to analysis.
3.1.1Step 1a – apply wavelet transform to observations
First, we apply the continuous wavelet transform (WT) to the observed time series. The main steps and equations for the WT are provided here, though the reader is referred to Torrence and Compo
(1998) and Liu et al. (2011) for more details.
Before applying the WT, a mother wavelet needs to be selected. Torrence and Compo (1998) discuss the key factors that should be considered when choosing a mother wavelet. There are four
main considerations, including (i) orthogonal or nonorthogonal, (ii) complex or real, (iii) width, and (iv) shape. In this study, we follow Liu et al. (2011) in selecting the nonorthogonal and
complex Morlet wavelet as follows:
$\psi(n) = \pi^{-1/4}\, e^{i w_0 n}\, e^{-n^2/2}, \qquad \text{(1)}$
where $w_0$ is the nondimensional frequency with a value of 6 (Torrence and Compo, 1998).
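Equation (1) can be sketched directly in code. This is a minimal illustration, not the paper's R implementation; the function name is ours.

```python
# Minimal sketch of the Morlet mother wavelet of Eq. (1), with the
# nondimensional frequency w0 = 6 used in the paper.
import numpy as np

W0 = 6.0

def morlet(eta):
    """Morlet mother wavelet evaluated at nondimensional time eta."""
    return np.pi ** -0.25 * np.exp(1j * W0 * eta) * np.exp(-eta ** 2 / 2.0)

# The wavelet is complex, which is what later allows phase (and hence
# timing error) information to be extracted.
eta = np.linspace(-4, 4, 9)
psi = morlet(eta)
print(psi[4])  # magnitude peaks at eta = 0 with value pi**-0.25
```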
Once the mother wavelet is selected, the WT is applied to a time series, $x_n$, in which n goes from n=0 to n=N−1 with a time step of δt. The WT is the convolution of the time series with the mother wavelet that has been scaled and normalized as follows:
$W_n(s) = \sum_{n'=0}^{N-1} x_{n'}\, \psi^*\!\left[\frac{(n'-n)\,\delta t}{s}\right], \qquad \text{(2)}$
where $n'$ is the localized time index in [0, N−1], s is the scale parameter, and the asterisk indicates the complex conjugate of the wavelet function. The wavelet power is defined as $|W_n(s)|^2$, which represents the squared modulus of a complex number when a complex wavelet is used, as in this study. We use the bias-corrected wavelet power (Liu et al., 2007; Veleda et al., 2012),
which ensures that the power is comparable across timescales. We also identify, a priori, a maximum timescale that corresponds to our application. We select 256h (∼10d), but this number could be higher or lower for other applications; there is little penalty for choosing a maximum that is too high, as long as it remains below the annual cycle.
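The transform of Eq. (2) can be sketched as a direct convolution. A production implementation would use the FFT formulation of Torrence and Compo (1998), and the normalization and the bias rectification are simplified here; the names and the choice of test signal are ours, not from the paper's code.

```python
# Naive sketch of the continuous wavelet transform of Eq. (2): direct
# convolution of the series with the scaled Morlet wavelet.
import numpy as np

W0 = 6.0

def morlet(eta):
    return np.pi ** -0.25 * np.exp(1j * W0 * eta) * np.exp(-eta ** 2 / 2.0)

def cwt(x, scales, dt=1.0):
    """Return W[j, n] for each scale s_j and time index n."""
    N = len(x)
    n = np.arange(N)
    W = np.zeros((len(scales), N), dtype=complex)
    for j, s in enumerate(scales):
        for i in range(N):
            eta = (n - i) * dt / s
            # sqrt(dt/s) keeps the transform comparable across scales
            W[j, i] = np.sqrt(dt / s) * np.sum(x * np.conj(morlet(eta)))
    # The Liu et al. (2007) bias rectification would divide |W|^2 by s.
    return W

# Pure sine of period 32; for the Morlet with w0 = 6, the Fourier period is
# about 1.03 times the scale, so power should peak near scale ~31.
t = np.arange(256)
x = np.sin(2 * np.pi * t / 32.0)
scales = np.array([4.0, 8.0, 16.0, 31.0, 64.0])
power = np.abs(cwt(x, scales)) ** 2
best = scales[np.argmax(power[:, 128])]  # mid-series, away from edges
print(best)
```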
The wavelet transform (WT) expands the dimensionality of the original time series by introducing the timescale (or period) dimension. Wavelet power is also a function of both time and timescale
(e.g., Torrence and Compo, 1998). This is illustrated in Fig. 2. The streamflow time series (Fig. 2a) is expanded into a 2-dimensional (2-D) wavelet power spectrum (Fig. 2b). Wavelet analysis can
detect localized signals in the time series (Daubechies, 1990), including hydrologic time series, which are often irregular or aperiodic (i.e., events may be isolated and do not regularly repeat) or
nonstationary. We note that, in many wavelet applications, timescale is referred to as “period”, and this axis is indeed the Fourier period in our plots. However, to emphasize that our study is more
focused on irregular events and less on periodic behavior of time series, we use the term timescale to denote the Fourier period (and not wavelet scale).
Because we are applying the WT to a finite time series, there are timescale-dependent errors at the beginning and end times of the power spectrum, where the entirety of the wavelet at each scale is
not fully contained within the time series. This region of the WT is referred to as the cone of influence or COI (Torrence and Compo, 1998). Figure 2b illustrates the COI as the regions in which the
colors are muted; we ignore all results within the COI in this study.
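Masking the COI can be sketched as below. We use the Morlet e-folding time of $\sqrt{2}\,s$ from Torrence and Compo (1998); the function and variable names are illustrative.

```python
# Sketch of masking the cone of influence (COI). For the Morlet wavelet the
# e-folding time is sqrt(2)*s, so results within that distance of either
# end of the series are discarded.
import numpy as np

def coi_mask(N, scales, dt=1.0):
    """True where the wavelet at scale s is usable (outside the COI)."""
    # distance (in time units) of each index to the nearest series edge
    t_edge = np.minimum(np.arange(N), np.arange(N)[::-1]) * dt
    coi = np.sqrt(2.0) * np.asarray(scales)[:, None]  # e-folding time per scale
    return t_edge[None, :] >= coi

mask = coi_mask(N=100, scales=[2.0, 10.0])
print(mask[0, :4])   # small scale: usable almost immediately
print(mask[1, :20])  # larger scale: a wider edge region is excluded
```

Larger scales exclude proportionally wider edge regions, which is why the muted COI region in the figures widens toward long timescales.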
We make several additional notes on the wavelet power and its representation in the figures. The units of the wavelet power are those of the time series variance (m^6s^−2 – meters to the sixth power
per square second for streamflow), and it is natural to want to cast the power in a physical light or relate it to the time series variance. Indeed, the power is often normalized by the time series
variance when presented graphically. However, it must be noted that the wavelet convolved with the time series frames the resulting power in terms of itself at a given scale. Wavelet power is a
(normalized) measure of how well the wavelet and the time series match at a given time and scale. The power can only be compared to other values of power resulting from a similarly constructed WT.
There are various transforms that can be applied to aid the graphical interpretation of the power (log and variance scaling), but the utility of these often depends on the nature of the individual
time series analyzed. For simplicity, we plot the raw bias-rectified wavelet power in this paper.
3.1.2Step 1b – determine all observed events using significance testing
In their seminal wavelet study, Torrence and Compo (1998) outline a method for objectively identifying statistical significance in the wavelet power by comparing the wavelet power spectra with a
power spectra from a red noise process. Specifically, the observed time series is fitted with an order 1 autoregressive (AR1 or red noise) model, and the WT is applied to the AR1 time series. The
power spectrum of the AR1 model provides the basis for the statistical significance testing. Significance is determined if the power spectra are statistically different using a chi-squared test.
Figure 2b shows significant (≥95% confidence level) regions of wavelet power inside black contours. Statistical significance indicates wavelet power that falls outside the time series background
statistical power based on an AR1 model of the time series. Statistical significance of the wavelet power can be thought of as events in the wavelet domain. We define events as regions of significant
wavelet power outside the COI. Figure 2c displays the wavelet power for the events in this time series. We emphasize that events defined in this way are a function of both time and timescale and
that, at a given time, events of different timescales can occur simultaneously.
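The significance threshold of step 1b can be sketched as follows, under the assumptions stated in the comments: the lag-1 autocorrelation is estimated from the series, the theoretical red-noise spectrum of Torrence and Compo (1998) is evaluated at each Fourier period, and a chi-squared quantile with 2 degrees of freedom (complex wavelet) sets the threshold. Names are ours; this is a simplified illustration, not the paper's code.

```python
# Sketch of the Torrence and Compo (1998) significance test: wavelet power
# is compared against the theoretical spectrum of an AR1 (red noise) model
# fitted to the series.
import numpy as np
from scipy.stats import chi2

def ar1_signif_level(x, fourier_periods, dt=1.0, conf=0.95):
    """Power threshold per timescale, in the units of the series variance."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    # lag-1 autocorrelation estimate for the AR1 fit
    alpha = np.sum(xc[1:] * xc[:-1]) / np.sum(xc[:-1] ** 2)
    freq = dt / np.asarray(fourier_periods)  # nondimensional frequency
    # normalized theoretical red-noise spectrum
    pk = (1.0 - alpha ** 2) / (1.0 + alpha ** 2
                               - 2.0 * alpha * np.cos(2.0 * np.pi * freq))
    dof = 2  # complex mother wavelet
    return np.var(xc) * pk * chi2.ppf(conf, dof) / dof

# Synthetic AR1 series with alpha = 0.7
rng = np.random.default_rng(0)
x = np.zeros(500)
for i in range(1, 500):
    x[i] = 0.7 * x[i - 1] + rng.standard_normal()

levels = ar1_signif_level(x, fourier_periods=[8.0, 64.0])
print(levels)  # for red noise, the threshold grows toward longer timescales
```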
3.1.3Step 1c – sample observed events to an event set relevant to analysis
Step 1b results in the identification of all events at all timescales and times. In this substep, the event space is sampled to suit the particular evaluation. Torrence and Compo (1998) offer the
following two methods for smoothing the wavelet plot that can increase significance and confidence: (i) averaging in time (over timescale) or (ii) averaging in timescale (over time). Because the goal
of this paper is to evaluate model timing errors over long simulation periods, we choose to sample the event space based on averaging in timescale. Although for some locations there may be physical
reasons to expect certain timescales to be important (e.g., the seasonal cycle of snowmelt), the most important timescales at which hydrologic signals occur at a particular location are not
necessarily known a priori. Averaging events in timescale can provide a useful diagnostic by identifying the dominant, or characteristic, timescales for a given time series. Averaging many events in
a timescale can filter noise and help reveal the expected timescales of dominant variability corresponding to different processes or sets of processes.
In our analysis, we seek to uncover the dominant event timescales and to evaluate modeled timing errors on them. The following points articulate our methodological choices for summarizing the
observed events:
• Calculate the average event power in each timescale. Considering only the statistically significant areas of the observed wavelet spectrum, calculate the average power in each timescale (Fig. 2c,
right panel). We point out that calculating the average power over events differs from averaging across all time points, which does not take statistical significance into consideration (Fig. 2b, right panel).
• Identify timescales of absolute and local maxima in time-averaged power. After obtaining the average event power as a function of timescale (Fig. 2c, right panel), the local and absolute maxima for average event power can be determined. In the Onion Creek case, there is a single maximum at 22h (gray dot in Fig. 2c, right panel). The timescales corresponding to the absolute and local
maxima of the average power of the observed time series are called the characteristic timescales used for evaluation. This is the first subset of the events, i.e., all events that fall within the
characteristic timescales. For a single characteristic timescale, contiguous events in time are called event clusters (horizontal line in Fig. 2d).
• Identify events with maximum power in each event cluster. For all timescales, we identify the event with maximum power in each event cluster. This is the second event subset, i.e., all events
with maximum power in each cluster that fall within a characteristic timescale (asterisk in Fig. 2d); these are called cluster maxima.
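The event sampling above can be sketched as follows, operating on a power array `power[j, n]` and a boolean event mask (significant and outside the COI). Function names and the toy arrays are illustrative only.

```python
# Sketch of step 1c: average event power per timescale, then locate the
# maximum-power event in each contiguous cluster along time.
import numpy as np

def avg_event_power(power, events):
    """Mean power over events for each timescale; NaN when no events."""
    avg = np.full(power.shape[0], np.nan)
    for j in range(power.shape[0]):
        if events[j].any():
            avg[j] = power[j, events[j]].mean()
    return avg

def cluster_maxima(power_row, events_row):
    """Indices of the max-power point in each contiguous run of events."""
    maxima, run = [], []
    for n, flag in enumerate(events_row):
        if flag:
            run.append(n)
        elif run:
            maxima.append(run[int(np.argmax(power_row[run]))])
            run = []
    if run:  # close a run that reaches the end of the series
        maxima.append(run[int(np.argmax(power_row[run]))])
    return maxima

# Toy example: 2 timescales x 10 times; two event clusters on timescale 0
power = np.array([[0, 5, 9, 4, 0, 0, 3, 8, 2, 0],
                  [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=float)
events = power > 2
avg = avg_event_power(power, events)
maxima = cluster_maxima(power[0], events[0])
print(avg, maxima)
```

The characteristic timescales would then be the absolute and local maxima of `avg`, and `maxima` gives the cluster maxima at one of those timescales.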
3.2Step 2 – calculate timing errors
Step 1 identifies observed events by applying a wavelet transform to the observed time series. To calculate the timing error of a modeled time series, we perform its cross-wavelet transform with the
observed time series. Figure 3a shows the observed and modeled time series used in our illustration of the methodology, i.e., the observed is the same isolated peak from Onion Creek, TX, as in
Fig. 2a, and the synthetic modeled time series adds a prescribed timing error of +5h to the observed. (Note that, while the observed time series is identical in both, Figs. 2a and 3a have linear and log10 axes, respectively.)
3.2.1Step 2a – apply cross-wavelet transform (XWT) to observations and simulations
The cross-wavelet transform (XWT) is performed between the observed and synthetic time series. Given the WTs of an observed time series $W_n^X(s)$ and a modeled time series $W_n^Y(s)$, the cross-wavelet spectrum can be defined as follows:
$W_n^{XY}(s) = W_n^X(s)\, W_n^{Y*}(s), \qquad \text{(3)}$
where the asterisk denotes the complex conjugate. The cross-wavelet power is defined as $|W_n^{XY}(s)|$ and signifies the joint power of the two time series. The XWT between the Onion Creek observations and the synthetic 5h offset time series is shown in Fig. 3b, with power represented by the color scale.
Similar to step 1b of the WT, we can also calculate areas of significance for the XWT power as shown by the black contour in Fig. 3b. For the XWT, significance is calculated with respect to the
theoretical background wavelet spectra of each time series (Torrence and Compo, 1998). We define XWT events as points of significant XWT power outside the COI. XWT events indicate significant joint
variability between the observed and modeled time series. Below, in step 2d, we employ XWT events as a basis for identifying hits and misses on observed events for which the timing errors are
calculated. Figure 3c shows the observed events (colors) and the intersection between the observed and XWT events (dashed contour). As described later, this intersection (inside dashed contour) is a
region of hits where timing errors are considered valid. Note that the early part of the observed events at shorter timescales is not in the XWT events. This is because the timing offset in the
modeled time series misses the early part of the observed event for some timescales.
3.2.2Step 2b – calculate the cross-wavelet timing errors
For complex wavelets, such as the Morlet used in this paper, the individual WTs include an imaginary component of the convolution. Together, the real and imaginary parts of the convolution describe
the phase of each time series with respect to the wavelet. The cross-wavelet transform combines the WTs in conjugate, allowing the calculation of a phase difference or angle (radians), which can be
computed as follows:
$\varphi_n^{XY}(s) = \tan^{-1}\!\left[\frac{\mathcal{I}\left(W_n^{XY}(s)\right)}{\mathcal{R}\left(W_n^{XY}(s)\right)}\right], \qquad \text{(4)}$
where $\mathcal{I}$ is the imaginary and $\mathcal{R}$ is the real component of $W_n^{XY}(s)$. The arrows in Fig. 3b indicate the phase difference for our example case, which is used to calculate the timing
errors. Note that these are calculated at all points in the wavelet domain.
The distance around the phase circle at each timescale is the Fourier period (hours). We convert the phase angle into the timing errors (hours) as in Liu et al. (2011) as follows:
$\Delta t_n^{XY}(s) = \varphi_n^{XY}(s)\cdot T/(2\pi), \qquad \text{(5)}$
where T is the equivalent Fourier period of the wavelet. Note that the maximum timing error that can be represented at each timescale is half the Fourier period because the phase angle lies in the interval (−π, π). In other words, only timescales greater than 2E can accurately represent a timing error E. Because the phase angle is restricted to (−π, π), true phase angles outside this range alias to angles inside it. (For example, the phase angles 1.05π and −0.95π are both assigned to −0.95π.) Also note that, when the wavelet transforms are approximately antiphase, the computed phase differences and timing errors produce corresponding bimodal distributions, given the noise in the data. Figure 3c shows phase aliasing in the negative timing errors at timescales less than 10h, which is double the 5h synthetic timing error we introduced. The bimodality of the phase and timing is also seen at the 10h timescale, where the timing errors abruptly change sign (or phase by 2π). The convention used is that the XWT produces timing errors that are interpreted as modeled minus observed, i.e.,
positive values mean the model occurs after the observed. Positive 5h timing errors in Fig. 3c describe that the model is late compared to the observations as seen in the hydrographs in the top
panel (Fig. 3a).
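Equations (3)-(5) can be illustrated end to end with a simplified sketch. To keep it transparent, we use two pure sinusoids (period 48 h) offset by 5 h, and a single complex Fourier coefficient at the matching period stands in for $W_n(s)$; a real application would use the full wavelet transforms. All names are ours.

```python
# Sketch of Eqs. (3)-(5): cross spectrum, phase angle, and conversion of
# phase to a timing error in hours.
import numpy as np

T = 48.0                                   # Fourier period (h)
t = np.arange(0.0, 480.0)                  # hourly data, 10 full periods
obs = np.sin(2 * np.pi * t / T)
mod = np.sin(2 * np.pi * (t - 5.0) / T)    # model runs 5 h late

# Complex coefficients at period T (stand-ins for W^X and W^Y)
carrier = np.exp(-2j * np.pi * t / T)
Wx = np.sum(obs * carrier)
Wy = np.sum(mod * carrier)

Wxy = Wx * np.conj(Wy)                     # Eq. (3)
phase = np.arctan2(Wxy.imag, Wxy.real)     # Eq. (4)
timing_error = phase * T / (2 * np.pi)     # Eq. (5), hours
print(round(float(timing_error), 2))       # -> 5.0 (modeled minus observed)
```

The positive result recovers the convention described above: the model lags the observations, so the timing error is positive.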
3.2.3Step 2c – subset cross-wavelet timing errors to sampled observed events
Step 2b results in an estimate of timing errors for all times and timescales in the cross-wavelet transform space. In our application, we are interested in the timing errors that correspond to the
identified sample of observed events, especially for the maximum power events in each cluster for each characteristic timescale. In the synthetic Onion Creek example, the point of interest in the
wavelet transform of the observed time series, used to sample the timing errors produced by the XWT, is shown by the gray asterisk in Fig. 3c.
The results for the synthetic Onion Creek example are summarized in Table 2. For the identified characteristic timescale of 22h in the observed wavelet power (which had an average WT power of 555700m^6s^−2; see Fig. 2c on the right), there was one event cluster; the timing error at its cluster maximum was 5h, occurring at hour 37 of the time series.
3.2.4Step 2d – filter misses
The premise of computing a timing error between the observed and modeled time series is that they share common events which can be meaningfully compared. In a two-way contingency analysis of events,
a hit refers to when the modeled time series reproduces an observed event. When the modeled time series fails to reproduce an observed event, it is termed a miss. In the case of a miss, it does not
make sense to include the timing error in the overall assessment. Once the characteristic timescales of the observed event spectrum are identified and event cluster maxima are located, timing errors
are obtained at these locations in the XWT. In this step, the significance of the XWT on these event cluster maxima is used to decide if the model produced a hit or a miss for each point and to
determine if the timing error is valid. As previewed above, Fig. 3c shows the observed events (colors), and the dashed contour shows the intersection between the observed and XWT events. Regions of
intersection between observed events and XWT events are considered model hits, and observed events falling outside the XWT events are considered misses. Because we constrain our analysis to observed
events in the wavelet power spectrum, we do not consider either of the remaining categories in a two-way analysis (false alarms and correct negatives). We note that a complete two-way event analysis
could, alternatively, be constructed in the wavelet domain based on the Venn diagram of the observed and modeled events without necessarily using the XWT. We choose to use the XWT events because the
XWT is the basis of the timing errors.
In the synthetic example of Onion Creek, a single characteristic timescale and event cluster yields a single cluster maximum, as shown by the asterisk in Fig. 3c. Because this asterisk falls both
within the observed and XWT events, it is a hit, and the timing error at that point is valid (Table 2). For a longer time series, as seen in subsequent examples, a useful diagnostic and complement to
the timing error statistics at each characteristic timescale is the percent hits. When summarizing timing error statistics for a timescale, we drop misses from the calculation and the percent hits
indicates what portion of the time series was dropped (percent misses is equal to 100 − percent hits). In our tables, we provide timing error statistics for hits only.
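The hit/miss filter of step 2d can be sketched as below: a cluster maximum counts as a hit only if it falls inside the significant XWT events, misses are dropped, and the percent hits is reported alongside the retained timing errors. The arrays and names are toy illustrations.

```python
# Sketch of step 2d: filter misses and report percent hits at one timescale.
import numpy as np

def summarize_hits(maxima_idx, xwt_events_row, timing_row):
    """Return (percent hits, timing errors for hits only)."""
    hits = [n for n in maxima_idx if xwt_events_row[n]]
    pct = 100.0 * len(hits) / len(maxima_idx) if maxima_idx else np.nan
    return pct, [timing_row[n] for n in hits]

# Toy example: 3 cluster maxima; the middle one falls outside the XWT events
xwt_events = np.array([True, True, True, False, False, True, True])
timing = np.array([4.0, 5.0, 5.5, -9.0, 0.0, -3.0, -2.5])
pct, errs = summarize_hits([1, 3, 5], xwt_events, timing)
print(pct, errs)  # the miss's timing error is excluded from the statistics
```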
In the previous section, we illustrate the method using an isolated peak and a prescribed timing error. In this section, we demonstrate the method using NWM model simulations which introduce greater
complexity and longer time series. Finally, we show version-over-version comparisons for 5-year simulations to illustrate the utility for evaluation.
4.1Demonstration using NWM data
4.1.1Pemigewasset River, NH
This example uses a 3-month time series from the Pemigewasset River, NH, to examine multiple peaks in the hydrograph (Fig. 4a). It is fairly straightforward to pick out three main peaks with the
naked eye. From step 1 of our method, the wavelet transform is applied to the observations (Fig. 4b, left panel; Fig. 4c, left panel), revealing up to three event clusters, depending on the
characteristic timescale examined (Fig. 4d). When we plot the average event power by timescale (Fig. 4c, right panel), we see that there are nine relative maxima (small gray dots); hence, there are
nine characteristic scales for this example. The cluster maxima (gray asterisks) for each observed event cluster are shown in Fig. 4d.
Next, we compare the observed time series with the simulation from the NWM V1.2 (Fig. 5a) and follow step 2 of our method: (a) apply the cross-wavelet transform (Fig. 5b colors), (b) calculate the
timing error for all observed events from the phase difference (Fig. 5b arrows), (c) subset the timing errors to the observed cluster maxima (Fig. 5c asterisks), and (d) retain only modeled hits
(Fig. 5c asterisks within the dashed contours). Table 3 is ordered, by characteristic timescales, from highest to lowest average power; we only show the top five characteristic scales. The absolute
maximum of the time average event spectrum has a timescale equal to 24.8h; for cluster one, the model is nearly 11h late, and cluster two is early (−3.5h). Both are hits, and the average timing
error is 3.5h late. However, for the next timescale (27.8h), the third cluster maximum is a miss, so its timing error is reported as n/a (not applicable) and is not included in the average. This miss can be seen in Fig. 5c, where the cluster 3 asterisk falls just outside the XWT events for the 27.8h timescale. Moreover, this miss can also be interpreted from the comparison of the
hydrographs in Fig. 5a where the modeled third peak does not reasonably approximate the magnitude of the observed peak. Interestingly, while it is a narrow miss at the shorter timescale of 27.8h,
the associated (third) cluster maximum at the next most powerful characteristic timescale (33.1h) is a hit. This reflects that the hydrograph is insufficiently peaked for this event but does have
some of the observed, lower-frequency variability. Overall, the characteristic timescale of 33.1h has timing results similar to the 27.8h timescale, with the exception of the third cluster maximum.
This raises the question of whether these are distinct characteristic timescales. In Sect. 5, we discuss smoothing the time average event power by timescale to address this issue.
The characteristic timescale with the fourth-highest time-averaged power occurs at 111h, which is a different order of magnitude, suggesting that this may have a different physical process driving
it. At this timescale, the model is late in both event clusters (10 and 16h). Results are similar for the next timescale of 148h. We do not show results for the remaining four characteristic
timescales with lower average power, since they have similar characteristic timescale values and associated timing errors to what has already been shown.
4.1.2Taylor River, CO
In this example, we examine a 1-year time series from Taylor River, CO, that illustrates hydrograph peaks driven by different processes. The Taylor River is in a mountainous area where the spring
hydrology is dominated by snowmelt runoff. Figure 6a shows the time series from Taylor River, CO, where we can see the snowmelt runoff in spring and also several peaks in summer, likely driven by
summer rains. Figure 6b shows the WT and illustrates how missing data are handled; gaps introduce additional COIs (muted colors) to account for the edge effects, and areas within the COI are ignored in
our analyses.
From the statistically significant events in the WT, we see the peak in the characteristic timescales at 23.4h (Fig. 6c, right), and there are other maxima at the 99 and 118h timescales. The
process-based shift in dominant timescales is evident in the wavelet power (Fig. 6b and c). The 23.4h timescale is dominant before 1 July, during snowmelt runoff, and then shifts to the 99 and 118h
timescales, relating to flows from summer rains. In step 2, we compare the observed time series with the simulation from the NWM V1.2 (Fig. 7a); here, it is useful to magnify the spring melt season
time series (Fig. 8), where we see that the amplitude of the diurnal signal is too high, but it is hard to visually tell much about the timing error. Next, the cross-wavelet transform (Fig. 7b) and
timing errors are calculated (Fig. 7c). The results are summarized in Table 4. Starting with the dominant 23.4h timescale, we see that there are 11 clusters, that 73% (8 of 11 cluster maxima) are hits, and that the model is generally early (the mean is 6h early). For the 118 and 99h timescales, there are no hits. This suggests that we are confident in the timing errors
of the model for the diurnal snowmelt cycle, and these timing errors can be used as guidance for model performance and model improvements. However, the model does not successfully reproduce key
variability during the summer, and timing errors are not valid at this timescale. This underscores the key point that timing errors are timescale dependent and can help diagnose which processes to
target for improvements.
The term n/a stands for not applicable.
4.2Evaluating model performance
Finally, we show how the methodology can be used for evaluating performance changes across NWM versions. We point out that none of the NWM version upgrades were targeting timing errors, so these
results just provide a demonstration. We use 5-year observed and modeled time series at the three locations, namely Onion Creek, TX, Pemigewasset River, NH, and Taylor River, CO.
For Onion Creek, Table 5 summarizes the results for the three most important timescales, and Fig. 9 provides a graphical representation of these timing errors (hits only). For the dominant 29.5h
timescale and for all model versions, there were 19 cluster maxima, 89.5% of which were hits, with a median timing error of 1.4h early. However, the model shows progressively earlier timing errors
with increasing version (Fig. 9). The results are similar for the other two characteristic timescales.
For Pemigewasset River, Table 6 summarizes the results for the three most important timescales, and Fig. 10 provides a graphical representation of the timing errors (hits only). At this location, the
median timing error improved with NWM V1.2, moving closer to zero. While the distribution of the timing errors became less biased than the previous versions, it also became wider (Fig. 10). Over the
time series, there were between 59 and 76 event clusters. Interestingly, the hit rate for all timescales was best for NWM V1.1, though its timing errors are broadly the worst. From NWM V1.0 to
NWM V1.2, improvements to both hit rate and median timing errors were obtained at all timescales.
For Taylor River, Table 7 summarizes the results for the two most important timescales. For the characteristic timescale of 235h (∼10d), there are only four event clusters, and each model version
has only one hit. The timing of this hit improves by roughly half its error from NWM V1.0 to NWM V1.2 in going from 16 to 9h. The 23.4h timescale has 41 event clusters, with a hit rate varying
considerably by version. The median timing error is fairly consistent with version, however, ranging from 6 to 7h early.
5Discussion and conclusions
In this paper, we develop a systematic, data-driven methodology to objectively identify time series (hydrograph) events and estimate timing errors in large-sample, high-resolution hydrologic models.
The method was developed towards several intended uses. First, it was primarily developed for model evaluation, so that model performance can be documented in terms of defined standards. We
illustrate this with the version-over-version NWM comparisons. Second, it can be used for model development, whereby potential timing error sources can be diagnosed (by timescale) and targeted for
improvement. Related to this point, and given the advantages of calibrating using multiple criteria (e.g., Gupta et al., 1998), timing errors could be used as part of a larger calibration strategy.
However, minimizing timing errors at one timescale may not translate to improvements in timing errors (or other metrics) at other timescales. Wavelet analysis has also been used directly as an
objective function for calibration, although a difficulty arises in determining which similarity measure to use (e.g., Schaefli and Zehe, 2009; Rathinasamy et al., 2014). Future research will
investigate the application of the timing errors presented here for calibration purposes. Finally, the approach can be used for model interpretation and forecast guidance, as estimating timing errors
provides characterization of the timing uncertainty (i.e., for a given timescale, the model is generally late or early) or confidence.
Given the fact that several subjective choices were made specific to our application and goals, it is important to highlight that we have made the analysis framework openly available (detailed in the
code availability section below), so the method can be adapted, extended, or refined by the community right away. We look at timing errors from an observed event set relevant to our analysis, but
there are other ways to subset the events that might be more suitable to other applications. For example, we focus on the event cluster maxima, but one could also examine the event cluster means or
the local maxima along time. Another alternative to finding the event cluster maxima (i.e., for a given timescale) would be to identify the event with maximum power in islands of significance across
timescales, i.e., contiguous regions of contiguous significance across both time and timescale. This approach would ignore that multiple frequencies can be important at once. Moreover, defining such
islands is not straightforward. A different approach could be desirable if one suspected nonstationarity in the characteristic timescales over the time series. Then perhaps a moving average in
timescale could be employed to identify characteristic timescales. In our approach, we define the event set broadly. However, it could be subset using streamflow thresholds (e.g., for flooding
events) to compare events in the wavelet domain with traditional peak-over-threshold events. For example, Fig. 11 shows the maximum streamflows for the event set from the 5 year time series at Taylor
River. This figure shows that not all events identified by the algorithm are high-flow events (i.e., the maximum streamflow peaks are lower for the 23.4 h timescale than for the 235.6 h timescale). To compare with traditional peak-over-threshold approaches, this event set could be filtered to include only events above a given threshold (i.e., events in both the wavelet and
time domains).
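For illustration, such a threshold filter is a simple operation on an event table. The sketch below uses hypothetical field names and values, not the variables from the paper's code:

```python
# Sketch: filter wavelet-identified events by a streamflow threshold to
# mimic a peak-over-threshold subset. Field names and values are illustrative.
events = [
    {"timescale_h": 23.4, "peak_flow": 4.2},
    {"timescale_h": 23.4, "peak_flow": 11.8},
    {"timescale_h": 235.6, "peak_flow": 19.5},
]

def peak_over_threshold(events, threshold):
    """Keep only events whose maximum streamflow exceeds the threshold."""
    return [e for e in events if e["peak_flow"] > threshold]

high_flow = peak_over_threshold(events, threshold=10.0)
print([e["peak_flow"] for e in high_flow])  # [11.8, 19.5]
```

The result contains only events that qualify in both the wavelet domain (they were identified as events) and the time domain (they exceed the flow threshold).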
Another point that arises is how many characteristic timescales should be examined and the similarity of adjacent characteristic timescales. In our method, we average the power in timescales and
identify characteristic scales at every absolute and relative maximum. As seen in the illustrative examples, this can result in multiple characteristic scales, some of which can be quite similar,
suggesting that events at those scales are from similar or related processes. A solution could be to smooth the average power by timescale, which would reduce the number of local maxima, or to look
at timing errors within a band of timescales. It is also important to note that the characteristic scales are data driven, so they will change with different lengths of observed time series. Longer
runs capture more events and should converge on the more dominant timescales and events for a location. However, for performance evaluation, overlapping time periods for observed and modeled time
series are needed.
In our application of the WT, we follow Liu et al. (2011) and select the Morlet as the mother wavelet. However, results are sensitive to the mother wavelet selected. Further discussion of mother
wavelet choices can be found in Torrence and Compo (1998) and in ElSaadani and Krajewski (2017).
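As a toy illustration of the Morlet mother wavelet (not part of the paper's analysis code, which uses the biwavelet package), a minimal continuous wavelet transform can be sketched in plain NumPy; ω0 = 6 follows the common default in Torrence and Compo (1998), and the test signal is illustrative:

```python
import numpy as np

# Minimal sketch of a continuous wavelet transform with a Morlet mother
# wavelet (omega0 = 6). Illustrative only; not the biwavelet implementation.
def morlet_cwt(x, scales, dt=1.0, omega0=6.0):
    """Return wavelet power |W|^2 with shape (len(scales), len(x))."""
    n = len(x)
    power = np.empty((len(scales), n))
    t = (np.arange(n) - n // 2) * dt
    for i, s in enumerate(scales):
        # Normalized Morlet wavelet stretched to scale s
        psi = (np.pi ** -0.25) * np.exp(1j * omega0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        psi *= np.sqrt(dt / s)
        w = np.convolve(x, np.conj(psi[::-1]), mode="same")
        power[i] = np.abs(w) ** 2
    return power

# A sine wave with a 64-step period should peak near scale ~ 64 / 1.03
# (the Fourier factor for a Morlet wavelet with omega0 = 6 is about 1.03).
x = np.sin(2 * np.pi * np.arange(512) / 64.0)
scales = np.arange(4, 129)
power = morlet_cwt(x, scales)
best = scales[np.argmax(power.mean(axis=1))]
```

Averaging the power over time and locating the maxima, as done above with `argmax`, mirrors the idea of identifying characteristic timescales from the scale-averaged power.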
In summary, this paper provides a systematic, flexible, and computationally efficient methodology for calculating model timing errors that is appropriate for model evaluation and comparison and is
useful for model development and guidance. Based on the wavelet transform, the method introduces timescale as a property of timing errors. The approach also identifies streamflow events in the
observed and modeled time series and only evaluates timing errors for modeled events which are hits in a two-way contingency analysis. Future work will apply the approach to identify characteristic
timescales across the United States and to assess the associated timing errors in the NWM.
Code and data availability
The code for reproducing the figures and tables in this paper is provided in the GitHub repository at https://doi.org/10.5281/zenodo.4746587 (McCreight 2021), with instructions for installing
dependencies. The core code used in the above repository is provided in the “rwrfhydro” R package (https://doi.org/10.5281/zenodo.4746607; McCreight et al., 2021). The code is written in the
open-source R language (R Core Team, 2019) and builds on multiple, existing R packages. Most notably, the wavelet and cross-wavelet analyses are performed using the “biwavelet” package (Gouhier et
al., 2018).
We emphasize that the analysis framework is meant to be flexible and adapted to similar applications where different statistics may be desired. The figures created are specific to the applications in
this paper but provide a starting point for other work.
ET and JLM collaborated to develop the methodology. ET led the results analysis, prepared the paper, and did the revisions. JLM led the initial idea for the work and developed the open-source
software and visualizations.
The authors declare that they have no conflict of interest.
The authors would like to thank Dave Gochis, for the useful discussions, and Aubrey Dugger, for providing the NWM data. We thank the NOAA/OWP and NCAR NWM team for their support of this research.
This research has been supported by the Joint Technology Transfer Initiative grant (grant no. 2018-0303-1556911) and the National Oceanic and Atmospheric Administration R&D (contract
no. 1305M219FNWWY0382). This material is based upon work supported by the National Center for Atmospheric Research (NCAR), which is a major facility sponsored by the National Science Foundation (NSF;
grant no. 1852977).
This paper was edited by Matthew Hipsey and reviewed by Cedric David and Uwe Ehret.
Bogner, K. and Kalas, M.: Error-correction methods and evaluation of an ensemble based hydrological forecasting system for the Upper Danube catchment, Atmos. Sci. Lett., 9, 95–102, https://doi.org/
10.1002/asl.180, 2008.
Bogner, K. and Pappenberger, F.: Multiscale error analysis, correction, and predictive uncertainty estimation in a flood forecasting system, Water Resour. Res., 47, W07524, https://doi.org/10.1029/
2010WR009137, 2011.
Coles, S.: An Introduction to Statistical Modeling of Extreme Values, in: Springer Ser. Stat., Springer, London, 2001.
Daubechies, I.: The wavelet transform time-frequency localization and signal analysis, IEEE Trans. Inform. Theory, 36, 961–1004, 1990.
Ehret, U. and Zehe, E.: Series distance-an intuitive metric to quantify hydrograph similarity in terms of occurrence, amplitude and timing of hydrological events, Hydrol. Earth Syst. Sci., 15,
877–896, https://doi.org/10.5194/hess-15-877-2011, 2011.
ElSaadani, M. and Krajewski, W. F.: A time-based framework for evaluating hydrologic routing methodologies using wavelet transform, J. Water Resour. Protect., 9, 723–744, https://doi.org/10.4236/
jwarp.2017.97048, 2017.
Gochis, D., Barlage, M., Cabell, R., Dugger, A., Fanfarillo, A., FitzGerald, K., McAllister, M., McCreight, J., RafieeiNasab, A., Read, L., Frazier, N., Johnson, D., Mattern, J. D., Karsten, L.,
Mills, T. J., and Fersch, B.: WRF-Hydro^® v5.1.1, Zenodo [data set], https://doi.org/10.5281/zenodo.3625238, 2020.
Gouhier, T. C., Grinsted, A., and Simko, V.: R package biwavelet: Conduct Univariate and Bivariate Wavelet Analyses (Version 0.20.17), GitHub, available at: https://github.com/tgouhier/biwavelet
(last access: 12 April 2021), 2018.
Gupta, H. V., Sorooshian, S., and Yapo, P. O.: Towards improved calibration of hydrologic models: multiple and non-commensurable measures of information, Water Resour. Res., 34, 751–763, 1998.
Gupta, H. V., Wagener, T., and Liu, Y.: Reconciling theory with observations: elements of a diagnostic approach to model evaluation, Hydrol. Process., 22, 3802–3813, https://doi.org/10.1002/hyp.6989, 2008.
Gupta, H. V., Perrin, C., Blöschl, G., Montanari, A., Kumar, R., Clark, M., and Andréassian, V.: Large-sample hydrology: a need to balance depth with breadth, Hydrol. Earth Syst. Sci., 18, 463–477,
https://doi.org/10.5194/hess-18-463-2014, 2014.
Koskelo, A. I., Fisher, T. R., Utz, R. M., and Jordan, T. E.: A new precipitation-based method of baseflow separation and event identification for small watersheds (<50km^2), J. Hydrol., 450–451,
267–278, https://doi.org/10.1016/j.jhydrol.2012.04.055, 2012.
Lane, S. N.: Assessment of rainfall–runoff models based upon wavelet analysis, Hydrol. Process., 21, 586–607, https://doi.org/10.1002/hyp.6249, 2007.
Liu, Y., Liang, X. S., and Weisberg, R. H.: Rectification of the bias in the wavelet power spectrum, J. Atmos. Ocean. Tech., 24, 2093–2102, 2007.
Liu, Y., Brown, J., Demargne, J., and Seo, D. J.: A wavelet-based approach to assessing timing errors in hydrologic predictions, J. Hydrol., 397, 210–224, https://doi.org/10.1016/
j.jhydrol.2010.11.040, 2011.
Luo, Y. Q., Randerson, J. T., Abramowitz, G., Bacour, C., Blyth, E., Carvalhais, N., Ciais, P., Dalmonech, D., Fisher, J. B., Fisher, R., Friedlingstein, P., Hibbard, K., Hoffman, F., Huntzinger, D.,
Jones, C. D., Koven, C., Lawrence, D., Li, D. J., Mahecha, M., Niu, S. L., Norby, R., Piao, S. L., Qi, X., Peylin, P., Prentice, I. C., Riley, W., Reichstein, M., Schwalm, C., Wang, Y. P., Xia, J.
Y., Zaehle, S., and Zhou, X. H.: A framework for benchmarking land models, Biogeosciences, 9, 3857–3874, https://doi.org/10.5194/bg-9-3857-2012, 2012.
McCreight, J. L.: NCAR/wavelet_timing: Publication (Version v0.0.1), Zenodo, https://doi.org/10.5281/zenodo.4746587, 2021.
McCreight, J. L., Mills, T. J., Rafieeinasab, A., FitzGerald, K., Reads, L., Hoover, C., Johnson, D. W., Towler, E., Huang, Y.-F., Dugger, A., and Nowosad, J.: NCAR/rwrfhydro: wavelet timing tag
(Version v1.0.1), Zenodo, https://doi.org/10.5281/zenodo.4746607, 2021.
McKay, L., Bondelid, T., Dewald, T., Johnston, J., Moore, R., and Rea, A.: NHDPlus Version 2: user guide, National Operational Hydrologic Remote Sensing Center, Washington, DC, 2012.
Mei, Y. and Anagnostou, E. N.: A hydrograph separation method based on information from rainfall and runoff records, J. Hydrol., 523, 636–649, https://doi.org/10.1016/j.jhydrol.2015.01.083, 2015.
Merz, R., Blöschl, G., and Parajka, J.: Spatio-temporal variability of event runoff coefficients, J. Hydrol., 331, 591–604, https://doi.org/10.1016/j.jhydrol.2006.06.008, 2006.
Newman, A. J., Mizukami, N., Clark, M. P., Wood, A. W., Nijssen, B., and Nearing, G.: Benchmarking of a physically based hydrologic model, J. Hydrometeorol., 18, 2215–2225, https://doi.org/10.1175/
JHM-D-16-0284.1, 2017.
Niu, G.-Y., Yang, Z.-L., Mitchell, K. E., Chen, F., Ek, M. B., Barlage, M., Kumar, A., Manning, K., Niyogi, D., Rosero, E., Tewari, M., and Xia, Y.: The community Noah land surface model with
multiparameterization options (Noah-MP): 1. Model description and evaluation with local-scale measurements, J. Geophys. Res., 116, D12109, https://doi.org/10.1029/2010JD015139, 2011.
NOAA National Weather Service: NWS Manual 10-950, Definitions and General Terminology, Hydrological Services Program, NWSPD 10-9, available at: http://www.nws.noaa.gov/directives/sym/
pd01009050curr.pdf (last access: 8 May 2021), 2012.
Rathinasamy, M., Khosa, R., Adamowski, J., Ch, S., Partheepan, G., Anand, J., and Narsimlu, B.: Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models,
Water Resour. Res., 50, 9721–9737, 2014.
R Core Team: R: A language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria, available at: https://www.R-project.org/ (last access: 7 April 2021), 2019.
Schaefli, B. and Zehe, E.: Hydrological model performance and parameter estimation in the wavelet-domain, Hydrol. Earth Syst. Sci., 13, 1921–1936, https://doi.org/10.5194/hess-13-1921-2009, 2009.
Seibert, S. P., Ehret, U., and Zehe, E.: Disentangling timing and amplitude errors in streamflow simulations, Hydrol. Earth Syst. Sci., 20, 3745–3763, https://doi.org/10.5194/hess-20-3745-2016, 2016.
Torrence, C. and Compo, G. P.: A practical guide to wavelet analysis, B. Am. Meteorol. Soc., 79, 61–78, 1998.
Veleda, D., Montagne, R., and Araujo, M.: Cross-wavelet bias corrected by normalizing scales, J. Atmos. Ocean. Tech., 29, 1401–1408, 2012.
Weedon, G. P., Prudhomme, C., Crooks, S., Ellis, R. J., Folwell, S. S., and Best, M. J.: Evaluating the performance of hydrological models via cross-spectral analysis: case study of the Thames Basin,
United Kingdom, J. Hydrometeorol., 16, 214–231, https://doi.org/10.1175/JHM-D-14-0021.1, 2015.
Test Paper 2 - General - Other
Q1) for all X (roar(X) => lion(X))
a) all lions roar
b) some lions roar
Q2) on some polynomial bounds
Q3) on emitter-coupled logic
Q4) four questions on a given digital circuit
From the Kennedy book there are 5 questions.
Q5) one from receivers, one from digital comm, one on how coaxial cable repeater
distance depends on the channel BW. What is the use of IF?
Q6) If the channel BW is 35 kHz, what is the maximum freq of data you can transmit?
Q7) on a line with 2400 baud, the data you can transmit on it is
These are the questions in the 1st section.
Q1) void(int *a, int *b)
*b = *b ^ *a; /* ^ is exclusive OR */
this function gives the value
a) a & b values swapped
b) a & b unchanged
Q2) on inorder traversal in a binary tree
Q3) on black box testing
Q4) on fun(n)
unsigned long n = ~0; /* ~ is ones complement */
Output of this program segment is
a) it will give the word length on that machine
b) gives the max int value on the machine
Q1) a sentence given with blanks; we have to fill them with words
Q2) same as above
Q3) A question on relations
David is grandfather to Sue
Karen is sister to Jim
Jim is uncle to Eric and Sue
Jim is nephew to Larry
only a married couple can have children and blood relations can't marry
there are 4 questions on this
Q4) on data sufficiency
they will give a table of data
Philips BPL Onida Videocon
1989 data " " "
q1) whose growth is consistent
q2) whose is the highest growth
q3) lowest growth
Q5) GK question: ambassador means
Q6) who is where? ANS: Hague
Q9) they will give you a diagram; find the shortest path between some
points, like the Bellman-Ford algorithm in networks.
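The shortest-path question above references Bellman-Ford; for reference, a minimal sketch of that algorithm (with an illustrative graph, not the one from the test) looks like this:

```python
# A minimal Bellman-Ford sketch: shortest distances from a source node in a
# weighted directed graph. Handles negative edge weights (but assumes no
# negative cycles). The example graph below is illustrative.
def bellman_ford(edges, n, source):
    """edges: list of (u, v, weight); n: number of nodes; returns distances."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):          # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(bellman_ford(edges, 4, 0))  # [0, 3, 1, 4]
```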
Section 1 - Electronics and Mathematics - 20 questions
section 2 - Computer Science
section 3 - Aptitude
Electronics and computer science
1. A circuit was given with a lot of flip-flops etc., and the operation of that
circuit was asked.
2. 15 software functions are there. It is known that at least 5 of them are
defective. What is the probability that if three functions are chosen and
tested, no errors are uncovered?
Ans: (10/15) * (9/14) * (8/13)
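Assuming the intended reading is exactly 5 defective functions (so 10 good ones) and sampling without replacement, the answer can be checked in a couple of lines:

```python
from fractions import Fraction

# P(no defects in 3 draws from 15 functions, 10 of which are good):
# (10/15) * (9/14) * (8/13)
p = Fraction(10, 15) * Fraction(9, 14) * Fraction(8, 13)
print(p)         # 24/91
print(float(p))  # ~0.2637
```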
Computer Science
1. Java is
a) multithreaded b) interpreter c) compiler d) all of the above
Ans: d
2. The number of nodes in a k-level m-ary tree is:
A graph was given and questions regarding the shortest path were asked.
For one question the shortest path was asked and the answer is none of the above.
Overall the question paper was easy.
HR questions:
1. What are your strengths and weaknesses?
2. What are the values you respect?
3. Cite a reason why Philips should hire you.
4. What will you do if you are asked to manage a project which will definitely
miss its deadline?
Technical (for me)
1. What is runtime locatable code?
2. What are the volatile and register definitions in C?
3. What is a compiler and what is its output?
The Kingman Formula – Variation, Utilization, and Lead Time
The lead time of a system is heavily influenced by both the utilization and the variation. There are approximations available to estimate this relation, and one of them is the Kingman formula. In
this post I would like to introduce you to this equation and describe the fundamental understanding of it. Luckily, you don’t really need the formula for your daily dose of lean. The equation itself
has little practical use. However, this relationship is important for understanding the behavior of your production system. While you won’t use the Kingman formula to evaluate your production system,
understanding the equation will help you in tweaking your system in the right direction.
Lead Time
The lead time is the time it takes for a single part to go through the entire process or system. This is an important measure of the speed at which the system can react to changes. If you introduce
new products or product changes, the lead time is (on average) the time until these changes come out at the other end. If your customer orders a custom made-to-order product, this is the (average)
time your customer has to wait.
Hence, a lean production system aims to reduce the lead time. It is easy to determine the lead time for an existing system. You simply use Little’s Law. There are three variables, often labeled as
• L – Inventory, measured, for example, in units or quantity
• λ – Throughput, measured in units or quantity per time
• W – Lead time, measured in time
Little’s Law is then the simple relation shown below:
${ L= \lambda \cdot W}$
While Little’s Law tells you the lead time, it does not tell you much about how to influence it. Obviously, the main lever is to reduce your inventory (if you are active in lean manufacturing, you
may have come across this idea 😉 ). Yet, in most cases this inventory is there for a reason, mainly to buffer fluctuations (but also others, see my post Why Do We Have Inventory?).
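With hypothetical numbers, Little's Law gives the lead time directly:

```python
# Little's Law: L = lambda * W. Given any two of inventory, throughput,
# and lead time, the third follows. The numbers are illustrative.
inventory = 120.0                    # L, units in the system
throughput = 15.0                    # lambda, units per hour
lead_time = inventory / throughput   # W = L / lambda
print(lead_time)  # 8.0 hours
```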
The Kingman Equation
The Kingman equation (also known as Kingman formula or Kingman approximation) gives you an approximation of the waiting time of the parts for a single process based on its utilization and variance.
It was developed by British mathematician Sir John Kingman in 1961. As shown in the image, the parts arrive randomly, with a mean time μ[a] between arrivals and a standard deviation σ[a]. The parts
then wait in the queue until they are processed (i.e., serviced). The service time has an average duration of μ[s] and a standard deviation of σ[s]. The equation determines an estimation of the
waiting time (excluding the part in the process).
The equation includes the following variables, commonly written as:
• E(W): The expected waiting time W.
• μ[a] and σ[a]: Mean and standard deviation of the time between arrivals of parts. This includes the losses, i.e., it would NOT be the cycle time but rather the arrival takt time.
• μ[s] and σ[s]: Mean and standard deviation of the time to process one part. Please note that this usage is not standardized, and some equations use the symbol μ for the service rate, which would be the inverse of the mean time to process one part. Similar to the arrivals, this would also include the losses and hence be the mean and standard deviation of the process takt time.
• p: Utilization (i.e., the percentage of the time a machine is working). It is calculated by dividing the mean time for service μ[s] by the mean time between arrivals μ[a]. If the arrival is faster than the service, you would have a utilization above 100%, which is not possible. Hence μ[s] has to be smaller than μ[a]. Therefore, p can range from 0 to 1. (Note: If the parts arrive faster than they can be processed, then the waiting time will go toward infinity. Even if the arrival is exactly equal to the service, the waiting time will still go toward infinity.)
• c[a]: The coefficient of variation for the arrival (i.e., the standard deviation σ[a] divided by the mean μ[a] for the time between arrivals).
• c[s]: The coefficient of variation for the service (i.e., the standard deviation σ[s] divided by the mean μ[s] for the average duration of a service).
${ E(W) = \left ( \frac{p}{1-p} \right )\cdot \left ( \frac{ C_{a}^{2}+ C_{s}^{2} }{2} \right ) \cdot \mu_{s} }$
Please note that the Kingman equation requires independently distributed arrival and service times (which is usually valid for most manufacturing systems), and is valid only for higher utilizations
(which is also often true in manufacturing). Please note that this is also only an approximation, not a precise formula. And finally, it is only for a single arrival with a single process, which is
quite rare in practice.
An Example
Let’s look at an example. We have an arrival process with an average of 10 minutes between parts and a standard deviation of 8 minutes. Our service needs, on average, 8 minutes per part, with a
standard deviation of 7 minutes. We have a utilization of 8/10 or 80%. Our coefficients of variation are c[a] = 8/10 = 0.8 and c[s] = 7/8 = 0.875. Hence the equation is as follows, giving an
expected waiting time of 22.49 minutes:
${ E(W) = \left ( \frac{0.8}{1-0.8} \right )\cdot \left ( \frac{ 0.8^{2} + 0.875^{2} }{2} \right ) \cdot 8 = 22.49\ \text{minutes} }$
Again, this is only an approximation. The results depend heavily on the type of distribution. Using simulation for verification, actual waiting times were 20.8 minutes for Lognormal distributions,
21.19 minutes for Weibull distributions, and 18.16 minutes for the Pearson Type V distribution. Especially the Pearson Type V had a larger error, presumably because this distribution is heavy tailed
compared to the others.
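The calculation above, and a rough check against a queue simulation, fit in a few lines. The simulation below is an independent sketch using lognormal arrival and service times; it is not the exact verification setup behind the numbers quoted in the text:

```python
import math
import random

def kingman_wait(mu_a, sigma_a, mu_s, sigma_s):
    """Kingman approximation of the expected waiting time in the queue."""
    p = mu_s / mu_a        # utilization
    ca = sigma_a / mu_a    # coefficient of variation, arrivals
    cs = sigma_s / mu_s    # coefficient of variation, service
    return (p / (1.0 - p)) * ((ca**2 + cs**2) / 2.0) * mu_s

# The example from the text: 22.49 minutes
est = kingman_wait(mu_a=10.0, sigma_a=8.0, mu_s=8.0, sigma_s=7.0)

def simulate(mu_a, sigma_a, mu_s, sigma_s, n=200_000, seed=42):
    """Single-server FIFO queue with lognormal times; mean waiting time."""
    rng = random.Random(seed)
    def lognorm(mean, sd):
        # Lognormal draw with the given mean and standard deviation
        v = math.log(1.0 + (sd / mean) ** 2)
        return rng.lognormvariate(math.log(mean) - v / 2.0, math.sqrt(v))
    arrival = depart = total_wait = 0.0
    for _ in range(n):
        arrival += lognorm(mu_a, sigma_a)
        start = max(arrival, depart)   # wait until the server is free
        total_wait += start - arrival
        depart = start + lognorm(mu_s, sigma_s)
    return total_wait / n

sim = simulate(10.0, 8.0, 8.0, 7.0)
```

For these inputs, the simulated mean waiting time comes out somewhat below the Kingman estimate, consistent with the simulation results quoted above.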
Interpreting the Formula
This equation (or more precisely, this approximation) shows us the two factors that influence your lead time and your queue length. One important factor is the utilization. The higher your
utilization, the longer your queue. Eventually your queue will approach infinity as your utilization approaches 100%. This would be the first bracket of the equation above. The graph below shows the
waiting time for different utilization for the example above. The closer you get to 100% utilization, the closer you will get to an infinite queue length.
The second factor is the variation. The higher your variation, the longer your queue. This would be represented by the right bracket in the equation above. The image below shows this relation again
for our example above. The standard deviation was varied from 0% of the mean to 300% of the mean. You can see clearly how the waiting time increases.
Finally, these two parts are not added but multiplied with each other. Hence, while a high value in each is not good, a combination thereof is even worse. This is visualized in the chart below.
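The utilization effect in the charts can be reproduced numerically for the example values; sweeping p makes the blow-up near 100% utilization obvious:

```python
# Sweep the utilization for the example (c_a = 0.8, c_s = 0.875, mean
# service time 8 min) and watch the waiting time explode as p -> 1.
ca2, cs2, mu_s = 0.8 ** 2, 0.875 ** 2, 8.0
waits = {}
for p in (0.5, 0.8, 0.9, 0.95, 0.99):
    waits[p] = (p / (1 - p)) * ((ca2 + cs2) / 2) * mu_s
    print(f"p = {p:.2f}  ->  E(W) = {waits[p]:6.1f} min")
```

Going from 80% to 99% utilization multiplies the expected waiting time by roughly a factor of 25 in this example.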
What Does This Mean?
This means, most of all, two things if you want to have a reasonable lead time or queue length:
• Stay away from 100% utilization. The lower your utilization, the shorter your lead time. Of course, a higher utilization also means a better use of the invested capital. Here you have to make a
trade-off between a good usage of your machines with high utilization and a low lead time with low utilization.
• Try to reduce variation. Lower variation allows you to get away with lower inventories. Of course, this is easier said than done. The idea of leveling in lean manufacturing aims to reduce this
variation to obtain faster lead times and other benefits.
• If you have high variability, try to use lower utilization. If your workshop has a high demand on flexibility, you can ease the pressure by reducing the utilization.
• If you have high utilization, try to reduce variability. If your workshop has a very high utilization, it may help to reduce the variability, although this is often easier said than done.
Alternative Calculations
The Kingman formula is the best known version to estimate waiting time, but by far not the only one. In the 1960s, lots of researchers developed formulas, some of which were more precise but a bit
more cumbersome to handle. (For sources, see below.) W. G. Marchal published the following formula in 1976:
${ E(W) = \left ( \frac{p ^2 \cdot \left ( 1 + C_{s}^{2} \right ) }{1+p^2 C_{s}^{2}} \right )\cdot \left ( \frac{ C_{a}^{2} + p^2 C_{s}^{2} }{2\cdot \left ( 1-p \right )} \right ) \cdot \mu_{a} }$
Almost simultaneously, Kramer and Langenbach-Belz published the following formula also in 1976:
${ E(W) = \left ( \frac{p ^2 \cdot \left ( C_{a}^{2} + C_{s}^{2} \right ) e^g }{2\left ( 1-p \right )} \right ) \cdot \mu_{a} }$
${ g= \frac{ -2 \left ( 1-p \right )\left ( 1-C_{a}^{2} \right )^2}{3p \left ( C_{a}^{2} +C_{s}^{2} \right )} }$
Both formulas are (reportedly) a bit more precise, but also not as nice to show the effects of variation for our purpose.
Of course, if both your arrival and your service distributions are exponentially distributed (known as a M/M/1 queue in the queuing theory), then the following equation is a precise calculation.
Unfortunately, you cannot make this assumption in the real world.
${ E(W)= \frac{ p^2}{1-p} \cdot \mu_{a}}$
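As a rough numerical check, the running example (p = 0.8, c[a] = 0.8, c[s] = 0.875) can be plugged into all three approximations. Here the Marchal and Kramer–Langenbach-Belz expressions are multiplied by the mean inter-arrival time μ[a]: both derive from dimensionless queue-length approximations, and dividing a queue length by the arrival rate converts it into a waiting time.

```python
import math

# Evaluate the three approximations for the running example
# (p = 0.8, c_a = 0.8, c_s = 0.875, mu_a = 10 min, mu_s = 8 min).
p, mu_a, mu_s = 0.8, 10.0, 8.0
ca2, cs2 = 0.8 ** 2, 0.875 ** 2

kingman = (p / (1 - p)) * ((ca2 + cs2) / 2) * mu_s

marchal = (p**2 * (1 + cs2) / (1 + p**2 * cs2)) \
          * ((ca2 + p**2 * cs2) / (2 * (1 - p))) * mu_a

g = -2 * (1 - p) * (1 - ca2) ** 2 / (3 * p * (ca2 + cs2))
klb = (p**2 * (ca2 + cs2) * math.exp(g) / (2 * (1 - p))) * mu_a

print(round(kingman, 2), round(marchal, 2), round(klb, 2))
```

All three land in the low twenties of minutes for this example, i.e., close to each other and to the simulated waiting times quoted earlier.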
Anyway, this is all the math for today. Again, the formula is of little practical use, as it is only for a single arrival and a single process with an infinite queue, but the relationship it shows is
important. The formula can also be used for single arrival and single process for batches of parts or job orders. In sum, stay away from very high utilizations, and from high variability if you can.
Now, go out, find your trade-offs to control your lead time, and organize your industry!
• Kingman, J. F. C.: “The single server queue in heavy traffic.” Mathematical Proceedings of the Cambridge Philosophical Society. 57 (4): 902. October 1961
• Marchal, W. G.: An approximate formula for waiting time in single server queues, AIIE Trans. 8 (1976) 473.
• Kramer, W. and Langenbach-Belz, M.: Approximate formulae for the delay in the queueing
system GI/G/1, Congressbook, 8th Int. Teletraffic Congress, Melbourne, 1976, pp. 235.1–235.8.
16 thoughts on “The Kingman Formula – Variation, Utilization, and Lead Time”
1. Have you heard of Big’s Law? It is the reverse of Little’s Law and measures the number of inventory turns.It is defined as the ratio Throughput/Inventory and is very well known.
This is, obviously, a bad joke. I leave up to you to publish this comment.
The rest of the post, as usual, excellent.
Kind regards.
2. Good article. Just sent this to my MBB to review and look from a different point of view at some of the issues in the warehouse.
3. Hi Chris,
References to Kingman are always welcome in production! Wrote about the same topic July 2016. I have even tried to put in some more lessons and a link to a simulation on the web where you play
with some parameters to experience the effects. You can find my post about it here: http://dumontis.com/2016/07/muri-mura-kingman/
MfG, Rob
4. Great article Chris. See also https://www.linkedin.com/pulse/supply-chain-flow-what-why-how-simon-eagle/?published=t on similar theme.
5. Hello Rob & Simoneagle, good posts on the topic by you, too. Without knowing I even used a similar cover picture as Rob. Interesting coincidence!
6. Hello Chris,
Excellent web and great post.
Perhaps it might be interesting to point out: 1) that the Kingman formula is applicable to both parts and batches/jobs and 2) that in the calculation of “μs” and “cs” must include the time lost
by reference change, breakdowns, reprocessing, lack of operator in the service (sources of the service time variation).
7. Hello Francisco, Good Points. I added these good points to the post above. thanks for the input 🙂
9. All your formulas for alternative calculations use Mu_a. However, in your equation for the Kingman approximation, you used Mu_b. I think the correct result for the Kingman should use, just
like all the other approximations, Mu_a.
In Wikipedia it shows Mu_b, but in other papers, it shows as Mu_a.
Since Little’s law is
W=L x M_a
I would think maybe the right answer is to use M_a
10. Alberto: You are correct! I just checked my original Marchal paper, and he only calculates the waiting time, not the queue length, hence without the mu’s. For the queue length the mu-s is
obviously correct. Thank you very much for picking up that mistake!
11. Hi Chris,
You mention Little’s law in you article and I am curious to know what the difference is between Little’s Law and the Kingsman formula, mainly the purpose of both. Am I right to say that Little’s
law is showing the correlation between WIP and Flow? And Kingsman formula looks at waiting times taking utalisation & variance in consideration? Would Little’s Law more suitable for multiple
services and variances?
13. Hello,
I have one doubt: let's assume that it is not possible to reduce variation. If there are several sequenced workstations and we want to reduce utilization, with process time 1 > process time 2 > …
So is the line takt time defined by workstation 1?
If there are 20 workstations, should there be a progressive process time reduction from one workstation to the next?
14. Hi Hector, the line takt depends on a lot of things. One is the bottleneck process. But this is also influenced by fluctuations, which causes bottlenecks to shift between processes. This in turn
is also influenced by inventory buffers between processes. The line takt is best measured directly, as it is hard to make a theoretical calculation.
15. Hi Christoph, working in IT the comparison with Lean manufacturing is interesting. I came across this article (https://fred.stlouisfed.org/series/MCUMFN) that, from what I understand, says that
the capacity utilization in manufacturing is about 75%. I was wondering if that should be interpreted as that 25% of the time workers in factories are waiting? Or is it rather that companies are
only utilizing 75% of their factories capacity, and to get to 100% they would need to bring in more people?
I realize you may not be fully aware of the data that the graph is based on, but what is your experience from having visited lots of factories?
16. Hi Dennis, capacity utilization usually refers to machines, not workers. It is also a question on how it is defined. It could be that the machines make 75% of what they could do in theory (I.e.
OEE), but more likely the machines are running 75% of the time, including waiting for material and small breaks.
When jealousy occurs --> Girl, get your own man!!
Answer honestly
Most teenage relationships don't succeed because both the girl and the boy are young and they don't really know what they are doing.. so we find the guy talking to different girls and so does the girl.. but
Have you ever been attacked by another girl asking you to stay the hell away from her man?
Have you ever gotten into a fight with a girl over your boyfriend? And what do you really think about the idea of fighting over a man?
Unreal Heart Sweetheart, I wouldn't fight over a man; if another sister wants him and he wants her, let her have him. Why bother!!
Ok let's stop right there. Plz, what is your name, OG? I am hoping very much that you will fight for what is right for you, and UH, plz do also fight for what is right for you. If every little bit you want is being taken away from you by someone who happens to be just as womanly as you can be, what does that tell you? It shows me that you 2 will never get hitched and perhaps will be babysitting the children of the man who has been stolen from you. LoL. Fight, fight for what is rightfully yours. You are making your men very cheap and not worth being fought for. Shame Shame Shame.
Yes, do not stoop to that kind of ********* . One owns no one and one has choices in life; it is up to one to make a right or a dumb choice. You have a good head on your shoulders, so stick to what
you believe in….. Attitude is everything in life… and who wants a wishy-washy person in their lives anyway!
So you want to fight for your girl, huh, even if she's consented? Bro, keep it up and see where that will take you. I hope you have lots of money and a good lawyer to bail you out, not to mention the
jail time. But what if the guy that you fought with slapped you with a restraining order and you were barred from going anywhere near him, and you found him talking to your girl? Certainly you cannot approach
for fear of jail. What are you going to do? Beat up your girl? You are a jealous man in the making and maybe, just maybe, a future wife beater. You need a major attitude adjustment!
I have been called names before, but not this one, Wife Beater. Not in my blood, are you kidding me? You are talking to someone who is married to 2 wives and about to hitch the third one, so I am way out
there to be called a wife beater, but I gather it is a slip of your tongue, is it? I hope so. And by the way, do I sense you are imitating more toward western culture than yours, or are you kind of showing off
to score points? Sorry, I am not interested in it, just telling it the way I see it. You have to fight for what is yours, that is if you are interested; if not, then don't bother mentioning it, get it. And plz, this is
just a thread, don't take it seriously or you will faint in here.
No I didn’t take you seriously but the message. I do understand this is a thread. I was just responding to your posting and took a different approach. Remember there may be teenagers reading these
messages and don’t you think we should be a bit responsible in posting messages?
By the way am I supposed to congratulate in having multiple wives? You must be working your butts off ..lol! Good luck with the fourth one.
diamond princes,
be afraid, be really afraid unless you already are one of the wives!
DiamondP, well ty for the rescue , this is the perfect timing,LoL
Qori did i hear ou said kids are reading this thread,damn that is nice to hear cz they will learn how civilized we are,and perhaps they will pm you not to call ppl ,Wife beater.LoL
Don’t misquote me for some thing I did not say. I did not call you a wife beater but have a potential to be one. There is a difference. But if I offended you, my apologies….truce!
As for the kids, they already know that we Somalis are uncivilized. I don’t want them to imitate us.
first of all
if me and da guy were seriously and some other girl wanted him.str8 up
i would ask him who wants..and if he picks her
den imma let him go..cuz aint no point in fighting for a guy who doesn't want you
and there are plenty of other guys out there
never fight over a guy
jus make him choose
Originally posted by Zakareye:
Ok let's stop right there,plz what is your name OG i am hoping very much that you will fight what is right for you,and UH,plz do fight also what is right for you, if every little meat you want is
being taken awayfrom you,by some one who happens to be just as womanly as you can be,
sweetheart, i am way too good to fight over a man sorry but men fight over me not the opposite ok ???
sweetheart, i am way too good to fight over a man sorry but men fight over me not the opposite ok ???
OK , OG_GL ,Say no more
diamond princes,
be afraid, be really afraid unless you already are one of the wives!
What is that suppose to mean? I can't look out for someone without being his wife? :confused:
DiamondP,don't u think it is time for u to tell us what city you at? I like to know | {"url":"https://www.somaliaonline.com/community/topic/49524-when-jealousy-accur-girl-get-your-own-man/","timestamp":"2024-11-04T11:30:58Z","content_type":"text/html","content_length":"283842","record_id":"<urn:uuid:e21d1687-a400-45ef-a725-b374073a120e>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00598.warc.gz"} |
Unit Plan: Integers
Math / Grade 7
Big Ideas
Computational fluency and flexibility with numbers extend to operations with integers and decimals.
Essential Questions
Students will keep considering…
• What is balance?
• How do you achieve balance?
• How does change affect balance?
• How does change affect quantity?
Monitoring Progress
Teacher will monitor progress:
Teachers can monitor progress through ongoing formative assessment including but not limited to:
• Check ins
• Teacher observation
• Homework
• Quizzes
How will teachers and their students reflect on and evaluate the completed project?
Teacher Reflection
• What aspects of the unit went well?
• What did students struggle with?
• What did you struggle with?
• What would you add/revise the next time you taught this unit?
• Were there any unintended outcomes?
• Were students engaged?
• These learning events/activities are suggested activities only.
• In some cases the plans are not fully completed lesson plans.
• The teacher may choose some lessons/activities to span over several lessons.
• Teachers may add, revise and adapt these lessons based on the needs of their students, their personal preferences for resources, and the use of a variety of instructional techniques.
Learning events are enriched for students when teachers consider the “WHERE TO” acronym and guiding organizer by Wiggins and McTighe.
The Learning Events should always be prefaced by focusing on the essential questions.
Day One - What Is an Integer?
The Learning Events should always be prefaced by focusing on the essential questions:
• What is balance?
• How do you achieve balance?
• How does change affect balance?
• How does change affect quantity?
What Is an Integer? (Day One)
1) To establish where students are at with their understanding of integers, ask students to brainstorm what they already know about integers – what they are, where they may have seen them, real-life
applications, operations you can perform with them.
2) Guide students to understand:
• integers are positive and negative numbers
• on a number line, the negative integers are to the left of zero and the positive integers are to the right of zero
• integers come in ‘opposite pairs;’ for example, the opposite of +3 is -3. This is because they are both an equal distance from zero on the number line
• integers have many real-life applications – golf, temperature, altitude (above and below sea level), finance, etc.
• you can work with them – add, subtract, multiply and divide them
3) Using red and yellow integer tiles, have students model integers. The red ones represent negative numbers, while the yellow represent positive.
4) Start by having the students work in partners to model a positive integer (i.e. +3) with the positive tiles only.
5) Repeat by asking students to model a negative integer, (i.e. -4), using only negative tiles.
6) Next, ask students to model a positive integer again, but tell them they have to use negative tiles as well. Answers will vary. Expect students to struggle at first with this.
7) Guide students to understand the concept of a zero pair (one red tile, and one yellow tile represent +1 and -1. This makes a zero pair because if you add the two together, they make zero).
8) Ask them to make integers using zero pairs. Reinforce that the number of zero pairs you use doesn’t matter. How many you have left over after you cancel out the zero pairs does – this is what
makes the actual number. For example:
9) Have students practice making many integers using zero pairs
10) Paper pencil practice of making integers. For example:
• Give students a target integer and ask them to find as many different ways as they can to make it, using zero pairs
• Textbook practice (Math Makes Sense 7, or Math Focus 7, or Math Links 7 has some)
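The zero-pair idea in steps 7–10 can be double-checked with a few lines of Python (a teacher-side sketch only; the function and its name are illustrative, not part of the lesson plan):

```python
def tile_value(pos_tiles, neg_tiles):
    """Cancel zero pairs and return the integer the tiles represent."""
    zero_pairs = min(pos_tiles, neg_tiles)  # each (+1, -1) pair sums to zero
    return (pos_tiles - zero_pairs) - (neg_tiles - zero_pairs)

# Three different tile models of the same target integer, +3
print(tile_value(3, 0))   # 3
print(tile_value(5, 2))   # 3
print(tile_value(10, 7))  # 3
```

However many zero pairs the tiles contain, only the leftover tiles set the value – the key point of step 8.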
Day Two - Adding Integers with Integer Tiles
The Learning Events should always be prefaced by focusing on the essential questions:
• What is balance?
• How do you achieve balance?
• How does change affect balance?
• How does change affect quantity?
Adding Integers with Integer Tiles (Day Two)
1) Review how to make integers using zero pairs. Tell students that when they do this, they are actually adding integers. Have students model +4 with zero pairs. Model what the addition sentence
would be for the various ways they made it.
2) Repeat this activity with other integers. Possible ideas:
• Provide one ‘target’ number and make as many addition sentences as you can
• Have students create their addition sentences with tiles, and have others identify the corresponding addition sentence
• Provide images of integer tiles and have students identify corresponding addition sentence
3) Textbook practice – (Math Makes Sense 7, or Math Focus 7, or Math Links 7 has some)
Day Three - Adding Integers with a Number Line
The Learning Events should always be prefaced by focusing on the essential questions:
• What is balance?
• How do you achieve balance?
• How does change affect balance?
• How does change affect quantity?
Adding Integers with a Number Line (Day Three)
1) The purpose of this lesson is to show students another way to add integers. Show students a ‘regular’ addition sentence and a number line (i.e. 3+4 = 7). Ask them to explain how they would
typically use this number line to show how to solve it. Look for:
• The first number (3) shows where to start on the number line
• Adding means make the number bigger, so go right on the number line
• The second number (4) shows how far to ‘jump’ from the starting number
• Where the line/arrow ends, is the sum (7)
2) Next, provide students with an integer addition sentence (i.e. (-2) + (+5) = (+3) ) and a large number line on the board. Ask them to think about what they already know about addition and think
about how we might show this one on a number line. It is important that students know the sum, in order to do this at first. In other words, they need to know where they are going, in order to get
there. Allow students to struggle with this for a little bit and then show them the process:
• Start at zero
• Draw an arrow from zero to the first number (-2)
• Look at the sign on the second number – NOT the operation. If it is positive, we are making the number bigger, and if it is negative, we are making the number smaller. Bigger means move right on
the number line, and smaller means move left.
• In this case, with (+5), draw an arrow from (-2) that moves five spaces to the right.
• The arrow should now point to the sum (+3).
3) Repeat this activity with other addition sentences. Provide students with number lines they can work on
4) Other activities (these can be done on paper, or vertically on whiteboards and windows):
• Give students various addition sentences and have them model them on number lines; do this in partners
• Have students model addition sentences on number lines and get other students to identify the addition sentences that go with them.
• Textbook practice (Math Makes Sense 7 has some)
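The number-line procedure above can also be sketched in Python for checking answers (an illustrative helper, not part of the lesson plan):

```python
def add_on_number_line(first, second):
    """Trace the number-line steps for an integer addition sentence."""
    start = 0
    after_first = start + first  # arrow from zero to the first number
    end = after_first + second   # a positive second number moves right,
                                 # a negative one moves left
    return [start, after_first, end]

print(add_on_number_line(-2, 5))  # [0, -2, 3]
print(add_on_number_line(3, 4))   # [0, 3, 7]
```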
Day Four - Adding Integers Review Using Magic Squares
The Learning Events should always be prefaced by focusing on the essential questions:
• What is balance?
• How do you achieve balance?
• How does change affect balance?
• How does change affect quantity?
Adding Integers Review Using Magic Squares (Day Four)
1) Put students in random groups of two or three and have them work vertically on whiteboards or windows to work with the magic squares
2) Explain to students what a magic square is – a grid, where each row, column and diagonal has the same sum.
3) Give students the following magic square and have them work to solve it. Ensure students show their thinking around the magic square.
4) Once students complete the first one, give them the next one.
5) To extend, ask students to make their own 3 x 3 magic square, then continue to extend to larger magic squares (4 x 4, 5 x 5, etc…)
6) Follow up activity: Diamond sheet on adding integers (see below – imagine there is a box in the middle of the page where an integer addition sentence is written). The goal of this activity is to get students to communicate their understanding in various ways, using the curricular competencies.
Represent Visually Explain in Words:
Describe a Real-Life Scenario A Question You Still Have
Day Four - Subtracting Integers with Tiles
The Learning Events should always be prefaced by focusing on the essential questions:
• What is balance?
• How do you achieve balance?
• How does change affect balance?
• How does change affect quantity?
Subtracting Integers with Tiles (Day Four)
1) Using the red and yellow tiles again, ask students to model (+4) with positive tiles. Ask students to take away (+1). What is the difference? Repeat with (-4) – (-1).
2) Have students model (+4) again with positive tiles. Ask them to take away (-1) now. Let students struggle with how to do this for a bit. Watch for what students do. They may realize they do not
have any negative tiles to take away. They may add a negative tile. Walk around the room and remind students what they have modelled with their tiles, if they do this. For example, if they started
with four positive tiles, and add a negative, they only have (+3) now, as they created a zero pair with one of the existing tiles.
3) After a bit, redirect students to watch you. Model (+4) on the board. Ask students if they actually have any negatives to take away. Point out that they do not, so where are they going to come
from? Remind students that you can add zero pairs to integers, without changing the integer. This keeps things balanced. In a subtraction sentence, the first number tells you what you start with
(in this case, +4), and the second number tells you how many to take away. Sometimes you have enough (as was the case in the very first examples), and sometimes you do not (as is the case in this
example). When you do not have enough, the second number tells you how many zero pairs you need to add in, in order to get what you need to take away.
• For example, in this question, (+4) – (-1), you start with four positive tiles. You do not have any negatives to take away, so you need to add one zero pair (because the second number is (-1).
Now, you can take away one negative, and you are left with five positives, so the difference is (+5).
• Repeat this with various examples like this question (i.e. (+5) – (-6) or (-2) – (+4))
4) Next, ask students to model a question like this: (+4) – (+6)
• After practicing with the above examples, students will be tempted to add six zero pairs, since the second number is 6. Allow them to do this, but ask them what they have left when they take the
six positives away. They should notice that something isn’t working now.
• With the whole group, model (+4). Remind them that the second number is telling them how many to take away. In this case, it is six positives. Guide them to realize they already have four
positives, so they don’t need to add six zero pairs, they only need to add two of them in order to have enough positives to take away.
5) Guide students to realize that when you are subtracting integers, you need to think about the following:
• How many are you starting with?
• How many do you need to take away?
• Do you have enough? If yes, then take away.
• If not, but you have some, add enough zero pairs, so you have enough to take away (some of them)
• If not, and you don’t have any, add enough zero pairs, so you have enough to take away (all of them)
6) Do LOTS of practice with these. There are practice questions in Math Makes Sense 7, or Math Focus 7, or Math Links 7 to support.
7) Follow up activity: Diamond sheet on subtracting integers (see below – imagine there is a box in the middle of the page where an integer subtraction sentence is written). The goal of this activity is to get students to communicate their understanding in various ways, using the curricular competencies.
Represent Visually Explain in Words:
Describe a Real-Life Scenario A Question You Still Have
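The "do you have enough tiles?" decision steps in item 5 can be sketched as a small Python helper (a teacher-side sketch; the function is illustrative and not part of the lesson plan):

```python
def zero_pairs_needed(start, take_away):
    """How many zero pairs must be added before 'take_away' tiles can be
    removed from the tiles modelling 'start'?"""
    if start == 0 or start * take_away < 0:
        # no tiles of the right sign on the table: one pair per tile removed
        return abs(take_away)
    # same sign: only top up the shortfall, if any
    return max(abs(take_away) - abs(start), 0)

print(zero_pairs_needed(4, -1))  # (+4) - (-1): add 1 zero pair
print(zero_pairs_needed(4, 6))   # (+4) - (+6): add 2 zero pairs
print(zero_pairs_needed(4, 1))   # (+4) - (+1): add none
```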
Day Five - Multiplying Integers with Tiles
The Learning Events should always be prefaced by focusing on the essential questions:
• What is balance?
• How do you achieve balance?
• How does change affect balance?
• How does change affect quantity?
Multiplying Integers with Tiles (Day Five)
1) Show students a simple multiplication expression (i.e. 3 x 2). Ask them to model it with integer tiles. Pay attention to how students model it (the arrangements they use). They will likely have
six yellow tiles out, in various organizations – perhaps one row of six tiles, perhaps in two groups of three, perhaps in three groups of two. Ask students what the expression actually means. Is it
the same as 2 x 3? Students will likely say yes, however, what is the same is the answer, not what it actually represents. Remind students that 3 x 2 can be read as “three groups of two,” and 2 x 3
is read as “two groups of three.” Ask students to then revisit their models – what have they actually modelled? Is it three groups of two? Is it three groups of positive two?
2) Next, have students model (+3) x (-2). Remind them that this means “three groups of negative two.”
3) Next, ask students to model (-3) x (+2). Students will likely model this the same way they did for (+3) x (+2). Some students will notice that there is now a negative sign involved and will
start to incorporate negative tiles, and you will need to guide them to remember how to count the tiles – that they may be making zero pairs and not actually making the correct product. Allow
students to struggle with this for a bit.
4) After some struggle, ask students what 3 x 2 is. They will know that 6 needs to be involved in the answer to the (-3) x (+2) question. But what about the negative? Remind them about how to say
it; that it is somehow referencing three groups of two, but again, what about the negative? Guide students to understand that the negative means you have to take away three groups of positive two.
But where are you going to take these groups from? This is where the six comes from. We know that 3 x 2 is 6, so we need to start with 6 zero pairs. Then take away the three groups of positive
two. This will leave you with six negatives, or -6.
5) Repeat this with (-3) x (-2). This means “take away three groups of negative two.” Again, start with 6 zero pairs, because 3 X 2 is 6 and then take away three groups of negative two. You will
be left with six positives, or + 6.
6) Repeat this activity with many different multiplication sentences, to have students practice the process.
7) Note – at this point, some students may notice, or start to generate a ‘short cut’ or rule for multiplication. Allow this to happen, but don’t focus too much on it. It’s more important for
students to know what is happening through the process, than memorizing a short cut or rule.
8) After spending time with the tiles, move to paper and pencil practice. There are resources for this in Math Makes Sense 8, Math Focus 8 and Math Links 8.
9) Follow up activity: Diamond sheet on multiplying integers (see below – imagine there is a box in the middle of the page where an integer multiplication sentence is written). The goal of this activity is to get students to communicate their understanding in various ways, using the curricular competencies.
Represent Visually Explain in Words:
Describe a Real-Life Scenario A Question You Still Have
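The "groups of" reading of multiplication used in this lesson can be simulated in a short Python sketch (illustrative only; the function name is made up):

```python
def multiply_with_tiles(a, b):
    """Read a x b as 'a groups of b'. A negative first factor means
    starting from |a| * |b| zero pairs and taking away |a| groups of b,
    which flips the sign of the result."""
    total = sum([b] * abs(a))           # |a| groups of b
    return total if a >= 0 else -total  # taking groups away flips the sign

print(multiply_with_tiles(3, -2))   # -6: three groups of negative two
print(multiply_with_tiles(-3, 2))   # -6: take away three groups of positive two
print(multiply_with_tiles(-3, -2))  # 6: take away three groups of negative two
```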
Day Six - Dividing Integers - Connecting to Fact Families
The Learning Events should always be prefaced by focusing on the essential questions:
• What is balance?
• How do you achieve balance?
• How does change affect balance?
• How does change affect quantity?
Dividing Integers – Connecting to Fact Families (Day Six)
1) Once students are feeling confident with multiplication, move on to division. Remind students that division is the inverse of multiplication and review what a fact family is. Have students
generate the fact family that goes with 3 + 2. Ask students what they notice about the numbers in the fact family (they are the same three numbers, just rearranged in different orders with different
operations). Review a multiplication and division fact family as well (i.e. 2 x 5).
2) Next, have students look at fact families involving integers. Guide students to notice that a ‘regular’ fact family only involves three numbers, while an integer fact family will have more, since
you can alternate the positive and negative sign in front of the numbers. Work with students to generate the fact family for 2 x 5, involving all the integers.
(+2) x (+5) = (+10) (-2) x (+5) = (-10) (-10) ÷ (+5) = (-2) (+10) ÷ (+5) = (+2)
(+2) x (-5) = (-10) (-2) x (-5) = (+10) (-10) ÷ (-5) = (+2) (+10) ÷ (-5) = (-2)
(+5) x (+2) = (+10) (-5) x (+2) = (-10) (-10) ÷ (+2) = (-5) (+10) ÷ (+2) = (+5)
(+5) x (-2) = (-10) (-5) x (-2) = (+10) (-10) ÷ (-2) = (+5) (+10) ÷ (-2) = (-5)
3) Once the fact family is made, ask students what they notice about it. Guide students to notice that there are more facts involved and have them speculate as to why. Also have them look at the
division facts. Guide students to notice what is happening with the integer signs. Have students generate the ‘rule’ or short-cuts for multiplication and division of integers at this point.
• A negative times/divided by a negative gives you a positive.
• A negative times/divided by a positive gives you a negative.
• A positive times/divided by a negative gives you a negative.
• A positive times/divided by a positive gives you a positive.
4) Have students now create their own integer fact families, showing the relationship between multiplication and division. As an extension, have students create fact families using decimals, or
larger numbers. Reinforce that the process is the same, no matter what kinds of numbers you use.
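The short-cut rules generated in item 3 can be checked with a two-line Python function (illustrative only; it assumes both numbers are non-zero):

```python
def sign_rule(a, b):
    """Same signs give a positive product or quotient;
    different signs give a negative one."""
    return "positive" if (a > 0) == (b > 0) else "negative"

print(sign_rule(-2, -5))  # positive, e.g. (-2) x (-5) = +10
print(sign_rule(-10, 5))  # negative, e.g. (-10) / (+5) = -2
print(sign_rule(10, 2))   # positive, e.g. (+10) / (+2) = +5
```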
Day Seven - Order of Operations with Integers
The Learning Events should always be prefaced by focusing on the essential questions:
• What is balance?
• How do you achieve balance?
• How does change affect balance?
• How does change affect quantity?
Order of Operations with Integers (Day Seven)
1) Review what the order of operations actually is. Guide students to understand that it is much more than BEDMAS or PEMDAS; that the order of operations is a universal system used in mathematics.
Talk about why this might be – why do we need a universal system?
2) Do some sample questions with students, demonstrating how to use the order of operations. Start by doing this without integers. Next, introduce the integers and remind them that it doesn’t
matter what kinds of numbers are involved, you will always follow the same order of operations. Start with simple questions and move to more complex ones.
3) For student practice, give students a ‘target number’ and ask them to make as many order of operations equations as they can that will get to that answer. Give them some parameters, such as:
• Your equation must include at least one (or two, or three…) negative number(s)
• Your equation must include at least two (or three, or four…) different operations
• Your equation must include one (or more) set of brackets
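If a checking tool is wanted for the target-number activity, note that Python applies the same universal order of operations (brackets, then multiplication and division, then addition and subtraction), so eval() on a plain arithmetic string gives a quick answer key. The target and candidate equations below are invented for illustration:

```python
target = -6
candidates = ["(-2) * 3", "2 - 4 * 2", "(1 - 4) * 2", "6 - 2 * 3"]
for expr in candidates:
    value = eval(expr)  # brackets first, then * and /, then + and -
    print(f"{expr} = {value} -> {'hit' if value == target else 'miss'}")
```

eval() should only ever be pointed at trusted input, such as a teacher-written answer key.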
The following resources are made available through the British Columbia Ministry of Education. For more information, please visit BC’s New Curriculum.
Big Ideas
The Big Ideas consist of generalizations and principles and the key concepts important in an area of learning. The Big Ideas represent what students will understand at the completion of the
curriculum for their grade. They are intended to endure beyond a single grade and contribute to future understanding.
Core Competencies
Communications Competency
The set of abilities that students use to impart and exchange information, experiences and ideas, to explore the world around them, and to understand and effectively engage in the use of digital media
Thinking Competency
The knowledge, skills and processes we associate with intellectual development
Social Competency
The set of abilities that relate to students’ identity in the world, both as individuals and as members of their community and society
Curricular Competencies & Content
Curricular Competencies are the skills, strategies, and processes that students develop over time. They reflect the “Do” in the Know-Do-Understand model of curriculum. The Curricular Competencies are
built on the thinking, communicating, and personal and social competencies relevant to disciplines that make up an area of learning. | {"url":"https://nvsd44curriculumhub.ca/unit-plan-stage-3-integers-math-grade-7/","timestamp":"2024-11-08T21:28:50Z","content_type":"text/html","content_length":"217970","record_id":"<urn:uuid:e7288998-1c2a-4ffa-8773-ba514b69a1a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00327.warc.gz"} |
QbD with Scale-up Suite
Confidence bands and prediction intervals (or bands) are useful in quantifying the amount of uncertainty associated with model predictions. Quantifying uncertainty has grown in importance with the
adoption of a risk based approach consistent with
Q8 and Q9, providing a basis for estimating the probability of successful operation at a given set of conditions and thereby defining a design space.
Confidence bands for linear models such as those used in statistics software packages are often ‘u-shaped’ like those shown in Figure 1. Users who are familiar with these plots may expect similarly
shaped confidence bands for other types of model. In fact, u-shaped confidence bands are more the exception rather than the rule, as discussed in the knowledge base article available
Prediction intervals (or bands) are wider than confidence bands and tend to run more parallel with average responses.
U-shaped confidence bands (indicated in Figure 1 by the blue curves around the best fit line) are observed when fitting to a linear model (y = mx + c) and when the intercept is non-zero (i.e. fitting both the slope and the intercept). In these cases, confidence bands are narrowest at the average value of x (e.g. see reference 1) and expand on
either side of this value.
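For reference, the u-shape can be read off the standard textbook expression for the confidence band around a simple straight-line fit (here s is the residual standard error, n the number of points and x̄ the mean of the x values; see e.g. reference 1):

```latex
\hat{y}(x) \;\pm\; t_{\alpha/2,\,n-2}\; s \sqrt{\frac{1}{n} + \frac{(x - \bar{x})^{2}}{\sum_{i}(x_{i} - \bar{x})^{2}}}
```

The square-root term is smallest at x = x̄, so the band is narrowest at the average value of x and widens on either side.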
When a linear model of form y = mx (with the intercept fixed at zero) is fitted instead, confidence bands are no longer u-shaped, but run as straight lines diverging from the best fit line as x increases, as in Figure 2.
Mechanistic models of most interest for design space and
work are initial value problems, where the initial values of responses are known, the independent variable is time and the rates of change of those responses are calculated from ordinary differential equations.
The general procedure for obtaining asymptotic confidence bands for such a non-linear mechanistic model follows the same steps as the two linear cases above: calculation of the gradients of responses
with respect to the fitted parameter values and matrix multiplication of these gradients with the covariance matrix of the fitted parameters.
The qualitative behaviour of the confidence bands can be deduced at certain limits without any calculations:
• The initial values for integrated responses are not sensitive to the parameter estimates and therefore confidence bands for these have zero width at time zero, like the case of a linear model
with no intercept.
• The values for some integrated responses will become constant at long times, e.g. towards the end of a simulation when all rates of change have dropped to zero. These final values will again not
be sensitive at those times to the parameter estimates and therefore the confidence bands will again have zero width.
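As a rough numerical sketch of that gradient–covariance product (the numbers below are invented, not taken from any real fit):

```python
def band_half_width(gradient, covariance, z=1.96):
    """Asymptotic confidence band half-width z * sqrt(g' C g), where g holds
    the response gradients with respect to the fitted parameters and C is
    the covariance matrix of those parameters."""
    g_c_g = sum(gi * sum(cij * gj for cij, gj in zip(row, gradient))
                for gi, row in zip(gradient, covariance))
    return z * g_c_g ** 0.5

g = [0.8, -0.3]     # made-up response sensitivities to two parameters
C = [[0.04, 0.01],
     [0.01, 0.09]]  # made-up parameter covariance matrix
print(round(band_half_width(g, C), 4))  # 0.3332
```

Where the gradients are all zero – at time zero, or once the reaction is over – the half-width is zero, matching the limits above.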
Figure 3 shows one example of a response from a non-linear mechanistic model of this type, for product formation in a system of competing chemical reactions. This has typical confidence band
behaviour for such a profile (confidence band width plotted in green, values on the right hand y-axis):
- zero confidence band width at the start
- maximum confidence band width when the product level is changing rapidly
- almost zero confidence band width at the end, when the reaction is nearly over.
1. Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, George E. P. Box, William G. Hunter, J. Stuart Hunter , John Wiley & Sons, 1978 | {"url":"https://dynochem.blogspot.com/2009/02/","timestamp":"2024-11-14T08:00:50Z","content_type":"text/html","content_length":"56565","record_id":"<urn:uuid:e0c6cca6-b2d9-42c9-a783-be812600a26d>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00558.warc.gz"} |
How to calculate and remove 18% GST? - GST Calculator
How to calculate GST on an invoice?
The procedure is simple; it’s just a matter of putting it into practice. To address any doubts you might have, we will show you an example of how to calculate GST. You can apply it to any business in
the following way:
Let’s suppose that there are three companies involved in a production process. The first company is called HAY and it produces hay for cattle. The second company is LIVESTOCK and they have a dairy
farm. The last one is DAIRY, which produces and sells milk to the end consumer.
When HAY sells hay to LIVESTOCK, they will charge the corresponding selling price. They should also add an extra GST percentage to this amount that will also be charged to the client. For example, if
the selling price is ₹ 8,000.00 and the tax percentage is 18%, they would charge a total of ₹ 9,440.00.
How to calculate the GST amount?
The selling price can change depending on the final cost of your goods or services. However, the tax percentage is subject to the amount required by the Tax Agency. Considering this, how was the
amount of ₹ 9,440.00 calculated in the previous example?
To answer this, you need to use the corresponding formula. First, you divide 18%, which is the GST percentage, by 100. This will give you a result of 0.18. In other words:
GST% = 18% / 100 = 0.18
Then, you multiply the selling price, also known as the taxable amount, by the percentage. According to the example, this is ₹ 8,000.00 (which is the taxable amount) times 0.18 (the result of the
above formula). This equals ₹ 1,440.00 (GST amount). The equation looks like this:
GST amount: taxable amount x GST% = ₹ 8,000.00 x 0.18 = ₹ 1,440.00
Total amount with GST = ₹ 8,000.00 + ₹ 1,440.00 = ₹ 9,440.00 . This is the total price of the hay including GST.
How to calculate the total price with GST included?
There is a simple, efficient and quick way to do this that will help you calculate the total amount or cost including GST using a well-known formula. It consists of adding a one to the tax percentage
that we saw earlier: 1 + 0.18 = 1.18
Then, you multiply this number by the taxable amount, or selling price, which according to the example is ₹ 8,000.00. After multiplying these, you will notice that it gives you a result with GST
already included (total amount), which is ₹ 9,440.00. The formula looks like this:
Total amount with GST = ₹ 8,000.00 x 1.18 = ₹ 9,440.00
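The arithmetic above can be wrapped in a short Python helper (a sketch; the function name is made up):

```python
def add_gst(taxable_amount, rate=0.18):
    """Return (gst_amount, total_with_gst) for a GST-exclusive price."""
    gst = taxable_amount * rate
    return gst, taxable_amount + gst

gst, total = add_gst(8000.00)
print(round(gst, 2), round(total, 2))  # 1440.0 9440.0
```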
How to calculate a price without GST?
If you want to know how much a product or service costs without tax, you can calculate this as well. There is a way to calculate the taxable amount by removing GST. We will continue using the same
example to illustrate how to do this.
You have to divide the total amount that will be charged to the end consumer by one plus the GST rate, i.e. by 1.18. If the total cost of the hay is ₹ 9,440.00 and the tax rate is 18%, the formula is like this:
Taxable amount = Total amount / (1 + GST rate) = ₹ 9,440.00 / 1.18 = ₹ 8,000.00
To verify that this is the correct result, you can use the following operation:
GST (18%) = ₹ 8,000.00 x 0.18 = ₹ 1,440.00
Total amount with GST = ₹ 1,440.00 + ₹ 8,000.00 = ₹ 9,440.00
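The reverse calculation can be sketched the same way (again, an illustrative helper rather than any official formula):

```python
def remove_gst(total_amount, rate=0.18):
    """Back out the GST-exclusive price and the GST from an inclusive total."""
    taxable = total_amount / (1 + rate)  # divide by 1.18 for an 18% rate
    return taxable, total_amount - taxable

taxable, gst = remove_gst(9440.00)
print(round(taxable, 2), round(gst, 2))  # 8000.0 1440.0
```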
How to calculate the taxable amount with just GST
On the other hand, if you only have the total cost of the tax, you need to use a different formula to figure out the taxable amount. You simply have to divide the amount of tax by 0.18, which is the
tax rate to be charged.
In other words, if the amount of GST for the hay is ₹ 1,440.00 – according to the applied tax – you have to divide that by 0.18, which gives you the taxable amount. You can see this in the following
Taxable amount = GST amount / GST (18%) = ₹ 1,440.00 / 0.18 = ₹ 8,000.00
This means that the selling price of hay without GST is ₹ 8,000.00. You can verify that this result is accurate by using the following formula:
Total amount with GST = ₹ 8,000.00 + ₹ 1,440.00 = ₹ 9,440.00 | {"url":"https://vat-calculator.yurkap.com/india","timestamp":"2024-11-09T13:48:07Z","content_type":"text/html","content_length":"87204","record_id":"<urn:uuid:56ea98d0-e77f-459a-9adf-c63698226aab>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00347.warc.gz"} |
How to count different text values listed in a column
I have 2 columns/ranges.
I need to count if range1 is value text1 and range2 has values text2, and text3, and text4.
This formula works so far =COUNTIFS([Range1]:[Range1], "Text1", [Range2]:[Range2], "Text2")
However things get wonky when I want to add more text values to Range 2. So it would kinda look like below. But nothing I am trying is working so far.
=COUNTIFS([Range1]:[Range1], "Text1", [Range2]:[Range2], "Text2", and "Text3", and "Text4")
So for example:
Based on the example above if we were trying to count all of Text2 and Text5 in Range 2, the answer would be 3, since Text2 is listed twice and Text5 is listed once.
Please help. Thanks!
Best Answer
• Hi @SJTA
The part of your formula in bold does not follow the Smartsheet COUNTIFS formula syntax:
=COUNTIFS([Range1]:[Range1], "Text1", [Range2]:[Range2], "Text2", and "Text3", and "Text4")
You need to enter range followed by criteria. COUNTIFS will count rows where all criteria are met (so you don't explicitly need to say AND). So the formula would look like this:
=COUNTIFS([Range1]:[Range1], "Text1", [Range2]:[Range2], "Text2", [Range2]:[Range2], "Text3", [Range2]:[Range2], "Text4")
However, looking at your example I don't think you mean to use AND and instead want OR. AND would mean that every criterion is true at once, which is not possible as the Range2 cell can't equal "Text2" AND "Text3" AND "Text4" at the same time. It will be just one of those or another. The Range2 value could be "Text2" OR "Text3" OR "Text4". In that case, you need to use an OR function in the criteria part of the COUNTIFS, like this:
=COUNTIFS([Range1]:[Range1], "Text1", [Range2]:[Range2], OR(@cell = "Text2", @cell = "Text3", @cell = "Text4"))
• @KPH Thank you! Thank you! Thank you! Worked perfectly. You were right, I should have been working with OR and not AND.
Greatly appreciated :-)
Linear Algebra and Linear Regression
at University of Sheffield on Oct 13, 2015
In this session we combine the objective function perspective and the probabilistic perspective on linear regression. We motivate the importance of linear algebra by showing how much faster we can
complete a linear regression using linear algebra.
• Last time: Looked at objective functions for movie recommendation.
• Minimized sum of squares objective by steepest descent and stochastic gradients.
• This time: explore least squares for regression.
Regression Examples
Regression involves predicting a real value, $\dataScalar_i$, given an input vector, $\inputVector_i$. For example, the Tecator data involves predicting the quality of meat given spectral
measurements. Or in radiocarbon dating, the C14 calibration curve maps from radiocarbon age to age measured through a back-trace of tree rings. Regression has also been used to predict the quality of
board game moves given expert rated training data.
Olympic 100m Data
• Gold medal times for Olympic 100 m runners since 1896.
Image from Wikimedia Commons
Olympic Marathon Data
• Gold medal times for Olympic Marathon since 1896.
• Marathons before 1924 didn’t have a standardised distance. Image from Wikimedia Commons http://bit.ly/16kMKHQ
• Present results using pace per km.
• In 1904 Marathon was badly organised leading to very slow times.
The first thing we will do is load a standard data set for regression modelling. The data consists of the pace of Olympic Gold Medal Marathon winners for the Olympics from 1896 to present. First we
load in the data and plot.
Things to notice about the data include the outlier in 1904; in this year the olympics was in St Louis, USA. Organizational problems and challenges with dust kicked up by the cars following the race meant that participants got lost, and only a few completed.
More recent years see more consistently quick marathons.
What is Machine Learning?
What is machine learning? At its most basic level machine learning is a combination of
$$\text{data} + \text{model} \xrightarrow{\text{compute}} \text{prediction}$$
where data is our observations. They can be actively or passively acquired (meta-data). The model contains our assumptions, based on previous experience. That experience can be other data, it can
come from transfer learning, or it can merely be our beliefs about the regularities of the universe. In humans our models include our inductive biases. The prediction is an action to be taken or a
categorization or a quality score. The reason that machine learning has become a mainstay of artificial intelligence is the importance of predictions in artificial intelligence. The data and the
model are combined through computation.
In practice we normally perform machine learning using two functions. To combine data with a model we typically make use of:
a prediction function a function which is used to make the predictions. It includes our beliefs about the regularities of the universe, our assumptions about how the world works, e.g. smoothness,
spatial similarities, temporal similarities.
an objective function a function which defines the cost of misprediction. Typically it includes knowledge about the world’s generating processes (probabilistic objectives) or the costs we pay for mispredictions (empirical risk minimization).
The combination of data and model through the prediction function and the objective function leads to a learning algorithm. The class of prediction functions and objective functions we can make use of is restricted by the algorithms they lead to. If the prediction function or the objective function is too complex, then it can be difficult to find an appropriate learning algorithm. Much of the academic field of machine learning is the quest for new learning algorithms that allow us to bring different types of models and data together.
A useful reference for state of the art in machine learning is the UK Royal Society Report, Machine Learning: Power and Promise of Computers that Learn by Example.
You can also check my blog post on What is Machine Learning?
Sum of Squares Error
Last week we considered a cost function for minimization of the error. We considered items (films) and users and assumed that each movie rating, $\dataScalar_{i,j}$ could be summarised by an inner
product between a vector associated with the item, v[j] and one associated with the user u[i]. We justified the inner product as a measure of similarity in the space of ‘movie subjects’, where both
the users and the items lived, giving the analogy of a library.
To make predictions we encouraged the similarity to be high if the movie rating was high using the quadratic error function,
$$ E_{i,j}(\mathbf{u}_i, \mathbf{v}_j) = \left(\mathbf{u}_i^\top \mathbf{v}_j - \dataScalar_{i,j}\right)^2, $$
which we then summed across all the observations to form the total error
$$ \errorFunction(\mathbf{U}, \mathbf{V}) = \sum_{i,j}s_{i,j}\left(\mathbf{u}_i^\top \mathbf{v}_j - \dataScalar_{i,j}\right)^2, $$
where s[i,j] is an indicator variable which is set to 1 if the rating of movie j by user i is provided in our data set. This is known as a sum of squares error.
This week we will reinterpret the error as a probabilistic model. We will consider the difference between our data and our model to have come from unconsidered factors which exhibit as a probability density. This leads to a more principled definition of least squares error that is originally due to Carl Friedrich Gauss, but is mainly inspired by the thinking of Pierre-Simon Laplace.
Regression: Linear Relationship
For many their first encounter with what might be termed a machine learning method is fitting a straight line. A straight line is characterized by two parameters, the scale, m, and the offset c.
$$\dataScalar_i = m \inputScalar_i + c$$
For the olympic marathon example $\dataScalar_i$ is the winning pace and it is given as a function of the year which is represented by $\inputScalar_i$. There are two further parameters of the
prediction function. For the olympics example we can interpret these parameters, the scale m is the rate of improvement of the olympic marathon pace on a yearly basis. And c is the winning pace as
estimated at year 0.
Overdetermined System
The challenge with a linear model is that it has two unknowns, m, and c. Observing data allows us to write down a system of simultaneous linear equations. So, for example, if we observe two data points, the first with the input value, $\inputScalar_1 = 1$, and the output value, $\dataScalar_1 = 3$, and a second data point, $\inputScalar_2 = 3$, $\dataScalar_2 = 1$, then we can write two simultaneous linear equations of the form.
point 1: $\inputScalar = 1$, $\dataScalar=3$
point 2: $\inputScalar = 3$, $\dataScalar=1$
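Written out, these two points give a pair of simultaneous equations that can be solved by elimination:

$$\begin{align*} 3 &= m + c\\ 1 &= 3m + c \end{align*}$$

Subtracting the first equation from the second gives 1 − 3 = 2m, so m =  − 1, and substituting back gives c = 3 − m = 4.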
The solution to these two simultaneous equations can be represented graphically as
The challenge comes when a third data point is observed and it doesn’t naturally fit on the straight line.
point 3: $\inputScalar = 2$, $\dataScalar=2.5$
Now there are three candidate lines, each consistent with our data.
This is known as an overdetermined system because there are more data than we need to determine our parameters. The problem arises because the model is a simplification of the real world, and the
data we observe is therefore inconsistent with our model.
The solution was proposed by Pierre-Simon Laplace. His idea was to accept that the model was an incomplete representation of the real world, and the manner in which it was incomplete is unknown. His
idea was that such unknowns could be dealt with through probability.
Pierre-Simon Laplace
Famously, Laplace considered the idea of a deterministic Universe, one in which the model is known, or as the below translation refers to it, “an intelligence which could comprehend all the forces by which nature is animated”. He speculates on an “intelligence” that can submit this vast data to analysis and proposes that such an entity would be able to predict the future.
Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it—an intelligence sufficiently vast
to submit these data to analysis—it would embrace in the same formulate the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and
the future, as the past, would be present in its eyes.
This notion is known as Laplace’s demon or Laplace’s superman.
Unfortunately, most analyses of his ideas stop at that point, whereas his real point is that such a notion is unreachable. Not so much superman as strawman. Just three pages later in the
“Philosophical Essay on Probabilities” (Laplace 1814), Laplace goes on to observe:
The curve described by a simple molecule of air or vapor is regulated in a manner just as certain as the planetary orbits; the only difference between them is that which comes from our ignorance.
Probability is relative, in part to this ignorance, in part to our knowledge.
In other words, we can never make use of the idealistic deterministic Universe due to our ignorance about the world. Laplace’s suggestion, and focus in this essay, is that we turn to probability to deal with this uncertainty. This is also our inspiration for using probability in machine learning.
The “forces by which nature is animated” is our model, the “situation of beings that compose it” is our data and the “intelligence sufficiently vast to submit these data to analysis” is our compute. The fly in the ointment is our ignorance about these aspects. And probability is the tool we use to incorporate this ignorance leading to uncertainty or doubt in our predictions.
Laplace’s concept was that the reason that the data doesn’t match up to the model is because of unconsidered factors, and that these might be well represented through probability densities. He
tackles the challenge of the unknown factors by adding a variable, $\noiseScalar$, that represents the unknown. In modern parlance we would call this a latent variable. But in the context Laplace
uses it, the variable is so common that it has other names such as a “slack” variable or the noise in the system.
point 1: $\inputScalar = 1$, $\dataScalar=3$
$$ 3 = m + c + \noiseScalar_1 $$
point 2: $\inputScalar = 3$, $\dataScalar=1$
$$ 1 = 3m + c + \noiseScalar_2 $$
point 3: $\inputScalar = 2$, $\dataScalar=2.5$
$$ 2.5 = 2m + c + \noiseScalar_3 $$
Laplace’s trick has converted the overdetermined system into an underdetermined system. He has now added three variables, $\{\noiseScalar_i\}_{i=1}^3$, which represent the unknown corruptions of the
real world. Laplace’s idea is that we should represent that unknown corruption with a probability distribution.
A Probabilistic Process
However, it was left to an admirer of Laplace to develop a practical probability density for that purpose. It was Carl Friedrich Gauss who suggested that the Gaussian density (which at the time was unnamed!) should be used to represent this error.
The result is a noisy function, a function which has a deterministic part, and a stochastic part. This type of function is sometimes known as a probabilistic or stochastic process, to distinguish it
from a deterministic process.
The Gaussian Density
The Gaussian density is perhaps the most commonly used probability density. It is defined by a mean, $\meanScalar$, and a variance, $\dataStd^2$. The variance is taken to be the square of the
standard deviation, $\dataStd$.
$$ \begin{align*} p(\dataScalar| \meanScalar, \dataStd^2) & = \frac{1}{\sqrt{2\pi\dataStd^2}}\exp\left(-\frac{(\dataScalar - \meanScalar)^2}{2\dataStd^2}\right)\\ & \buildrel\triangle\over = \gaussianDist{\dataScalar}{\meanScalar}{\dataStd^2} \end{align*} $$
Two Important Gaussian Properties
The Gaussian density has many important properties, but for the moment we’ll review two of them.
Sum of Gaussians
If we assume that a variable, $\dataScalar_i$, is sampled from a Gaussian density,
$$\dataScalar_i \sim \gaussianSamp{\meanScalar_i}{\sigma_i^2}$$
Then we can show that the sum of a set of variables, each drawn independently from such a density is also distributed as Gaussian. The mean of the resulting density is the sum of the means, and the
variance is the sum of the variances,
$$\sum_{i=1}^{\numData} \dataScalar_i \sim \gaussianSamp{\sum_{i=1}^\numData \meanScalar_i}{\sum_{i=1}^\numData \sigma_i^2}$$
Since we are very familiar with the Gaussian density and its properties, it is not immediately apparent how unusual this is. Most random variables, when you add them together, change the family of density they are drawn from; the Gaussian is exceptional in this regard. Indeed, other random variables, if they are independently drawn and summed together, tend to a Gaussian density. That is the central limit theorem, which is a major justification for the use of a Gaussian density.
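This property is easy to check empirically. A minimal sketch using only the Python standard library (the means, variances, sample size and seed are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(0)

# Sum of two independent Gaussians: N(1, 2^2) + N(3, 1^2)
# should behave like N(1 + 3, 2^2 + 1^2) = N(4, 5).
samples = [random.gauss(1, 2) + random.gauss(3, 1) for _ in range(100000)]

mean = statistics.fmean(samples)          # close to 4
variance = statistics.pvariance(samples)  # close to 5
```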
Scaling a Gaussian
Less unusual is the scaling property of a Gaussian density. If a variable, $\dataScalar$, is sampled from a Gaussian density,
$$\dataScalar \sim \gaussianSamp{\meanScalar}{\sigma^2}$$
and we choose to scale that variable by a deterministic value, $\mappingScalar$, then the scaled variable is distributed as
$$\mappingScalar \dataScalar \sim \gaussianSamp{\mappingScalar\meanScalar}{\mappingScalar^2 \sigma^2}.$$
Unlike the summing properties, where adding two or more random variables independently sampled from a family of densities typically brings the summed variable outside that family, scaling a density leaves the distribution of that variable in the same family of densities. Indeed, many densities include a scale parameter (e.g. the Gamma density) which is purely for this purpose. In the Gaussian the standard deviation, $\dataStd$, is the scale parameter. To see why this makes sense, let’s consider,
$$z \sim \gaussianSamp{0}{1},$$
then if we scale by $\dataStd$ so we have, $\dataScalar=\dataStd z$, we can write,
$$\dataScalar =\dataStd z \sim \gaussianSamp{0}{\dataStd^2}$$
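The scaling property can be checked the same way; again a minimal standard-library sketch with an arbitrary scale factor:

```python
import random
import statistics

random.seed(1)

# z ~ N(0, 1); scaling by a deterministic w gives w*z ~ N(0, w^2).
w = 3.0
scaled = [w * random.gauss(0, 1) for _ in range(100000)]

scaled_mean = statistics.fmean(scaled)     # close to 0
scaled_var = statistics.pvariance(scaled)  # close to w**2 = 9
```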
Laplace’s Idea
Laplace had the idea to augment the observations by noise; that is equivalent to considering a probability density whose mean is given by the prediction function.
This is known as a stochastic process. It is a function that is corrupted by noise. Laplace didn’t suggest the Gaussian density for that purpose; that was an innovation from Carl Friedrich Gauss, which is what gives the Gaussian density its name.
Height as a Function of Weight
In the standard Gaussian, parametrized by mean and variance.
Make the mean a linear function of an input.
This leads to a regression model.
$$ \begin{align*} \dataScalar_i=&\mappingFunction\left(\inputScalar_i\right)+\noiseScalar_i,\\ \noiseScalar_i \sim & \gaussianSamp{0}{\dataStd^2}. \end{align*} $$
Assume $\dataScalar_i$ is height and $\inputScalar_i$ is weight.
Sum of Squares Error
Minimizing the sum of squares error was first proposed by Legendre in 1805. His book, which was on the orbit of comets, is available on Google Books; we can take a look at the relevant page by calling the code below.
Of course, the main text is in French, but the key part we are interested in can be roughly translated as
In most matters where we take measures data through observation, the most accurate results they can offer, it is almost always leads to a system of equations of the form
where a, b, c, f etc are the known coefficients and x, y, z etc are unknown and must be determined by the condition that the value of E is reduced, for each equation, to an amount either zero or very small.
He continues
Of all the principles that we can offer for this item, I think it is not broader, more accurate, nor easier than the one we have used in previous research application, and that is to make the minimum sum of the squares of the errors. By this means, it is between the errors a kind of balance that prevents extreme to prevail, is very specific to make known the state of the closest to the truth system. The sum of the squares of the errors $E^2 + E'^2 + E''^2 + \text{etc.}$ being if we wanted a minimum, by varying x alone, we will have the equation …
This is the earliest known printed version of the problem of least squares. The notation, however, is a little awkward for modern eyes. In particular Legendre doesn’t make use of the sum sign,
$$ \sum_{i=1}^3 z_i = z_1 + z_2 + z_3 $$
nor does he make use of the inner product.
In our notation, if we were to do linear regression, we would need to substitute:
$$\begin{align*} a &\leftarrow \dataScalar_1-c, \\ a^\prime &\leftarrow \dataScalar_2-c,\\ a^{\prime\prime} &\leftarrow \dataScalar_3 -c,\\ \text{etc.} \end{align*}$$
to introduce the data observations $\{\dataScalar_i\}_{i=1}^{\numData}$ alongside c, the offset. We would then introduce the input locations
$$\begin{align*} b & \leftarrow \inputScalar_1,\\ b^\prime & \leftarrow \inputScalar_2,\\ b^{\prime\prime} & \leftarrow \inputScalar_3\\ \text{etc.} \end{align*}$$
and finally the gradient of the function
The remaining coefficients (c and f) would then be zero. That would give us
$$\begin{align*} &(\dataScalar_1 - (m\inputScalar_1+c))^2 \\ + &(\dataScalar_2 -(m\inputScalar_2 + c))^2\\ + &(\dataScalar_3 -(m\inputScalar_3 + c))^2 \\ + & \text{etc.} \end{align*}$$
which we would write in the modern notation for sums as
$$ \sum_{i=1}^\numData (\dataScalar_i-(m\inputScalar_i + c))^2 $$
which is recognised as the sum of squares error for a linear regression.
This shows the advantage of the modern summation operator, ∑, in keeping our mathematical notation compact. Whilst it may look more complicated the first time you see it, understanding the mathematical rules that go around it allows us to go much further with the notation.
Inner products (or dot products) are similar. They allow us to write
$$ \sum_{i=1}^q u_i v_i $$
in a more compact notation, u⋅v.
Here we are using bold face to represent vectors, and we assume that the individual elements of a vector z are given as a series of scalars
$$ \mathbf{z} = \begin{bmatrix} z_1\\ z_2\\ \vdots\\ z_\numData \end{bmatrix} $$
which are each indexed by their position in the vector.
Linear Algebra
Linear algebra plays a very similar role: when we introduce linear algebra, it is because we are faced with a large number of addition and multiplication operations. These operations need to be done together and would be very tedious to write down as a group. So the first reason we reach for linear algebra is for a more compact representation of our mathematical formulae.
Running Example: Olympic Marathons
Now we will load in the Olympic marathon data. This is data of the olympic marathon times for the men’s marathon from the first olympics in 1896 up until the London 2012 olympics.
You can see what these values are by typing:
Note that they are not pandas data frames for this example, they are just arrays of dimensionality $\numData\times 1$, where $\numData$ is the number of data.
The aim of this lab is to have you coding linear regression in python. We will do it in two ways, once using iterative updates (coordinate descent) and then using linear algebra. The linear algebra approach will not only work much better, it is easy to extend to multiple input linear regression and non-linear regression using basis functions.
Plotting the Data
You can make a plot of $\dataScalar$ vs $\inputScalar$ with the following command:
Maximum Likelihood: Iterative Solution
Now we will take the maximum likelihood approach we derived in the lecture to fit a line, $\dataScalar_i=m\inputScalar_i + c$, to the data you’ve plotted. We are trying to minimize the error
$$ \errorFunction(m, c) = \sum_{i=1}^\numData(\dataScalar_i-m\inputScalar_i-c)^2 $$
with respect to m, c and σ^2. We can start with an initial guess for m,
Then we use the maximum likelihood update to find an estimate for the offset, c.
Coordinate Descent
In the movie recommender system example, we minimised the objective function by steepest descent based gradient methods. Our updates required us to compute the gradient at the position we were
located, then to update the parameters in the direction of steepest descent. This time, we will take another approach. It is known as coordinate descent. In coordinate descent, we choose to move one parameter at a time. Ideally, we design an algorithm that at each step moves the parameter to its minimum value.
To find the minimum, we look for the point in the curve where the gradient is zero. This can be found by taking the gradient of $\errorFunction(m,c)$ with respect to the parameter.
Update for Offset
Let’s consider the parameter c first. The gradient goes nicely through the summation operator, and we obtain
$$ \frac{\text{d}\errorFunction(m,c)}{\text{d}c} = -\sum_{i=1}^\numData 2(\dataScalar_i-m\inputScalar_i-c). $$
Now we want the point that is a minimum. A minimum is an example of a stationary point, the stationary points are those points of the function where the gradient is zero. They are found by solving
the equation for $\frac{\text{d}\errorFunction(m,c)}{\text{d}c} = 0$. Substituting in to our gradient, we can obtain the following equation,
$$ 0 = -\sum_{i=1}^\numData 2(\dataScalar_i-m\inputScalar_i-c) $$
which can be reorganised as follows,
$$ c^* = \frac{\sum_{i=1}^\numData(\dataScalar_i-m^*\inputScalar_i)}{\numData}. $$
The fact that the stationary point is easily extracted in this manner implies that the solution is unique. There is only one stationary point for this system. Traditionally when trying to determine
the type of stationary point we have encountered we now compute the second derivative,
$$ \frac{\text{d}^2\errorFunction(m,c)}{\text{d}c^2} = 2n. $$
The second derivative is positive, which in turn implies that we have found a minimum of the function. This means that setting c in this way will take us to the lowest point along that axis.
Update for Slope
Now we have the offset set to the minimum value, in coordinate descent, the next step is to optimise another parameter. Only one further parameter remains. That is the slope of the system.
Now we can turn our attention to the slope. We once again perform the same set of computations to find the minimum. We end up with an update equation of the following form.
$$m^* = \frac{\sum_{i=1}^\numData (\dataScalar_i - c)\inputScalar_i}{\sum_{i=1}^\numData \inputScalar_i^2}$$
Communication of mathematics in data science is an essential skill, in a moment, you will be asked to rederive the equation above. Before we do that, however, we will briefly review how to write
mathematics in the notebook.
$\LaTeX$ for Maths
These cells use Markdown format. You can include maths in your markdown using $\LaTeX$ syntax, all you have to do is write your answer inside dollar signs, as follows:
To write a fraction, we write $\frac{a}{b}$, and it will display like this $\frac{a}{b}$. To write a subscript we write $a_b$ which will appear as a[b]. To write a superscript (for example in a polynomial) we write $a^b$ which will appear as a^b. There are lots of other macros as well, for example we can do greek letters such as $\alpha, \beta, \gamma$ rendering as α,β,γ. And we can do sum and integral signs as $\sum$ and $\int$.
You can combine many of these operations together for composing expressions.
Fixed Point Updates
Worked example.
$$ \begin{aligned} c^{*}=&\frac{\sum _{i=1}^{\numData}\left(\dataScalar_i-m^{*}\inputScalar_i\right)}{\numData},\\ m^{*}=&\frac{\sum _{i=1}^{\numData}\inputScalar_i\left(\dataScalar_i-c^{*}\right)}{\sum _{i=1}^{\numData}\inputScalar_i^{2}},\\ \left.\dataStd^2\right.^{*}=&\frac{\sum _{i=1}^{\numData}\left(\dataScalar_i-m^{*}\inputScalar_i-c^{*}\right)^{2}}{\numData} \end{aligned} $$
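These fixed point updates can be iterated directly in plain Python; a minimal sketch on an illustrative noiseless toy data set (not the Olympic data):

```python
# Toy data generated from y = 2x + 1 with no noise, so the fit should be exact.
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]
n = len(x)

m, c = 0.0, 0.0  # initial guesses
for _ in range(100):
    # c* = sum(y_i - m x_i) / n
    c = sum(y_i - m * x_i for x_i, y_i in zip(x, y)) / n
    # m* = sum(x_i (y_i - c)) / sum(x_i^2)
    m = sum(x_i * (y_i - c) for x_i, y_i in zip(x, y)) / sum(x_i**2 for x_i in x)

# sigma^2* = sum((y_i - m x_i - c)^2) / n
sigma2 = sum((y_i - m * x_i - c)**2 for x_i, y_i in zip(x, y)) / n
```

Because the toy data is noiseless, the iterates converge on m = 2, c = 1 and the residual variance goes to zero.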
Gradient With Respect to the Slope
Now that you’ve had a little training in writing maths with $\LaTeX$, we will be able to use it to answer questions. The next thing we are going to do is a little differentiation practice.
We can have a look at how good our fit is by computing the prediction across the input space. First create a vector of ‘test points’,
Now use this vector to compute some test predictions,
Now plot those test predictions with a blue line on the same plot as the data,
The fit isn’t very good, we need to iterate between these parameter updates in a loop to improve the fit, we have to do this several times,
And let’s try plotting the result again
Clearly we need more iterations than 10! In the next question you will add more iterations and report on the error as optimisation proceeds.
Important Concepts Not Covered
• Other optimization methods:
□ Second order methods, conjugate gradient, quasi-Newton and Newton.
• Effective heuristics such as momentum.
• Local vs global solutions.
Objective Functions and Regression
Noise Corrupted Plot
Contour Plot of Error Function
• Visualise the error function surface, create vectors of values.
• create a grid of values to evaluate the error function in 2D.
• compute the error function at each combination of c and m.
Contour Plot of Error
Steepest Descent
• We start with a guess for m and c.
Offset Gradient
• Now we need to compute the gradient of the error function, firstly with respect to c,
$$\frac{\text{d}\errorFunction(m, c)}{\text{d} c} = -2\sum_{i=1}^\numData (\dataScalar_i - m\inputScalar_i - c)$$
• This is computed in python as follows
Deriving the Gradient
To see how the gradient was derived, first note that the c appears in every term in the sum. So we are just differentiating $(\dataScalar_i - m\inputScalar_i - c)^2$ for each term in the sum. The
gradient of this term with respect to c is simply the gradient of the outer quadratic, multiplied by the gradient with respect to c of the part inside the quadratic. The gradient of a quadratic is
two times the argument of the quadratic, and the gradient of the inside linear term is just minus one. This is true for all terms in the sum, so we are left with the sum in the gradient.
Slope Gradient
The gradient with respect tom m is similar, but now the gradient of the quadratic’s argument is $-\inputScalar_i$ so the gradient with respect to m is
$$\frac{\text{d}\errorFunction(m, c)}{\text{d} m} = -2\sum_{i=1}^\numData \inputScalar_i(\dataScalar_i - m\inputScalar_i - c)$$
which can be implemented in python (numpy) as
Update Equations
• Now we have gradients with respect to m and c.
• Can update our initial guesses for m and c using the gradient.
• We don’t want to just subtract the gradient from m and c,
• We need to take a small step in the gradient direction.
• Otherwise we might overshoot the minimum.
• We want to follow the gradient to get to the minimum, the gradient changes all the time.
Move in Direction of Gradient
Update Equations
• The step size has already been introduced, it’s again known as the learning rate and is denoted by $\learnRate$.
$$ c_\text{new}\leftarrow c_{\text{old}} - \learnRate \frac{\text{d}\errorFunction(m, c)}{\text{d}c} $$
• gives us an update for our estimate of c (which in the code we’ve been calling c_star to represent a common way of writing a parameter estimate, c^*) and
$$ m_\text{new} \leftarrow m_{\text{old}} - \learnRate \frac{\text{d}\errorFunction(m, c)}{\text{d}m} $$
• Giving us an update for m.
Update Code
• These updates can be coded as
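A minimal Python sketch of one such gradient step (the data points are those from the overdetermined example above; the learning rate is an illustrative choice):

```python
# One steepest-descent step for the sum-of-squares error of y = m*x + c.
x = [1.0, 3.0, 2.0]
y = [3.0, 1.0, 2.5]

m, c = 0.0, 0.0    # current estimates
learn_rate = 0.01  # illustrative step size

# Gradients: dE/dc = -2 sum(y_i - m x_i - c), dE/dm = -2 sum(x_i (y_i - m x_i - c))
c_grad = -2 * sum(y_i - m * x_i - c for x_i, y_i in zip(x, y))
m_grad = -2 * sum(x_i * (y_i - m * x_i - c) for x_i, y_i in zip(x, y))

# Take a small step against the gradient
c = c - learn_rate * c_grad
m = m - learn_rate * m_grad
```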
Iterating Updates
• Fit model by descending gradient.
Gradient Descent Algorithm
Stochastic Gradient Descent
• If $\numData$ is small, gradient descent is fine.
• But sometimes (e.g. on the internet) $\numData$ could be a billion.
• Stochastic gradient descent is more similar to the perceptron.
• Look at gradient of one data point at a time rather than summing across all data points.
• This gives a stochastic estimate of gradient.
Stochastic Gradient Descent
• The real gradient with respect to m is given by
$$\frac{\text{d}\errorFunction(m, c)}{\text{d} m} = -2\sum_{i=1}^\numData \inputScalar_i(\dataScalar_i - m\inputScalar_i - c)$$
but it has $\numData$ terms in the sum. Substituting in the gradient we can see that the full update is of the form
$$m_\text{new} \leftarrow m_\text{old} + 2\learnRate \left[\inputScalar_1 (\dataScalar_1 - m_\text{old}\inputScalar_1 - c_\text{old}) + \inputScalar_2 (\dataScalar_2 - m_\text{old}\inputScalar_2 - c_\text{old}) + \dots + \inputScalar_n (\dataScalar_n - m_\text{old}\inputScalar_n - c_\text{old})\right]$$
This could be split up into lots of individual updates
$$m_1 \leftarrow m_\text{old} + 2\learnRate \left[\inputScalar_1 (\dataScalar_1 - m_\text{old}\inputScalar_1 - c_\text{old})\right]$$
$$m_2 \leftarrow m_1 + 2\learnRate \left[\inputScalar_2 (\dataScalar_2 - m_\text{old}\inputScalar_2 - c_\text{old})\right]$$
$$m_3 \leftarrow m_2 + 2\learnRate \left[\dots\right]$$
$$m_n \leftarrow m_{n-1} + 2\learnRate \left[\inputScalar_n (\dataScalar_n - m_\text{old}\inputScalar_n - c_\text{old})\right]$$
which would lead to the same final update.
Updating c and m
• In the sum we don’t m and c we use for computing the gradient term at each update.
• In stochastic gradient descent we do change them.
• This means it’s not quite the same as steepest descent.
• But we can present each data point in a random order, like we did for the perceptron.
• This makes the algorithm suitable for large scale web use (recently this domain has become known as ‘Big Data’) and algorithms like this are widely used by Google, Microsoft, Amazon, Twitter and Facebook.
Stochastic Gradient Descent
• Or more accurately, since the data is normally presented in a random order we can just write
$$ m_\text{new} = m_\text{old} + 2\learnRate\left[\inputScalar_i (\dataScalar_i - m_\text{old}\inputScalar_i - c_\text{old})\right] $$
SGD for Linear Regression
Putting it all together in an algorithm, we can do stochastic gradient descent for our regression data.
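As a sketch of that algorithm (on made-up data with an assumed learning rate; the inner update is the stochastic rule above, with the data presented in a random order):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up regression data: y = 2x - 0.5 plus a little noise.
x = rng.uniform(-1, 1, size=50)
y = 2.0 * x - 0.5 + rng.normal(scale=0.1, size=50)

m_star, c_star = 0.0, 0.0
learn_rate = 0.05  # an assumed learning rate

for epoch in range(200):
    # Present the data points in a random order, as for the perceptron.
    for i in rng.permutation(len(x)):
        residual = y[i] - m_star * x[i] - c_star
        m_star = m_star + 2 * learn_rate * x[i] * residual
        c_star = c_star + 2 * learn_rate * residual
```

Because each step uses a single point, the parameters bounce around the minimum rather than settling exactly on it; with a small enough learning rate they stay close to the least-squares solution.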
Reflection on Linear Regression and Supervised Learning
Think about:
1. What effect does the learning rate have in the optimization? What’s the effect of making it too small, what’s the effect of making it too big? Do you get the same result for both stochastic and
steepest gradient descent?
2. The stochastic gradient descent doesn’t help very much for such a small data set. Its real advantage comes when there are many data points; you’ll see this in the lab.
Multiple Input Solution with Linear Algebra
You’ve now seen how slow it can be to perform a coordinate ascent on a system. Another approach to solving the system (which is not always possible, particularly in non-linear systems) is to go directly to the minimum. To do this we need to introduce linear algebra. We will represent all our errors and functions in the form of linear algebra. As we mentioned above, linear algebra is just a shorthand for performing lots of multiplications and additions simultaneously. What does it have to do with our system then? Well, the first thing to note is that the linear function we were trying to fit has the following form:
$$ \mappingFunction(x) = mx + c $$
the classical form for a straight line. From a linear algebraic perspective we are looking for multiplications and additions. We are also looking to separate our parameters from our data. The data is the givens; remember, in French the word for data is données, which literally translated means ‘givens’. That’s great, because we don’t need to change the data; what we need to change are the parameters (or variables) of the model. In this function the data comes in through x, and the parameters are m and c.
What we’d like to create is a vector of parameters and a vector of data. Then we could represent the system with vectors that represent the data, and vectors that represent the parameters.
We look to turn the multiplications and additions into a linear algebraic form; we have one multiplication (m×x) and one addition (mx+c). We can turn this into an inner product by writing it in the following way,
$$ \mappingFunction(x) = m \times x + c \times 1, $$
in other words we’ve extracted the unit value from the offset, c. We can think of this unit value like an extra item of data, because it is always given to us, and it is always set to 1 (unlike regular data, which is likely to vary!). We can therefore write each input data location, $\inputVector$, as a vector
$$ \inputVector = \begin{bmatrix} 1\\ x\end{bmatrix}. $$
Now we choose to also turn our parameters into a vector. The parameter vector will be defined to contain
$$ \mappingVector = \begin{bmatrix} c \\ m\end{bmatrix} $$
because if we now take the inner product between these two vectors we recover
$$ \inputVector\cdot\mappingVector = 1 \times c + x \times m = mx + c $$
In numpy we can define this vector as follows
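A minimal numpy sketch of this (the particular values of x, m and c are illustrative, not from the notes):

```python
import numpy as np

x = 2.0   # a single input location (illustrative value)
m = 0.5   # slope
c = 1.0   # offset

x_vec = np.array([1.0, x])  # input with a 1 appended for the offset
w = np.array([c, m])        # parameter vector [c, m]

f = np.dot(x_vec, w)        # recovers 1*c + x*m = m*x + c
```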
This gives us the equivalence between the original operation and an operation in vector space. Whilst the notation here isn’t a lot shorter, the beauty is that we will be able to add as many features as we like and still keep the same representation. In general, we are now moving to a system where each of our predictions is given by an inner product. When we want to represent an inner product in linear algebra, we tend to do it with the transpose operation, so since we have a⋅b=a^⊤b we can write
$$ \mappingFunction(\inputVector_i) = \inputVector_i^\top\mappingVector. $$
Where we’ve assumed that each data point, $\inputVector_i$, is now written by appending a 1 onto the original vector
$$ \inputVector_i = \begin{bmatrix} 1 \\ \inputScalar_i \end{bmatrix} $$
Design Matrix
We can do this for the entire data set to form a design matrix $\inputMatrix$,
$$\inputMatrix = \begin{bmatrix} \inputVector_1^\top \\ \inputVector_2^\top \\ \vdots \\ \inputVector_\numData^\top \end{bmatrix} = \begin{bmatrix} 1 & \inputScalar_1 \\ 1 & \inputScalar_2 \\ \vdots & \vdots \\ 1 & \inputScalar_\numData \end{bmatrix},$$
which in numpy can be done with the following commands:
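One way to build such a design matrix in numpy (the input values are illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])  # illustrative inputs

# Stack a column of ones alongside the inputs to form the design matrix.
X = np.hstack([np.ones((len(x), 1)), x[:, np.newaxis]])
```

Each row of X is one input location with the constant 1 prepended for the offset.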
Writing the Objective with Linear Algebra
When we think of the objective function, we can think of it as the sum of squared errors, where the error is defined in a similar way to what it was in Legendre’s day, $\dataScalar_i - \mappingFunction(\inputVector_i)$; in statistics these errors are also sometimes called residuals. So we can think of the objective and the prediction function as two separate parts. First we have,
$$ \errorFunction(\mappingVector) = \sum_{i=1}^\numData (\dataScalar_i - \mappingFunction(\inputVector_i; \mappingVector))^2, $$
where we’ve made the function $\mappingFunction(\cdot)$’s dependence on the parameters $\mappingVector$ explicit in this equation. Then we have the definition of the function itself,
$$ \mappingFunction(\inputVector_i; \mappingVector) = \inputVector_i^\top \mappingVector. $$
Let’s look again at these two equations and see if we can identify any inner products. The first equation is a sum of squares, which is promising. Any sum of squares can be represented by an inner product,
$$ a = \sum_{i=1}^{k} b^2_i = \mathbf{b}^\top\mathbf{b}, $$
so if we wish to represent $\errorFunction(\mappingVector)$ in this way, all we need to do is convert the sum operator to an inner product. We can get a vector from that sum operator by placing both
$\dataScalar_i$ and $\mappingFunction(\inputVector_i; \mappingVector)$ into vectors, which we do by defining
$$ \dataVector = \begin{bmatrix}\dataScalar_1\\ \dataScalar_2\\ \vdots \\ \dataScalar_\numData\end{bmatrix} $$
and defining
$$ \mappingFunctionVector(\inputMatrix; \mappingVector) = \begin{bmatrix}\mappingFunction(\inputVector_1; \mappingVector)\\ \mappingFunction(\inputVector_2; \mappingVector)\\ \vdots \\ \mappingFunction(\inputVector_\numData; \mappingVector)\end{bmatrix}. $$
The second of these is actually a vector-valued function. This term may appear intimidating, but the idea is straightforward. A vector valued function is simply a vector whose elements are themselves
defined as functions, i.e. it is a vector of functions, rather than a vector of scalars. The idea is so straightforward, that we are going to ignore it for the moment, and barely use it in the
derivation. But it will reappear later when we introduce basis functions. So we will, for the moment, ignore the dependence of $\mappingFunctionVector$ on $\mappingVector$ and $\inputMatrix$ and
simply summarise it by a vector of numbers
$$ \mappingFunctionVector = \begin{bmatrix}\mappingFunction_1\\\mappingFunction_2\\ \vdots \\ \mappingFunction_\numData\end{bmatrix}. $$
This allows us to write our objective in the following, linear algebraic form,
$$ \errorFunction(\mappingVector) = (\dataVector - \mappingFunctionVector)^\top(\dataVector - \mappingFunctionVector) $$
from the rules of inner products. But what of our matrix $\inputMatrix$ of input data? At this point, we need to dust off matrix-vector multiplication. Matrix multiplication is simply a convenient
way of performing many inner products together, and it’s exactly what we need to summarise the operation
$$ f_i = \inputVector_i^\top\mappingVector. $$
This operation tells us that each element of the vector $\mappingFunctionVector$ (our vector valued function) is given by an inner product between $\inputVector_i$ and $\mappingVector$. In other
words it is a series of inner products. Let’s look at the definition of matrix multiplication; it takes the form
$$ \mathbf{c} = \mathbf{B}\mathbf{a}, $$
where c might be a k dimensional vector (which we can interpret as a k×1 dimensional matrix), B is a k×k dimensional matrix and a is a k dimensional vector (k×1 dimensional matrix).
The result of this multiplication is of the form
$$ \begin{bmatrix}c_1\\c_2 \\ \vdots \\ c_k\end{bmatrix} = \begin{bmatrix} b_{1,1} & b_{1, 2} & \dots & b_{1, k} \\ b_{2, 1} & b_{2, 2} & \dots & b_{2, k} \\ \vdots & \vdots & \ddots & \vdots \\ b_{k, 1} & b_{k, 2} & \dots & b_{k, k} \end{bmatrix} \begin{bmatrix}a_1\\a_2 \\ \vdots\\ a_k\end{bmatrix} = \begin{bmatrix} b_{1, 1}a_1 + b_{1, 2}a_2 + \dots + b_{1, k}a_k\\ b_{2, 1}a_1 + b_{2, 2}a_2 + \dots + b_{2, k}a_k \\ \vdots\\ b_{k, 1}a_1 + b_{k, 2}a_2 + \dots + b_{k, k}a_k\end{bmatrix} $$
so we see that each element of the result, c, is simply the inner product between the corresponding row of B and the vector a. Because we have defined each element of $\mappingFunctionVector$ to be given by the inner product between each row of the design matrix and the vector $\mappingVector$, we can now write the full operation in one matrix multiplication,
$$ \mappingFunctionVector = \inputMatrix\mappingVector. $$
Combining this result with our objective function,
$$ \errorFunction(\mappingVector) = (\dataVector - \mappingFunctionVector)^\top(\dataVector - \mappingFunctionVector) $$
we find we have defined the model with two equations. One equation tells us the form of our predictive function and how it depends on its parameters, the other tells us the form of our objective function.
Objective Optimisation
Our model has now been defined with two equations, the prediction function and the objective function. Next we will use multivariate calculus to define an algorithm to fit the model. The separation
between model and algorithm is important and is often overlooked. Our model contains a function that shows how it will be used for prediction, and a function that describes the objective function we
need to optimise to obtain a good set of parameters.
The linear regression model we have described is still the same as the one we fitted above with a coordinate ascent algorithm. We have only played with the notation to obtain the same model in
a matrix and vector notation. However, we will now fit this model with a different algorithm, one that is much faster. It is such a widely used algorithm that from the end user’s perspective it
doesn’t even look like an algorithm, it just appears to be a single operation (or function). However, underneath the computer calls an algorithm to find the solution. Further, the algorithm we obtain
is very widely used, and because of this it turns out to be highly optimised.
Once again we are going to try and find the minimum of our objective by finding its stationary points. However, the stationary points of a multivariate function are a little bit more complex to find. Once again we need to find the point at which the derivative is zero, but now we need to use multivariate calculus to find it. This involves learning a few additional rules of differentiation (that allow you to do the derivatives of a function with respect to a vector), but in the end it makes things quite a bit easier. We define vectorial derivatives as follows,
$$ \frac{\text{d}\errorFunction(\mappingVector)}{\text{d}\mappingVector} = \begin{bmatrix}\frac{\text{d}\errorFunction(\mappingVector)}{\text{d}\mappingScalar_1}\\ \frac{\text{d}\errorFunction(\mappingVector)}{\text{d}\mappingScalar_2}\end{bmatrix}, $$
where $\frac{\text{d}\errorFunction(\mappingVector)}{\text{d}\mappingScalar_1}$ is the partial derivative of the error function with respect to $\mappingScalar_1$.
Differentiation through multiplications and additions is relatively straightforward, and since linear algebra is just multiplication and addition, its rules of differentiation are quite straightforward too, but slightly more complex than regular derivatives.
Multivariate Derivatives
We will need two rules of multivariate or matrix differentiation. The first is differentiation of an inner product. By remembering that the inner product is made up of multiplication and addition, we can hope that its derivative is quite straightforward, and so it proves to be. We can start by thinking about the definition of the inner product,
$$ \mathbf{a}^\top \mathbf{z} = \sum_{i=1}^{k} a_i z_i, $$
which if we were to take the derivative with respect to $z_k$ would simply return the gradient of the one term in the sum for which the derivative is non-zero, that of $a_k z_k$, so we know that
$$ \frac{\text{d}}{\text{d}z_k} \mathbf{a}^\top \mathbf{z} = a_k $$
and by our definition of multivariate derivatives we can simply stack all the partial derivatives of this form in a vector to obtain the result that
$$ \frac{\text{d}}{\text{d}\mathbf{z}} \mathbf{a}^\top \mathbf{z} = \mathbf{a}. $$
The second rule that’s required is differentiation of a ‘matrix quadratic’. A scalar quadratic in z with coefficient c has the form cz^2. If z is a k×1 vector and C is a k×k matrix of
coefficients then the matrix quadratic form is written as z^⊤Cz, which is itself a scalar quantity, but it is a function of a vector.
Matching Dimensions in Matrix Multiplications
There’s a trick for telling that it’s a scalar result. When you are doing maths with matrices, it’s always worth pausing to perform a quick sanity check on the dimensions. Matrix multiplication only works when the dimensions match. To be precise, the ‘inner’ dimension of the matrices must match. What is the inner dimension? If we multiply two matrices A and B, the first of which has k rows and ℓ
columns and the second of which has p rows and q columns, then we can check whether the multiplication works by writing the dimensionalities next to each other,
$$ \mathbf{A} \mathbf{B} \rightarrow (k \times \underbrace{\ell)(p}_\text{inner dimensions} \times q) \rightarrow (k\times q). $$
The inner dimensions are the two inside dimensions, ℓ and p. The multiplication will only work if ℓ=p. The result of the multiplication will then be a k×q matrix: this dimensionality comes from
the ‘outer dimensions’. Note that matrix multiplication is not commutative: if you change the order of the multiplication,
$$ \mathbf{B} \mathbf{A} \rightarrow (\ell \times \underbrace{k)(q}_\text{inner dimensions} \times p) \rightarrow (\ell \times p). $$
firstly it may no longer even work, because now the condition is that k=q, and secondly the result could be of a different dimensionality. An exception is if the matrices are square (i.e. they have the same number of rows as columns) and they are both symmetric. A symmetric matrix is one for which A=A^⊤, or equivalently, a[i,j]=a[j,i] for all i and j.
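This dimension bookkeeping can be checked mechanically in numpy, where a mismatched inner dimension raises an error (the shapes here are illustrative):

```python
import numpy as np

A = np.ones((2, 3))  # k=2 rows, ell=3 columns
B = np.ones((3, 4))  # p=3 rows, q=4 columns

C = A @ B            # inner dimensions match (3 = 3): result is 2x4

# Reversing the order fails, because the condition is now k = q (2 != 4).
try:
    B @ A
    reversed_ok = True
except ValueError:
    reversed_ok = False
```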
You will need to get used to working with matrices and vectors when applying and developing new machine learning techniques. You should have come across them before, but you may not have used them as
extensively as we will now do in this course. You should get used to using this trick to check your work and ensure you know what the dimension of an output matrix should be. For our matrix quadratic
form, it turns out that we can see it as a special type of inner product.
$$ \mathbf{z}^\top\mathbf{C}\mathbf{z} \rightarrow (1\times \underbrace{k) (k}_\text{inner dimensions}\times k) (k\times 1) \rightarrow \mathbf{b}^\top\mathbf{z} $$
where b=Cz so therefore the result is a scalar,
$$ \mathbf{b}^\top\mathbf{z} \rightarrow (1\times \underbrace{k) (k}_\text{inner dimensions}\times 1) \rightarrow (1\times 1) $$
where a (1×1) matrix is recognised as a scalar.
This implies that we should be able to differentiate this form, and indeed the rule for its differentiation is slightly more complex than the inner product, but still quite simple,
$$ \frac{\text{d}}{\text{d}\mathbf{z}} \mathbf{z}^\top\mathbf{C}\mathbf{z}= \mathbf{C}\mathbf{z} + \mathbf{C}^\top \mathbf{z}. $$
Note that in the special case where C is symmetric then we have C=C^⊤ and the derivative simplifies to
$$ \frac{\text{d}}{\text{d}\mathbf{z}} \mathbf{z}^\top\mathbf{C}\mathbf{z}= 2\mathbf{C}\mathbf{z}. $$
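We can sanity-check this differentiation rule numerically with a central finite-difference approximation (a quick check on random values, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 3
C = rng.normal(size=(k, k))  # a general (non-symmetric) coefficient matrix
z = rng.normal(size=k)

# The rule derived above: d/dz (z^T C z) = C z + C^T z.
analytic = C @ z + C.T @ z

# Central finite differences on the scalar function z^T C z.
eps = 1e-6
numeric = np.empty(k)
for i in range(k):
    z_plus, z_minus = z.copy(), z.copy()
    z_plus[i] += eps
    z_minus[i] -= eps
    numeric[i] = (z_plus @ C @ z_plus - z_minus @ C @ z_minus) / (2 * eps)
```

The two gradients agree to within the precision of the finite-difference scheme.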
Differentiate the Objective
First, we need to compute the full objective by substituting our prediction function into the objective function to obtain the objective in terms of $\mappingVector$. Doing this we obtain
$$ \errorFunction(\mappingVector)= (\dataVector - \inputMatrix\mappingVector)^\top (\dataVector - \inputMatrix\mappingVector). $$
We now need to differentiate this quadratic form to find the minimum. We differentiate with respect to the vector $\mappingVector$. But before we do that, we’ll expand the brackets in the quadratic
form to obtain a series of scalar terms. The rules for bracket expansion across the vectors are similar to those for the scalar system, giving
$$ (\mathbf{a} - \mathbf{b})^\top (\mathbf{c} - \mathbf{d}) = \mathbf{a}^\top \mathbf{c} - \mathbf{a}^\top \mathbf{d} - \mathbf{b}^\top \mathbf{c} + \mathbf{b}^\top \mathbf{d}, $$
which substituting for $\mathbf{a} = \mathbf{c} = \dataVector$ and $\mathbf{b}=\mathbf{d} = \inputMatrix\mappingVector$ gives
$$ \errorFunction(\mappingVector)= \dataVector^\top\dataVector - 2\dataVector^\top\inputMatrix\mappingVector + \mappingVector^\top\inputMatrix^\top\inputMatrix\mappingVector $$
where we used the fact that $\dataVector^\top\inputMatrix\mappingVector=\mappingVector^\top\inputMatrix^\top\dataVector$. Now we can use our rules of differentiation to compute the derivative of this
form, which is,
$$ \frac{\text{d}}{\text{d}\mappingVector}\errorFunction(\mappingVector)=- 2\inputMatrix^\top \dataVector + 2\inputMatrix^\top\inputMatrix\mappingVector, $$
where we have exploited the fact that $\inputMatrix^\top\inputMatrix$ is symmetric to obtain this result.
Update Equation for Global Optimum
Once again, we need to find the minimum of our objective function. Using our objective for multiple-input regression we can now minimize for our parameter vector $\mappingVector$. Firstly, just as in the single input case, we seek stationary points by finding parameter vectors for which the gradient is zero,
$$ \mathbf{0}=- 2\inputMatrix^\top \dataVector + 2\inputMatrix^\top\inputMatrix\mappingVector, $$
where 0 is a vector of zeros. Rearranging this equation we find the solution to be
$$ \mappingVector = \left[\inputMatrix^\top \inputMatrix\right]^{-1} \inputMatrix^\top \dataVector $$
where A^−1 denotes matrix inverse.
Solving the Multivariate System
The solution for $\mappingVector$ is given in terms of a matrix inverse, but computation of a matrix inverse requires, in itself, an algorithm to resolve it. You’ll know this if you had to invert, by
hand, a 3×3 matrix in high school. From a numerical stability perspective, it is also best not to compute the matrix inverse directly, but rather to ask the computer to solve the system of linear
equations given by
$$\inputMatrix^\top\inputMatrix \mappingVector = \inputMatrix^\top\dataVector$$
for $\mappingVector$. This can be done in numpy using the command np.linalg.solve, so we can obtain the solution with a single call.
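A minimal sketch (with made-up data standing in for the regression data):

```python
import numpy as np

# Made-up data: y = 2x + 1 exactly, so the recovered parameters are known.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

X = np.hstack([np.ones((len(x), 1)), x[:, np.newaxis]])

# Solve X^T X w = X^T y rather than forming the matrix inverse explicitly.
w = np.linalg.solve(X.T @ X, X.T @ y)  # w[0] is c, w[1] is m
```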
We can map it back to the linear regression and plot the fit as follows
Multivariate Linear Regression
A major advantage of the new system is that we can build a linear regression on a multivariate system. The matrix calculus didn’t specify what the length of the vector $\inputVector$ should be, or
equivalently the size of the design matrix.
Movie Body Count Data
Let’s consider the movie body count data.
Let’s remind ourselves of the features we’ve been provided with.
Now we will build a design matrix based on the numeric features: year, Body_Count, Length_Minutes in an effort to predict the rating. We build the design matrix as follows:
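A sketch of the construction (the DataFrame and its column names, Year, Body_Count, Length_Minutes and IMDB_Rating, are stand-ins assumed here; the real data would be loaded from the movie body count dataset, and the values below are made up):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the movie body count data; column names are assumptions.
movies = pd.DataFrame({
    'Year': [1972, 1984, 2003],
    'Body_Count': [8, 42, 150],
    'Length_Minutes': [175, 107, 121],
    'IMDB_Rating': [9.2, 7.8, 6.5],
})

select_features = ['Year', 'Body_Count', 'Length_Minutes']
X = movies[select_features].to_numpy(dtype=float)
X = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend the bias column
y = movies['IMDB_Rating'].to_numpy(dtype=float)
```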
Relation to Single Input System
Bias as an additional feature.
Now let’s perform a linear regression. But this time, we will create a pandas data frame for the result so we can store it in a form that we can visualise easily.
We can check the residuals to see how good our estimates are
Which shows our model hasn’t yet done a great job of representation, because the spread of values is large. We can check what the rating is dominated by in terms of regression coefficients.
Although we have to be a little careful about interpretation because our input values live on different scales, it looks like we are dominated by the bias, with a small negative effect for later films (but bear in mind the years are large, so this effect is probably larger than it looks) and a positive effect for length. So it looks like long, earlier films generally do better, but the residuals are so high that we probably haven't modelled the system very well.
Solution with QR Decomposition
Performing a solve instead of a matrix inverse is the more numerically stable approach, but we can do even better. A QR-decomposition of a matrix factorises it into an orthogonal matrix Q, so that $\mathbf{Q}^\top \mathbf{Q} = \eye$, and an upper triangular matrix R.
$$ \inputMatrix^\top \inputMatrix \boldsymbol{\beta} = \inputMatrix^\top \dataVector $$
$$ (\mathbf{Q}\mathbf{R})^\top (\mathbf{Q}\mathbf{R})\boldsymbol{\beta} = (\mathbf{Q}\mathbf{R})^\top \dataVector $$
$$ \mathbf{R}^\top (\mathbf{Q}^\top \mathbf{Q}) \mathbf{R} \boldsymbol{\beta} = \mathbf{R}^\top \mathbf{Q}^\top \dataVector $$
$$ \mathbf{R}^\top \mathbf{R} \boldsymbol{\beta} = \mathbf{R}^\top \mathbf{Q}^\top \dataVector $$
$$ \mathbf{R} \boldsymbol{\beta} = \mathbf{Q}^\top \dataVector $$
This is a more numerically stable solution because it removes the need to compute $\inputMatrix^\top\inputMatrix$ as an intermediate. Computing $\inputMatrix^\top\inputMatrix$ is a bad idea because
it involves squaring all the elements of $\inputMatrix$ and thereby potentially reducing the numerical precision with which we can represent the solution. Operating on $\inputMatrix$ directly
preserves the numerical precision of the model.
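The derivation above can be sketched in numpy as follows (toy data; scipy's solve_triangular would exploit the triangular structure of R, but np.linalg.solve suffices for illustration):

```python
import numpy as np

# Made-up data: y = 2x + 1 exactly, so beta should recover [1, 2].
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0
X = np.hstack([np.ones((len(x), 1)), x[:, np.newaxis]])

Q, R = np.linalg.qr(X)              # X = QR, Q orthogonal, R upper triangular
beta = np.linalg.solve(R, Q.T @ y)  # solve R beta = Q^T y
```

Note that X^T X is never formed, which is the point of the QR approach.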
This can be seen more particularly when we begin to work with basis functions in the next session. Some systems that can be resolved with the QR decomposition cannot be resolved by using solve directly.
• Section 1.3 of Rogers and Girolami (2011) for Matrix & Vector Review.
2.8. Discovery of sociological statistics
In social time, in people's employment, SST dialectics finds the basis for the transition from a qualitative description of the social world to a quantitative expression of its variable constants
within an indices system. This basis is the source of TetraSociology's discovery of new, sociological statistics, which do not have analogues. No other sociological theory, to our knowledge, has or
has tried to create a system of its own, sociological, statistical indices. Sociology still does not have indices, which prohibits it from becoming a full-fledged, independent science. If sociology utilises statistics, then it is only traditional economic statistics. This significantly curtails sociology's abilities and limits its pragmatic potential, making it a little-used double of economics. The creation of a radically new system of statistics and statistical indices was attempted 20 years ago within the framework of TetraSociology (then known as "The Sphere Approach"). The
following is a general outline of the statistics.
First, the basis for qualitatively new, sociological indices is formed by the 16 SST constants; the indices denote the constants' variable quantitative values. SST constants being macrosociological,
the indices designating them are also macrosociological; the set of the indices forms a sociological macrostatistics called tetrar, or TMS. This statistics is not economic, but sociological, because
it is based on SST's sociological constants, on their common denominator and gauge - people's employment time. This gauge can be expressed in natural, cost-based and temporal units; we do not
discuss here the question of the relation between them, because it is very complex and comprehensive. TMS is the sum of a multitude of specific indices, called "sphere indices", which are discussed below.
Second, sphere indices are based on the expression of the four PIOT resources. The people resource is designated with the index "P"; the information resource with the index "I"; the organisation
resource with the index "O"; the things resource with the index "T." Because each resource gets reproduced by a relevant sphere to be used in all the four spheres, it is differentiated by SIOT
spheres, each of which gets an index number: 1, 2, 3, 4. The indices denoting PIOT resources' sphere differentiation are called sphere indices. They are designated by numbers and letters, e.g.: P1,
I21, O341, T4123, etc. So, sphere indices are specific statistical indices of a sociological class, denoting different states and intervals of SST's variable constants through PIOT resources'
differentiated indices.
The major form of existence for sphere indices is not separate indices, although they are not entirely absent, but their interlinked clusters in the form of matrices. The sphere indices' basic,
initial matrix, denoting the distribution of 4 resources by 4 spheres - totalling 4x4 - looks like this:
P = P1 + P2 + P3 + P4, where P is population, and P1, P2, P3, P4 - their sphere classes
I = I1 + I2 + I3 + I4, where I is information, and I1, I2, I3, I4 - its clusters
O = O1 + O2 + O3 + O4, where O is organisations, and O1, O2, O3, O4 - their blocks
T = T1 + T2 + T3 + T4, where T is things, material goods, and T1, T2, T3, T4 - their groups
The matrix's lines denote the appropriate spheres' "outputs," i.e. production in them of appropriate products, while the columns denote appropriate spheres' "inputs," a utilisation of appropriate
resources in them. Let us explain. The P line indices denote reproduction of the entire population in the 1st, social sphere, while the index numbers by the letter P designate the classes of people
reproduced for appropriate spheres: P1 - for social, P2 - for informational, P3 - for organisational, P4 - for technical. (These are sphere classes of the population: humanitarian, informational,
organisational, material, engaged in appropriate spheres of reproduction; they are explored below.) The I line indices denote reproduction of all information in the 2nd, informational sphere, while
the index numbers by the "I" designate the clusters of information reproduced for appropriate spheres from the 1st to the 4th. The same applies to the other indices' lines.
Third, based on the basic matrix, a hierarchical system of matrices is created totalling 4x1/4, 4x1, 4x4, 4x16, 4x64, 4x256, etc. The depth of this system (the number of levels) is limited only by
pragmatic considerations and technical possibilities.
Fourth, quantitative changes in the three constants - processes, structures, states - are denoted with PIOT resources' sphere indices. The basic matrix forms the foundation for the matrices of the
indices of processes of production, distribution, exchange, consumption, and aspects thereof: growth, increase, growth rate, efficiency, productivity, etc. The basic matrix forms the foundation for
the matrices of indices of structures (spheres, branches of economy, regions, countries), and aspects thereof: intersphere, interbranch, interregional balances, proportions, growth rate, etc. The
basic matrix forms the foundation for the matrices of the indices of the social world's (its parts') developmental states and its aspects: harmony/disharmony, balance/imbalance, stability/
instability, progress/regress, cyclicity/rhythm, etc. For each of the four spheres, intersphere balances are presented as the "output/expenditures" tables in four quadrants.
Fifth, each sphere index is formed through summation/aggregation of the appropriate active statistical indices: industrial and regional, economic and social, national and international. Experts'
evaluations make up for an absence or limitedness of active indices. So, tetrar macrosociological statistics does not cancel traditional economic statistics, but supplements it and is built over it.
Sphere indices integrate and develop the international statistical systems indices, first of all those of the National Accountancy System (NAS) and "Statistical Package for the Social Sciences"
(SPSS). The sphere statistical indices system produces qualitatively new substantive information about social resources, processes, structures, and states: precisely sociological information, as the most comprehensive information about them. This kind of information - sociological, or to put it more precisely, sociology-statistical - opens up qualitatively new opportunities for the development of both
social thinking and information technologies.
Tetrar macrostatistics includes several algorithms for sphere indices formation and transformation. The set of algorithms is distilled down to four blocks.
First block. A system of algorithms for the selection of operative indices subsets, necessary for the formation of each sphere/sociological index of any level, from the individual to branch, country,
world. It is "Algorithm 1" block.
Second block. A system of algorithms for the aggregation and formation, out of operative indices subsets, of sphere indices. It is "Algorithm 2" block.
Third block. A system of algorithms for the calculation of sphere indices, their matrices, balances, and other models. It is "Algorithm 3" block. The results of this algorithm's calculations/transformations are denoted with sphere indices. It is the first output of the results of calculations of sphere indices, their matrices and balances.
Fourth block. A system of algorithms of sphere indices for conversion into operative ones (industrial, regional, national, etc.). This block, "Algorithm 4," is the opposite of Algorithm 2. The
result of sphere indices calculation is presented in operative indices. It is the second output of sphere indices calculations.
Examples of sphere indices matrices and their numerous uses over 20 years are listed in the appropriate listing in the Appendix. The sphere indices matrix for Russia in 1991 and 1996 is provided in
our 1999 book; one of the book's fragments, on Russia's sphere classes, is quoted below. Sphere, sociological statistics reflects a specific, sphere, or aggregated, discreteness of the social
world. This statistics constitutes the product of TetraSociology, its exclusive feature.
Numbers in Portuguese (Portugal)
Learn numbers in Portuguese (Portugal)
Knowing numbers in Portuguese (Portugal) is probably one of the most useful things you can learn to say, write and understand in Portuguese (Portugal). Learning to count in Portuguese (Portugal) may
appeal to you just as a simple curiosity or be something you really need. Perhaps you have planned a trip to a country where Portuguese (Portugal) is the most widely spoken language, and you want to
be able to shop and even bargain with a good knowledge of numbers in Portuguese (Portugal).
It's also useful for guiding you through street numbers. You'll be able to better understand the directions to places and everything expressed in numbers, such as the times when public transportation
leaves. Can you think of more reasons to learn numbers in Portuguese (Portugal)?
Portuguese (português) is a Romance language from the Indo-European family. Originating in Portugal, it has evolved into different dialects and creoles in Brazil, in five African countries (Angola, Cape Verde, Guinea-Bissau, Mozambique, São Tomé and Príncipe) as well as in Macau and East Timor. Regulated by the Lisbon Science Academy (Academia das Ciências de Lisboa), it is spoken by roughly 10 million people in Portugal alone and 170 million people in Brazil, where Brazilian Portuguese is in use, with mostly spelling and pronunciation differences.
List of numbers in Portuguese (Portugal)
Here is a list of numbers in Portuguese (Portugal). We have made for you a list with all the numbers in Portuguese (Portugal) from 1 to 20. We have also included the tens up to the number 100, so
that you know how to count up to 100 in Portuguese (Portugal). We also close the list by showing you what the number 1000 looks like in Portuguese (Portugal).
• 1) um
• 2) dois
• 3) três
• 4) quatro
• 5) cinco
• 6) seis
• 7) sete
• 8) oito
• 9) nove
• 10) dez
• 11) onze
• 12) doze
• 13) treze
• 14) catorze
• 15) quinze
• 16) dezasseis
• 17) dezassete
• 18) dezoito
• 19) dezanove
• 20) vinte
• 30) trinta
• 40) quarenta
• 50) cinquenta
• 60) sessenta
• 70) setenta
• 80) oitenta
• 90) noventa
• 100) cem
• 1,000) mil
• one million) um milhão
• one billion) mil milhões
• one trillion) um bilião
Numbers in Portuguese (Portugal): Portuguese (Portugal) numbering rules
Each culture has specific peculiarities that are expressed in its language and its way of counting, and Portuguese (Portugal) is no exception. If you want to learn numbers in Portuguese (Portugal), you will have to learn a series of rules that we explain below. If you apply these rules, you will soon find that you can count in Portuguese (Portugal) with ease.
The way numbers are formed in Portuguese (Portugal) is easy to understand if you follow the rules explained here. Surprise everyone by counting in Portuguese (Portugal). Learning to form numbers in Portuguese (Portugal) from these simple rules is also beneficial for your brain, as it forces it to work and stay in shape. Working with numbers and a foreign language like Portuguese (Portugal) at the same time is one of the best ways to train our little gray cells, so let's see what rules you need to apply to form numbers in Portuguese (Portugal).
Digits and numbers from zero to fifteen are specific words, namely zero [0], um [1], dois [2], três [3], quatro [4], cinco [5], seis [6], sete [7], oito [8], nove [9], dez [10], onze [11], doze [12],
treze [13], catorze [14], quinze [15]. Sixteen to nineteen are regular numbers, i.e. named after the ten and the digit, and written phonetically: dezasseis [10 and 6], dezassete [10 and 7], dezoito
[10 and 8], dezanove [10 and 9].
The tens have specific names based on the digits roots except for ten and twenty: dez [10], vinte [20], trinta [30], quarenta [40], cinquenta [50], sessenta [60], setenta [70], oitenta [80] and
noventa [90].
The same applies for the hundreds: cem [100] (plural centos), duzentos [200], trezentos [300], quatrocentos [400], quinhentos [500], seiscentos [600], setecentos [700], oitocentos [800] and novecentos [900].
Tens and units are linked with e (and), as in trinta e cinco [35], as well as hundreds and tens (e.g.: cento e quarenta e seis [146]), but not thousands and hundreds, unless the number ends with a
hundred with two zeroes (e.g.: dois mil e trezentos [2,300], but dois mil trezentos e sete [2,307]). E is also used to link thousands and units (e.g.: quatro mil e cinco [4,005]).
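As a concrete illustration of the rules above, here is a minimal Python sketch (written for this article, not an official library) that names integers from 0 to 999 in European Portuguese, using the spellings listed here:

```python
# European Portuguese number names for 0-999, following the rules above.
UNITS = ["zero", "um", "dois", "três", "quatro", "cinco", "seis", "sete",
         "oito", "nove", "dez", "onze", "doze", "treze", "catorze", "quinze",
         "dezasseis", "dezassete", "dezoito", "dezanove"]
TENS = {20: "vinte", 30: "trinta", 40: "quarenta", 50: "cinquenta",
        60: "sessenta", 70: "setenta", 80: "oitenta", 90: "noventa"}
HUNDREDS = {100: "cem", 200: "duzentos", 300: "trezentos", 400: "quatrocentos",
            500: "quinhentos", 600: "seiscentos", 700: "setecentos",
            800: "oitocentos", 900: "novecentos"}

def pt_number(n: int) -> str:
    """Name an integer 0-999 in European Portuguese."""
    if n < 20:                       # 0-15 are specific words, 16-19 regular
        return UNITS[n]
    if n < 100:
        tens, unit = divmod(n, 10)
        word = TENS[tens * 10]
        # tens and units are linked with "e"
        return word if unit == 0 else f"{word} e {UNITS[unit]}"
    hundreds, rest = divmod(n, 100)
    if rest == 0:
        return HUNDREDS[hundreds * 100]
    # "cem" becomes "cento" when followed by a smaller number
    head = "cento" if hundreds == 1 else HUNDREDS[hundreds * 100]
    return f"{head} e {pt_number(rest)}"
```

Extending the sketch to thousands would follow the "e"-linking exceptions described below for mil.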
European Portuguese uses the long scale system in which we alternate between a scale word and its thousand. Thus, we have milhão (10^6, million), mil milhões (10^9, billion), bilião (10^12,
trillion), mil biliões (10^15, quadrillion), trilião (10^18, quintillion), mil triliões (10^21, sextillion)…
Numbers in different languages | {"url":"https://numbersdata.com/numbers-in-portuguese-portugal","timestamp":"2024-11-03T07:36:54Z","content_type":"text/html","content_length":"20782","record_id":"<urn:uuid:c8e107cb-3967-413d-8bac-4a6e0c9fb018>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00765.warc.gz"} |
Road to 3D Gaussian Splatting
A simplified and self-contained tutorial for understanding 3D Gaussian Splatting [Under Construction]
The end goal of this blog is to cover the concepts and explanations needed to understand 3D Gaussian Splatting. I created this blog for my own learning, and to simplify learning for others, without the need to roam the internet gathering and piecing together the relevant knowledge. This tutorial is based on several readings from different sources, which are cited with links.
Problem Overview
3D Gaussian Splatting is a deep-learning-based method for creating an implicit 3D representation of a scene, which allows projecting the scene onto a 2D surface from different viewpoints. The 3D representation is learned from a few available images of the same scene taken from different viewpoints, along with their camera position information. The goal is to make the representation general enough that we can project the scene onto novel viewpoints. Also, whatever representation we have, we need an algorithm for rendering it to a 2D image.
Representations of a 3D Scene
Classical Representations
A 3D scene can be represented by one of the following:
1. 3D Mesh: a set of vertices, edges, and faces that outline the 3D shape.
2. Point Clouds: a collection of points in 3D space that represent an object. Each point can carry additional numerical information such as color or density.
3. Voxel Grids: the 3D version of pixels. Essentially cubes that partition 3D space where each cube can carry additional information.
Implicit Representations
These include Neural Radiance Fields (NeRFs) and Gaussian Splatting, which will be discussed in detail in later sections.
Neural Radiance Fields (NeRFs)
In order to train a NeRF, we need to construct a dataset of \(N\) images, each with its corresponding camera position information. Then, a neural network (usually an MLP) is trained to take 5 coordinates \((x, y, z, \theta, \phi)\) as input, where \((x, y, z)\) is the 3D location and \((\theta, \phi)\) are the angles determining the viewing direction. The network outputs the color and density values for each pixel of an image of a particular view. The network can hence be optimized by matching the ground-truth image with the generated one over the views available in the dataset.
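As a rough illustration of this input/output interface (not a real trained NeRF; the layer sizes and random weights here are arbitrary assumptions made for the sketch), a minimal forward pass might look like:

```python
import numpy as np

# Illustrative sketch only: a tiny randomly-initialized MLP with the NeRF
# interface -- 5 input coordinates (x, y, z, theta, phi), 4 outputs
# (RGB color + density sigma). Layer sizes are arbitrary choices here.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 4)), np.zeros(4)

def nerf_forward(coords):
    """coords: (N, 5) array of (x, y, z, theta, phi) -> (N, 4) of (r, g, b, sigma)."""
    h = np.maximum(coords @ W1 + b1, 0.0)      # ReLU hidden layer
    out = h @ W2 + b2
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))    # colors squashed into [0, 1]
    sigma = np.maximum(out[:, 3:], 0.0)        # density must be non-negative
    return np.concatenate([rgb, sigma], axis=1)

pred = nerf_forward(rng.normal(size=(8, 5)))   # 8 sample query points
```

A real NeRF additionally applies positional encoding to the inputs and integrates the outputs along camera rays via volumetric rendering.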
Volumetric Rendering
3D Gaussian Splatting
In 3D Gaussian Splatting, the scene is represented by a collection of 3D Gaussians, each parameterized by:
• Location (\(\mu\)): a tuple \((x,y,z)\) which also represents the “mean” of the Gaussian.
• Covariance (\(\Sigma = RSS^TR^T\)): where \(S \in \mathbb{R}^{3 \times 3}\) is a diagonal scaling matrix giving the scale along the 3 axes, and \(R \in \mathbb{R}^{3 \times 3}\) is the rotation matrix, which can be expressed compactly using a quaternion (4 numbers). This factorization corresponds to the eigendecomposition of a covariance matrix.
• Opacity: a value between 0 and 1
• Color parameters: either the RGB values or the coefficients of spherical harmonics. | {"url":"https://nazirnayal.xyz/blog/2024/road-to-3dgs/","timestamp":"2024-11-13T22:35:50Z","content_type":"text/html","content_length":"24259","record_id":"<urn:uuid:da2fabcc-2c97-4430-a8db-fd63849d20b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00868.warc.gz"} |
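The covariance parameterization above can be sketched numerically; the following assumes a quaternion in (w, x, y, z) order and builds \(\Sigma = RSS^TR^T\) with NumPy:

```python
import numpy as np

# Sketch: build a Gaussian's covariance as Sigma = R S S^T R^T from a
# quaternion (w, x, y, z) and per-axis scales, as parameterized above.
def quat_to_rotmat(q):
    w, x, y, z = q / np.linalg.norm(q)     # normalize to a unit quaternion
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance(quat, scales):
    R = quat_to_rotmat(np.asarray(quat, float))
    S = np.diag(scales)                    # diagonal scaling matrix
    return R @ S @ S.T @ R.T               # Sigma = R S S^T R^T

Sigma = covariance([0.9, 0.1, 0.2, 0.1], [1.0, 0.5, 0.2])
```

Because R is orthogonal and SS^T is diagonal, the eigenvalues of the resulting covariance are exactly the squared scales, which is why this factorization is the eigendecomposition.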
Wikipedia, the free encyclopedia
Linear regression
In statistics, linear regression is used for two things:
□ to construct a simple formula that will predict what value will occur for a quantity of interest when other related variables take given values.
□ to allow a test to be made of whether a given variable does have an effect on a quantity of interest in situations where there may be many related variables.
In both cases, several sets of outcomes are available for the quantity of interest together with the related variables.
Linear regression is a form of regression analysis in which the relationship between one or more independent variables and another variable, called the dependent variable, is modelled by a least
squares function, called a linear regression equation. This function is a linear combination of one or more model parameters, called regression coefficients. A linear regression equation with one
independent variable represents a straight line when the predicted value (i.e. the dependent variable from the regression equation) is plotted against the independent variable: this is called a
simple linear regression. However, note that "linear" does not refer to this straight line, but rather to the way in which the regression coefficients occur in the regression equation. The results
are subject to statistical analysis.
Introduction
Theoretical model
A linear regression model assumes, given a random sample $(Y_i, X_{i1}, \ldots, X_{ip}), \, i = 1, \ldots, n$, a possibly imperfect relationship between Y[i], the regressand, and regressors $X_{i1},
\ldots, X_{ip}$. A disturbance term $\varepsilon_i$, which is a random variable too, is added to this assumed relationship to capture the influence of everything else on Y[i] other than $X_{i1}, \
ldots, X_{ip}$. Hence, the multiple linear regression model takes the following form:
$Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_p X_{ip} + \varepsilon_i, \qquad i = 1, \ldots, n$
Note that the regressors are also called independent variables, exogenous variables, covariates, input variables or predictor variables. Similarly, regressands are also called dependent variables,
response variables, measured variables, or predicted variables.
Models which do not conform to this specification may be treated by nonlinear regression. A linear regression model need not be a linear function of the independent variable: linear in this context
means that the conditional mean of Y[i] is linear in the parameters β. For example, the model $Y_i = \beta_1 X_i + \beta_2 X_i^2 + \varepsilon_i$ is linear in the parameters β[1] and β[2], but it is
not linear in $X_i^2$, a nonlinear function of X[i]. An illustration of this model is shown in the example, below.
Data and estimation
It is important to distinguish the model formulated in terms of random variables and the observed values of these random variables. Typically, the observed values, or data, denoted by lower case
letters, consist of n values $(y_i, x_{i1}, \ldots, x_{ip}), \, i = 1, \ldots, n$.
In general there are p + 1 parameters to be determined, $\beta_0, \ldots, \beta_p$. In order to estimate the parameters it is often useful to use the matrix notation
$Y = X \beta + \varepsilon \,$
where Y is a column vector that includes the observed values of $Y_1, \ldots, Y_n$, $\varepsilon$ includes the unobserved stochastic components $\varepsilon_1, \ldots, \varepsilon_n$ and the matrix X
the observed values of the regressors
$X = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p}\\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{pmatrix}$
X includes, typically, a constant column, that is, a column which does not vary across observations, which is used to represent the intercept term β[0].
If there is any linear dependence among the columns of X, then the vector of parameters β cannot be estimated by least squares unless β is constrained, as, for example, by requiring the sum of some
of its components to be 0. However, some linear combinations of the components of β may still be uniquely estimable in such cases. For example, the model $Y_i = \beta_1 X_i + \beta_2 2 X_i + \
varepsilon_i$ cannot be solved for β[1] and β[2] independently as the matrix of observations has the reduced rank 2. In this case the model can be rewritten as $Y_i = (\beta_1 + 2\beta_2)X_i + \
varepsilon_i$ and can be solved to give a value for the composite entity β[1] + 2β[2].
Note that to only perform a least squares estimation of $\beta_0, \beta_1, \ldots, \beta_p$ it is not necessary to consider the sample as random variables. It may even be conceptually simpler to
consider the sample as fixed, observed values, as we have done thus far. However in the context of hypothesis testing and confidence intervals, it will be necessary to interpret the sample as random
variables $(Y_i, X_{i1}, \ldots, X_{ip}), \, i = 1, \ldots, n$ that will produce estimators which are themselves random variables. Then it will be possible to study the distribution of the estimators
and draw inferences.
Classical assumptions
Classical assumptions for linear regression include the assumptions that the sample is selected at random from the population of interest, that the dependent variable is continuous on the real line,
and that the error terms follow identical and independent normal distributions, that is, that the errors are i.i.d. and Gaussian. Note that these assumptions imply that the error term does not
statistically depend on the values of the independent variables, that is, that $\varepsilon_i$ is statistically independent of the predictor variables. This article adopts these assumptions unless
otherwise stated. Note that all of these assumptions may be relaxed, depending on the nature of the true probabilistic model of the problem at hand. The issue of choosing which assumptions to relax,
which functional form to adopt, and other choices related to the underlying probabilistic model are known as specification searches. In particular note that the assumption that the error terms are
normally distributed is of no consequence unless the sample is very small because central limit theorems imply that, so long as the error terms have finite variance and are not too strongly
correlated, the parameter estimates will be approximately normally distributed even when the underlying errors are not.
Under these assumptions, an equivalent formulation of simple linear regression that explicitly shows the linear regression as a model of conditional expectation can be given as
$\mbox{E}(Y_i \mid X_i = x_i) = \alpha + \beta x_i \,$
The conditional expected value of Y[i] given X[i] is an affine function of X[i]. Note that this expression follows from the assumption that the mean of $\varepsilon_i$ is zero conditional on X[i].
Least-squares analysis
Least squares estimates
The first objective of regression analysis is to best-fit the data by estimating the parameters of the model. Of the different criteria that can be used to define what constitutes a best fit, the
least squares criterion is a very powerful one. This estimate (or estimator, if we are in the context of a random sample), is given by
$\hat\beta = (X^T X)^{-1}X^T y \,$
For a full derivation see Linear least squares.
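A minimal numerical sketch of this estimator on synthetic data (the design matrix and coefficients below are illustrative assumptions; in practice a solver such as `numpy.linalg.lstsq` is preferred over forming the inverse explicitly):

```python
import numpy as np

# Numerical sketch of the least-squares estimate beta_hat = (X^T X)^{-1} X^T y.
rng = np.random.default_rng(42)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # intercept column
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + 0.01 * rng.normal(size=n)               # small noise

beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y                 # normal-equations solution
```

With low noise, the estimate recovers the true coefficients closely and agrees with the numerically stabler `lstsq` solution.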
Regression inference
The estimates can be used to test various hypotheses.
Denote by σ^2 the variance of the error term $\varepsilon$ (recall we assume that $\varepsilon_i \sim N(0,\sigma^2)\,$ for every $i=1,\ldots,n$). An unbiased estimate of σ^2 is given by
$\hat \sigma^2 = \frac {S} {n-p} ,$
where $S := \sum_{i=1}^n \hat{\varepsilon}_i^2$ is the sum of square residuals. The relation between the estimate and the true value is:
$\hat\sigma^2 \cdot \frac{n-p}{\sigma^2} \sim \chi_{n-p}^2$
where $\chi_{n-p}^2$ has Chi-square distribution with n − p degrees of freedom.
The solution to the normal equations can be written as ^[1]
This shows that the parameter estimators are linear combinations of the dependent variable. It follows that, if the observational errors are normally distributed, the parameter estimators will follow
a joint normal distribution. Under the assumptions here, the estimated parameter vector is exactly distributed,
$\hat\beta \sim N ( \beta, \sigma^2 (X^TX)^{-1} )$
where N denotes the multivariate normal distribution.
The standard error of a parameter estimator is given by
$\hat\sigma_j=\sqrt{ \frac{S}{n-p-1}\left[\mathbf{(X^TX)}^{-1}\right]_{jj}}.$
The 100(1 − α)% confidence interval for the parameter, β[j], is computed as follows:
$\hat \beta_j \pm t_{\frac{\alpha }{2},n - p - 1} \hat \sigma_j.$
The residuals can be expressed as
$\mathbf{\hat r = y-X \hat\boldsymbol\beta= y-X(X^TX)^{-1}X^Ty}.\,$
The matrix $\mathbf{X(X^TX)^{-1}X^T}$ is known as the hat matrix and has the useful property that it is idempotent. Using this property it can be shown that, if the errors are normally distributed, the residuals will follow a normal distribution with covariance matrix $\sigma^2(I - \mathbf{X(X^TX)^{-1}X^T})\,$. Studentized residuals are useful in testing for outliers.
The hat matrix is the matrix of the orthogonal projection onto the column space of the matrix X.
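These properties are easy to check numerically; the sketch below (on an arbitrary small random design matrix) verifies that the hat matrix is idempotent, symmetric, and leaves the columns of X fixed, as an orthogonal projection should:

```python
import numpy as np

# Verify stated properties of the hat matrix H = X (X^T X)^{-1} X^T.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(10), rng.normal(size=(10, 2))])  # 10 x 3 design
H = X @ np.linalg.inv(X.T @ X) @ X.T                          # hat matrix
```

The trace of H equals the number of columns of X (the rank of the projection).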
Given a value of the independent variable, x[d], the predicted response is calculated as
Writing the elements $x_{dj},\ j=1,\ldots,p$ as $\mathbf z$, the 100(1 − α)% mean response confidence interval for the prediction is given, using error propagation theory, by:
$\mathbf{z^T\hat\boldsymbol\beta} \pm t_{ \frac{\alpha }{2} ,n-p} \hat \sigma \sqrt {\mathbf{ z^T(X^T X)^{- 1}z }}.$
The 100(1 − α)% predicted response confidence intervals for the data are given by:
$\mathbf z^T \hat\boldsymbol\beta \pm t_{\frac{\alpha }{2},n-p-1} \hat \sigma \sqrt {1 + \mathbf{z^T(X^TX)^{-1}z}}.$
Univariate linear case
We consider here the case of the simplest regression model, $Y = \alpha + \beta X + \varepsilon$. In order to estimate α and β, we have a sample $(y_i, x_i), \, i = 1, \ldots, n$ of observations
which are, here, not seen as random variables and denoted by lower case letters. As stated in the introduction, however, we might want to interpret the sample in terms of random variables in some
other contexts than least squares estimation.
The idea of least squares estimation is to minimize the following unknown quantity, the sum of squared errors:
$\sum_{i = 1}^n \varepsilon_i^2 = \sum_{i = 1}^n (y_i - \alpha - \beta x_i)^2$
Taking the derivative of the preceding expression with respect to α and β yields the normal equations:
$\begin{array}{lcl} n\ \alpha + \displaystyle\sum_{i = 1}^n x_i\ \beta = \displaystyle\sum_{i = 1}^n y_i \\ \displaystyle\sum_{i = 1}^n x_i\ \alpha + \displaystyle\sum_{i = 1}^n x_i^2\ \beta = \
displaystyle\sum_{i = 1}^n x_i y_i \end{array}$
This is a linear system of equations which can be solved using Cramer's rule:
$\hat\beta = \frac {n \displaystyle\sum_{i = 1}^n x_i y_i - \displaystyle\sum_{i = 1}^n x_i \displaystyle\sum_{i = 1}^n y_i} {n \displaystyle\sum_{i = 1}^n x_i^2 - \left(\displaystyle\sum_{i = 1}
^n x_i\right)^2} =\frac{\displaystyle\sum_{i = 1}^n(x_i-\bar{x})(y_i-\bar{y})}{\displaystyle\sum_{i = 1}^n(x_i-\bar{x})^2} \,$
$\hat\alpha = \frac {\displaystyle\sum_{i = 1}^n x_i^2 \displaystyle\sum_{i = 1}^n y_i - \displaystyle\sum_{i = 1}^n x_i \displaystyle\sum_{i = 1}^n x_iy_i} {n \displaystyle\sum_{i = 1}^n x_i^2 -
\left(\displaystyle\sum_{i = 1}^n x_i\right)^2}= \bar y-\bar x \hat\beta$
$S = \sum_{i = 1}^n (y_i - \hat{y}_i)^2 = \sum_{i = 1}^n y_i^2 - \frac {n \left(\displaystyle\sum_{i = 1}^n x_i y_i \right)^2 + \left(\displaystyle\sum_{i = 1}^n y_i \right)^2 \displaystyle\sum_{i = 1}^n x_i^2 - 2 \displaystyle\sum_{i = 1}^n x_i \displaystyle\sum_{i = 1}^n y_i \displaystyle\sum_{i = 1}^n x_i y_i } {n \displaystyle\sum_{i = 1}^n x_i^2 - \left(\displaystyle\sum_{i = 1}^n x_i\right)^2}$
$\hat \sigma^2 = \frac {S} {n-2}.$
The covariance matrix is
$\frac{1}{n \displaystyle\sum_{i = 1}^n x_i^2 - \left(\displaystyle\sum_{i = 1}^n x_i\right)^2}\begin{pmatrix} \sum x_i^2 & -\sum x_i \\ -\sum x_i & n \end{pmatrix}$
The mean response confidence interval is given by
$y_d = (\alpha+\hat\beta x_d) \pm t_{ \frac{\alpha }{2} ,n-2} \hat \sigma \sqrt {\frac{1}{n} + \frac{(x_d - \bar{x})^2}{\sum (x_i - \bar{x})^2}}$
The predicted response confidence interval is given by
$y_d = (\alpha+\hat\beta x_d) \pm t_{ \frac{\alpha }{2} ,n-2} \hat \sigma \sqrt {1+\frac{1}{n} + \frac{(x_d - \bar{x})^2}{\sum (x_i - \bar{x})^2}}$
The term $t_{ \frac{\alpha }{2} ,n-2}$ is a reference to the Student's t-distribution. $\hat \sigma$ is standard error.
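The closed-form estimates above can be checked numerically against NumPy's polynomial fit (the synthetic data below is an illustrative assumption):

```python
import numpy as np

# Sketch of the univariate closed-form estimates derived above.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.7 * x + 0.1 * rng.normal(size=x.size)  # true alpha=2, beta=0.7
n = x.size

# Cramer's-rule solution of the normal equations
beta_hat = ((n * np.sum(x * y) - np.sum(x) * np.sum(y))
            / (n * np.sum(x**2) - np.sum(x)**2))    # slope
alpha_hat = y.mean() - x.mean() * beta_hat          # intercept

s = np.sum((y - (alpha_hat + beta_hat * x))**2)     # residual sum of squares
sigma2_hat = s / (n - 2)                            # unbiased variance estimate
```

`np.polyfit(x, y, 1)` returns the same slope and intercept (highest power first).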
Analysis of variance
In analysis of variance (ANOVA), the total sum of squares is split into two or more components.
The "total (corrected) sum of squares" is
$\text{SST} = \sum_{i=1}^n (y_i - \bar y)^2,$
$\bar y = \frac{1}{n} \sum_i y_i$
("corrected" means $\scriptstyle\bar y\,$ has been subtracted from each y-value). Equivalently
$\text{SST} = \sum_{i=1}^n y_i^2 - \frac{1}{n}\left(\sum_i y_i\right)^2$
The total sum of squares is partitioned as the sum of the "regression sum of squares" SSReg (or RSS, also called the "explained sum of squares") and the "error sum of squares" SSE, which is the sum
of squares of residuals.
The regression sum of squares is
$\text{SSReg} = \sum \left( \hat y_i - \bar y \right)^2 = \hat\boldsymbol\beta^T \mathbf{X}^T \mathbf y - \frac{1}{n}\left( \mathbf {y^T u u^T y} \right),$
where u is an n-by-1 vector in which each element is 1. Note that
$\mathbf{y^T u} = \mathbf{u^T y} = \sum_i y_i,$
$\frac{1}{n} \mathbf{y^T u u^T y} = \frac{1}{n}\left(\sum_i y_i\right)^2.\,$
The error (or "unexplained") sum of squares SSE, which is the sum of square of residuals, is given by
$\text{SSE} = \sum_i {\left( {y_i - \hat y_i} \right)^2 } = \mathbf{ y^T y - \hat\boldsymbol\beta^T X^T y}.$
The total sum of squares SST is
$\text{SST} = \sum_i \left( y_i-\bar y \right)^2 = \mathbf{ y^T y}-\frac{1}{n}\left( \mathbf{y^Tuu^Ty}\right)=\text{SSReg}+ \text{SSE}.$
Pearson's coefficient of regression, $R^2$, is then given as
$R^2 = \frac{\text{SSReg}}{{\text{SST}}} = 1 - \frac{\text{SSE}}{\text{SST}}.$
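A short numerical sketch of the decomposition SST = SSReg + SSE and of $R^2$ (synthetic data, illustrative only):

```python
import numpy as np

# ANOVA decomposition for a simple least-squares fit with an intercept.
rng = np.random.default_rng(2)
x = rng.normal(size=40)
y = 3.0 * x + rng.normal(size=40)
slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

sst = np.sum((y - y.mean())**2)        # total (corrected) sum of squares
ssreg = np.sum((y_hat - y.mean())**2)  # regression ("explained") sum of squares
sse = np.sum((y - y_hat)**2)           # error sum of squares (residuals)
r2 = ssreg / sst
```

Note that the identity SST = SSReg + SSE relies on the fitted model including an intercept term.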
If the errors are independent and normally distributed with expected value 0 and they all have the same variance, then under the null hypothesis that all of the elements in β = 0 except the constant,
the statistic
$\frac{ R^2 / ( m-1 ) }{ ( 1 - R^2 )/(n-m)}$
follows an F-distribution with (m − 1) and (n − m) degrees of freedom. If that statistic is too large, then one rejects the null hypothesis. How large is too large depends on the level of the test, which is the tolerated probability of type I error; see statistical significance.
Example
To illustrate the various goals of regression, we give an example. The following data set gives the average heights and weights for American women aged 30-39 (source: The World Almanac and Book of
Facts, 1975).
Height (m) 1.47 1.5 1.52 1.55 1.57 1.60 1.63 1.65 1.68 1.7 1.73 1.75 1.78 1.8 1.83
Weight (kg) 52.21 53.12 54.48 55.84 57.2 58.57 59.93 61.29 63.11 64.47 66.28 68.1 69.92 72.19 74.46
A plot of weight against height (see below) shows that it cannot be modeled by a straight line, so a regression is performed by modeling the data by a parabola.
$Y_i = \beta_0 + \beta_1 X_i + \beta_2 X^2_i +\varepsilon_i \!$
where the dependent variable Y[i] is weight and the independent variable X[i] is height.
Place the observations $x_i, \ x_i^2, \, i = 1, \ldots, n$, in the matrix X.
$\mathbf{X} = \begin{pmatrix} 1&1.47&2.16\\ 1&1.50&2.25\\ 1&1.52&2.31\\ 1&1.55&2.40\\ 1&1.57&2.46\\ 1&1.60&2.56\\ 1&1.63&2.66\\ 1&1.65&2.72\\ 1&1.68&2.82\\ 1&1.70&2.89\\ 1&1.73&2.99\\ 1&1.75&3.06
\\ 1&1.78&3.17\\ 1&1.80&3.24\\ 1&1.83&3.35\\ \end{pmatrix}$
The values of the parameters are found by solving the normal equations
Element ij of the normal equation matrix, $\mathbf{X^TX}$ is formed by summing the products of column i and column j of X.
$x_{ij} = \sum_{k = 1}^{15} x_{ki} x_{kj}$
Element i of the right-hand side vector $\mathbf{X^Ty}$ is formed by summing the products of column i of X with the column of dependent variable values.
$\left(\mathbf{X^Ty}\right)_i= \sum_{k = 1}^{15} x_{ki} y_k$
Thus, the normal equations are
$\begin{pmatrix} 15&24.76&41.05\\ 24.76&41.05&68.37\\ 41.05&68.37&114.35\\ \end{pmatrix} \begin{pmatrix} \hat\beta_0\\ \hat\beta_1\\ \hat\beta_2\\ \end{pmatrix} = \begin{pmatrix} 931\\ 1548\\
2586\\ \end{pmatrix}$
$\hat\beta_0=129 \pm 16$ (value $\pm$ standard deviation)
$\hat\beta_1=-143 \pm 20$
$\hat\beta_2=62 \pm 6$
The calculated values are given by
$\hat{y}_i = \hat\beta_0 + \hat\beta_1 x_i+ \hat\beta_2 x^2_i$
The observed and calculated data are plotted together and the residuals, $y_i - \hat{y}_i$, are calculated and plotted. Standard deviations are calculated using the sum of squares, S = 0.76.
The confidence intervals are computed using:
$[\hat{\beta_j}-\hat\sigma_j t_{m-n;1-\frac{\alpha}{2}};\hat{\beta_j}+\hat\sigma_j t_{m-n;1-\frac{\alpha}{2}}]$
with α = 5%, $t_{m-n;1-\frac{\alpha}{2}}$ = 2.2. Therefore, we can say that the 95% confidence intervals are $\hat\beta_j \pm 2.2\,\hat\sigma_j$.
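The fit above can be reproduced with NumPy; the sketch below recovers parameter values close to those quoted (129, −143, 62) and a residual sum of squares near S = 0.76:

```python
import numpy as np

# Quadratic fit of the height/weight data from the example.
# np.polyfit returns coefficients highest power first, so unpack in reverse
# to get (beta0, beta1, beta2).
height = np.array([1.47, 1.50, 1.52, 1.55, 1.57, 1.60, 1.63, 1.65,
                   1.68, 1.70, 1.73, 1.75, 1.78, 1.80, 1.83])
weight = np.array([52.21, 53.12, 54.48, 55.84, 57.20, 58.57, 59.93, 61.29,
                   63.11, 64.47, 66.28, 68.10, 69.92, 72.19, 74.46])
beta2, beta1, beta0 = np.polyfit(height, weight, 2)

residuals = weight - (beta0 + beta1 * height + beta2 * height**2)
S = np.sum(residuals**2)   # sum of squared residuals
```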
Examining results of regression models
Checking model assumptions
Some of the model assumptions can be evaluated by calculating the residuals and plotting or otherwise analyzing them. The following plots can be constructed to test the validity of the assumptions:
1. Residuals against the explanatory variables in the model, as illustrated above. The residuals should have no relation to these variables (look for possible non-linear relations) and the spread of
the residuals should be the same over the whole range.
2. Residuals against explanatory variables not in the model. Any relation of the residuals to these variables would suggest considering these variables for inclusion in the model.
3. Residuals against the fitted values, $\hat\mathbf y\,$.
4. A time series plot of the residuals, that is, plotting the residuals as a function of time.
5. Residuals against the preceding residual.
6. A normal probability plot of the residuals to test normality. The points should lie along a straight line.
There should not be any noticeable pattern to the data in all but the last plot.
Assessing goodness of fit
1. The coefficient of determination gives what fraction of the observed variance of the response variable can be explained by the given variables.
2. Examine the observational and prediction confidence intervals. In most contexts, the smaller they are the better.
Other procedures
Generalized least squares
Generalized least squares, which includes weighted least squares as a special case, can be used when the observational errors have unequal variance or serial correlation.
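As a sketch of the weighted-least-squares special case, the estimate becomes $\hat\beta = (X^T W X)^{-1} X^T W y$ with a diagonal weight matrix W, typically the inverse error variances (the heteroscedastic data below is an illustrative assumption):

```python
import numpy as np

# Weighted least squares on data whose noise variance grows with x.
rng = np.random.default_rng(3)
n = 60
x = np.linspace(0, 1, n)
sigma = 0.05 + 0.5 * x                        # heteroscedastic noise level
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)  # true intercept 1, slope 2

X = np.column_stack([np.ones(n), x])
W = np.diag(1.0 / sigma**2)                   # weight = inverse variance

# beta_hat = (X^T W X)^{-1} X^T W y, solved without forming the inverse
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Setting W to the identity recovers ordinary least squares.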
Errors-in-variables model
Errors-in-variables model or total least squares when the independent variables are subject to error
Generalized linear model
Generalized linear model is used when the distribution function of the errors is not a Normal distribution. Examples include exponential distribution, gamma distribution, inverse Gaussian
distribution, Poisson distribution, binomial distribution, multinomial distribution
Robust regression
A host of alternative approaches to the computation of regression parameters fall in the category known as robust regression. One technique minimizes the mean absolute error, or some other function of the residuals, instead of the mean squared error as in linear regression. Robust regression is much more computationally intensive than linear regression and is somewhat more difficult to implement as well. While least squares estimates are not very sensitive to violations of the normality-of-errors assumption, this is not true when the variance or mean of the error distribution is unbounded, or when no analyst is available to identify outliers.
Among Stata users, Robust regression is frequently taken to mean linear regression with Huber-White standard error estimates due to the naming conventions for regression commands. This procedure
relaxes the assumption of homoscedasticity for variance estimates only; the predictors are still ordinary least squares (OLS) estimates. This occasionally leads to confusion; Stata users sometimes
believe that linear regression is a robust method when this option is used, although it is actually not robust in the sense of outlier-resistance.
Instrumental variables and related methods
The assumption that the error term in the linear model can be treated as uncorrelated with the independent variables will frequently be untenable, as omitted-variables bias, "reverse" causation, and
errors-in-variables problems can generate such a correlation. Instrumental variable and other methods can be used in such cases.
Applications of linear regression
Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.
The trend line
For trend lines as used in technical analysis, see Trend lines (technical analysis)
A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock
prices) have increased or decreased over the period of time. A trend line could simply be drawn by eye through a set of data points, but more properly their position and slope is calculated using
statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher degree polynomials depending on the degree of curvature desired in the
Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event
(such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated
analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.
Epidemiology
As one example, early evidence relating tobacco smoking to mortality and morbidity came from studies employing regression. Researchers usually include several variables in their regression analysis
in an effort to remove factors that might produce spurious correlations. For the cigarette smoking example, researchers might include socio-economic status in addition to smoking to ensure that any
observed effect of smoking on mortality is not due to some effect of education or income. However, it is never possible to include all possible confounding variables in a study employing regression.
For the smoking example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are often able to generate more compelling
evidence of causal relationships than correlational analysis using linear regression. When controlled experiments are not feasible, variants of regression analysis such as instrumental variables and
other methods may be used to attempt to estimate causal relationships from observational data.
Finance
The capital asset pricing model uses linear regression as well as the concept of Beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the Beta coefficient
of the linear regression model that relates the return on the investment to the return on all risky assets.
Regression may not be the appropriate way to estimate beta in finance, given that beta is supposed to provide the volatility of an investment relative to the volatility of the market as a whole. This would require that both these variables be treated in the same way when estimating the slope. Regression, however, treats all variability as being in the investment-returns variable; that is, it only considers residuals in the dependent variable.^[2]
Environmental science
Linear regression finds application in a wide range of environmental science applications. For example, recent work published in the Journal of Geophysical Research used regression models to identify
data contamination, which led to an overstatement of global warming trends over land. Using the regression model to filter extraneous, nonclimatic effects reduced the estimated 1980–2002 global
average temperature trends over land by about half.^[3]
See also
References
Additional sources
• Cohen, J., Cohen P., West, S.G., & Aiken, L.S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. (2nd ed.) Hillsdale, NJ: Lawrence Erlbaum Associates
• Charles Darwin. The Variation of Animals and Plants under Domestication. (1869) (Chapter XIII describes what was known about reversion in Galton's time. Darwin uses the term "reversion".)
• Draper, N.R. and Smith, H. Applied Regression Analysis Wiley Series in Probability and Statistics (1998)
• Francis Galton. "Regression Towards Mediocrity in Hereditary Stature," Journal of the Anthropological Institute, 15:246-263 (1886). (Facsimile at: [1])
• Robert S. Pindyck and Daniel L. Rubinfeld (1998, 4h ed.). Econometric Models and Economic Forecasts,, ch. 1 (Intro, incl. appendices on Σ operators & derivation of parameter est.) & Appendix 4.3
(mult. regression in matrix form).
• Kaw, Autar; Kalu, Egwu (2008), Numerical Methods with Applications (1st ed.), www.autarkaw.com .
[edit] External links
Design of experiments
Sample size estimation
Descriptive statistics Continuous data Dispersion
Categorical data
Inferential statistics
General estimation
Specific tests
Survival analysis
Linear models
Regression analysis
Statistical graphics
Category • Portal • Topic outline • List of topics | {"url":"http://taggedwiki.zubiaga.org/new_content/ba88ddbd2b5ea302c14baabfe6e9323a","timestamp":"2024-11-13T13:08:28Z","content_type":"application/xhtml+xml","content_length":"103803","record_id":"<urn:uuid:5b4d5128-71cc-4c0f-951b-3b97f6ee94b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00625.warc.gz"} |
PySDR: A Guide to SDR and DSP using Python
4. Digital Modulation
In this chapter we will discuss actually transmitting data using digital modulation and wireless symbols! We will design signals that convey “information”, e.g., 1’s and 0’s, using modulation
schemes like ASK, PSK, QAM, and FSK. We will also discuss IQ plots and constellations, and end the chapter with some Python examples.
The main goal of modulation is to squeeze as much data into the least amount of spectrum possible. Technically speaking, we want to maximize “spectral efficiency”, in units of bits/sec/Hz.
Transmitting 1’s and 0’s faster will increase the bandwidth of our signal (recall Fourier properties), which means more spectrum is used. We will also examine other techniques besides
transmitting faster. There will be many trade-offs when deciding how to modulate, but there will also be room for creativity.
New term alert! Our transmit signal is going to be made up of “symbols”. Each symbol will carry some number of bits of information, and we will transmit symbols back to back, thousands or even
millions in a row.
As a simplified example, let’s say we have a wire and are sending 1’s and 0’s using high and low voltage levels. A symbol is one of those 1’s or 0’s:
In the above example each symbol represents one bit. How can we convey more than one bit per symbol? Let’s study the signals that travel down Ethernet cables, which are defined in an IEEE standard
called IEEE 802.3 1000BASE-T. The common operating mode of Ethernet uses a 4-level amplitude modulation (2 bits per symbol) with 8 ns symbols.
Take a moment to try to answer these questions:
1. How many bits per second are transmitted in the example shown above?
2. How many pairs of these data wires would be needed to transmit 1 gigabit/sec?
3. If a modulation scheme has 16 different levels, how many bits per symbol is that?
4. With 16 different levels and 8 ns symbols, how many bits per second is that?
1. 250 Mbps - (1/8e-9)*2
2. Four (which is what Ethernet cables have)
3. 4 bits per symbol - log_2(16)
4. 0.5 Gbps - (1/8e-9)*4
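The four answers above are just arithmetic; a few lines of Python reproduce them (nothing here is new information, it just restates the worked answers):

```python
import math

symbol_period = 8e-9                  # 8 ns symbols
symbol_rate = 1 / symbol_period       # 125 million symbols per second

# 1. 4 levels -> log2(4) = 2 bits per symbol -> 250 Mbps
bits_per_second = symbol_rate * math.log2(4)

# 2. pairs of wires needed for 1 gigabit/sec
pairs = 1e9 / bits_per_second         # 4, which is what Ethernet cables have

# 3. 16 levels -> bits per symbol
bits_per_symbol_16 = math.log2(16)    # 4

# 4. 16 levels with 8 ns symbols
rate_16 = symbol_rate * bits_per_symbol_16   # 0.5 Gbps
```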
Wireless Symbols
Question: Why can’t we directly transmit the Ethernet signal shown in the figure above? There are many reasons, the biggest two being:
1. Low frequencies require huge antennas, and the signal above contains frequencies down to DC (0 Hz). We can’t transmit DC.
2. Square waves take an excessive amount of spectrum for the bits per second–recall from the Frequency Domain chapter that sharp changes in time domain use a large amount of bandwidth/spectrum:
What we do for wireless signals is start with a carrier, which is just a sinusoid. E.g., FM radio uses a carrier like 101.1 MHz or 100.3 MHz. We modulate that carrier in some way (there are many).
For FM radio it’s an analog modulation, not digital, but it’s the same concept as digital modulation.
In what ways can we modulate the carrier? Another way to ask the same question: what are the different properties of a sinusoid?
1. Amplitude
2. Phase
3. Frequency
We can modulate our data onto a carrier by modifying any one (or more) of these three.
Amplitude Shift Keying (ASK)
Amplitude Shift Keying (ASK) is the first digital modulation scheme we will discuss because amplitude modulation is the simplest to visualize of the three sinusoid properties. We literally modulate
the amplitude of the carrier. Here is an example of 2-level ASK, called 2-ASK:
Note how the average value is zero; we always prefer this whenever possible.
We can use more than two levels, allowing for more bits per symbol. Below shows an example of 4-ASK. In this case each symbol carries 2 bits of information.
Question: How many symbols are shown in the signal snippet above? How many bits are represented total?
20 symbols, so 40 bits of information
How do we actually create this signal digitally, through code? All we have to do is create a vector with N samples per symbol, then multiply that vector by a sinusoid. This modulates the signal onto
a carrier (the sinusoid acts as that carrier). The example below shows 2-ASK with 10 samples per symbol.
The top plot shows the discrete samples represented by red dots, i.e., our digital signal. The bottom plot shows what the resulting modulated signal looks like, which could be transmitted over the
air. In real systems, the frequency of the carrier is usually much much higher than the rate the symbols are changing. In this example there are only three cycles of the sinusoid in each symbol, but
in practice there may be thousands, depending on how high in the spectrum the signal is being transmitted.
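That recipe (N samples per symbol, multiplied by a sinusoid) can be sketched in a few lines of NumPy; the variable names and the choice of three carrier cycles per symbol are mine, chosen to match the figure described above:

```python
import numpy as np

num_symbols = 8
samples_per_symbol = 10

bits = np.random.randint(0, 2, num_symbols)         # random 1's and 0's
symbols = bits * 2 - 1                              # map 0/1 to the two amplitude levels -1/+1
baseband = np.repeat(symbols, samples_per_symbol)   # hold each level for 10 samples

# Modulate onto a carrier: 3 sinusoid cycles per symbol, as in the bottom plot
t = np.arange(len(baseband))
carrier = np.sin(2 * np.pi * 3 / samples_per_symbol * t)
tx = baseband * carrier                             # the 2-ASK signal
```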
Phase Shift Keying (PSK)
Now let’s consider modulating the phase in a similar manner as we did with the amplitude. The simplest form is Binary PSK, a.k.a. BPSK, where there are two levels of phase:
1. No phase change
2. 180 degree phase change
Example of BPSK (note the phase changes):
It’s not very fun to look at plots like this:
Instead we usually represent the phase in the complex plane.
IQ Plots/Constellations
You have seen IQ plots before in the complex numbers subsection of the IQ Sampling chapter, but now we will use them in a new and fun way. For a given symbol, we can show the amplitude and phase on
an IQ plot. For the BPSK example we said we had phases of 0 and 180 degrees. Let’s plot those two points on the IQ plot. We will assume a magnitude of 1. In practice it doesn’t really matter what
magnitude you use; a higher value means a higher power signal, but you can also just increase the amplifier gain instead.
The above IQ plot shows what we will transmit, or rather the set of symbols we will transmit from. It does not show the carrier, so you can think about it as representing the symbols at baseband.
When we show the set of possible symbols for a given modulation scheme, we call it the “constellation”. Many modulation schemes can be defined by their constellation.
To receive and decode BPSK we can use IQ sampling, like we learned about last chapter, and examine where the points end up on the IQ plot. However, there will be a random phase rotation due to the
wireless channel because the signal will have some random delay as it passes through the air between antennas. The random phase rotation can be reversed using various methods we will learn about
later. Here is an example of a few different ways that BPSK signal might show up at the receiver (this does not include noise):
Back to PSK. What if we want four different levels of phase? I.e., 0, 90, 180, and 270 degrees. In this case it would be represented like so on the IQ plot, and it forms a modulation scheme we call
Quadrature Phase Shift Keying (QPSK):
For PSK we always have N different phases, equally spaced around 360 degrees for best results. We often show the unit circle to emphasize that all points have the same magnitude:
Question: What’s wrong with using a PSK scheme like the one in the below image? Is it a valid PSK modulation scheme?
There is nothing invalid about this PSK scheme. You can certainly use it, but, because the symbols are not uniformly spaced, this scheme is not as effective as it could be. Scheme efficiency will
become clear once we discuss how noise impacts our symbols. The short answer is that we want to leave as much room as possible in between the symbols, in case there is noise, so that a symbol is not
interpreted at the receiver as one of the other (incorrect) symbols. We don’t want a 0 being received as a 1.
Let’s detour back to ASK for a moment. Note that we can show ASK on the IQ plot just like PSK. Here is the IQ plot of 2-ASK, 4-ASK, and 8-ASK, in the bipolar configuration, as well as 2-ASK and
4-ASK in the unipolar configuration.
As you may have noticed, bipolar 2-ASK and BPSK are the same. A 180 degree phase shift is the same as multiplying the sinusoid by -1. We call it BPSK, probably because PSK is used way more than ASK.
Quadrature Amplitude Modulation (QAM)
What if we combine ASK and PSK? We call this modulation scheme Quadrature Amplitude Modulation (QAM). QAM usually looks something like this:
Here are some other examples of QAM:
For a QAM modulation scheme, we can technically put points wherever we want to on the IQ plot since the phase and amplitude are modulated. The “parameters” of a given QAM scheme are best defined
by showing the QAM constellation. Alternatively, you may list the I and Q values for each point, like below for QPSK:
Note that most modulation schemes, except the various ASKs and BPSK, are pretty hard to “see” in the time domain. To prove my point, here is an example of QAM in time domain. Can you distinguish
between the phase of each symbol in the below image? It’s tough.
Given the difficulty discerning modulation schemes in the time domain, we prefer to use IQ plots over displaying the time domain signal. We might, nonetheless, show the time domain signal if
there’s a certain packet structure or the sequence of symbols matters.
Frequency Shift Keying (FSK)
Last on the list is Frequency Shift Keying (FSK). FSK is fairly simple to understand–we just shift between N frequencies where each frequency is one possible symbol. However, because we are
modulating a carrier, it’s really our carrier frequency +/- these N frequencies. E.g., we might be at a carrier of 1.2 GHz and shift between these four frequencies:
1. 1.2005 GHz
2. 1.2010 GHz
3. 1.1995 GHz
4. 1.1990 GHz
The example above would be 4-FSK, and there would be two bits per symbol. A 4-FSK signal in the frequency domain might look something like this:
If you use FSK, you must ask a critical question: What should the spacing between frequencies be? We often denote this spacing as Δf.
IQ plots can’t be used to show different frequencies. They show magnitude and phase. While it is possible to show FSK in the time domain, any more than 2 frequencies makes it difficult to
distinguish between symbols:
As an aside, note that FM radio uses Frequency Modulation (FM) which is like an analog version of FSK. Instead of having discrete frequencies we jump between, FM radio uses a continuous audio signal
to modulate the frequency of the carrier. Below is an example of FM and AM modulation where the “signal” at the top is the audio signal being modulated onto to the carrier.
In this textbook we are mainly concerned about digital forms of modulation.
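To make the FSK idea concrete, here is a baseband sketch of 4-FSK; working at baseband means we generate only the +/- offsets from the carrier, and the sample rate, symbol length, and offsets below are illustrative choices, not taken from any standard:

```python
import numpy as np

sample_rate = 10e6                                  # 10 MHz of baseband bandwidth
samples_per_symbol = 1000                           # i.e., 100 microsecond symbols
freqs = np.array([-1.0e6, -0.5e6, 0.5e6, 1.0e6])    # offsets from the carrier, one per symbol value

symbols = np.random.randint(0, 4, 16)               # 4-FSK: 2 bits per symbol
t = np.arange(samples_per_symbol) / sample_rate

# Each symbol is a complex exponential at one of the four frequency offsets
sig = np.concatenate([np.exp(2j * np.pi * freqs[s] * t) for s in symbols])
```

Note that the magnitude of every sample is 1; FSK (like PSK) has a constant envelope, since only the frequency carries the information.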
Differential Coding
In many wireless (and wired) communications protocols based on PSK or QAM, you are likely to run into a step that occurs right before bits are modulated (or right after demodulation), called
differential coding. To demonstrate its utility consider receiving a BPSK signal. As the signal flies through the air it experiences some random delay between the transmitter and receiver, causing a
random rotation in the constellation, as we mentioned earlier. When the receiver synchronizes to it, and aligns the BPSK to the “I” (real) axis, it has no way of knowing if it is 180 degrees out
of phase or not, because the constellation is symmetric. One option is to transmit symbols the receiver knows the value of ahead of time, mixed into the information, known as pilot symbols. The
receiver can use these known symbols to determine which cluster is a 1 or 0, in the case of BPSK. Pilot symbols must be sent at some period, related to how fast the wireless channel is changing,
which will ultimately reduce the data rate. Instead of having to mix pilot symbols into the transmitted waveform, we can choose to use differential coding.
The simplest case of differential coding is when used alongside BPSK, which involves one bit per symbol. Instead of simply transmitting a 1 for binary 1, and a -1 for binary 0, BPSK differential
coding involves transmitting a 0 when the input bit is the same as the encoding of the previous bit (not the previous input bit itself), and transmitting a 1 when it differs. We still transmit the
same number of bits, aside from one extra bit that is needed at the beginning to start the output sequence, but now we don’t have to worry about the 180 degree phase ambiguity. This encoding scheme
can be described as x_i = x_{i-1} ⊕ a_i, where a_i is the ith input bit, x_i is the ith output bit, and ⊕ denotes XOR.
Because the output is based on the previous step’s output, we must start the output with an arbitrary 1 or 0, and as we’ll show during the decoding process, it doesn’t matter which one we
choose (we must still transmit this starter symbol!).
For those visual learners, the differential encoding process can be represented as a diagram, where the delay block is a delay-by-1 operation:
As an example of encoding, consider transmitting the 10 bits [1, 1, 0, 0, 1, 1, 1, 1, 1, 0] using BPSK. Assume we start the output sequence with 1; it actually doesn’t matter whether you use 1 or
0. It helps to show the bits stacked on top of each other, making sure to shift the input to make room for the starting output bit:
Input: 1 1 0 0 1 1 1 1 1 0
Output: 1
Next you build the output by comparing the input bit with the previous output bit, and apply the XOR operation shown in the table above. The next output bit is therefore a 0, because 1 and 1 match:
Input: 1 1 0 0 1 1 1 1 1 0
Output: 1 0
Repeat for the rest and you will get:
Input: 1 1 0 0 1 1 1 1 1 0
Output: 1 0 1 1 1 0 1 0 1 0 0
After applying differential encoding, we would ultimately transmit [1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0]. The 1’s and 0’s are still mapped to the positive and negative symbols we discussed earlier.
The decoding process, which occurs at the receiver, compares the received bit with the previous received bit, which is much simpler to understand:
If you were to receive the BPSK symbols [1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0], you would start at the left and check if the first two match; in this case they don’t so the first bit is a 1. Repeat and
you will get the sequence we started with, [1, 1, 0, 0, 1, 1, 1, 1, 1, 0]. It may not be obvious, but the starter bit we added could have been a 1 or a 0 and we would get the same result.
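The encode and decode steps walked through above fit in a few lines of Python; the bit sequence is the one from the worked example, so the intermediate values can be checked against the stacked rows shown earlier:

```python
def diff_encode(bits, start=1):
    """Output a 0 when the input bit matches the previous *output* bit, else a 1."""
    out = [start]                      # the arbitrary starter bit (1 or 0, it doesn't matter)
    for b in bits:
        out.append(out[-1] ^ b)        # XOR of previous output with current input
    return out

def diff_decode(received):
    """Each decoded bit is 1 exactly when consecutive received bits differ."""
    return [received[i - 1] ^ received[i] for i in range(1, len(received))]

bits = [1, 1, 0, 0, 1, 1, 1, 1, 1, 0]
encoded = diff_encode(bits)            # [1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0], as in the example
decoded = diff_decode(encoded)         # recovers the original 10 bits
```

Starting with 0 instead of 1 flips every encoded bit, but the decoder only looks at differences between consecutive bits, so the decoded sequence comes out the same; that is also why the 180 degree phase ambiguity (which flips every received bit) becomes harmless.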
The encoding and decoding process is summarized in the following graphic:
The big downside to using differential coding is that if you have a bit error, it will lead to two bit errors. The alternative to using differential coding for BPSK is to add pilot symbols
periodically, as discussed earlier, which can also be used to reverse/invert multipath caused by the channel. But one problem with pilot symbols is that the wireless channel can change very quickly,
on the order of tens or hundreds of symbols if it’s a moving receiver and/or transmitter, so you would need pilot symbols often enough to reflect the changing channel. So if a wireless protocol is
putting high emphasis on reducing the complexity of the receiver, such as RDS which we study in the End-to-End Example chapter, it may choose to use differential coding.
Remember that the above differential coding example was specific to BPSK. Differential coding applies at the symbol level, so to apply it to QPSK you work with pairs of bits at a time, and so on for
higher order QAM schemes. Differential QPSK is often referred to as DQPSK.
Python Example
As a short Python example, let’s generate QPSK at baseband and plot the constellation.
Even though we could generate the complex symbols directly, let’s start from the knowledge that QPSK has four symbols at 90-degree intervals around the unit circle. We will use 45, 135, 225, and
315 degrees for our points. First we will generate random numbers between 0 and 3 and perform math to get the degrees we want before converting to radians.
import numpy as np
import matplotlib.pyplot as plt
num_symbols = 1000
x_int = np.random.randint(0, 4, num_symbols) # 0 to 3
x_degrees = x_int*360/4.0 + 45 # 45, 135, 225, 315 degrees
x_radians = x_degrees*np.pi/180.0 # sin() and cos() take in radians
x_symbols = np.cos(x_radians) + 1j*np.sin(x_radians) # this produces our QPSK complex symbols
plt.plot(np.real(x_symbols), np.imag(x_symbols), '.')
plt.grid(True)
plt.show()
Observe how all the symbols we generated overlap. There’s no noise so the symbols all have the same value. Let’s add some noise:
n = (np.random.randn(num_symbols) + 1j*np.random.randn(num_symbols))/np.sqrt(2) # AWGN with unity power
noise_power = 0.01
r = x_symbols + n * np.sqrt(noise_power)
plt.plot(np.real(r), np.imag(r), '.')
plt.grid(True)
plt.show()
Consider how additive white Gaussian noise (AWGN) produces a uniform spread around each point in the constellation. If there’s too much noise then symbols start passing the boundary (the four
quadrants) and will be interpreted by the receiver as an incorrect symbol. Try increasing noise_power until that happens.
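One way to see the errors happen is to actually make the decisions and count the mistakes. The snippet below regenerates the symbols with a fixed seed and decides each received sample by its quadrant; the decision rule and noise level are my own choices for illustration:

```python
import numpy as np

num_symbols = 1000
rng = np.random.default_rng(0)
x_int = rng.integers(0, 4, num_symbols)
x_radians = (x_int * 360 / 4.0 + 45) * np.pi / 180.0   # 45, 135, 225, 315 degrees
x_symbols = np.cos(x_radians) + 1j * np.sin(x_radians)

n = (rng.standard_normal(num_symbols) + 1j * rng.standard_normal(num_symbols)) / np.sqrt(2)
noise_power = 0.3                                      # high enough that some symbols cross a boundary
r = x_symbols + n * np.sqrt(noise_power)

# Decide by quadrant: map each sample's angle back to the 0-3 integer it should be
decided = ((np.angle(r, deg=True) % 360) // 90).astype(int)
num_errors = np.count_nonzero(decided != x_int)
print(num_errors, "symbol errors out of", num_symbols)
```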
For those interested in simulating phase noise, which could result from phase jitter within the local oscillator (LO), replace the r with:
phase_noise = np.random.randn(len(x_symbols)) * 0.1 # adjust multiplier for "strength" of phase noise
r = x_symbols * np.exp(1j*phase_noise)
You could even combine phase noise with AWGN to get the full experience:
r = x_symbols * np.exp(1j*phase_noise) + n * np.sqrt(noise_power)
We’re going to stop at this point. If we wanted to see what the QPSK signal looked like in the time domain, we would need to generate multiple samples per symbol (in this exercise we just did 1
sample per symbol). You will learn why you need to generate multiple samples per symbol once we discuss pulse shaping. The Python exercise in the Pulse Shaping chapter will continue where we left off.
AL for a big class – Vietnam
The number of students in a class in Vietnam is usually large, sometimes 100 or 200 students per class. So the question is: how can the AL method be applied efficiently to such a big class?
We know that with a traditional theory class this is not a big problem, but applying AL to a big class raises many issues:
1) We need to divide the class into groups. How many students should be in each group? How many groups will we have? How many problems do we need to prepare?
2) We need more class space: either one big space, or the groups organized into several sessions.
3) It takes a lot of the lecturer's time: more time to guide students, supervise them, and evaluate results.
4) How do we evaluate the contribution of each student in a group?
Local Limit of the Random Degree Constrained Process
Department of Mathematics,
University of California San Diego
Math 288 - Probability & Statistics
Márton Szőke
Budapest University of Technology
We show that the random degree constrained process (a time-evolving random graph model with degree constraints) has a local weak limit, provided that the underlying host graphs are high degree almost
regular. We, moreover, identify the limit object as a multi-type branching process, by combining coupling arguments with the analysis of a certain recursive tree process. Using a spectral
characterization, we also give an asymptotic expansion of the critical time when the giant component emerges in the so-called random $d$-process, resolving a problem of Warnke and Wormald for large $d$.
Based on joint work with Balázs Ráth and Lutz Warnke; see arXiv:2409.11747.
Lutz Warnke
October 31, 2024
11:00 AM
APM 6402
(Zoom-Talk: Meeting ID: 980 5804 6945, Password: 271781)
Research Areas
Combinatorics, Probability Theory
Math | Budgie
Most simple math operations are doable with the operation command. It takes in an odd number of parameters, alternating between values (which can be either direct numbers or variable names) and
operators. Operators are given as plain names with spaces between words. The supported operators are:
Recall that parentheses are required for arguments with spaces, including operator aliases.
The parenthesis command is also commonly used with math. It takes a single argument and wraps it in () parentheses.
operation : foo times 2
operation : foo (decrease by) bar times { parenthesis : { operation : bar minus 3 } }
variable : bar double { operation : foo (divide by) 3 plus 4 times foo }
In C#:
foo *= 2;
foo -= bar * (bar - 3);
double bar = foo /= 3 + 4 * foo;
In Python:
foo *= 2
foo -= bar * (bar - 3)
bar = foo /= 3 + 4 * foo
Number Types
Some languages recognize a difference between integers, doubles, floats, and other number types. Some do not. For feature parity between other languages, Budgie recognizes only int and double as
valid number types. float, long, ushort, and so on are not supported.
Number Conversions
When you have a double and need an int, use the math as int command to truncate and convert to an int. It behaves similarly to math floor but returns an int instead of a double.
variable : rounded int { math as int : 3.5 }
In C#: int rounded = (int)3.5;
In Python: rounded = math.floor(3.5)
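One caveat worth flagging (my observation, not something the page above states): truncation and flooring agree for positive numbers but not for negative ones, so the C# (int) cast and Python's math.floor shown above give different answers below zero:

```python
import math

# For positive values the two conversions agree
assert int(3.5) == 3 and math.floor(3.5) == 3

# For negative values they do not: a C#-style (int) cast truncates toward zero,
# while math.floor always rounds toward negative infinity
assert int(-3.5) == -3       # Python's int() truncates, like (int) in C#
assert math.floor(-3.5) == -4
```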
Native Commands
All supported languages provide some amount of built-in math operations beyond the simple arithmetic operators. These are typically encapsulated in some kind of global Math object and/or system
namespace that contains simple functions and constants.
Budgie abstracts away the differences in these "native" commands. For example:
In C#: Math.Max(foo, bar)
All possible native math commands are given below.
4.5 Quantification of Inversion Quality
Several approaches are commonly used to gain insight into the reliability of tomograms. For small inverse problems, it is possible to calculate the model resolution matrix (e.g., Menke, 1984) and
present the diagonals, rows, and columns of these matrices as cross-sectional images. Conceptually, the model resolution matrix is the lens or filter through which the inversion sees the study
region. For a linear inverse problem, the parameter estimates are expressed by Equation 14.
[latex]\displaystyle m=[J^{\mathrm{T}}C{_{D}}^{-1}J+\varepsilon D^{\mathrm{T}}D]^{-1}J^{\mathrm{T}}C{_{D}}^{-1}d_{obs}\approx [J^{\mathrm{T}}C{_{D}}^{-1}J+\varepsilon D^{\mathrm{T}}D]^{-1}J^{\mathrm{T}}C{_{D}}^{-1}Jm_{true}[/latex] (14)
In this case, the model resolution matrix R is defined as shown in Equation 15.
[latex]\displaystyle R=[J^{\mathrm{T}}C{_{D}}^{-1}J+\varepsilon D^{\mathrm{T}}D]^{-1}J^{\mathrm{T}}C{_{D}}^{-1}J[/latex] (15)
Consequently, the parameter estimates are the product of the true parameter values and the resolution matrix as shown in Equation 16.
[latex]\displaystyle m=Rm_{true}[/latex] (16)
For linear problems, where J is independent of m[true], R can be calculated prior to data collection. Given an estimate of measurement errors, the model resolution matrix can be calculated using
Equation 15 and used as a tool to assess and refine hypothetical survey designs and regularization criteria. In interpreting inversion results, R is useful for identifying likely inversion artifacts
(Day-Lewis et al., 2005). The model resolution matrix quantifies the spatial averaging inherent to tomography; hence, it gives insight into which regions of a tomogram are well resolved versus poorly
resolved. This information is valuable if tomograms are to be converted to quantitative estimates of porosity, concentration, or other hydrogeologic parameters. Calculation of resolution matrices,
however, remains computationally prohibitive for many problems, particularly those involving 3-D inversion. Hence, few commercially available software packages support calculation of R, and it is
instead more common to look at an inverse problem’s cumulative squared sensitivity vector (S) as shown in Equation 17.
[latex]\displaystyle S=\mathrm{diag}(J^{T}J)[/latex] (17)
Here, J is the sensitivity matrix defined in Equation 10a and diag( ) indicates the diagonal elements of a matrix. The sensitivity matrix can be used to gain semi-quantitative insight into how
resolution varies spatially over a tomogram. Pixels with high values of sensitivity are relatively well informed by the measured data, whereas pixels with low values of sensitivity are poorly
informed. It is important to note that, in contrast to R, S does not account for the effects of regularization criteria (as contained in D) or measurement error (as contained in C[D]). Rather, S is
based only on the survey geometry and measurement sensitivity. An example sensitivity map is provided in the case study in Section 5.2 and qualitatively in Figure 4. Another question is whether
inversion results are consistent with our conceptual models of the site—this is a different definition of inversion quality. A good review exploring this idea is presented by Linde (2014). | {"url":"https://books.gw-project.org/electrical-imaging-for-hydrogeology/chapter/quantification-of-inversion-quality/","timestamp":"2024-11-13T06:17:21Z","content_type":"text/html","content_length":"71115","record_id":"<urn:uuid:914415ca-8c0c-44b2-83e5-300dff178fc0>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00000.warc.gz"} |
Warning: Your Pension & Social Security Are Quoted in FUTURE Dollars – How to Convert to Present Value | Darwin's Money
I was playing around with our firm’s pension tool the other day after plopping in assumptions for retirement date, assumed future salary increases and other factors, the model would return what
various pension options would pay me per month. For instance, if I wanted my survivor (wife) to enjoy the same monthly income after my death, the payment’s lower than if it were payments to me
alone. These numbers are all based on actuarial assumptions about life expectancy. However, what I found to be misleading, and especially so for people who don’t often think in terms of present
versus future value, is that all values are quoted in future dollars. If you don’t get a pension, chances are you’ll be receiving (or at least you’re promised as of now) Social Security. These
projects are also quoted in future dollars.
Future Dollar Quotations are Deceiving
The problem with quoting future obligations in future dollars is, $2,000 in 25 years is worth just a fraction in today’s dollars – so it will be woefully inadequate to live on. For this reason,
looking at these dollar figures is completely futile, useless, unless you’re only a couple years from retirement (or really, receipt of funds). In order to make a meaningful assessment, you need to
convert to present day dollars.
This is as simple as making an assumption about future inflation and then performing a simple Excel calculation. If you want to do this on the fly, there are even tricks you can do in your head.
• Excel – There are PV, NPV and FV functions out there, but to really see how present value is calculated, it’s as simple as first identifying the inflation rate and the number of years you have in
mind. If I want to know what $5,000 25 years into the future is worth at an assumed 3% inflation rate, I simply input the following equation: =5000/(1.03)^25 . The answer is $2388. Note
that your future payment is worth only HALF what you were presented with on the screen. This is inflation at work.
• Rule of 72 – The rule of 72, while not perfect, is a pretty good way to determine how many years it takes for a value to double at a particular interest rate. In this case, if you assume 3%
inflation, then, 72/3 = 24. In excel, this works out nicely such that (1.03)^24 = 2.03. That’s good enough for me. What it says is that every 24 years, your purchasing power will be cut in
half at that given interest rate.
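For readers who prefer code to spreadsheets, the same two tricks in Python (the 3% inflation rate is, as above, just an assumption):

```python
def present_value(future_amount, annual_inflation, years):
    """Discount a future dollar amount back to today's dollars."""
    return future_amount / (1 + annual_inflation) ** years

pv = present_value(5000, 0.03, 25)      # about $2,388 in today's dollars

# Rule of 72: years for purchasing power to halve at a given rate
years_to_halve = 72 / 3                 # 24 years at 3% inflation
halved = (1 + 0.03) ** years_to_halve   # about 2.03, close enough to a doubling
```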
So, a nice rule of thumb would be that if you’re say, 22-28 years from retirement and you see a pension estimate of $4,000 a month in retirement, just realize that you’re really looking at about
$2,000 in today’s dollars – ballpark.
These concepts apply to so many facets of personal finance and investing I couldn’t possibly list them here. It’s good to understand the basics because inevitably, you’ll be presented with a
decision or information at some point where this would come in handy.
Do You Always Convert Back to Present Value?
Is Darwin your real name?
Maybe :>
Why did your parents name you Darwin? Are they “Evolution Extremists”?
Numerical Study on the Dependency of Microstructure Morphologies of Pulsed Laser Deposited TiN Thin Films and the Strain Heterogeneities during Mechanical Testing
Faculty of Metals Engineering and Industrial Computer Science, AGH University of Science and Technology, Mickiewicza 30 Ave., 30-059 Krakow, Poland
Academic Centre for Materials and Nanotechnology, AGH University of Science and Technology, Mickiewicza 30 Ave., 30-059 Krakow, Poland
Author to whom correspondence should be addressed.
Submission received: 22 February 2021 / Revised: 26 March 2021 / Accepted: 27 March 2021 / Published: 30 March 2021
Numerical study of the influence of pulsed laser deposited TiN thin films’ microstructure morphologies on strain heterogeneities during loading was the goal of this research. The investigation was
based on the digital material representation (DMR) concept, applied to replicate the investigated thin film's microstructure morphology. A physically based pulsed laser deposition model was
implemented to recreate characteristic features of the thin film microstructure. The kinetic Monte Carlo (kMC) approach was the basis of the model in the first part of the work. The developed kMC
algorithm was used to generate the thin film's three-dimensional representation with its columnar morphology. Such a digital model was then validated against experimental data from metallographic
analysis of laboratory-deposited TiN(100)/Si. In the second part of the research, the kMC-generated DMR model of the thin film was incorporated into the finite element (FE) simulation. The 3D film's
morphology was discretized with conforming finite element mesh, and then incorporated as a microscale model into the macroscale finite element simulation of nanoindentation test. Such a multiscale
model was finally used to evaluate the development of local deformation heterogeneities associated with the underlying microstructure morphology. In this part, the capabilities of the proposed
approach were clearly highlighted.
1. Introduction
The deposition process of thin films by laser ablation has been known since the 1970s [
]. This method is used in a wide range of industrial applications: from superconductor production, through the manufacturing of forming-tool coatings, up to biocompatible materials for medical
implants [
Deposition processes are especially interesting in the latter applications, as they can provide thin films on medical equipment and increase both its strength and bio-protection properties [
]. However, in these applications, development of any internal defects in the thin film due to, e.g., local strain localization occurring during exploitation conditions, is unacceptable. Such defects
can cause a build-up of biological material and deterioration of strength properties, which may be hazardous for a patient. That is why morphology of deposited thin layers must ensure all mechanical
expectations required for their applications. Presently, there is a wide variety of methods that can be used for deposition purposes. The primary representative of these technologies is the pulsed
laser deposition (PLD) method [
], which is a modification of the standard physical vapor deposition (PVD) approach. The main idea of PLD is based on evaporation and ionization of the surface atoms by a high-power laser beam that
is periodically focused on a target. Particles are first detached by the laser and then strike the surface of the substrate material at high speed, where they nucleate and grow. This process
makes it possible to obtain layers on different kinds of engineering materials. The stoichiometry of the target material is reflected in the films very well, and adhesion between the layer
and the substrate is improved. These properties result mainly from the fact that particles in the atomic beam have considerable kinetic energies (0.1–100 eV), which increase the diffusion rate of adatoms at the
surface [
]. PLD can be divided into three stages: ablation, transportation of atoms through a chamber, and deposition onto a substrate surface. Experimentally, each pulse lasts for a few nanoseconds, and the
time between two pulses is of the order of a second. That is why the thin layers’ growth mechanism by a pulsed laser deposition is an extremely complex process. For this reason, the development of a
specific deposition technology for a given material, often involves long-term research aimed at an empirical determination of the required process parameters.
That is why the authors first decided to develop a numerical model of the PLD process, which can support experimental research and also provide reliable data on thin film morphologies for
further studies of their behavior under exploitation conditions. Commonly used numerical models of deformation neglect the inner structure of deposited thin films [
], which degrades the quality of the obtained data. The deposited material under, e.g., loading conditions is usually treated as isotropic, without taking into account the columns and surface
wrinkling that are commonly observed in structural investigations. Simplified models therefore cannot give sufficient information about the material's resistance to deformation. The development
of a model that precisely maps thin films' morphologies and inner structures during modelling of exploitation conditions thus seems extremely relevant, in order to obtain results comparable with
those from experimental investigations.
To obtain a reliable digital material representation (DMR) of the thin film microstructure morphology, a numerical model of the deposition process was developed first. It provides a digital representation model [
] of layers for subsequent numerical simulations of deformation under nanoindentation conditions [
] as a function of deposition process parameters. As a result, the complex nanoindentation model will provide a basic understanding of local heterogeneous material response to deformation conditions.
2. Numerical Modelling of the PLD Process
The first 2D numerical models of deposition aimed at capturing underlying physics can be found in the literature from early 1970s [
]. Presently, two major types of approach can be distinguished: deterministic and stochastic. The most common examples of deterministic models are based on the molecular dynamics method [
]. This method describes the movement of individual atoms or material particles by the Newton equation of motion. However, to calculate forces and energies between atoms, a set of interatomic
potentials must be defined. The molecular dynamics (MD) method can directly address mechanisms of deposition at the nanoscale. However, due to a small length scale and available computational
resources, it only allows investigating very local material behavior, far below industrial expectations. That is why stochastic models of deposition are more frequently used in practice.
The first group of stochastic methods is based on the cellular automata (CA) technique. The CA technique’s main idea is to divide a specific part of the material into a one-, two-, or
three-dimensional lattice of finite cells. Each cell in the CA space is surrounded by neighbors, which affect one another. The cells' interactions within the CA space are based on the knowledge
defined while studying a particular phenomenon. An example of a CA simulation of epitaxial thin-layer growth was presented in the authors' earlier works [
]. These approaches can be classified as Random Deposition Models (RDM) [
], and consist of three main steps: random deposition of particles at the growing surface, calculation of the total energy of each particle with the migration of mobile particles along the surface,
and eventually, desorption of particles from the surface.
The second group is based on the Monte Carlo (MC) method. Models belonging to that group make it possible to describe the evolution of complex systems, with some simplifications, during
calculations [
]. They are based on random sampling of the examined quantity using its analytical distributions. However, the MC method does not directly account for the elapse of time, which is
particularly important when describing systems in which events occur concurrently and are mutually dependent. Because thin film growth belongs to such systems, a modified version
of the MC method, called the kinetic Monte Carlo (kMC) method, was developed to take the process kinetics into account.
The kMC methods describe the progression of complex systems by identifying all possible events and assigning each a probability of occurrence. For that, the method requires
knowledge of the rate of each event, determined from the energy barriers that the system has to overcome to move to a new configuration. Statistically, the most likely events are
selected more often. The great advantage of the kMC method is that it accounts for the physical time of the process during a simulation. With that, it is possible to take into account the many competing
mechanisms occurring during a deposition process: atom deposition, diffusion on the surface, island formation, attachment to/detachment from existing atom islands, ascending/descending on/from
existing atom islands, and evaporation.
Therefore, the kMC approach was selected for the current investigation focused on the implementation of the PLD deposition model of TiN thin films.
Formulation of the kMC PLD Deposition Model
To describe the evolution of the growing layer during PLD, the important processes mentioned earlier were grouped into two elementary phenomena:
Adsorption—particles from plasma flux are attached to the substrate surface due to the weak van der Waals forces;
Surface-diffusion—particles at the surface can change their sites in favor of energy minimization;
A concept of all mentioned elementary events, which are considered during the kMC model development, is shown in
Figure 1
As mentioned, the key information required by the kMC algorithm is related to the rates of these fundamental events, which can be described by the following set of equations:
Adsorption rate:
$r_{ads} = F \cdot a^2$
where $F$ — deposition rate, $a$ — dimension of an elementary cell.
Surface diffusion rate is given by an Arrhenius-type expression:
$r_{diff} = \alpha_T \, e^{-\Delta E / kT}$
where $k$ — Boltzmann constant, $T$ — relative substrate temperature, $\alpha_T$ — adatom vibration frequency.
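As a rough numerical illustration (not code from the paper), the Arrhenius expression can be evaluated with the parameter values listed later in Table 1 — a 0.8 eV binding energy, a 200 °C substrate temperature, and a 1 × 10^13 Hz vibration frequency:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def diffusion_rate(delta_e_ev, temperature_k, vibration_hz=1e13):
    """Arrhenius-type surface diffusion rate: r_diff = alpha_T * exp(-dE / kT).
    A generic evaluation; the paper's exact prefactor convention may differ."""
    return vibration_hz * math.exp(-delta_e_ev / (K_B_EV * temperature_k))

# Single-bond hop at T_sub = 200 degC (~473 K) with a 0.8 eV barrier:
r = diffusion_rate(0.8, 473.15)
# Raising the temperature makes diffusion exponentially faster:
r_hot = diffusion_rate(0.8, 600.0)
```

The exponential temperature dependence is what makes the substrate temperature such a sensitive knob for the resulting morphology.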
Change of an occupied site is driven by an energy difference (activation energy):
$\Delta E = E_1 - E_0$
where $E_1$ — energy of a particle after a hypothetical change, $E_0$ — current energy of a particle.
The energy of a given particle is defined as:
$E = \sum_i E_i$
where $E_i$ — binding energy between the considered particle and a particle from its neighborhood placed at site $i$.
The schematic block diagram of the implemented kMC algorithm is presented in
Figure 2
Following the kinetic Monte Carlo method, for each calculation step a list of possible events in the system is computed along with the probabilities of their occurrence. Then, based on a random number
from $\langle 0, R)$, where $R$ is the accumulated probability of all events, a single event is selected and applied to the system.
Therefore, the kMC consists of several steps:
1. Creation of a list of all possible events in the system and calculation of the likelihood of their occurrence $r_i$.
2. Calculation of the sum of probabilities of all events, $R = \sum_{j=0}^{i} r_j$.
3. Random selection of a number in the range $\langle 0, R)$. Each event is placed on a stack. Graphically (Figure 3), the height of a particular event represents its probability of occurrence; the overall stack height is thus equal to the cumulated probability of all considered events, $R$. The randomly chosen number unambiguously indicates the event that will be applied to the system. Selection of the event is shown in Figure 3.
4. Transposition of the system to a new state by applying the selected event.
5. Updating the time counter by $\Delta t = 1/R$.
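The event-selection procedure above can be sketched in a few lines of Python (a generic illustration, not the authors' implementation); `events` is assumed here to be a list of (rate, payload) pairs:

```python
import bisect
import random

def kmc_step(events, rng=random.random):
    """One kMC step: build the cumulative rate 'stack', draw a random number
    in <0, R), pick the event it falls on, and return it with dt = 1/R."""
    cumulative = []
    total = 0.0
    for rate, _payload in events:
        total += rate
        cumulative.append(total)
    u = rng() * total                         # random number in <0, R)
    index = bisect.bisect_right(cumulative, u)
    return events[index][1], 1.0 / total      # selected event, time increment
```

With `events = [(1.0, "adsorb"), (3.0, "diffuse")]`, a draw of u = 2.0 lands in the `diffuse` slice of the stack, and the clock advances by Δt = 1/4 — high-rate events are picked proportionally more often, exactly as the stack picture in Figure 3 suggests.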
From the implementation point of view, an aspect that requires particular attention is optimizing the algorithm to reduce computing time. In the classical kMC approach, the list of all events is
recreated at each time step, which is a severely CPU-time-consuming task. To improve the algorithm's performance, the list of events can be initialized only once and later, depending on the
selected event, updated by:
• Adding events, which become possible;
• Removing obsolete events;
• Updating probabilities of all events, which could be affected by a previous change in the system.
Keeping track of all events that will be affected by applying a new event to the system is challenging. The simplest way to achieve this is to recalculate the probabilities of directly related
events within the extent of the existing neighborhood. This is because some types of events, e.g., surface diffusion, depend not only on the particles' actual configuration, but also on the configuration
after a hypothetical movement of the particle. An example of this procedure is shown in
Figure 4
. The considered neighborhood has a radius of a single particle. The red border represents the range of a doubled neighborhood. Affected particles are thus located in the doubled neighborhood before
and after the particle movement.
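For illustration, the "doubled neighborhood" bookkeeping can be sketched as follows in 2D (all names are hypothetical; the paper's 3D data structures are more involved):

```python
def affected_sites(moved_from, moved_to, radius=1):
    """Return lattice sites whose events may need recomputation after a
    particle moves: everything within a doubled neighborhood (2 * radius)
    of either the old or the new position, per the procedure in Figure 4."""
    sites = set()
    for cx, cy in (moved_from, moved_to):
        for dx in range(-2 * radius, 2 * radius + 1):
            for dy in range(-2 * radius, 2 * radius + 1):
                sites.add((cx + dx, cy + dy))
    return sites
```

Only events attached to these sites are recomputed, instead of rebuilding the full event list at every step.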
3. Kinetic Monte Carlo Simulations of the PLD Process
The developed kMC deposition model was used in the work to simulate the growth of the TiN thin film on the single-crystal Si substrate. Deposition process parameters, namely, substrate temperature
$T_{sub} = 200\ °C$
and deposition rate 0.05 nm/s, were selected to obtain columnar growth. Model parameters presented in
Table 1
were selected based on the literature findings [
] and a series of initial simulations.
The TiN thin film with dimensions of 90 nm × 90 nm × 90 nm was obtained from the kMC simulation. An example of the DMR model of the thin film during the kMC simulation is shown in
Figure 5
As presented in
Figure 6
, the shape of generated columns in the thin films is highly irregular. Bottom columns are characterized by a lower surface area than the columns in the thin film’s upper part. To show a change in
the columns’ geometry and the height of the final sample, a surface area of each column at subsequent cross-sections (22, 45, and 67 nm) was calculated and presented in
Figure 6
Additionally, in
Figure 7
, it can be seen that the average height of the four highest columns is between 87 and 91 nm. The average width of those columns in the middle is between 18 and 25 nm, but in the top part the average
width increases to between 19 and 28 nm (
Figure 6
). All columns have a V shape, which can also be observed through microscopic investigations.
4. Experimental Investigation
To validate the PLD model predictions, a 90 nm TiN thin film was deposited in the laboratory conditions on the Si(100) substrate using the 248 nm excimer laser system (Coherent COMPexPro 110F, Santa
Clara, CA, USA) operated at an energy density of ~3 J·cm⁻², a pulse width of 20 ns, and a repetition rate of 10 Hz. The target was a disc 2.54 cm in diameter and 0.5 cm in thickness. The initial pressure in the chamber was set to 5 × 10
Torr. The silicon substrate was subjected to an ultrasonic cleaning procedure for 10 min in acetone and 10 min in methanol and finally, etched for 5 min in 10% HF. The substrate was placed parallel
to the target material surface at a distance of 5 cm. The deposition temperature and nitrogen partial pressure were 200 °C and 1 × 10
Torr, respectively [
]. Process settings closely replicated the conditions selected during numerical modelling presented earlier.
Investigation of the TiN thin layer morphology was carried out by the transmission electron microscopy (TEM, Tecnai TF 20 X-TWIN, FEI, Hillsboro, OR, USA). The sample was prepared for this
investigation by a focused ion beam (FIB) technique (Quanta 3D 200i, FEI). The FIB preparation included an electron beam Pt deposition at the beginning and a low accelerating voltage cleaning as the
final step. To evaluate deposited columns’ thickness, a set of dark field images was taken within the same area using a different peak lying on the first ring of the diffraction pattern. Observed
columns with marked investigated locations and corresponding dimensions are presented in
Figure 8
Table 2
, respectively.
As seen in
Figure 7
Figure 8
Table 2
, the numerical model predictions appropriately replicate experimental observations. Therefore, the developed PLD model can be used to generate realistic morphologies of thin films for further
numerical investigations of their behavior under, e.g., loading conditions based on the mentioned digital material representation concept.
5. Numerical Nanoindentation Test Based on the Explicit Representation of Thin Films Morphologies
The nanoindentation test, which is commonly used to evaluate thin films' mechanical properties, was selected as a case study for the present investigation. A partially coupled concurrent multiscale
methodology was used due to the significant length scale difference between the microscale model of the nanoindentation test and the nanoscale model based on the digital material representation. In this
approach, a particular area of interest from the macroscale model is selected and recalculated with a refined mesh to obtain a more detailed solution. The microscale model contains information on the
sample geometry and boundary conditions of the nanoindentation test. Additional features related to the nanoscale model, e.g., columnar morphology, are initially excluded at this length scale. On the
other hand, the nanoscale model is an arbitrary cutout taken from the micromodel, and it takes into account the digital model of the thin film morphology obtained from the developed kMC algorithm. In
this procedure, the microscale model is first calculated within the finite element (FE) ABAQUS/Explicit code. The refined nanoscale model is resubmitted for simulation with displacement boundary
conditions taken from the micro model simulation. A schematic description of this multiscale technique, applied for the nanoindentation test, is presented in
Figure 9
During the simulation, a diamond nanoindenter is assumed to be a deformable body with elastic modulus and Poisson’s ratio taken from the literature [
]. The micromodel of the thin film was discretized with 150,000 8-node tetrahedron elements. Additionally, a displacement of the Si(100) substrate was fixed by the rigid tool situated at the bottom
of the sample.
Examples of results obtained from the micromodel are shown in
Figure 10
. As mentioned, they are then used as boundary conditions for the nanoscale model, which includes the columnar morphology of the thin film.
As mentioned, the nanoscale model is based on the thin film morphology presented in
Figure 7
. The possibility of assigning material properties to particular TiN columns was described in the previous work [
]. The generated morphology was also subjected to a discretization algorithm. The non-uniform mesh was created using a
software (v0.9) [
]. The FE mesh (
Figure 11
) is highly refined along the columns’ boundaries to adequately capture solution gradients that are expected in these regions due to differences in the hardening behavior of subsequent columns.
The thin film DMR model was discretized with 631,000 four-node linear tetrahedron (C3D4) elements (
Figure 11
). Such a nanoscale model was then located at 9 different locations within the deformation area, according to
Figure 9
. Examples of results presenting the influence of a columnar morphology on inhomogeneities in both stress and strain fields that may result in, e.g., local fracture, are presented in
Figure 12
Figure 13
In the TiN film, especially between columns, large strain and stress irregularities can be observed at larger nanoindenter displacements. Predicting these regions is important, as stress and
strain concentration zones can easily turn into fracture initiation zones and lead to destruction of the thin film, as can be observed experimentally in
Figure 14
6. Discussion
The V-shaped columnar TiN structure investigated within the work (
Figure 8
) is often reported in the literature when a low Si substrate temperature is considered [
]. As presented, such a columnar structure of a thin film can be quite heterogeneous. Therefore, precise control and understanding of a deposition operation are required to obtain desired thin film
morphology. The developed kMC deposition model can serve as a support tool for such an investigation as it provides a reliable and explicit representation of columns. It can be used as a tool for the
preliminary assessment of applied process parameters to evaluate their influence on the final thin film morphology. Additionally, such a digital model can be applied to more complex analyses of film
behavior under further processing conditions, by means of the FE numerical simulations. An example of a numerical analysis of the TiN thin film from the mechanical properties point of view during the
nanoindentation test was presented within the paper as a case study. Such numerical investigation can complement experimental investigations reported, e.g., in [
]. Since the column boundaries have been identified as the fracture initiation zones, the fracture resistance of the inter-column boundaries can be considered as the fracture resistance of the entire
thin film. With the presented concept of combining the kMC deposition model and digital material representation finite element simulations, it is possible to extend research in this area.
7. Conclusions
Based on the presented results, it can be concluded that:
• The kinetic Monte Carlo method is an adequate and feasible technique for numerical simulation of the PLD process and provides a reliable digital representation of microstructure morphology;
• The presented kMC PLD model can be adjusted to design the deposition processes of different nanolayered structures;
• The digital material representation model of the deposited thin films allows prediction of inhomogeneities in stress/strain fields under deformation conditions;
• Predicted local heterogeneities, especially in the interface area and along columns boundaries, can be further used to study fracture initiation and propagation.
Author Contributions
Conceptualization, K.P. and L.M.; methodology, K.P.; software, K.P.; validation, K.P., L.M. and P.B.; formal analysis, K.P. and L.M.; investigation, K.P., L.M., G.S. and G.C.; resources, K.P., G.S.
and G.C.; data curation, K.P. and L.M.; writing—original draft preparation, K.P., L.M., G.S., G.C. and P.B.; writing—review and editing, K.P., L.M., G.S., G.C. and P.B.; visualization, K.P.;
supervision, L.M. and P.B.; project administration, K.P.; funding acquisition, K.P. All authors have read and agreed to the published version of the manuscript.
This research was supported by the National Science Centre (NCN–Poland) Research Project: No. 2015/17/D/ST8/01278. L.M. and P.B. acknowledge internal funding of the AGH University.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study.
Numerical calculations were performed with the use of the PLGrid Infrastructure.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 4. Procedure of choosing particles, and which interrelated events probabilities could be affected (marked as green) after applying a surface diffusion to the system (marked as yellow).
Figure 6. (a) position of the cross-section in the final DMR sample, (b) illustration of columns at particular cross-sections, and (c) diagram representing areas of subsequent columns at particular cross-sections.
Figure 10. Distribution of (a) equivalent stress (MPa) in the model from the side view and, (b) equivalent stress (MPa), (c) displacement (mm) from the top view.
Figure 12. Equivalent stress distribution within the DMR (Digital Material Representation) model after nanoindentation test.
Figure 14. SEM images (Quanta 3D 200i, FEI, Hillsboro, OR, USA) revealing fractures in the (a) Si substrate and (b) TiN thin film after nanoindentation test with force–displacement curves received
during test.
Parameter Value
Domain edge length 90 nm
Elementary cell size 1 nm
Substrate melting temperature 1414 °C
Substrate temperature 200 °C
Binding energy 0.8 eV
Deposition rate 0.05 nm/s
Vibration frequency 1 × 10^13 Hz
Dimension Number 1 2 3 4 5 6 7 8 9 10 11 12
Size (nm) 15.1 25.9 89.1 24.8 33.1 94.4 23.6 30.5 87.6 30.8 53 93.6
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Perzynski, K.; Cios, G.; Szwachta, G.; Bała, P.; Madej, L. Numerical Study on the Dependency of Microstructure Morphologies of Pulsed Laser Deposited TiN Thin Films and the Strain Heterogeneities
during Mechanical Testing. Materials 2021, 14, 1705. https://doi.org/10.3390/ma14071705
Estimating Home Electric Use
Estimating Appliance and Home Electronic Energy Use
Determining how much electricity your appliances and home electronics use can help you understand how much money you are spending to use them. Use the information below to estimate how much
electricity an appliance is using and how much the electricity costs so you can decide whether to invest in a more energy-efficient appliance.
There are several ways to estimate how much electricity your appliances and home electronics use:
• Reviewing the Energy Guide label. The label provides an estimate of the average energy consumption and cost to operate the specific model of the appliance you are using. Note that not all
appliances or home electronics are required to have an Energy Guide label.
• Using an electricity usage monitor to get readings of how much electricity an appliance is using
• Calculating annual energy consumption and costs using the formulas provided below
• Installing a whole-house energy monitoring system.
Electricity Usage Monitors
Electricity usage monitors are easy to use and can measure the electricity usage of any device that runs on 120 volts. (But it can’t be used with large appliances that use 220 volts, such as electric
clothes dryers, central air conditioners, or water heaters.) You can buy electricity usage monitors at most hardware stores for around $25-$50. Before using a monitor, read the user manual.
To find out how many watts of electricity a device is using, just plug the monitor into the electrical outlet the device uses, and then plug the device into the monitor. It will display how many
watts the device uses. If you want to know how many kilowatt-hours (kWh) of electricity the device uses in an hour, a day, or longer, just leave everything set up and read the display later.
Monitors are especially useful for finding the amount of kWh used over any period of time for devices that don’t run constantly, like refrigerators. Some monitors will let you enter the amount your
utility charges per kilowatt-hour and provide an estimate of how much it costs to run the device since it was plugged into the monitor.
Many appliances continue to draw a small amount of stand-by power when they are switched "off." These "phantom loads" occur in most appliances that use electricity, such as televisions, stereos,
computers, and kitchen appliances. Most phantom loads will increase the appliance's energy consumption by a few watt-hours, and you can use a monitor to estimate those too. These loads can be avoided by
unplugging the appliance or using a power strip and using the switch on the power strip to cut all power to the appliance.
Calculating Annual Electricity Consumption and Costs
Follow these steps for finding the annual energy consumption of a product, as well as the cost to operate it.
1. Estimate the number of hours per day an appliance runs. There are two ways to do this:
- Rough estimate
If you know about how much you use an appliance every day, you can roughly estimate the number of hours it runs. For example, if you know you normally watch about 4 hours of television every day,
you can use that number. If you know you run your whole house fan 4 hours every night before shutting it off, you can use that number. To estimate the number of hours that a refrigerator actually
operates at its maximum wattage, divide the total time the refrigerator is plugged in by three. Refrigerators, although turned "on" all the time, actually cycle on and off as needed to maintain
interior temperatures.
- Keep a log
It may be practical for you to keep a usage log for some appliances. For example, you could record the cooking time each time you use your microwave, work on your computer, watch your television,
or leave a light on in a room or outdoors.
2. Find the wattage of the product. There are three ways to find the wattage an appliance uses:
- Stamped on the appliance
The wattage of most appliances is usually stamped on the bottom or back of the appliance, or on its nameplate. The wattage listed is the maximum power drawn by the appliance. Many appliances have
a range of settings, so the actual amount of power an appliance may consume depends on the setting being used. For example, a radio set at high volume uses more power than one set at a low
volume. A fan set at a higher speed uses more power than one set at a lower speed.
- Multiply the appliance ampere usage by the appliance voltage usage
If the wattage is not listed on the appliance, you can still estimate it by finding the electrical current draw (in amperes) and multiplying that by the voltage used by the appliance. Most
appliances in the United States use 120 volts. Larger appliances, such as clothes dryers and electric cooktops, use 240 volts. The amperes might be stamped on the unit in place of the wattage, or
listed in the owner's manual or specification sheet.
- Use online sources to find typical wattages or the wattage of specific products you are considering purchasing. The following links are good options:
The Home Energy Saver provides a list of appliances with their estimated wattage and annual energy use, along with other characteristics (based on "typical" usage patterns; continue using the equations here if you want to find energy use based on your own usage patterns).
ENERGY STAR offers energy-use information on specific products that have earned the ENERGY STAR. The information varies across products, but if you are considering purchasing a new, efficient
product, ENERGY STAR allows you to select and compare specific models. In some cases, you can use the provided information to do your own estimates using the equations here. The information may
also help you compare your current appliances with more efficient models, so you understand potential savings from upgrading to a more efficient appliance.
3. Find the daily energy consumption using the following formula:
(Wattage × Hours Used Per Day) ÷ 1000 = Daily Kilowatt-hour (kWh) consumption
4. Find the annual energy consumption using the following formula:
Daily kWh consumption × number of days used per year = annual energy consumption
5. Find the annual cost to run the appliance using the following formula:
Annual energy consumption × utility rate per kWh = annual cost to run appliance
I. Following the steps above, find the annual cost to operate an electric kettle.
1. Estimate of time used: The kettle is used several times per day, for about 1 total hour.
2. Wattage: The wattage is on the label and is listed at 1500 W.
Photo of a label from an electric kettle showing a wattage of 1500.
3. Daily energy consumption:
(1,500 W × 1 hour) ÷ 1,000 = 1.5 kWh
4. Annual energy consumption: The kettle is used almost every day of the year.
1.5 kWh × 365 = 547.5 kWh
5. Annual cost: The utility rate is 11 cents per kWh.
547.5 kWh × $0.11/kWh = $60.23/year
II. Following the steps above, find the annual cost to operate a paper shredder.
1. Estimate of time used: The shredder runs for about 15 minutes (0.25 hour) on each day it is used.
Label from a paper shredder showing 120 volts and 3 amperes.
2. Wattage: The wattage is not listed on the label, but the electrical current draw is listed at 3 amperes.
120V × 3A = 360W
3. Daily energy consumption:
(360 W × 0.25 hours) ÷ 1,000 = 0.09 kWh
4. Annual energy consumption: The shredder is used about once per week (52 days per year).
0.09 kWh × 52 = 4.68 kWh
5. Annual cost to operate: The utility rate is 11 cents per kWh.
4.68 kWh × $0.11/kWh = $0.51/year
Whole-House Energy Monitoring Systems
If you want more detailed data on your home's energy use (as well as the ability to measure the energy use of 240-volt appliances), you might consider installing a whole-house energy monitoring
system. The features of these systems vary, and the cost and complexity depend on the number of circuits you want to monitor, the level of detail of the data, and the features offered. The
monitors are often installed directly in the home's main breaker panel, and some may require an electrician to install. Some monitors connect to your home's wireless network, with data
viewed on a computer or smartphone, while others come with a dedicated display.
In addition to providing information on the energy consumption of your appliances, these monitors help you understand where and when you use the most energy, allowing you to develop strategies to
reduce your energy use and costs.
Typical Wattages of Various Appliances
Here are some examples of the range of nameplate wattages for various household appliances:
• Aquarium = 50-1210 Watts
• Clock radio = 10
• Coffee maker = 900-1200
• Clothes washer = 350-500
• Clothes dryer = 1800-5000
• Dishwasher = 1200-2400 (using the drying feature greatly increases energy consumption)
• Dehumidifier = 785
• Electric blanket (single/double) = 60/100
• Fans:
□ Ceiling = 65-175
□ Window = 55-250
□ Furnace = 750
□ Whole house = 240-750
• Hair dryer = 1200-1875
• Heater (portable) = 750-1500
• Clothes iron = 1000-1800
• Microwave oven = 750-1100
• Personal computer:
□ CPU - awake/asleep = 120 / 30 or less
□ Monitor - awake/asleep = 150 / 30 or less
□ Laptop = 50
• Radio (stereo) = 70-400
• Refrigerator (frost-free, 16 cubic feet) = 725
• Television (color):
□ 19" = 65-110
□ 27" = 113
□ 36" = 133
□ 53"-61" projection = 170
□ Flat screen = 120
• Toaster = 800-1400
• Toaster oven = 1225
• VCR/DVD = 17-21 / 20-25
• Vacuum cleaner = 1000-1440
• Water heater (40 gallon) = 4500-5500
• Water pump (deep well) = 250-1100
• Water bed (with heater, no cover) = 120-380
Source: U.S. Dept. of Energy
Algebraic Methods and $q$-Special Functions
Algebraic Methods and $q$-Special Functions
Luc Vinet : Université de Montréal, Québec, QC, Canada
A co-publication of the AMS and Centre de Recherches Mathématiques
Softcover ISBN: 978-0-8218-2026-1
Product Code: CRMP/22
List Price: $108.00
MAA Member Price: $97.20
AMS Member Price: $86.40
eBook ISBN: 978-1-4704-3936-1
Product Code: CRMP/22.E
List Price: $101.00
MAA Member Price: $90.90
AMS Member Price: $80.80
Softcover ISBN: 978-0-8218-2026-1
eBook ISBN: 978-1-4704-3936-1
Product Code: CRMP/22.B
List Price: $209.00 $158.50
MAA Member Price: $188.10 $142.65
AMS Member Price: $167.20 $126.80
• CRM Proceedings & Lecture Notes
Volume: 22; 1999; 276 pp
MSC: Primary 33; Secondary 05; 43; 81
There has been revived interest in recent years in the study of special functions. Many of the latest advances in the field were inspired by the works of R. A. Askey and colleagues on basic
hypergeometric series and I. G. Macdonald on orthogonal polynomials related to root systems. Significant progress was made by the use of algebraic techniques involving quantum groups, Hecke
algebras, and combinatorial methods.
The CRM organized a workshop for key researchers in the field to present an overview of current trends. This volume consists of the contributions to that workshop. Topics include basic
hypergeometric functions, algebraic and representation-theoretic methods, combinatorics of symmetric functions, root systems, and the connections with integrable systems.
Titles in this series are co-published with the Centre de Recherches Mathématiques.
Graduate students and research mathematicians interested in special functions, combinatorics, representation theory, harmonic analysis, quantum groups, integrable systems, and mathematical
physics; theoretical physicists.
□ Chapters
□ Science fiction and Macdonald’s polynomials
□ On the expansion of elliptic functions and applications
□ Generalized hypergeometric functions–Classification of identities and explicit rational approximations
□ Tensor products of $q$-superalgebras and $q$-series identities. I
□ $q$-Racah polynomials for $BC$ type root systems
□ Intertwining operators of type $B_N$
□ Symmetries and continuous $q$-orthogonal polynomials
□ Addition theorems for spherical polynomials on a family of quantum spheres
□ On a $q$-analogue of the string equation and a generalization of the classical orthogonal polynomials
□ The $q$-Bessel function on a $q$-quadratic grid
□ Three statistics on lattice paths
□ Quantum Grothendieck polynomials
□ $q$-difference raising operators for Macdonald polynomials and the integrality of transition coefficients
□ Great powers of $q$-calculus
□ $q$-special functions: Differential-difference equations, roots of unity, and all that
□ On algebras of creation and annihilation operators
18.2 Conditional moments and scale | Forecasting and Analytics with the Augmented Dynamic Adaptive Model (ADAM)
18.2 Conditional moments and scale
We have already discussed how to obtain conditional expectation and variance in Sections 5.3 and 6.3. However, the topic is worth discussing in more detail, especially for non-Normal distributions.
18.2.1 Conditional expectation
The general rule that applies to ADAM in terms of generating conditional expectations is that if you deal with the pure additive model, then you can produce forecasts analytically. This not only
applies to ETS but also to ARIMA (Subsection 9.2.1) and Regression (Section 10.2). If the model has multiplicative components (such as multiplicative error, or trend, or seasonality) or is formulated
in logarithms (for example, ARIMA in logarithms), then simulations should be preferred (Section 18.1) – the point forecasts from these models would not necessarily correspond to the conditional expectations.
18.2.2 Explanatory variables
If the model contains explanatory variables, then the \(h\) steps ahead conditional expectations should use them in the calculation. The main challenge in this situation is that future values might
not be known in some cases. This has been discussed in Section 10.2. Practically speaking, if the user provides the holdout sample values of explanatory variables, the forecast.adam() method will use
them in forecasting. If they are not provided, the function will produce forecasts for each of the explanatory variables via the adam() function and use the conditional \(h\) steps ahead expectations
in forecasting.
18.2.3 Conditional variance and scale
Similar to conditional expectations, as we have discussed in Sections 5.3 and 6.3, the conditional \(h\) steps ahead variance is in general available only for the pure additive models. While the
conditional expectation might be required on its own to use as a point forecast, the conditional variance is typically needed to produce prediction intervals. However, it becomes useful only in cases
of distributions that support convolution (addition of random variables), which limits its usefulness to pure additive models and to additive models applied to the data in logarithms. For example, if
we deal with Inverse Gaussian distribution, then the \(h\) steps ahead values will not follow Inverse Gaussian distribution, and we would need to revert to simulations in order to obtain the proper
statistics for it. Another situation would be a multiplicative error model that relies on Normal distribution – the product of Normal distributions is not a Normal distribution, so the statistics
would need to be obtained using simulations again.
If we deal with a pure additive model with either Normal, Laplace, S, or Generalised Normal distributions, then the formulae derived in Section 5.3 can be used to produce \(h\) steps ahead
conditional variance. Having obtained those values, we can then produce conditional \(h\) steps ahead scales for the distributions (which would be needed, for example, to generate quantiles), using
the relations between the variance and scale in those distributions (discussed in Section 5.5):
1. Normal: scale is \(\sigma^2_h\);
2. Laplace: \(s_h = \sigma_h \sqrt{\frac{1}{2}}\);
3. Generalised Normal: \(s_h = \sigma_h \sqrt{\frac{\Gamma(1/\beta)}{\Gamma(3/\beta)}}\);
4. S: \(s_h = \sqrt{\sigma_h}\sqrt[4]{\frac{1}{120}}\).
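These variance-to-scale relations are mechanical to apply; a minimal Python sketch (the function and argument names are illustrative, not from the smooth package):

```python
import math

def scale_from_variance(sigma2_h, distribution, beta=None):
    """Convert the h-steps-ahead variance to the distribution's scale parameter,
    following the relations listed above. `distribution` is one of 'normal',
    'laplace', 'gnormal' (requires the shape parameter beta), or 's'."""
    sigma_h = math.sqrt(sigma2_h)
    if distribution == "normal":
        return sigma2_h                      # scale is the variance itself
    if distribution == "laplace":
        return sigma_h * math.sqrt(0.5)
    if distribution == "gnormal":
        return sigma_h * math.sqrt(math.gamma(1 / beta) / math.gamma(3 / beta))
    if distribution == "s":
        return math.sqrt(sigma_h) * (1 / 120) ** 0.25
    raise ValueError(f"unknown distribution: {distribution}")
```

For example, with `beta = 2` the Generalised Normal case reduces to the familiar Normal relationship \(s = \sigma\sqrt{2}\).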
If the variance is needed for the other combinations of model/distributions, simulations would need to be done to produce multiple trajectories, similar to how it was done in Section 18.1. An
alternative to this would be the calculation of in-sample multistep forecast errors (similar to how it was discussed in Sections 11.3 and 14.7.3) and then calculating the variance based on them for
each horizon \(j = 1 \dots h\).
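The empirical alternative can be sketched as follows, using a naive last-value forecast purely for illustration: for each in-sample origin, collect the j-steps-ahead errors and take a variance per horizon:

```python
def multistep_error_variances(y, h):
    """For each origin t, forecast y[t+1..t+h] with the last observed value and
    collect the j-steps-ahead errors; return one error variance per horizon j."""
    errors = {j: [] for j in range(1, h + 1)}
    for t in range(len(y) - h):
        forecast = y[t]                      # naive forecast for every horizon
        for j in range(1, h + 1):
            errors[j].append(y[t + j] - forecast)

    def var(e):                              # population variance of the errors
        m = sum(e) / len(e)
        return sum((x - m) ** 2 for x in e) / len(e)

    return [var(errors[j]) for j in range(1, h + 1)]
```

A real application would substitute the fitted model's multistep forecasts for the naive one; the bookkeeping per horizon stays the same.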
In the smooth package for R, there is a multicov() method that allows extracting the multiple steps ahead covariance matrix \(\hat{\boldsymbol{\Sigma}}\) (see Subsection 11.3.5). The method can
estimate the covariance matrix using analytical formulae (where available), or via empirical calculations (based on multiple steps ahead in-sample error), or via simulation. Here is an example for
one of the models in R:
## h1 h2 h3 h4 h5 h6 h7
## h1 1.836 2.280 2.672 3.019 3.325 3.596 3.836
## h2 2.280 4.667 5.598 6.421 7.148 7.791 8.359
## h3 2.672 5.598 8.556 9.992 11.260 12.381 13.372
## h4 3.019 6.421 9.992 13.520 15.458 17.172 18.687
## h5 3.325 7.148 11.260 15.458 19.541 21.971 24.118
## h6 3.596 7.791 12.381 17.172 21.971 26.584 29.482
## h7 3.836 8.359 13.372 18.687 24.118 29.482 34.595
18.2.4 Scale model
In the case of the scale model (Chapter 17), the situation becomes more complicated because we no longer assume that the variance of the error term is constant (residuals are homoscedastic) – we now
assume that it follows a model of its own. In this case, we need to take a step back to the recursion (5.10) and, when taking the conditional variance, introduce the time-varying variance \(\sigma_{t+h}^2\).
Remark. Note the difference between \(\sigma_{t+h}^2\) and \(\sigma_{h}^2\) in our notations – the former is the variance of the error term for the specific step \(t+h\), while the latter is the
conditional variance \(h\) steps ahead, which is derived based on the assumption of homoscedasticity.
Making that substitution leads to the following analytical formula for the \(h\) steps ahead conditional variance in the case of the scale model: \[\text{V}(y_{t+h}|t) = \sum_{i=1}^d \left(\mathbf{w}_{m_i}^\prime \sum_{j=1}^{\lceil\frac{h}{m_i}\rceil-1} \mathbf{F}_{m_i}^{j-1} \mathbf{g}_{m_i} \mathbf{g}^\prime_{m_i} (\mathbf{F}_{m_i}^\prime)^{j-1} \mathbf{w}_{m_i} \sigma_{t+h-j}^2 \right) + \sigma_{t+h}^2 . \tag{18.1}\] This variance can then be used, for example, to produce quantiles from the assumed distribution.
As mentioned above, in the case of the not purely additive model or a model with other distributions than Normal, Laplace, S, or Generalised Normal, the conditional variance can be obtained using
simulations. In the case of the scale model, the principles will be the same, just assuming that each error term \(\epsilon_{t+h}\) has its own scale, obtained from the estimated scale model. The
rest of the logic will be exactly the same as discussed in Section 18.1.
8.4 Conditional Plots | An Introduction to Spatial Data Science with GeoDa
8.4 Conditional Plots
Conditional plots, also known as small multiples (Tufte 1983), Trellis graphs (Becker, Cleveland, and Shyu 1996), or facet graphs (Wickham 2016), provide a means to assess interactions among up to
four variables. Multiple graphs are constructed for different subsets of the observations, obtained as a result of conditioning on the value of one or two variables, different from the variable(s) of
interest. The graphs can be any kind of statistical graph, such as a histogram, box plot, or scatter plot. The same principle can be applied to choropleth maps, resulting in so-called micromaps (Carr
and Pickle 2010).
With one conditioning variable, the observations are grouped into subsets according to the values taken by the conditioning variable, organized along the x-axis from low to high, e.g., below or above
the median. For each of these subsets, a separate graph or map is drawn for the variable(s) of interest. The same principle is applied when there are two conditioning variables, resulting in a matrix
of graphs, each corresponding to a subset of the observations that fall in the specified intervals for the conditioning variables on the x and y-axis. Of course, for this to be meaningful, one has to
make sure that each of the subsets contains a sufficient number of observations.
The point of departure in the conditional plots is that the subsetting of observations should have no impact on the statistic in question. In other words, all the graphs or maps should look more or
less the same, irrespective of the subset. Systematic variation across subsets indicates the presence of heterogeneity, either in the form of structural breaks (discrete categories), or suggesting an
interaction effect between the conditioning variable(s) and the statistic under scrutiny.
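The subsetting logic behind a conditional plot is simple to express; a minimal Python sketch (the variable names are illustrative) partitions observations into a 2×2 grid by two conditioning variables and computes a statistic per cell, where a conditional plot would instead draw one graph or map per cell:

```python
def condition_cells(rows, cond_x, cond_y, stat, x_break, y_break):
    """Partition observations into a 2x2 grid by two conditioning variables
    (below/above the given break values) and compute a statistic per cell."""
    cells = {}
    for row in rows:
        key = (row[cond_x] > x_break, row[cond_y] > y_break)
        cells.setdefault(key, []).append(row)
    return {key: stat(subset) for key, subset in cells.items()}

# Toy data: two conditioning variables and one variable of interest per observation.
rows = [{"educ": e, "alt": a, "pov": p}
        for e, a, p in [(1, 1, 10), (1, 9, 20), (9, 1, 30), (9, 9, 40)]]
mean_pov = lambda subset: sum(r["pov"] for r in subset) / len(subset)
print(condition_cells(rows, "educ", "alt", mean_pov, x_break=5, y_break=5))
```

If the per-cell statistics (or graphs) differ systematically, that is the heterogeneity or interaction effect the conditional plot is designed to reveal.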
In GeoDa, conditional graphs are implemented for the histogram, box plot, scatter plot and thematic map.
8.4.1 Implementation
A conditional plot is invoked from the menu by selecting Explore > Conditional Plot. This brings up a list of four options: Map, Histogram, Scatter Plot, and Box Plot. The same list of four options
is also created after selecting the Conditional Plot icon, the right-most in the multivariate EDA subset in the toolbar shown in Figure 8.1. In addition, the conditional map can also be started as
the third item from the bottom in the Map menu.
Next follows a dialog to select the conditioning variables and the variable(s) of interest. The conditioning variables are referred to as Horizontal Cells for the x-axis, and Vertical Cells for the
y-axis. They do not need to be chosen both. Conditioning can also be carried out for a single conditioning variable, on either the horizontal or on the vertical axis alone.
The remaining columns in the dialog are either for a single variable (histogram, box plot, map), or for both the Independent Var (x-axis) and the Dependent Var (y-axis) in the scatter plot option.
8.4.1.1 Conditional plot options
Two important options, special to the conditional plot, are Vertical Bins Breaks and Horizontal Bins Breaks. These options provide a way to select the observation subsets for each conditioning
variable. They include all the same options used in map classification (see Figure 4.2 in Chapter 4). Of special interest are the Unique Values option and the Custom Breaks options. Unique Values is
particularly appropriate when the conditioning variable takes on discrete values, in which case other classifications may result in meaningless subsets (see the discussion of Figure 8.15). Custom
Breaks is useful when subclasses for the conditioning variable were determined previously, preferably contained in a project file (see Section 4.6.3).
Each graph also has its usual range of options available, with the exception of Regimes Regression for the conditional scatter plot. Selection of observations is implemented, but it does not result
in a re-computation of the linear regression fit. On the other hand, the conditional scatter plot does include the LOWESS Smoother option.
8.4.2 Conditional statistical graphs
To illustrate these concepts, first, a set of conditional scatter plots is created for the relationship between peduc_20 (x-axis) and pfood_20 (y-axis). This is conditioned by three subcategories
(the default number) for pch12 in the horizontal dimension (an indicator variable for whether the population increased, =1, or decreased, =0, between 2010 and 2020). The conditioning variable in the
vertical dimension is ALTID. The default classification is to use Quantile with three categories for both conditioning variables.
This yields the plot shown in Figure 8.15. Something clearly is wrong, since there turns out to be only one subcategory for the horizontal conditioning variable. This is because a 3-quantile
classification of the indicator variable does not provide meaningful categories. This will be dealt with next.
However, first consider the substantive interpretation of this graph. The three plots show a strong positive and significant relationship in each case (significance indicated by **). In other words,
less access to education is linearly related to less food security, more or less in line with our prior expectations. The lack of difference between the graphs would suggest that the relationship is
stable across all three ranges of the conditioning variable, altitude.
Figure 8.16 results after changing the Horizontal Bins Breaks option to Unique Values. At this point, the two categories for the horizontal conditioning variable correspond to the values of 0 and 1
for the population change indicator variable.
Conditioning on this variable does provide some indication of interaction. For pch12 = 1, the linear relationship between peduc_20 and pfood_20 is strong and positive, and not affected by altitude.
However, for pch12 = 0, there does not seem to be a significant slope for any of the subgraphs, suggesting a lack of relationship between the two variables in those municipalities suffering from
population loss. The change in sign of the slope along altitude is not meaningful since the slopes are not significant.
If only the horizontal classification had been used as a condition, the result would be the same as a Regimes Regression in a standard scatter plot. However, in order to consider the simultaneous
conditioning on two variables, six separate selection sets would be required to create six individual scatter plots with regimes, which is not very practical. In addition, the double conditioning
allows for the investigation of more complex interactions among the variables.
An alternative as well as generalization of the consideration of spatial heterogeneity by means of the Averages Chart (see Section 7.5.1) is the application of a Conditional Box Plot with a
conditioning variable that corresponds to spatial subsets of the data. In Figure 8.17, this is illustrated for pheal_20 (percent population that lacks access to health care), using the Region
classification as the conditioning variable along the horizontal axis. In the graph, Valles Centrales (8) shows a median value that is higher than the other regions, with Canada (1) having the best
median health outcome (lowest value for lack of access). While the conditional box plot is not an alternative to a formal difference in means (or medians) test, it provides a visual overview of the
extent to which heterogeneity may be present.
8.4.3 Conditional Maps
A final example is the conditional map or micromap matrix shown in Figure 8.18. Four box maps are included for ppov_20, conditioned by peduc_20 on the horizontal axis and pheal_20 on the vertical
axis. The graph is obtained after changing the bin breaks to two quantiles, i.e., below and above the median for each of the conditioning variable.
As before, the interest lies in the extent to which the maps represent similar spatial patterns in each of the subcategories. The maps suggest an interaction with peduc, with more brown
values (more poverty) for above-median values of lack of education, and more blue values (less poverty) for below-median values (better education). A potential interaction with health access
is less pronounced.
As in any EDA exercise, considerable experimentation may be needed before any meaningful patterns are found for the right categories of conditioning variables. Of course, this runs the danger that
one ends up finding what one wants to find, an issue which is revisited in Chapter 21.
Inductance Calculator
Calculates the external inductance of various wire loop configurations and the inductance per-unit-length of various transmission line geometries. Formulas are provided for each configuration. These
formulas can be found in various texts and most of them are derived in the book "Inductance Calculations" by F. W. Grover.
External Inductance of Wire Loops
Select the box with the loop geometry that you would like to evaluate. The calculator determines the external inductance of the loop.
Inductance per Unit Length of Transmission Line Geometries
Select the box with the transmission geometry that you would like to evaluate. The calculator determines the external per-unit-length inductance.
©2024 LearnEMC, LLC
[Towertalk] Measuring/ calculating Inductance
There was a set of DOS programs around several years back called HamCalc
which included a program to do that calculation. I'm sure they must still
be available on the net.
73 de Terry KK6T
At 02:40 PM 3/13/2002 -0600, Red wrote:
>Hi, Steve and all;
>ARRL handbooks provide formulas and graphs for calculating inductance and
>designing coils.
> From the 1994 handbook:
>"The approximate inductance of a single layer air-core coil....
>L = (d^2 times n^2) ÷ (18 times d + 40 times l)
>L = inductance in microhenrys
>d = coil diameter in inches (measured to outside diameter)
>l = coil length in inches (measured to greatest length)
>n = number of turns
>(d^2 = d squared and n^2 = n squared)
>This formula is a close approximation for coils having a length equal to
>or greater than 0.4d."
>The handbook presents several graphs and tables to simplify the
>calculations for coils of selected diameters and turns per inch.
>Hope this helps.
>73 de WOØW
>ve6wz_steve wrote:
> > I am looking for some advice or references on how to calculate and/or
> measure the inductance of a loading coil. I have been modelling a loaded
> yagi (for 80m) and know the reactance I require for the loading coils,
> and want to design the coils. I have experimented with a coil on a 6 ''
> form and have tried to measure the inductance with my MFJ analyser....the
> MFJ will not measure over 650 ohms reactance. Is there another way to
> either accurately measure the inductance or calculate the winding
> required for "Y" inductance using "X" gauge wire on "X" diameter form? I
> realize experimentation will ultimately be required but I'd like to get
> close on my first design.
> >
> > de steve VE6WZ.
> >
>Towertalk mailing list
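The single-layer coil approximation quoted in the handbook excerpt above is easy to evaluate directly; a minimal Python sketch (variable names follow the quoted formula; this illustration is not from the original post):

```python
def coil_inductance_uH(d, l, n):
    """Approximate inductance (microhenrys) of a single-layer air-core coil.

    d: coil diameter in inches, l: coil length in inches, n: number of turns.
    Close approximation when l >= 0.4 * d.
    """
    return (d ** 2 * n ** 2) / (18 * d + 40 * l)

# e.g. 10 turns on a 1" diameter form with a 1.25" winding length
print(coil_inductance_uH(1.0, 1.25, 10))
```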
ECG signal processing: practical noise reduction tips - ASN Home
In ECG signal processing, the removal of 50/60Hz powerline interference from delicate, information-rich ECG biomedical waveforms is a challenging task! The challenge is further complicated by
adjusting for the effects of EMG, such as patient limb/torso movement or even breathing. A traditional approach adopted by many is to use a 2nd order IIR notch filter:
\(\displaystyle H(z)=\frac{1-2cosw_oz^{-1}+z^{-2}}{1-2rcosw_oz^{-1}+r^2z^{-2}}\)
where, \(w_o=\frac{2\pi f_o}{fs}\) controls the centre frequency, \(f_o\) of the notch, and \(r=1-\frac{\pi BW}{fs}\) controls the bandwidth (-3dB point) of the notch.
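The transfer function above maps directly onto a biquad difference equation; a minimal pure-Python sketch (names are illustrative) shows the notch removing a 50Hz tone once its transient decays:

```python
import math

def notch_filter(x, f0, bw, fs):
    """2nd-order IIR notch implementing H(z):
    y[n] = x[n] - 2cos(w0)x[n-1] + x[n-2] + 2r*cos(w0)y[n-1] - r^2*y[n-2]."""
    w0 = 2 * math.pi * f0 / fs
    r = 1 - math.pi * bw / fs
    c = math.cos(w0)
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = xn - 2 * c * x1 + x2 + 2 * r * c * y1 - r * r * y2
        y.append(yn)
        x1, x2, y1, y2 = xn, x1, yn, y1   # shift the delay line
    return y

# A pure 50 Hz interferer sampled at 500 Hz is driven to zero in steady state.
fs = 500
x = [math.sin(2 * math.pi * 50 * n / fs) for n in range(3000)]
y = notch_filter(x, f0=50, bw=1.0, fs=fs)
```

Note that the narrower the bandwidth, the closer \(r\) gets to 1 and the longer the transient takes to die out, which is exactly the ringing discussed below.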
What’s the challenge?
As seen above, \(H(z)\) is simple to implement, but the difficulty lies in finding an optimal value of \(r\), as a desirably sharp notch means that the poles are close to the unit circle.
In the presence of stationary interference, e.g. when the patient is absolutely still and the effects of breathing on the sensor data are minimal, this may not be a problem.
However, when considering the effects of EMG on the captured waveform (a much more realistic situation), the IIR filter's feedback (poles) causes ringing on the filtered waveform, as illustrated below:
Contaminated ECG with non-stationary 50Hz powerline interference (IIR filtering)
As seen above, although a majority of the 50Hz powerline interference has been removed, there is still significant ringing around the main peaks (filtered output shown in red). This ringing is
undesirable for many biomedical applications, as vital cardiac information such as the ST segment cannot be clearly analysed.
The frequency reponse of the IIR used to filter the above ECG data is shown below.
IIR notch filter frequency response
Analysing the plot, it can be seen that the filter's group delay (or average delay) is non-linear but almost zero in the passbands, which means no distortion. The group delay at 50Hz rises to 15
samples, which is the source of the ringing: the closer the poles are to the unit circle, the greater the group delay.
ASN FilterScript offers designers the notch() function, which is a direct implementation of H(z), as shown below:
ClearH1; // clear primary filter from cascade
ShowH2DM; // show DM on chart
interface BW={0.1,10,.1,1};
Hd=notch(50,BW,"numeric"); // design the 50Hz notch (call signature assumed from the tool's documentation)
Num = getnum(Hd); // define numerator coefficients
Den = getden(Hd); // define denominator coefficients
Gain = getgain(Hd); // define gain
Savitzky-Golay FIR filters
A solution to the aforementioned mentioned ringing as well as noise reduction can be achieved by virtue of a Savitzky-Golay lowpass smoothing filter. These filters are FIR filters, and thus have no
feedback coefficients and no ringing!
Savitzky-Golay (polynomial) smoothing filters, or least-squares smoothing filters, are generalizations of the FIR average filter that can better preserve the high-frequency content of the desired
signal, at the expense of not removing as much noise as an FIR average. The particular formulation of Savitzky-Golay filters preserves various moments of the signal better than other smoothing
methods, which is why they tend to preserve peak widths and heights better. As such, Savitzky-Golay filters are very suitable for biomedical data, such as ECG datasets.
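One way to see the moment-preservation property concretely: the classic 5-point, 2nd-order Savitzky-Golay weights are [-3, 12, 17, 12, -3]/35 (a textbook constant, not taken from this article), and convolving with them reproduces any quadratic signal exactly, so peak heights and curvatures survive where a plain moving average would flatten them. A minimal sketch:

```python
# Window-5, order-2 Savitzky-Golay smoothing weights (textbook values).
SG5 = [w / 35.0 for w in (-3, 12, 17, 12, -3)]

def sg_smooth(x):
    """Apply the 5-point Savitzky-Golay smoother to the interior samples."""
    return [sum(w * x[n + k - 2] for k, w in enumerate(SG5))
            for n in range(2, len(x) - 2)]

quadratic = [0.5 * n * n - 3 * n + 7 for n in range(20)]
smoothed = sg_smooth(quadratic)
# The 2nd-order fit reproduces the quadratic exactly; a 5-point moving
# average would bias every sample by the window's curvature term.
```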
Eliminating the 50Hz powerline component
Designing an 18th order Savitzky-Golay filter with a 4th order polynomial fit (see the example code below), we obtain an FIR filter with a zero distribution as shown on the right. However, as we wish
to eliminate the 50Hz component completely, the tool’s P-Z editor can be used to nudge a zero pair (shown in green) to exactly 50Hz.
The resulting frequency response is shown below, where it can be seen that there is a notch at exactly 50Hz, and the group delay of 9 samples (shown in purple) is constant across the frequency band.
FIR Savitzky-Golay filter frequency response
Passing the tainted ECG dataset through our tweaked Savitzky-Golay filter, and adjusting for the group delay we obtain:
Contaminated ECG with non-stationary 50Hz powerline interference (FIR filtering)
As seen, there are no signs of ringing and the ST segments are now clearly visible for analysis. Notice also how the filter (shown in red) has reduced the measurement noise, emphasising the
practicality of Savitzky-Golay filters for biomedical signal processing.
A Savitzky-Golay filter may be designed and optimised in ASN FilterScript via the savgolay() function, as follows:
ClearH1; // clear primary filter from cascade
interface L = {2, 50,2,24};
interface P = {2, 10,1,4};
Hd=savgolay(L,P,"numeric"); // Design Savitzky-Golay lowpass
This filter may now be deployed to a variety of domains via the tool's automatic code generator, enabling rapid deployment in Matlab, Python and embedded Arm Cortex-M devices.
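For readers prototyping outside the tool, the same smoothing step can be sketched directly in Python with SciPy. This is an illustrative stand-in, not the tool's generated code: the 19-tap window approximates the article's 18th-order design, the 4th-order polynomial matches the stated fit, and the hand-nudged zero pair at exactly 50Hz is not included.

```python
# Sketch of an equivalent Savitzky-Golay smoothing step with SciPy.
# 19-tap window and 4th-order polynomial are chosen to mirror the article's
# design; the test signal is a synthetic stand-in, not real ECG data.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 500)                       # 2 s at an assumed 500 Hz
clean = np.sin(2 * np.pi * 1.2 * t)                # stand-in for an ECG trend
noisy = clean + 0.2 * rng.standard_normal(t.size)  # measurement noise

smoothed = savgol_filter(noisy, window_length=19, polyorder=4)

err_before = np.sqrt(np.mean((noisy - clean) ** 2))
err_after = np.sqrt(np.mean((smoothed - clean) ** 2))
print(f"RMS error before: {err_before:.3f}, after: {err_after:.3f}")
```

Being FIR with symmetric coefficients, the result has the constant group delay described above and no ringing around sharp features.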
Dr. Sanjeev Sarpal, 22 May 2019 (updated 10 July 2023): Practical noise reduction tips for biomedical ECG filters
Determine the original size of the FFT used in OFDM?
Given an unknown OFDM symbol consisting of 9144 samples, I am interested to know the size of the FFT used in the TX / the number of bins.
Downsampling to 512,256,128,64,32 yields the following.
I do not see how this information can be derived based on this data.
As a result, I have tried to analyze the time domain of the downsampled vectors, and detect samples that were removed by the downsampling process.
It seems that the third graph represents the original FFT size – 128 – as the sample at 277 has not been witnessed before (FFT = 64, 32), and is witnessed at higher sampling rates.
I wonder if this is correct and whether there is a more accurate approach to determine the FFT size used?
Reply by ●October 11, 2018
A little bit of an odd question – the modem design is normally predetermined, i.e. you know the transmitter's IFFT size. You also haven't provided any information on what it is you are receiving – standard
WiFi packets? Typically it's 64. It is also unclear why you are downsampling at the possible FFT sizes; not sure what this would tell you, especially if you are not sample-aligned to the start of a symbol to
begin with and not sampling at the correct bandwidth.
If this is a standard WLAN packet as described by
then you can note the cyclic prefix will have some predetermined length in time. Try sampling at the max channel bandwidth possible for WLAN (160MHz, but probably only need to go up to 20-80MHz) and
note the cyclic prefix duration, which should give you the channel bandwidth being used. That bandwidth should be using a standard sub-carrier spacing \(\Delta_f\), and with \(T\) the sample period you can solve for \(N\), the FFT size, from:
$$\Delta_f = \frac{1}{N T}$$
Reply by ●October 11, 2018
This is a proprietary video link being BLINDLY demodulated.
No parameters are given/known.
There has, however, been some progress with the problem – no bits yet, though. I will update when there are.
Reply by ●October 11, 2018
As spetcavich pointed out, typically the FFT size is already decided during modem design.
But I would be interested to know how your blind demodulation turns out.
Reply by ●October 11, 2018
Maybe you can use the cyclic prefix as your flag. For example, correlate the stream in a sliding manner with a delayed version of itself and see if you get peaks of correlation where the cyclic prefix is
aligned with the end of a symbol; that will tell you how many samples are between peaks. Next you need to identify where samples are equal, i.e. end of symbol vs. cyclic prefix.
On fixed points and phase transitions in five dimensions
TYPE High Energy Physics Seminar
Speaker: Francesco Mignosa
Affiliation: SISSA
Organizer: Yotam Soreq
Date: 08.11.2021
Time: 11:30 - 12:45
Location: Lidow Nathan Rosen (300)
Abstract: Supersymmetric gauge theories in five dimensions, although power counting non-renormalizable, are known to be in some cases UV completed by a superconformal field theory. Many tools, such
as M-theory compactification and pq-web constructions, have been used in recent years to deepen our understanding of these theories. This framework gives us a concrete way to
search for additional IR conformal field theories via deformations of these well-known superconformal fixed points. Recently, the authors of 2001.00023 proposed a supersymmetry
breaking mass deformation of the E_1 theory which, at weak gauge coupling, leads to pure SU(2) Yang-Mills and which was conjectured to lead to an interacting CFT at strong coupling. During
this talk, I will provide an explicit geometric construction of the deformation using brane-web techniques and show that for large enough gauge coupling a global symmetry is spontaneously
broken and the theory enters a new phase which, at infinite coupling, displays an instability. The Yang-Mills and the symmetry broken phases are separated by a phase transition. Quantum
corrections to this analysis are discussed, as well as possible outlooks. Based on arXiv: 2109.02662.
Join Zoom Meeting
Meeting ID: 952 6803 9390
Lesson Plan #549. MATH CENTERS(Elementary, Mathematics)
Math Centers (Part 1)
MATH CENTERS (A Collaborated Effort of the Primary Board Summer 1998)
Posted by Mona on 7/24/98
Estimation Station.......... One of the easiest ways to do this is to have a weekly estimation jar. Every week you could change the contents of the jar or the size of the jar. Sometimes put in large
objects and sometimes small. Let children examine the jar and write their estimations. You can go further by having them write a paragraph about "how" they got their guess. At the end of the week
count the objects together or have the last center count them. Teach them some estimating skills. Also at the estimation station you can have them estimate the capacity of a container. Have a
container and the material (clothespins). Have them make a prediction of how many and then actually fill the jar to check their predictions. Small manipulatives could be used for this and you
could change the size of the container weekly. (c/o Mona)
Posted by Michael in Toronto on 7/29/98 .
I have an estimation and Venn diagram centre which is changed every day. I found one week was too long, and if you do a "theme" for the estimation it can last 5 days (e.g. estimating the plastic
small teddy bears... change the size of the container over 5 days). You can also ask questions like: "Yesterday the jar was full of plastic teddy bears. Today it is half full of teddy bears. How
many teddy bears are in the half full jar?" I always write the questions this way like word problems. They write their estimate on a sheet, circle it and write their names next to it. The child
with the closest estimate wins a prize. We always count the objects together and I use this as an opportunity of demonstrating division, multiplication etc. Some days I even ask them to write
their estimations in their math notebooks and explain how they got their answer using words, numbers and pictures.
The Venn Diagrams can also be changed every day and then when you've got enough of them they can be placed at a Graphing centre and children can each choose one Venn diagram to graph on their
own. Venn Diagrams can also be made into a book and sent home each night with a different child. Most parents haven't a clue what a Venn diagram is. You can have a couple of incomplete venn's for
parents to sign at the back of the book. Venn Diagram results can also be reported in the class newsletter. Michael in Toronto
P.S. Strongly recommend new primary teachers try this starting the first day. Just set the centre up at a small table. You can cover a lot without having to do a lot of preparation or direct teaching.
Venn Diagrams - an easy way to make a versatile and portable Venn Diagram is to use air hose tubes from aquariums. The tubes are really cheap and you can buy connectors for them just as cheaply. Make
two circles using the tubes and you can overlap the circles and place the items in the appropriate place. These can be labeled with 3x5 cards. You can also have a group make a Venn diagram (2nd
grade) and leave off the labels. Have other students look at the diagram and try to guess how it has been sorted. (c/o Djinn)
I do a Venn diagram almost everyday as a warm-up when they come in. I have their names laminated with a magnet on the back and place on their lockers. I use the cute patterns you see on note pads
that go with my class theme (ocean life, teddy bears, hockey pucks...)
I write a question of the day on the board and they put their name either inside or outside the circle (yes/no) to answer the question.This is a spring board for many math ideas and comparisons.
My circle is a hula hoop that I place on a shower hook and magnet man on my magnetic chalkboard (lucky me!) As the year goes on I add another hula hoop for the Venn diagram. I was asked to get
to three circles this year for first grade, but it wasn't happening. I have friends who say it works great with
2-5... it just confused the kids and me. No math major here. tee
hee. Kara
Posted by abby on 7/24/98
Found this in an old book:
For addition problems with regrouping
Cut an egg carton in half:
Open it; on the side that is the top write a problem like 25 + 46.
On the side where the eggs would be, students use chips to represent each number... so 2 chips in one egg section, 5 chips in the egg section next to it,
then underneath, 4 chips, then 6 chips.
Students use the chips in the sections to solve the problem
Tell them the rule, if you have ten you trade...
You could also , on the egg section write a + sign and draw a line to represent where the answer goes...
Hope this makes sense! (c/o Abby)
Posted by Marva/Texas/3rd on 7/24/98
Math Their Way Tubs are very appropriate.
Posted by Amy/WA on 7/24/98
Another center could be patterning. There are many activities that could be done with pattern blocks. Another source for centers is Box It or Bag It. Amy
Posted by Gisha on 7/24/98
I've used Math Their Way for seven years and love it! After attending the first workshop,I felt I just had to have all the suggestions in place when school started and just about did myself in.
My advice is to get the book, look it over, and then do your own thing! Several of our staff bought note pads (Carson Dellosa) and shared with each other so we all had approximately ten sets of
work mats and then added our own ideas for manipulatives. For instance, with a crayola box workjob I added red and yellow straws cut into pieces (I got straws from Sonic for free), with a dog
note pad I used small bone treats, with the apple tree note pad I used lima beans--one side spray painted red and the other side painted pink, and another one was Ariel (The Little Mermaid) note
pad and I used a necklace of seashells(cut-apart) from a good will store for manipulatives. My "work jobs" are not as fancy as those in the book, but my kids love them. I will add more, if anyone
wants them. I have forty-five work jobs in my classroom and I'd love to share them!
Posted by julie 3/CO on 7/24/98
You could have a math literacy center that include literature and laminated activity cards. I'm thinking mostly of Marilyn Burns literature collections, there's k-3 and 4-6.
the Family Math book also might have some ideas that would be good centers.
Posted by JAB, juanbro@nxi.com, on 7/24/98
Another great resource for math center ideas for K-1 is the Math In My World book put out, I think, by Creative Teaching Press. It is divided into 5 sections: All About Me, Nature, Playground,
Food & Nutrition, and
something else, it might be Families. It also includes ideas on how to involve parents in each theme. JuliAnne
Posted by S.J. on 7/24/98
Since I teach first grade my all my centers would not be
applicable, but the following might help and could be
1. Beat the calculator(credit to Chicago Math)
Materials: deck of math facts to be studied, calculator
Students work in pairs; one student uses the calculator, the other student uses their brain. Turn over a card with a math fact – the first to give the correct answer keeps the card. The winner is the
student with the most cards; for the second game, reverse roles.
2. Problem solving: a problem is posed (could be written on
note cards or posted) – children solve the problem and, using large drawing paper, illustrate how they solved the
problem – the procedure can be shared
3. multiplication (or in my case addition and subtraction)
bingo--make your own cards or purchase game
4. Teacher Created Materials has a set of resources called
Science in a Bag, Math in a Bag etc. These were written with the intention of being homework/bring back to school and use
ideas; they can be adapted to the classroom and become center-type activities. I have not done this yet – one of my goals for this year
5. tangram puzzles
6. tessellations
Posted by Praline/3rd/LA on 7/24/98
Here is a center that my students enjoyed. I bought some wooden dowels. I had someone cut them in different lengths. I labeled each one A, B, C, ...etc. The task was estimating. The students picked a
dowel, estimated how long it was, measured it, then looked at the letter and used the answer key to check their answer. The measurement used was inches. Another center is graphing. When we did it as a
class, we used m&ms, skittles or some other edibles. In the center, you can use different color buttons, different kinds of beans or different shape pasta. The students pick a container, group the
items, then use graph paper to draw a graph to represent the items. What I like about this center is that it can accommodate several students at the same time, all drawing their own graphs in
their group. Of course some students prefer working on their own graph. The drawing can be checked in a second and problems noted for small group reteaching later.
Hope someone can use these ideas.
Posted by Amy/WA on 7/24/98
One other activity that I use is to make measurement boxes. I put several things in a shoe box. The student measure the objects and records the answer. I have the answers in the lid of the box. I
show the students how to open the box, put the lid under the box and then measure. Another idea is to find petri dishes (the kind that the science teacher at the high school might have) that are
divided into two sections. I have students put in beans of the number that we are working on and shake the dish and then write the equation. For example if we are working on 9, they would put in
nine beans, shake and record 5+4=9, because there will be 4 beans on one side and 5 on the other, shake and record again. These dishes are great because they can be put on the overhead. Amy
Posted by Amy/WA on 7/24/98
I just thought of another idea that I do. When I introduce "doubles", I have cloth bags with the following objects: a car, plastic insect, plastic spider, a picture of a hand, an egg carton, a
calendar page, a box of 16 crayons, and a picture of a semi truck. Students reach into the bag and record a double, for example if they pick a car, they record 2+2=4, because there are 2 wheels
on each side. The insect is 3+3 because of the legs etc. They love to work with the bags and will record answers over and over. Amy
Posted by Mel on 7/25/98
There is a new Math program out called Quest 2000. It has
excellent ideas and they refer to their math centers as tubbing. They start with a whole class lesson and then reinforce math skills through 6 to 8 tubs. The ideas are amazing and the students
love to get into the tubs. Mel
Other Math Centers (posted by Mona 1998)
Math Games---Commercial games such as Yahtzee, dominoes, chess, and bingo develop math and logic skills.
Money Activities: Children can sort, identify and graph coins. They can count by ones (pennies), fives (nickels), and tens (dimes). A set of money stamps is great for recording work. Using the
stamp students can create money rebus stories.
Dollar Activities-There are 293 ways to change a dollar bill. Let the students use the money stamps to stamp out the different ways on butcher paper.
Set up a toy store or mini grocery store.
Graph of the day is a great center to have at a math center. Examples are:
Favorite Subjects
Favorite Sports
Pets You Have
Letters in Your Name
Let the children decide on some graphs.
Fractions-- Have manipulatives to explore fractions as equal parts of a whole. Provide paper shapes and graph paper to fold and cut. Have things like coins, buttons, crayons, counters for finding
equal parts of a set.
Measurement-Have students use measuring cups to measure rice to fill containers ½, ¼, 1/3 full. Measuring spoons can also be used to explore equal measures.
Math in a Bag-Lunch bags can be used to put different math activities in. These are especially useful if you have games with many pieces. Students pour out the activity and then place it back in
the bag when complete.
Math Geo Safari-- Expense here, but can be used for individualizing skills and fast practice.
Manipulatives for Math Center:
Posted by kj house on 7/27/98
This is just a take off of the red/white lima beans from Math Their Way--instead of painting the lima beans red on one side, I spray painted them sunburst yellow (both sides, but you could leave
one side white) then I drew smiling faces (just eyes and mouth) on one side with a black sharpie marker. It's hard to look at over 300 smiley faces and not smile. I will use them for patterning,
making equations, seeing how many times you have to roll a dice before you get to pick 30 smiling faces, keeping track of rolls with tally marks. Basically whatever you use a two sided counter
for. I guess it's time to start school, I'm starting to have too much time on my hands.
Posted by Darcy on 7/28/98
Another idea like this that I just recently read on this site
is to spray lima beans...
1. Spray one side of the beans orange and draw jack-o-lantern faces on them for the fall.
2. Either spray one side white (or leave unsprayed) and
draw ghost eyes on a vertically held bean.
I did this years ago and the beans hold up great. I sprayed both sides orange and a mother put the faces on one side. So we had pumpkins and jack o' lanterns. Then she took plain beans and put
ghost faces on one side.
I have glued two beans, one on top of the other, and sprayed them green, then glued little wiggly eyes on them. They make cute counters and last for years. Nancy
If you have math centers to share please consider posting them to this board. Teachers are always on the lookout for new center ideas!! If you use centers in your classroom you may want to also
check out Lori V's literacy centers,
Science Centers by LuAnn and the lesson called Discovery Bottles!
Lesson 8
Exponential Situations as Functions
8.1: Rainfall in Las Vegas (5 minutes)
The goal of this warm-up is to review the meaning of a function presented graphically. While students do not need to use function notation here, interpreting the graph in terms of the context will
prepare them for their work with functions in the rest of the unit.
Tell students to close their books or devices. Display the graph for all to see. Ask students to observe the graph and be prepared to share one thing they notice and one thing they wonder.
Select students to briefly share one thing they noticed or wondered to help ensure all students understand the information conveyed in the graph. Ask students to open their books or devices and
answer the questions about the graph. Follow with a whole-class discussion.
Student Facing
Here is a graph of the accumulated rainfall in Las Vegas, Nevada, in the first 60 days of 2017.
Use the graph to support your answers to the following questions.
1. Is the accumulated amount of rainfall a function of time?
2. Is time a function of accumulated rainfall?
Anticipated Misconceptions
If students struggle to see from the graph how the accumulated rainfall is a function of time but time is not a function of accumulated rainfall, consider displaying the data in a table. Shown here
is the data for the first 20 days of 2017. Help students see that for every value of \(t\), the time in days, there is one value of \(r\), the accumulated rainfall in inches, but this is not true the
other way around.
│ \(t\) (days) │1│2│3│4│5│6│7│8│9│10│11│12 │13 │14 │15 │16 │17 │18 │19 │20 │
│\(r\) (inches) │0│0│0│0│0│0│0│0│0│0 │0 │0.03│0.1│0.1│0.1│0.1│0.1│0.1│0.11│0.38│
Activity Synthesis
Make sure students understand why the accumulated rain is a function of time but not the other way around.
Then, recall the notation for writing, using function notation, accumulated rainfall as a function of time. If \(r\) represents the amount of rainfall in inches and \(t\) is time in days, then \(r
(t)\) is the amount of rain that has fallen in the first \(t\) days of 2017. For example, \(r(2) = 0\) tells us that the accumulated rainfall in the first two days of the year was 0 inches. \(r
(48)=1\) means that there was 1 inch of accumulated rain in the first 48 days of the year. Ask students to write and explain the meaning of a few other statements using function notation.
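A quick way to make the asymmetry concrete is to treat the first 20 days of the table as a lookup. This is a supplementary sketch in Python; the values come from the table above.

```python
# Accumulated rainfall (inches) for days 1-20 of 2017, from the table above.
rain = {t: 0.0 for t in range(1, 12)}                  # days 1-11: no rain yet
rain.update({12: 0.03, 13: 0.1, 14: 0.1, 15: 0.1, 16: 0.1,
             17: 0.1, 18: 0.1, 19: 0.11, 20: 0.38})

# Every day maps to exactly one accumulated value, so r is a function of t...
assert all(t in rain for t in range(1, 21))

# ...but one accumulated value maps back to many days,
# so t is NOT a function of r.
days_with_zero = [t for t, r in rain.items() if r == 0.0]
print(f"days whose accumulated rainfall is 0: {days_with_zero}")
```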
8.2: Moldy Bread (15 minutes)
In this activity, students represent a situation using a table of values, a graph, and an equation. From the exponential equation, it is a short step to thinking of the relationship between the
quantities as a function.
Note that it is possible and acceptable to think of time as a function of area, but expressing this using an equation is out of the scope of this course. Students could, however, represent such a
function with a graph, table, or description.
Making graphing technology available gives students an opportunity to choose appropriate tools strategically (MP5).
Representation: Internalize Comprehension. Represent the same information through different modalities by drawing a diagram. Encourage students who are unsure where to begin to sketch a diagram of a
slice of bread on graph paper and to shade the area that is covered in mold after 1 day, then 2 days...until they reach the day when the slice of bread is completely covered in mold.
Supports accessibility for: Conceptual processing; Visual-spatial processing
Student Facing
Clare noticed mold on the last slice of bread in a plastic bag. The area covered by the mold was about 1 square millimeter. She left the bread alone to see how the mold would grow. The next day, the
area covered by the mold had doubled, and it doubled again the day after that.
1. If the doubling pattern continues, how many square millimeters will the mold cover 4 days after she noticed the mold? Show your reasoning.
2. Represent the relationship between the area \(A\), in square millimeters, covered by the mold and the number of days \(d\) since the mold was spotted using:
1. A table of values, showing the values from the day the mold was spotted through 5 days later.
2. An equation
3. A graph
3. Discuss with your partner: Is the relationship between the area covered by mold and the number of days a function? If so, write ____ is a function of ____. If not, explain why it is not.
Student Facing
Are you ready for more?
What do you think an appropriate domain for the mold area function \(A\) is? Explain your reasoning.
Anticipated Misconceptions
Students may have trouble understanding how to account for time in the first question. They may benefit from writing the area after 1 day has passed, 2 days have passed, etc. A table is a convenient
way to gather this information.
Activity Synthesis
Discuss why the area covered by mold is a function of the number of days that have passed. Attend explicitly to language students learned in the prior unit on functions: The area of the mold \(A\) is
a function of the number of days \(d\) since the mold was spotted, \(A = f(d)\). The function \(f\) expressing the mold relationship can be written as \(f(d) = 1 \boldcdot 2^d\), where \(d\) measures
days since the mold was spotted and \(f(d)\) gives the area covered by the mold in square millimeters.
Discuss whether a discrete graph or a curve is more appropriate and what domain would be suitable in this context. Ask questions such as:
• “Can the independent variable be something like 1.5, a number that is not a whole number? Is there an area that is associated with 1.5 days?” (Yes, some area of the bread is covered by mold at
any point in time. The mold doesn't disappear after being spotted and then reappear at exactly 1 full day, 2 full days, etc.)
• “What would be the meaning of a point on the graph where the value of \(d\) is, for instance, between 2 and 3?” (It would mean the area covered by mold at some point longer than 2 days but less
than 3 days after mold was spotted.)
• “What domain would be appropriate for this function? Can the mold grow indefinitely?” (Since the area of the bread (the range) is limited, the exponential growth cannot continue indefinitely. By
the end of one week, more than 1 square cm will be covered and, by the end of the second week, the values of the function will be close to or will exceed the total area of the bread.)
Students using paper and pencil may decide that it makes sense to connect the points on the graph but they will not yet know how to do so. Consider stating that they are connected (in a very specific
way) and their properties will be studied later.
Students using the digital version (or graphing technology along with the paper and pencil version) will see the continuous graph. If desired, you may want to demonstrate how using function notation
to write the equation like \(f(d) = 1 \boldcdot 2^d\) can be put to use. Try typing \(A(2)\) or \(A(\text-1)\).
Conversing: MLR2 Collect and Display. During the discussion, listen for and collect the language students use to describe the situation as a function. Call students’ attention to language such as
“independent or dependent variable” or “input or output value.” Write the students’ words and phrases on a visual display and update it throughout the remainder of the lesson. Remind students to
borrow language from the display as needed. This will help students use mathematical language for describing an exponential function and determining which variable is a function of the other.
Design Principle(s): Maximize meta-awareness
8.3: Functionally Speaking (15 minutes)
Students have described and analyzed situations involving exponential change, using graphs, tables, and equations. Now they revisit several of these contexts, viewing them as functions and expressing
them using function language and notation.
Each situation involves two quantities and students will need to choose one of these to be the independent variable and one to be the dependent variable. For all of these relationships, it is
possible to choose either variable as the independent but one choice gives a logarithmic function (which is out of the scope of this course), while the other gives an exponential function. In each
case, however, students have previously worked with the context.
Look for students who explicitly state the meaning of their variables, including units, and invite them to share during the discussion. For example, in the second situation, if \(t\) represents the
number of years and \(v\) represents the value of the car, in dollars, after \(t\) years then \(v\) can be viewed as a function \(f\) of \(t\) where \(f(t) = 18,\!000 \boldcdot \left(\frac{2}{3}\
right)^t\). The meaning and units for both \(t\) and \(f(t)\) (or \(v\)) are vital elements to answering the question fully.
Tell students that they will now revisit some previously seen situations. Ask students to read the three situations in the task, then solicit a few ideas on why all of them can be seen as functions.
Representation: Develop Language and Symbols. Create a display of important terms and vocabulary. Invite students to brainstorm vocabulary and phrases that they associate with each variable of \(y=a
(b)^x\). Compile words and phrases that will assist students with setting up equations on a display. This may include labels such as “independent variable,” “dependent variable,” “multiplier,” and
“starting point,” and also specific words explored in previous examples in order to incorporate prior knowledge. If referencing the moldy bread scenario, the labels of “output” and “input” can also
be paired with “area” and “time,” respectively.
Supports accessibility for: Conceptual processing; Memory
Student Facing
Here are some situations we have seen previously. For each situation:
• Write a sentence of the form "\(\underline{\hspace{0.5in}}\) is a function of \(\underline{\hspace{0.5in}}\)."
• Indicate which is the independent and which is the dependent variable.
• Write an equation that represents the situation using function notation.
1. In a biology lab, a population of 50 bacteria reproduce by splitting. Every hour, on the hour, each bacterium splits into two bacteria.
2. Every year after a new car is purchased, it loses \(\frac13\) of its value. Let’s say that the new car costs $18,000.
3. In order to control an algae bloom in a lake, scientists introduce some treatment products. The day they begin treatment, the area covered by algae is 240 square yards. Each day since the
treatment began, a third of the previous day’s area (in square yards) remains covered by algae. Time \(t\) is measured in days.
Anticipated Misconceptions
Students may confuse the terms "independent" and "dependent." Help them to think about which variable depends on the other in context.
Activity Synthesis
Invite selected students to present their equations, making sure to indicate what each variable represents, as well as the units of the variable.
For the third question, point out that it is a short step from an equation for the area covered by the algae \(A =240 \boldcdot \left(\frac{1}{3}\right)^t\) to a similar equation with function
notation. The equation gives the area covered by the algae at each time \(t\), so a function \(f\) can be defined using the same expression: \(f(t) =240 \boldcdot \left(\frac{1}{3}\right)^t\) and \(A
= f(t)\).
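A short Python sketch can confirm the values this function produces (a sketch for checking work, not part of the lesson materials):

```python
def f(t):
    # Area (square yards) covered by algae t days after treatment begins.
    return 240 * (1 / 3) ** t

for t in [0, 1, 2]:
    print(t, f(t))
```

Each day's value is a third of the previous day's, starting from 240 square yards.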
Speaking, Representing: MLR8 Discussion Supports. Use this routine to support whole-class discussion. After each student shares, provide the class with the following sentence frames to help them
respond: "I agree because….” or "I disagree because….” If necessary, revoice student ideas to demonstrate mathematical language use by restating a statement as a question in order to clarify, apply
appropriate language, and involve more students.
Design Principle(s): Support sense-making
8.4: Deciding on Graphing Window (20 minutes)
Optional activity
This optional activity further addresses the skill of choosing an appropriate graphing window when using graphing technology. In an earlier activity, students looked at how adjusting the graphing
window affects the usefulness of the graph. Here they gauge the reasonableness of a graphing window given an equation and a description of a function.
Because exponential functions eventually grow very quickly, the graph tends to quickly go off the screen if the domain is too large. The graphing window can be adjusted to display large values for
the vertical axis, but in doing so, the output values for most of the domain will all look like they are essentially 0.
To decide on the reasonableness of the given graphing boundaries, students may evaluate the function at the endpoints of the domain for the graphing window. Look for students who think carefully
about the domain, observing that, based on the context, the equation probably does not model even modest negative values of the input variable. This activity represents scaffolded practice for an
important aspect of mathematical modeling (MP4).
Provide access to graphing technology.
Student Facing
The equation \(m = 20\boldcdot (0.8)^h\) models the amount of medicine \(m\) (in milligrams) in a patient’s body as a function of hours, \(h\), after injection.
1. Without using a graphing tool, decide if the following horizontal and vertical boundaries are suitable for graphing this function. Explain your reasoning.
\(\displaystyle \text-10< h<100\)
\(\displaystyle \text-100< m<1,000\)
2. Verify your answer by graphing the equation using graphing technology, and using the given graphing window. What do you see? Sketch or describe the graph.
3. If your graph in the previous question is unhelpful, modify the window settings so that the graph is more useful. Record the window settings here. Convince a partner why the horizontal and
vertical boundaries that you set are better.
Activity Synthesis
Select students to share their graphs, the one created using the given horizontal and vertical boundaries, as well as the improved versions. Or display the graphs in the sample responses for all to
see. Discuss questions such as:
• “Why were the given boundaries not good?” (The domain and range are both too wide. The range is so large that almost all the points look like they're about 0. It is even hard to tell what the
starting value is.)
• “How can the context help us choose an appropriate graphing window?” (Negative values of time probably are not relevant because the amount of medicine in the body before the injection is likely
insignificant or is not modeled by the equation. Large values of time (say \(t=100\)) are not likely important because the effects of the medicine after 100 hours are likely to be minimal.)
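Evaluating the model at the proposed horizontal endpoints makes the problem concrete; a quick numerical check, sketched in Python:

```python
def m(h):
    # Milligrams of medicine h hours after injection.
    return 20 * 0.8 ** h

print(m(-10))  # far above the 20 mg starting dose
print(m(100))  # vanishingly small
```

The value at \(h=\text-10\) is roughly 186 mg, well above anything the context supports, while the value at \(h=100\) is on the order of billionths of a milligram, indistinguishable from 0 on any reasonable vertical scale.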
Graphing exponential functions can be challenging because they can increase or decrease very quickly. Emphasize that an appropriate graphing window can often be selected using the context. Once the
relevant domain of a function and the horizontal boundaries of a graph are chosen, the vertical boundaries can be selected based on the output values for that domain so that any meaningful trends
(for example, exponential decay) are visible.
Lesson Synthesis
Many of the situations we have seen that are characterized by exponential growth can also be viewed as functions. Review why we can think of them as functions and how they compare to other functions students have studied.
Consider a bacteria population \(p\), described by the equation \(p = 1,\!000 \boldcdot 2^t\), where \(t\) is the number of hours after it is first measured.
• “Complete the sentence for this situation: _____ is a function of _____.” (The number of bacteria is a function of time, or \(p\) is a function of \(t\).)
• “How is \(p\) related to the function \(f\) given by \(f(t) = 1,\!000 \boldcdot 2^t\)?” (\(p = f(t)\))
• “How are these equations like or different from the equations you've written previously, without function notation?” (They express the relationships the same way. Both equations can be used to
produce a table or graph or to answer questions about the bacteria population. The notation \(p = f(t)\) makes explicit that \(p\) depends on \(t\).)
• “How are the exponential functions here like or different from linear functions we saw earlier in the course?” (Both functions represent relationships where one quantity is determined by another
quantity, and there is only one possible output for every input. They are different in that linear functions grow by addition and exponential functions grow by multiplication.)
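The contrast in the last point can be made concrete: starting from the same value, one sequence grows by repeated addition and the other by repeated multiplication.

```python
linear, exponential = [5], [5]
for _ in range(4):
    linear.append(linear[-1] + 3)            # linear: add 3 each step
    exponential.append(exponential[-1] * 3)  # exponential: multiply by 3 each step

print(linear)
print(exponential)
```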
8.5: Cool-down - Beaver Population (5 minutes)
Student Facing
The situations we have looked at that are characterized by exponential change can be seen as functions. In each situation, there is a quantity—an independent variable—that determines another
quantity—a dependent variable. They are functions because any value of the independent variable corresponds to one and only one value of the dependent variable. Functions that describe exponential
change are called exponential functions.
For example, suppose \(t\) represents time in hours and \(p\) is a bacteria population \(t\) hours after the bacteria population was measured. For each time \(t\), there is only one value for the
corresponding number of bacteria, so we can say that \(p\) is a function of \(t\) and we can write this as \(p = f(t)\).
If there were 100,000 bacteria at the time it was initially measured and the population decreases so that \(\frac{1}{5}\) of it remains after each passing hour, we can use function notation to model
the bacteria population:
\(\displaystyle f(t) = 100,\!000 \boldcdot \left(\frac{1}{5}\right)^t\)
Notice the expression in the form of \(a \boldcdot b^t\) (on the right side of the equation) is the same as in previous equations we wrote to represent situations characterized by exponential change. | {"url":"https://im-beta.kendallhunt.com/HS/teachers/1/5/8/index.html","timestamp":"2024-11-07T22:51:00Z","content_type":"text/html","content_length":"110476","record_id":"<urn:uuid:271d79d2-3901-4293-b26b-d706d88e16fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00193.warc.gz"} |
Robust Optimization
Understanding the variation of the optimal value with respect to change in the data is an old problem of mathematical optimisation. This paper focuses on the linear problem f(λ) = min cᵀx such that (A + λD)x ≤ b, where λ is an unknown parameter that varies within an interval and D is a matrix modifying the … | {"url":"https://optimization-online.org/category/robust-optimization/","timestamp":"2024-11-08T17:13:22Z","content_type":"text/html","content_length":"109198","record_id":"<urn:uuid:6eb53960-d375-4581-8823-c41ccbeee3c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00117.warc.gz"}
Business Jargons
Definition: The Regression Equation is the algebraic expression of the regression lines. It is used to predict the values of the dependent variable from the given values of the independent variables. If we take two regression lines, say Y on X and X on Y, then there will be two regression equations. Regression Equation of Y on X: This is used to describe the variations in … | {"url":"https://businessjargons.com/page/59","timestamp":"2024-11-13T14:21:41Z","content_type":"text/html","content_length":"45860","record_id":"<urn:uuid:d59fa3ac-f12a-4244-b81e-bc60d1d6e3bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00652.warc.gz"}
The human brain builds structures in 11 dimensions, discover scientists
Neuroscientists have used a classic branch of maths in a totally new way to peer into the structure of our brains. What they've discovered is that the brain is full of multi-dimensional geometrical
structures operating in as many as 11 dimensions.
We're used to thinking of the world from a 3-D perspective, so this may sound a bit tricky, but the results of this new study could be the next major step in understanding the fabric of the human
brain - the most complex structure we know of.
This latest brain model was produced by a team of researchers from the Blue Brain Project, a Swiss research initiative devoted to building a supercomputer-powered reconstruction of the human brain.
The team used algebraic topology, a branch of mathematics used to describe the properties of objects and spaces regardless of how they change shape. They found that groups of neurons connect into 'cliques', and that the number of neurons in a clique determines its size as a high-dimensional geometric object.
    "We found a world that we had never imagined. There are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to 11 dimensions," says lead researcher, neuroscientist Henry Markram from the EPFL institute in Switzerland.
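As an illustration of the clique-to-dimension idea: a fully connected group of k nodes can be read as a (k − 1)-dimensional simplex. A toy sketch (the graph here is made up, far smaller than any neural network):

```python
from itertools import combinations

# Toy undirected graph standing in for a small patch of neurons.
edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)}
nodes = sorted({v for e in edges for v in e})

def is_clique(vertices):
    # Every pair of vertices must be joined by an edge.
    return all((a, b) in edges or (b, a) in edges
               for a, b in combinations(vertices, 2))

# A clique of k neurons corresponds to a (k - 1)-dimensional simplex.
simplices = {k - 1: sum(1 for c in combinations(nodes, k) if is_clique(c))
             for k in range(2, len(nodes) + 1)}
print(simplices)
```

Counting simplices by dimension like this is the starting point of the topological analysis described in the article; the study's directed cliques and cavities require considerably more machinery.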
Human brains are estimated to have a staggering 86 billion neurons, with multiple connections from each cell webbing in every possible direction, forming the vast cellular network that somehow makes
us capable of thought and consciousness. With such a huge number of connections to work with, it's no wonder we still don't have a thorough understanding of how the brain's neural network operates.
But the new mathematical framework built by the team takes us one step closer to one day having a digital brain model.
To perform the mathematical tests, the team used a detailed model of the neocortex that the Blue Brain Project team published back in 2015. The neocortex is thought to be the most recently evolved part of our brains, and the one involved in some of our higher-order functions like cognition and sensory perception.
After developing their mathematical framework and testing it on some virtual stimuli, the team also confirmed their results on real brain tissue in rats. According to the researchers, algebraic
topology provides mathematical tools for discerning details of the neural network both in a close-up view at the level of individual neurons, and a grander scale of the brain structure as a whole.
By connecting these two levels, the researchers could discern high-dimensional geometric structures in the brain, formed by collections of tightly connected neurons (cliques) and the empty spaces
(cavities) between them.
"We found a remarkably high number and variety of high-dimensional directed cliques and cavities, which had not been seen before in neural networks, either biological or artificial," the team
writes in the study.
    "Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures, the trees in the forest, and see the empty spaces, the clearings, all at the same time," says one of the team, mathematician Kathryn Hess from EPFL.
Those clearings or cavities seem to be critically important for brain function. When researchers gave their virtual brain tissue a stimulus, they saw that neurons were reacting to it in a highly
organized manner.
    "It is as if the brain reacts to a stimulus by building [and] then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates," says one of the team, mathematician Ran Levi from Aberdeen University in Scotland.
These findings provide a tantalizing new picture of how the brain processes information, but the researchers point out that it is not yet clear what makes the cliques and cavities form in their highly specific ways, and more work will be needed to determine how the complexity of these multi-dimensional geometric shapes formed by our neurons correlates with the complexity of various cognitive tasks.
But this is definitely not the last we'll be hearing of insights that algebraic topology can give us on this most mysterious of human organs - the brain.
| {"url":"https://amazingastronomy.thespaceacademy.org/2022/08/the-human-brain-builds-structures-in-11.html","timestamp":"2024-11-03T01:26:08Z","content_type":"application/xhtml+xml","content_length":"504966","record_id":"<urn:uuid:51dd68af-67be-4b60-9786-3bc29f92eb3a>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00636.warc.gz"}
Discover How to Use the OR Function of Conditional Formatting | Excelchat
Conditional Formatting is an excellent way to visualize data based on certain criteria. The OR function in Conditional Formatting highlights a row in the table if at least one of the defined conditions is met. This step-by-step tutorial will assist all levels of Excel users in creating a Conditional Formatting OR formula rule.
Figure 1. Final result
Syntax of the OR formula
=OR(logical1,[logical2], …)
The parameters of the OR function are:
• Logical1, logical2 – the conditions that we want to test
The output of the formula is TRUE if at least one condition is met. If none of the conditions are met, the formula result will be FALSE.
Highlight cell values using OR function
To mark the rows in the table based on certain criteria, we can use formula rules in Conditional Formatting. In our example, we want to emphasize the rows in the table where Net Margin is over 8% or Sales are over 600. For this purpose, we will use the OR function, since it is enough that only one condition is met.
Create an OR formula rule in the Conditional Formatting
To create a Conditional Formatting rule based on the formula we should follow the steps below:
• Select the cell, cell range or table in Excel where we want to apply the Conditional Formatting
• Find Conditional Formatting button tab and choose New Rule
• Choose Use a formula to determine which cells to format
• Enter the formula rule =OR($C3>8%,$D3>600) under the section Format values where this formula is true
The formula above highlights the rows in the example table where Net Margin is over 8% or Sales is over 600. OR function checks if at least one condition is met and returns the value TRUE. This value
triggers the Conditional Formatting rule.
In our OR formula example there are two logical tests:
• Logical1 is $C3>8% – checks if Net Margin is over 8%
• Logical2 is $D3>600 – examines if Sales is over 600
• Under the Format tab, we can define the visual appearance of the cells if the OR formula output is TRUE
• In Fill tab choose the background color (here you can also choose pattern style and color)
• In the Font tab we can define font style and Bold cell text
• After choosing the format, the Preview section shows how the conditionally formatted cells will look if the rule is met
• Rows in the table are highlighted whenever the Net Margin is over 8% or Sales is over 600. Product F in the table is the only product that meets neither Net Margin nor Sales condition.
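The same highlighting logic can be mirrored outside Excel; here is a minimal sketch in Python over hypothetical rows (the product names and values are made up):

```python
# Each row: (product, net margin, sales) -- values are hypothetical.
rows = [
    ("Product A", 0.09, 550),
    ("Product B", 0.05, 700),
    ("Product F", 0.05, 500),
]

for product, margin, sales in rows:
    # Mirrors =OR($C3>8%,$D3>600): highlight if at least one test passes.
    highlight = margin > 0.08 or sales > 600
    print(product, "highlighted" if highlight else "not highlighted")
```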
| {"url":"https://www.got-it.ai/solutions/excel-chat/excel-tutorial/conditional-formatting/conditional-formatting-or","timestamp":"2024-11-06T14:40:44Z","content_type":"text/html","content_length":"94943","record_id":"<urn:uuid:9bbf3191-059e-4398-b760-19c61b5430f0>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00108.warc.gz"}
• A new function Gini() computes the Gini coefficient. (See the document named Heterogeneity.)
• L1cent() now returns an object of class L1cent. summary() and print() methods are implemented for this new class.
• L1centGROUP() now returns an object of class L1centGROUP. print() method is implemented for this new class.
• L1centLOC() now returns an object of class L1centLOC. summary() and print() methods are implemented for this new class.
• L1centNB() now returns an object of class L1centNB. summary() and print() methods are implemented for this new class.
• L1centEDGE() now returns an object of class L1centEDGE. summary() and print() methods are implemented for this new class.
• Implementation of L1centEDGE() corrected.
• print() method is implemented for the L1centMDS class. The L1centMDS() function formerly returned a length-four list with the labels of the vertices as one component. This component is now an attribute of the returned list, i.e., L1centMDS() now returns a length-three list. (See the document for L1centMDS().) As a result, the plot() method for the L1centMDS class is modified as well.
Scatterplots (2 of 5)
Learning Objectives
• Use a scatterplot to display the relationship between two quantitative variables. Describe the overall pattern (form, direction, and strength) and striking deviations from the pattern.
Interpreting the Scatterplot
How do we describe the relationship between two quantitative variables using a scatterplot? We describe the overall pattern and deviations from that pattern.
This is the same way we described the distribution of one quantitative variable using a dotplot or a histogram in Summarizing Data Graphically and Numerically. To describe the overall pattern of the
distribution of one quantitative variable, we describe the shape, center, and spread. We also describe deviations from the pattern (outliers).
Similarly, in a scatterplot, we describe the overall pattern with descriptions of direction, form, and strength. Deviations from the pattern are still called outliers.
• The direction of the relationship can be positive, negative, or neither:
A positive (or increasing) relationship means that an increase in one of the variables is associated with an increase in the other.
A negative (or decreasing) relationship means that an increase in one of the variables is associated with a decrease in the other.
Not all relationships can be classified as either positive or negative.
• The form of the relationship is its general shape. To identify the form, describe the shape of the data in the scatterplot. In practice, forms that we commonly use have mathematical equations. We look at a few of these equations in this course. For now, we simply describe the shape of the pattern in the scatterplot. Here are a couple of forms that are quite common:
Linear form: The data points appear scattered about a line. We use a line to summarize the pattern in the data. We study the equation for a line in this module.
Curvilinear form: The data points appear scattered about a smooth curve. We use a curve to summarize the pattern in the data. We study some specific types of curvilinear forms with their equations in Modules 4 and 12.
• The strength of the relationship is a description of how closely the data follow the form of the relationship. Let’s look, for example, at the following two scatterplots displaying positive,
linear relationships:
In the top scatterplot, the data points closely follow the linear pattern. This is an example of a strong linear relationship. In the bottom scatterplot, the data points also follow a linear
pattern, but the points are not as close to the line. The data is more scattered about the line. This is an example of a weaker linear relationship.
Labeling a relationship as strong or weak is not very precise. We develop a more precise way to measure the strength of a relationship shortly.
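One standard way to make "strength" precise is the correlation coefficient; a small sketch with made-up data:

```python
import math

# Made-up data following an approximately linear, positive pattern.
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.9]

mx, my = sum(x) / len(x), sum(y) / len(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
print(round(r, 3))  # close to 1: a strong positive linear relationship
```

Values of r near 1 or −1 indicate a strong linear relationship; values near 0 indicate a weak one.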
Outliers are points that deviate from the pattern of the relationship. In the scatterplot below, there is one outlier.
Learn By Doing
A: X = month (January = 1), Y = rainfall (inches) in Napa, CA in 2010 (Note: Napa has rain in the winter months and months with little to no rainfall in summer.)
B: X = month (January = 1), Y = average temperature in Boston MA in 2010 (Note: Boston has cold winters and hot summers.)
C: X = year (in five-year increments from 1970), Y = Medicare costs (in $) (Note: the yearly increase in Medicare costs has gotten bigger and bigger over time.)
D: X = average temperature in Boston MA (°F), Y = average temperature in Boston MA (°C) each month in 2010
E: X = chest girth (cm), Y = shoulder girth (cm) for a sample of men
F: X = engine displacement (liters), Y = city miles per gallon for a sample of cars (Note: engine displacement is roughly a measure of engine size. Large engines use more gas.) | {"url":"https://courses.lumenlearning.com/atd-herkimer-statisticssocsci/chapter/scatterplots-2-of-5/","timestamp":"2024-11-11T07:32:05Z","content_type":"text/html","content_length":"54244","record_id":"<urn:uuid:b54f340f-cc8a-471a-bde6-e8dd7f39153b>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00125.warc.gz"} |
Agda Beginner(-ish) Tips, Tricks, and Pitfalls
Posted on September 20, 2018
I’m in the middle of quite a large Agda project at the moment, and I’ve picked up a few tips and tricks in the past few weeks. I’d imagine a lot of these are quite obvious once you get to grips with
Agda, so I’m writing them down before I forget that they were once confusing stumbling blocks. Hopefully this helps other people trying to learn the language!
Parameterized Modules Strangeness
Agda lets you parameterize modules, just as you can datatypes, with types, values, etc. It’s extremely handy for those situations where you want to be generic over some type, but that type won’t
change inside the generic code. The keys to dictionaries is a good example: you can start the module with:
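For example, a dictionary module might be parameterized by its key type and an ordering on it (the names here are illustrative, not from a particular library):

```agda
module Dict (Key : Set) (_≤_ : Key → Key → Set) where
```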
And now, where in Haskell you’d have to write something like Ord a => Map a… in pretty much any function signature, you can just refer to Key, and you’re good to go. It’s kind of like a dynamic type
synonym, in that way.
Here’s the strangeness, though: what if you don’t supply one of the arguments?
This won’t give you a type error, strange as it may seem. This will perform lambda lifting, meaning that now, every function exported by the module will have the type signature:
Preceding its normal signature. In other words, it changes it into what you would have had to write in Haskell.
This is a powerful feature, but it can also give you some confusing errors if you don’t know about it (especially if the module has implicit arguments).
If you’ve got a hole in your program, you can put the cursor in it and press SPC-m-a (in spacemacs), and Agda will try and find the automatic solution to the problem. For a while, I didn’t think much
of this feature, as rare was the program which Agda could figure out. Turns out I was just using it wrong! Into the hole you should type the options for the proof search: enabling case-splitting
(-c), enabling the use of available definitions (-r), and listing possible solutions (-l).
Well-Founded Recursion
Often, a program will not be obviously terminating (according to Agda’s termination checker). The first piece of advice is this: don’t use well-founded recursion. It’s a huge hammer, and often you
can get away with fiddling with the function (try inlining definitions, rewriting generic functions to monomorphic versions, or replacing with-blocks with helper functions), or using one of the more
lightweight techniques out there.
However, sometimes it really is the best option, so you have to grit your teeth and use it. What I expected (and what I used originally) was a recursion combinator, with a type something like:
So we’re trying to generate a function of type A → B, but there’s a hairy recursive call in there somewhere. Instead we use this function, and pass it a version of our function that uses the supplied
function rather than making a recursive call:
In other words, instead of calling the function itself, you call recursive-call above. Along with the argument, you supply a proof that it’s smaller than the outer argument (y < x; assume for now
that the definition of < is just some relation like _<_ in Data.Nat).
But wait! You don’t have to use it! Instead of all that, you can just pass the Acc _<_ x type as a parameter to your function. In other words, if you have a dangerous function:
Instead write:
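That is, thread an accessibility proof through as an extra argument (schematic; Acc and its constructor acc come from the standard library's well-founded induction machinery):

```agda
f : ∀ x → Acc _<_ x → B
f x (acc downward) = … f y (downward y y<x) …
```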
Once you pattern match on the accessibility relation, the termination checker is satisfied. This is much easier to understand (for me anyway), and made it much easier to write proofs about it.
Thanks to Oleg Grenrus (phadej) on irc for helping me out with this! Funnily enough, he actually recommended the Acc approach, and I instead originally went with the recursion combinator. Would have
saved a couple hours if I’d just listened! Also worth mentioning is the approach recommended by Guillaume Allais (gallais), detailed here. Haven’t had time to figure it out, so this article may be
updated to recommend it instead in the future.
Don’t Touch The Green Slime!
This one is really important. If I hadn’t read the exact explanation here I think I may have given up with Agda (or at the very least the project I’m working on) out of frustration.
Basically the problem arises like this. Say you’re writing a function to split a vector in two. You can specify the type pretty precisely:
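A signature along these lines matches the error below:

```agda
splitAt : ∀ {A : Set} (n m : ℕ) → Vec A (n + m) → Vec A n × Vec A m
```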
Try to pattern-match on xs, though, and you’ll get the following error:
I'm not sure if there should be a case for the constructor [],
because I get stuck when trying to solve the following unification
problems (inferred index ≟ expected index):
zero ≟ n + m
when checking that the expression ? has type Vec .A .n × Vec .A .m
What?! That’s weird. Anyway, you fiddle around with the function, end up pattern matching on the n instead, and continue on with your life.
What about this, though: you want to write a type for proofs that one number is less than or equal to another. You go with something like this:
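A natural first attempt indexes the type by the result of an addition (the constructor name matches the error below):

```agda
data _≤_ (n : ℕ) : ℕ → Set where
  proof : ∀ k → n ≤ n + k
```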
And you want to use it in a proof. Here’s the example we’ll be using: if two numbers are less than some limit u, then their maximum is also less than that limit:
max : ℕ → ℕ → ℕ
max zero m = m
max (suc n) zero = suc n
max (suc n) (suc m) = suc (max n m)
max-≤ : ∀ n m {u} → n ≤ u → m ≤ u → max n m ≤ u
max-≤ n m (proof k) m≤u = {!!}
It won’t let you match on m≤u! Here’s the error:
I'm not sure if there should be a case for the constructor proof,
because I get stuck when trying to solve the following unification
problems (inferred index ≟ expected index):
m₁ + k₂ ≟ n₁ + k₁
when checking that the expression ? has type max n m ≤ n + k
What do you mean you’re not sure if there’s a case for the constructor proof: it’s the only case!
The problem is that Agda is trying to unify two types who both have calls to user-defined functions in them, which is a hard problem. As phrased by Conor McBride:
When combining prescriptive and descriptive indices, ensure both are in constructor form. Exclude defined functions which yield difficult unification problems.
So if you ever get the “I’m not sure if…” error, try either to:
1. Redefine the indices so they use constructors, not functions.
2. Remove the index, instead including an equality proof inside the type. What does that mean? Basically, transform the definition of ≤ above into the one in Data.Nat.
The use-case I had for this is a little long, I’m afraid (too long to include here), but it did come in handy. Basically, if you’re trying to prove something about a function, you may well want to
run that function and pattern match on the result.
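For instance, to prove that two functions f and g agree, you might run f and match on its result:

```agda
f-is-the-same-as-g : ∀ x → f x ≡ g x
f-is-the-same-as-g x with f x
f-is-the-same-as-g x | y = {!!}
```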
This is a little different from the normal way of doing things, where you’d pattern match on the argument. It is a pattern you’ll sometimes need to write, though. And here’s the issue: that y has
nothing to do with f x, as far as Agda is concerned. All you’ve done is introduced a new variable, and that’s that.
This is exactly the problem inspect solves: it runs your function, giving you a result, but also giving you a proof that the result is equal to running the function. You use it like this:
f-is-the-same-as-g : ∀ x → f x ≡ g x
f-is-the-same-as-g x with f x | inspect f x
f-is-the-same-as-g x | y | [ fx≡y ] = {!!}
Because the Agda standard library is a big fan of type synonyms (Op₂ A instead of A → A → A for example), it’s handy to know that pressing SPC-G-G (in spacemacs) over any identifier will bring you to
the definition. Also, you can normalize a type with SPC-m-n.
This one is a little confusing, because Agda’s notion of “irrelevance” is different from Idris’, or Haskell’s. In all three languages, irrelevance is used for performance: it means that a value
doesn’t need to be around at runtime, so the compiler can elide it.
That’s where the similarities stop though. In Haskell, all types are irrelevant: they’re figments of the typechecker’s imagination. You can’t get a type at runtime full stop.
In dependently typed languages, this isn’t a distinction we can rely on. The line between runtime entities and compile-time entities is drawn elsewhere, so quite often types need to exist at runtime.
As you might guess, though, they don’t always need to. The length of a length-indexed vector, for instance, is completely determined by the structure of the vector: why would you bother storing all
of that information at runtime? This is what Idris recognizes, and what it tries to remedy: it analyses code for these kinds of opportunities for elision, and does so when it can. Kind of like
Haskell’s fusion, though, it’s an invisible optimization, and there’s no way to make Idris throw a type error when it can’t elide something you want it to elide.
Agda is totally different. Something is irrelevant in Agda if it’s unique. Or, rather, it’s irrelevant if all you rely on is its existence. It’s used for proofs that you carry around with you: in a
rational number type, you might use it to say that the numerator and denominator have no common factors. The only information you want from this proof is whether it holds or not, so it’s the perfect
candidate for irrelevance.
Weirdly, this means it’s useless for the length-indexed vector kind of stuff mentioned above. In fact, it doe exactly the opposite of what you might expect: if the length parameter is marked as
irrelevant, the the types Vec A n and Vec A (suc n) are the same!
The way you can use it is to pattern-match if it’s impossible. Again, it’s designed for eliding proofs that you may carry with you otherwise.
Future Tips
Once I’m finished the project, I’ll try write up a guide on how to do literate Agda files. There were a couple of weird nuances that I had to pick up on the way, mainly to do with getting unicode to | {"url":"https://doisinkidney.com/posts/2018-09-20-agda-tips.html","timestamp":"2024-11-07T23:48:52Z","content_type":"application/xhtml+xml","content_length":"26031","record_id":"<urn:uuid:0d334718-b7b3-4d04-a158-528720671d8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00864.warc.gz"} |
Kids and Mathematics - MitchRyan23
Kids and Mathematics
Posted on by Ms. Elle
Most people find Mathematics a difficult subject and irrelevant to everyday life.
I remember a friend of mine saying “What in the world do I need X and Y for? Why do I have to burn the midnight oil studying sine, cosine, and tangent? Am I going to use XY to buy something in the market, or tell the driver I am going to angle of depression ABC?”
I also had a love-hate relationship with Mathematics; those love moments were when my teachers made it look and feel easy for me, which is what I want my kids to feel about Mathematics. I am a Biology teacher, and helping my daughter with Mathematics is difficult for me. I run out of strategies, or the strategies I know do not work well with Mathematics.
In this post, I would like to ask for help from you. Please share your ideas, strategies, Math websites, and other methods. I will organise all the ideas from here into another post so that other parents may benefit as well.
By the way, my daughter is a kinesthetic learner; I would appreciate it if you could include strategies for this kind of learner.
Let’s help our kids from this…
To this….
Leave your comment below. Thank you.
50 thoughts on “Kids and Mathematics”
1. Hahah. Me too. I had a love hate relationship with Math for as long as I remember.
2. Have you tried looking for Math for kids instructional videos on YouTube? I think i’d be helpful. 🙂
3. I hated math when I was a kid, lol – I guess, there are a lot of people who do and still do – though now – it all seem normal to me.
4. Math was among my favorite subject when I was studying and when it comes to Math it’s all about strengthening fundamentals by doing math drills early.
5. My Chief Officer told me that his 2 sons are enrolled in KUMON. They can solve mathematics problem easily now.
1. I have so many students who are doing great because of kumon. But I am looking for some alternative. Thank you Ralph
6. My favorite subject is math. : ) Continue to study it and learn to love it. It will help you in your future, I tell you. In work, personal stuff and all. Earn it the hard way by studying and reap
its benefits in the end.
7. Well math is also not one of my favorite subjects. Although I realize the importance of making children learn math as this will be useful even later in life.
1. Indeed it is useful. This is the reason why I am exhausting all means possible to teach Maths to my kids effectively. Thanks
8. It is important to teach kids how they can love math at a young age.
1. Can you suggest ways on how we can make them love Maths?
9. To improve in Math, I suggest your kid to enroll in Kumon. They have tutoring services that are guaranteed for students to have fast computing abilities.
1. I know about kumon, but if there are ways on doing it on our own I think that would be a lot better.
10. Wow, it’s a great thing that you know what type of learner your kid is. And with that, I think you should focus on play learning… But really, math drives me crazy, too, so I can’t really give you
any specific tip on how to help your daughter. Good luck, man!
1. Thank you for the suggestion ^-^
11. This is very nice, introducing Math at a very young age is really the perfect way to make our children not only genius at Math itself but also letting them know that learning must not be a hard
1. Hi, thanks for dropping by. Do you have any suggestion to make learning Maths fun?
1. actually I am not fond of Math the subject literally and I just enjoyed it when I got into college, i think your kids will enjoy learning math and faster if the teacher is likeable and
also has a good strategy – math needs a slow and sure process before getting onto next level.
12. I can’t directly give advice since I don’t have kids yet ;( But I think incorporating math in everyday activities would be a great idea! Maybe let them help measure ingredients in the kitchen. It
works with my nephews.
1. Incorporating Maths in everyday activities would be great. Thank you
13. I too have had a stressful relationship with math.Little did I know that It wasn’t my fault that I struggled with it. I was assessed with dyscalculia. I enrolled my daughter in a math enrichment
program. Its a 100% commitment as we do it 12 months of the year. My daughter get the summer off from this.
1. Hi Leira. Can you give us a bit of insight on what exactly a dyscalculia is? Thank you
14. I don’t hate math but at times, it’s needed for our course so I have to bear the pain that it delivers to my brain.
Advise: Enroll your child that has math subject as their specialty. That way, it could help your kid appreciate more about math. 🙂
1. Thank you, but I am looking for a creative alternative. Enrolling in a special Maths class is very conventional. Do you know of some techniques we could use at home?
15. I remember I love computing numbers when I was still in HS. But I never thought to teach Math so I opted for Filipino instead.
1. Lol, ironically for me Filipino was my most disliked subject. I’d rather deal with Maths than this subject.^_^
16. Tell me about it! I have a nephew who doesn’t like Math one bit… It’s a challenge looking for ways to make him like the subject. Hehe!
1. I am soliciting suggestions from everyone. I will gather all suggestions and put them in one post to help other moms out there.
17. I agree! I studied math just for the sake of the grades and forgot about all those formulas after the class year haha. Im also worried about my son as he will be enrolling this school year
already, good thing though my dad and husband are math whiz 🙂 i remeber my dad buying all those educational computer games which made learning math fun and easy when I was in elementary. Maybe
you could try that too 🙂
1. Educational computer games might work well with my daughter. thank you for the suggestion..cheers
18. I’m so bad at math, and so I’m glad that my son seems to really get it. So far, we’ve not had that much of a problem with school. I tend to stick with whatever handouts are given by his teachers,
though. I also enrolled him in a Galileo Math Class when he was younger, which might explain why he understands the subject pretty well now.
19. I can totally relate. I hate math but my daughter doesn’t and I’m just glad she does. I don’t really do anything special for her but she seems to get it just from school.
1. Thank you for dropping by
20. Mathematics is my waterloo, my achilles heel, my cryptonite! Being my wife you know that! ^_^ Cheers!
21. I once taught my nephew multiplication through his matchbox cars. He understood it a little faster than teaching it thru paper. I also help him sing songs to remember the multiplication table. I
dont know if this could apply to your kids intelligence. But I hope I gave you a little idea on it.
1. Thank you so much. These are really helpful. I hope I’d find some song about the times table in the internet. Do you still remember that multiplication table song? Can you share it with us?
22. Awww, I wish I had some tips to give you. I don’t exactly hate Math but I don’t love it either. With my kids, I always just printed out worksheets and had them practice based on the steps
provided in their books. Seemed to work fine for them. Good luck to you!
1. Thank you Janice. It is good that worksheets work for your kiddos. It is a lot easier on your part. My daughter on the other hand is a kinesthetic learner. They learn quite differently from
most audio-visual learners.
23. I’ve always hated math.. still do… thank god my kids are smarter than I was as a kid, my oldest daughter helps my son, because, sadly.. it’s been so long, lol
24. I hated math as a kid but that’s because I never understood it
1. Had you met a teacher or someone who made Maths fun for you, you would have a totally different perception towards Maths. This is something I want to create for my kids. Thanks for dropping by.
25. i’m still not crazy ’bout math….
26. Like most of the people, Math is one subject that I hate. Give me books to read, I am fine with that, But with numbers? All I am interested in numbers is when it shows the numbers rising in my
Paypal account! LOL
Good thing, though, my kids have no difficulties with the subject or I am doomed!
27. I’m no Math Whiz but I heard that Math-U-See is great for kinesthetic learners. I have no experience with that yet, not until my youngest starts on his academics 2 years from now. My other two
are auditory and visual so the traditional way of teaching them works. My eldest gets excited when his dad starts asking him application questions when we’re out. Like when we ordered at a
restaurant, my hubby would ask him to calculate how much it all costs and how much change to expect. 🙂
1. Great! Thank you May. I will check out Math-U-See.
28. i had the same question some years ago when i was toiling over advance Math subjects lie Differential Calculus + Differential Equation when one my classmates told me that they are studied to help
with exercising our brains + I guess I agree.
Math is really my most hated subject but I had to change that especially now that my little man is ready to go to big school. we ought to make numbers appealing to the little ones so they won’t
have the same attitude towards numbers + Math!
29. Wish I could help, but I’ve never liked math or been particularly good at it! I was always much better at language arts. Your little ones are adorable.
1. Thank you Elisebet
30. I really connected with this post, I hated mathematics and always do. But it has a lot of applications in many fields which is inevitable.
1. It is also a great exercise for the mind. But the thing is if as a kid you had a pleasant experience with Mathematics, then no matter how difficult it is you will face it positively. For most
kids though, Mathematics is always associated to some unpleasant experience, such that even the mention of the word they would already give a sense of dislike.
Leave a Reply Cancel reply | {"url":"https://www.mitchryan23.com/kids-and-mathematics/","timestamp":"2024-11-06T02:26:40Z","content_type":"text/html","content_length":"239532","record_id":"<urn:uuid:3725f153-dca7-46b3-83a4-da7fddadf3ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00730.warc.gz"} |
Unscramble WAEFUL
How Many Words are in WAEFUL Unscramble?
By unscrambling letters waeful, our Word Unscrambler aka Scrabble Word Finder easily found 34 playable words in virtually every word scramble game!
Letter / Tile Values for WAEFUL
Below are the values for each of the letters/tiles in Scrabble. The letters in waeful combine for a total of 12 points (not including bonus squares)
What do the Letters waeful Unscrambled Mean?
The unscrambled words with the most letters from WAEFUL word or letters are below along with the definitions.
• waeful () - Sorry, we do not have a definition for this word | {"url":"https://www.scrabblewordfind.com/unscramble-waeful","timestamp":"2024-11-06T23:53:59Z","content_type":"text/html","content_length":"44007","record_id":"<urn:uuid:5a034a35-942a-4bf2-8c72-274a9f83d5e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00135.warc.gz"} |
shadenormal.fnc: Shade rejection region for normal probability density... in languageR: Analyzing Linguistic Data: A Practical Introduction to Statistics
This function plots the standord normal probability density function and shades the rejection region.
qnts A numeric vector with the Z-scores of the boundaries of the lower and upper rejection regions.
A numeric vector with the Z-scores of the boundaries of the lower and upper rejection regions.
Type shadenormal.fnc to see the code. The polygon() function used for the shaded areas takes a sequence of X and Y coordinates, connects the corresponding points, and fills the area(s) enclosed with
a specified color. To understand the use of polygon(), one can best think of making a polygon with a set of pins, a thread, and a board. Outline the polygon by placing the pins on the board at the
corners of the polygon. First fasten the thread to one of the pins, then connect the thread to the second pin, from there to the third pin, and so on, until the first pin has been reached. What
polygon() requires as input is a vector of the X-coordinates of the pins, and a vector of their Y-coordinates. These coordinates should be in exactly the order in which the thread is to be connected
from pin to pin.
For shading the left rejection area, we specify the vectors of X and Y coordinates, beginning at the leftmost point of the tail, proceding to the right edge of the shaded area, then up, and finally
to the left and down to the starting point, thereby closing the polygon. The X-coordinates are therefore specified from left to right, and then from right to left. The corresponding Y-coordinates are
all the zeros necessary to get from $-3$ to $-1.96$ (the default, qnorm(0.025)), and then the Y-coordinates of the density in reverse order to return to where we began.
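The same coordinate bookkeeping can be sketched outside R as well. Below is a rough Python analogue using only the standard library; the 50-point grid, the $-3$ left endpoint, and all names here are my own choices, not part of languageR:

```python
from statistics import NormalDist

# Left rejection region of a two-sided 5% test: everything below qnorm(0.025).
z_lo = NormalDist().inv_cdf(0.025)   # about -1.96, the region's right edge

# Walk right along y = 0 from -3 to the boundary, then back left along the
# density curve, in the same order in which polygon() connects its "pins".
xs = [-3 + i * (z_lo + 3) / 50 for i in range(51)]
poly_x = xs + list(reversed(xs))
poly_y = [0.0] * len(xs) + [NormalDist().pdf(x) for x in reversed(xs)]

print(round(z_lo, 2))                # -1.96
```

The resulting poly_x/poly_y pair could then be handed to any polygon-filling routine, just as the X and Y vectors are handed to polygon() in R.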
## Not run:
shadenormal.fnc()
## End(Not run)
| {"url":"https://rdrr.io/cran/languageR/man/shadenormal.fnc.html","timestamp":"2024-11-11T21:06:56Z","content_type":"text/html","content_length":"34839","record_id":"<urn:uuid:19d29d1b-e9ed-4529-8c5e-c5e920aa12e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00572.warc.gz"}
Causal discovery based on non-Gaussianity and nonlinearity
structures. Behaviormetrika, 41(1):65–98, 2014.
Shohei Shimizu. Statistical Causal Discovery: LiNGAM Approach. Springer, Tokyo, 2022.
Shohei Shimizu, Patrik O. Hoyer, Aapo Hyvärinen, and Antti Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003–2030, 2006.
Peter Spirtes, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search. MIT Press, 2nd ed., 2001.
D. Takahashi, S. Shimizu, and T. Tanaka. Counterfactual explanations of black-box machine learning models using causal discovery with applications to credit rating. In Proc. Int. Joint Conf. on Neural Networks (IJCNN2024), part of the 2024 IEEE World Congress on Computational Intelligence (WCCI2024), 2024.
Masayuki Takayama, Tadahisa Okuda, Thong Pham, Tatsuyoshi Ikenoue, Shingo Fukuma, Shohei Shimizu, and Akiyoshi Sannai. Integrating large language models in causal discovery: A statistical causal approach. arXiv preprint arXiv:2402.01454, 2024.
Tatsuya Tashiro, Shohei Shimizu, Aapo Hyvärinen, and Takashi Washio. ParceLiNGAM: A causal ordering method robust against latent confounders. Neural Computation, 26(1):57–83, 2014.
Y. Samuel Wang and Mathias Drton. Causal discovery with unobserved confounding and non-Gaussian data. Journal of Machine Learning Research, 24(271):1–61, 2023. URL http://jmlr.org/papers/v24/21-1329.html.
SHIMIZU
Shohei (Shiga Univ & RIKEN) 5th July 2024 16 / 17 | {"url":"https://d1eu30co0ohy4w.cloudfront.net/sshimizu2006/causal-discovery-based-on-non-gaussianity-and-nonlinearity","timestamp":"2024-11-15T01:15:00Z","content_type":"text/html","content_length":"129984","record_id":"<urn:uuid:0270e860-da53-4c56-9180-8561cbd2c43e>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00181.warc.gz"} |
Cylinder - ConceptEra
Q3. The length of the diameter of a drum made of steel, covered with a lid, is 28 cm. If a 2816 sq cm steel sheet is required to make the drum, let us write by calculating the height of the drum.
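For Q3, taking the drum to be closed at both ends (base plus lid) and π as 22/7, the height drops out of the total-surface-area formula 2πr(h + r). A quick check in Python with exact fractions (the interpretation of "covered with a lid" as a closed cylinder is my assumption):

```python
from fractions import Fraction

PI = Fraction(22, 7)           # the approximation these exercises expect

r = 14                         # radius in cm (diameter 28 cm)
total_area = 2816              # sq cm of steel sheet

# Closed cylinder: total surface = 2*pi*r*(h + r), so h = area/(2*pi*r) - r
h = Fraction(total_area) / (2 * PI * r) - r
print(h)                       # 18, i.e. the drum is 18 cm high
```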
Q4. Let us write by calculating how many cubic decimeters of concrete material will be necessary to construct two cylindrical pillars, each of whose diameter is 5.6 decimeter and height is 2.5
meter. Let us write by calculating the cost of plastering the two pillars at ₹125 per square meter.
Q5. If a gas cylinder for fuel purposes, having a length of 7.5 dcm and an inner diameter of 2.8 dcm, carries 15.015 kg of gas, let us write by calculating the weight of the gas per cubic dcm.
Q6. Out of three jars of equal diameter and height, 2/3 part of the first, 5/6 part of the second and 7/9 part of the third were filled with dilute sulphuric acid. Whole of acid in the three jars
were poured into a jar of 2.1 dcm diameter; as a result the height of acid in the jar becomes 4.1 dcm. If the length of diameter of each of the three equal jars is 1.4 dcm, let us write by
calculating the height of the three jars.
Q7. Total surface area of a right circular pot open at one end is 2002 sq cm. If the length of diameter of the base of the pot is 14 cm, let us write by calculating how many liters of water the pot may hold.
Q8. If a pump set with a pipe of 14 cm diameter can drain water at 2500 meter per minute, let us write by calculating how many kiloliters of water that pump will drain per hour. [1 liter = 1 cubic dcm]
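For Q8, reading the question as asking for kilolitres of water per hour (1 cubic metre = 1 kilolitre), the discharge is just the pipe's cross-section times the speed of the water column. A sketch with exact fractions:

```python
from fractions import Fraction

PI = Fraction(22, 7)

r = Fraction(7, 100)           # 14 cm diameter -> 0.07 m radius
speed = 2500                   # metres of water column drained per minute

per_minute = PI * r * r * speed   # cubic metres per minute (works out to 38.5)
per_hour = per_minute * 60        # cubic metres = kilolitres per hour
print(per_hour)                   # 2310
```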
Q9. There is some water in a long gas jar of 7 cm diameter. If a solid right circular cylindrical pipe of iron having 5 cm diameter be immersed completely in that water, let us write by calculating how
much the level of water will rise.
Q10. The curved surface area of a right circular cylindrical pillar is 264 sq meter and its volume is 924 cubic meter. Let us write by calculating the height and the length of diameter of this pillar.
Q11. A right circular cylindrical tank of 9-meter height is filled with water. Water comes out from there through a pipe having length of 6cm diameter with a speed of 225 meter per minute and the
tank becomes empty after 36 minutes, let us write by calculating the length of diameter of the tank.
Q12. The curved surface area of a right circular cylindrical log of wood of uniform density is 440 sq dcm. If 1 cubic dcm of wood weighs 1.5 kg and the weight of the log is 9.24 quintals, let us write by
calculating the length of diameter of the log and its height.
Q13. The length of inner and outer diameter of a right circular cylindrical pipe open at two ends are 30 cm and 26 cm respectively and length of pipe is 14.7 meter. Let us write by calculating the
cost of painting its all surfaces with coaltar at ₹2.25 per square dcm.
Q14. Height of a hollow right circular cylinder, open at both ends, is 2.8 meter. If length of inner diameter of the cylinder is 4.6 dcm and the cylinder is made up of 84.48 cubic dcm of iron, let us
calculate the length of outer diameter of the cylinder.
Q15. The height of a right circular cylinder is twice its radius. If the height were 6 times its radius, the volume of the cylinder would be greater by 539 cubic dcm; let us write by
calculating the height of the cylinder.
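For Q15, raising the height from 2r to 6r adds πr²(6r − 2r) = 4πr³ of volume, so 4πr³ = 539. A quick check with π taken as 22/7:

```python
from fractions import Fraction

PI = Fraction(22, 7)

extra = 539                           # cubic dcm gained going from h = 2r to h = 6r
r_cubed = Fraction(extra) / (4 * PI)  # 4*pi*r^3 = 539  =>  r^3 = 343/8
r = Fraction(7, 2)                    # cube root of 343/8, read off by inspection
assert r ** 3 == r_cubed

height = 2 * r
print(height)                         # 7, i.e. the cylinder is 7 dcm high
```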
Q16. A group of fire brigade personnel carried a right circular cylindrical tank filled with water and, pumping out water at a speed of 420 meter per minute through their pipes
of 2 cm diameter each, put out the fire in 40 minutes. If the diameter of the tank is 2.8 meter and its length is 6 meter, then let us calculate
(i) What volume of water has been spent in putting out the fire and
(ii) The volume of water that still remains in the tank.
Q17. It is required to make a plastering of sand and cement with 3.5 cm thick, surrounding four cylindrical pillars, each of whose diameter is 17.5cm.
(i) If each pillar is of 3 meter height, let us write by calculating how many cubic dcm of plaster materials will be needed.
(ii) If the ratio of sand and cement in the plaster material be 4:1, let us write how many cubic dcm of cement will be needed.
Q18. The lengths of the outer and inner diameters of a hollow right circular cylinder are 16 cm and 12 cm respectively, and the height of the cylinder is 36 cm. Let us calculate how many solid cylinders of 2 cm diameter and
6 cm length may be obtained by melting this cylinder. | {"url":"https://conceptera.in/cylinder/","timestamp":"2024-11-06T19:54:37Z","content_type":"text/html","content_length":"179786","record_id":"<urn:uuid:38877547-e485-4492-93db-94fa3bada9d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00253.warc.gz"}
What is the best strategy to prepare for the GATE Mechanical Engineering (ME) Paper? - GATE School
Best Strategy to Prepare for GATE Mechanical Engineering (M.E.)
“Strategy without a process is a little more than a wish list” – Robert Filek.
Hence following the above statement, we will come up with a process.
Keep in mind that GATE is a competitive exam; hence, apart from good knowledge of the subject, a good strategy will be required to ace the exam. Assuming that you have the right resources in terms of
books and materials, let us dive into the “study smart” aspect.
We would first like you to take a look at the topic wise weightage analysis that we have done for GATE Mechanical Engineering (M.E.)
These values have been obtained by taking the average of the marks allotment per topic over the last three years. From these values we can see the most to the least important topics that come in the
GATE Mechanical Engineering Paper.
Now we will go a little more in depth to see the important and frequently asked parts of the first few important topics.
Problems on finding rank of matrix, type of solution (unique and infinitely many), integration using Trapezoidal or/and Simpson’s 1/3rd rule, formula based Laplace Transforms, probability
distributions (normal, Poisson, binomial, uniform), directional derivatives, divergence, curl, simple dice problems, limits, and simple PDEs are the common concepts that
are tested in the questions.
• Theory of Machines and Vibrations
Single-degree-of-freedom systems and finding natural frequencies (underdamped, critically damped, overdamped) comprise the frequently asked topics.
• Manufacturing Engineering
Machining (milling, drilling, EDM, ECM), machining time, current, tool geometry, punching, blanking, force; conceptual & theoretical questions; orthogonal machining, heat & power required in
welding, transformations (rotations, scaling etc.), solidification time, CNC machine G codes and M codes, sheet metal, various types of fits, clearance calculation, arc welding, cutting time are
the topics from which formula based questions are asked.
Forecasting model, CPM, PERT, Depreciation, Cost (labour, tool grinding cost, waiting cost etc.), P-chart, C-chart, R-chart, X-chart, Time study, EOQ, standard time, demand, Poisson’s arrivals
& departures, LPP problems (graphical), simplex method, GO/NO-GO gauges, machine allocation problems, sequencing problems, EDD, SPT rule (scheduling) are the most frequently covered concepts
from this topic.
Simple & Differential Piezometer, Pressure Column Height Calculation, Reynolds number calculation, Hydraulic power, nozzle velocity of turbines, continuity equation, Bernoulli’s law, tapered
sections, Pump Parameters (speed, power, discharge), Gates(of dams) Force Calculation, Venturimeter, Laminar Flow are the frequently asked topics.
One can find problems on FBD for finding force, Conservation of Momentum, Block and Slope Type (average stress and maximum stress for different sections like rectangle, triangle etc.),
Macaulay’s Theorem (slope & deflection), a question on thin pressure vessels, problems on theories of failure, torques, bending moment calculation, truss.
And now, after collating all this information, prepare a study plan accordingly and follow it with complete discipline. You can see this article to help you prepare a study plan:
Study Planning and Schedule for GATE Exam Preparation
Let us know your views in the comments section below. You can connect with us on our Facebook Page too.
Have a good time studying, best of luck! | {"url":"https://gateschool.co.in/content/best-strategy-prepare-gate-mechanical-engineering-analysis-strategy/","timestamp":"2024-11-03T02:42:06Z","content_type":"text/html","content_length":"198874","record_id":"<urn:uuid:edc35567-ae97-4b67-a358-8c9ab9f7f1e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00353.warc.gz"} |
Understanding Quaternions
When describing rotations of an object, the typical way would be through linear algebra and trigonometry. However, similarly to how complex numbers can be used to describe rotations in 2D,
quaternions allow efficient and more pragmatic methods of describing rotations in 3D.
How It’s Done With Trigonometry
Depending on the framework or software you’re using, a lot of the work is already done for you, with APIs available to perform rotations on shapes. But for a simple point in space rotating about an
origin, what would that look like?
You can represent the x and y position of a point as a function of the angle and radius of the point on a unit circle. That is using our good friend SOH CAH TOA, you can represent x and y as:
x = r\cos(\theta)
y = r\sin(\theta)
Then you can sub in for \theta the angle around the origin where you’d like the point to sit. However, this requires knowing the current angle the point sits at, which adds another layer of computation. I
got this proof from Khan Academy that gives the rotated point in relation to its original position and the desired angle of rotation.
From there, you can apply this formula to every point, and depending on the axis you’re rotating about in space, we just swap the coordinates in the equation to the ones that change, such as the z-coordinate.
For example, to rotate about the X-axis instead of the Z-axis, you’d do the following.
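That derivation ends in the standard 2D rotation formula: x' = x cos θ − y sin θ and y' = x sin θ + y cos θ. A minimal Python sketch of the axis-swapping idea (function names are my own, not from any framework):

```python
import math

def rotate_z(p, theta):
    # Rotate p = (x, y, z) by theta radians about the Z-axis: x and y change.
    x, y, z = p
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta),
            z)

def rotate_x(p, theta):
    # Same formula with the coordinates swapped: y and z change, x is fixed.
    x, y, z = p
    return (x,
            y * math.cos(theta) - z * math.sin(theta),
            y * math.sin(theta) + z * math.cos(theta))

p = (1.0, 0.0, 0.0)
print(rotate_z(p, math.pi / 2))   # a quarter turn: roughly (0, 1, 0)
```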
In the case of a framework like ThreeJS, you’d represent each object rotation as a Vector3 containing all three axes. You can set each individual rotation directly without doing any of the math yourself.
So far, we’ve been looking at 3D rotations as a manipulation of 3 axes of rotation, also known as Euler angles. However, this method of handling rotations is susceptible to 2 main problems.
• Gimbal Lock - When two axes of rotation line up, two of the rotation axes perform the same rotation causing a loss of a dimension of rotation. This is shown beautifully by GuerrillaCG
• Interpolation - It’s exceedingly difficult to accurately move smoothly between different rotations
At first, the idea of using a whole different number system just for rotations can be quite abstract, and visualizing it even more so. But once the fundamentals of how and why
this works so efficiently are understood, quaternions can be used very easily in software packages like Unity that handle the math calculations for you. A great explanation of the application can be found here, while a
more comprehensive look at the underlying math is given by the always fantastic 3Blue1Brown.
Visual Understanding in Software
Applied in Blender, you may notice that influencing a single quat value doesn’t rotate the mesh the way you expect. Instead, altering a lone quat value will only flip the mesh into different orientations, as
follows, with no rotations in between.
Smooth rotation starts happening when you influence multiple values at once. In essence, the orientations are mixing depending on the weight you give to each value. Therefore, when thinking about how
each quaternion value relates to its 3D rotation, you can think of a quaternion rotation as mixing together these four orientations in order to create the desired rotation. Of course, it’s not that
simple, as the way the mixing and rotating happens is a result of the underlying math, but I think this way of thinking offers a better visual and can aid in understanding what’s really happening.
For instance, say you start with the model facing towards us with the quaternion (0, 0, 0, 1) and we want to flip it 90deg about the X-axis. What we should really be thinking is “Which two
orientations can I mix to get what I want?”. You might notice in the above example that a quaternion of (1, 0, 0, 0) rotates 180deg about the X-axis. So, wouldn’t we want some rotation in
between these two? This is precisely the answer. (1, 0, 0, 1) mixes evenly the starting position and the 180deg rotation, resulting in a 90deg rotation.
For more complex rotations, you can imagine mixing in the other values to get further rotations. Here I add to Y and the mesh rotation mixes towards the upside-down, front facing position.
It turns out you can achieve every 3D rotation this way without any of the drawbacks of Euler rotations.
One thing to notice is that when mixing rotations, you can scale each value infinitely, and the dominant value will influence the mix the most. But to get a precise orientation, for example facing forwards,
it’s best to return to whole numbers and set unneeded rotations to 0 instead of battling out which rotation wins.
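The “mixing” intuition can be made concrete: summing two quaternions componentwise and renormalizing gives the in-between orientation. A small Python sketch using the (x, y, z, w) ordering from the examples above (the helper names are my own):

```python
import math

def normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def axis_angle(q):
    # Recover the rotation axis and angle (in degrees) from a quaternion (x, y, z, w).
    x, y, z, w = normalize(q)
    angle = 2 * math.acos(max(-1.0, min(1.0, w)))
    s = math.sqrt(max(0.0, 1.0 - w * w))
    axis = (1.0, 0.0, 0.0) if s < 1e-9 else (x / s, y / s, z / s)
    return axis, math.degrees(angle)

identity = (0, 0, 0, 1)    # facing forwards
flip_x = (1, 0, 0, 0)      # 180 degrees about X
mix = tuple(a + b for a, b in zip(identity, flip_x))   # even mix of the two

axis, deg = axis_angle(mix)
print(axis, deg)           # axis is roughly (1, 0, 0), angle roughly 90 degrees
```

Because scaling a quaternion doesn’t change the rotation it represents, only the relative weights of the four values matter, which is exactly the “dominant value wins” behaviour described above.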
So Why Should I Use It?
It comes down to whether and when you may encounter the drawbacks of using Euler angles. Euler angles are much easier to understand and manipulate mentally but come with the gimbal drawbacks and
interpolation issues that can become a larger problem in physics and animations.
Typically, physics won’t be an issue, as you can let game engines take care of that for you. For animations, it can be beneficial to use quaternions over Euler angles to circumvent undesirable
rotation paths. Otherwise, if you’re only performing 2D linear rotations, Euler angles handle this perfectly, and quaternions really only offer a mathematical and performance benefit.
In summary, Quaternions are superior in every way except for human readability. | {"url":"https://hylu.dev/posts/2023/old/dev/quaternion/","timestamp":"2024-11-07T14:21:24Z","content_type":"text/html","content_length":"27238","record_id":"<urn:uuid:4473438c-2f74-4ebf-8ac9-3289a6baf866>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00449.warc.gz"} |
Constraints on deep-Earth properties from space-geodetic data
Mathews, P. M. ; Shapiro, I. I. (1995) Constraints on deep-Earth properties from space-geodetic data Physics of The Earth and Planetary Interiors, 92 (1-2). pp. 99-107. ISSN 0031-9201
Full text not available from this repository.
Official URL: http://linkinghub.elsevier.com/retrieve/pii/003192...
Related URL: http://dx.doi.org/10.1016/0031-9201(95)03064-4
The amplitude of the Earth's nutation driven by a given component of the tidal potential is governed primarily by three parameters p[i] which are composites of a larger number of 'basic' Earth
parameters (ellipticities, compliances, moments of inertia, etc., of the Earth and its core regions). We obtain estimates of the p[i] by least-squares fitting of nutation amplitudes estimated from
very long baseline interferometry (VLBI) data to theoretical expressions based on an analytical formulation of nutation theory which incorporates the role of the solid inner core. We show how the
estimates obtained, as well as the overall fit, vary with the ellipticity assumed for the inner core, and examine how the results are affected when otherwise unmodelled effects of ocean tides and
mantle anelasticity are taken into account. Considering two anelasticity models, we find that the fit obtained with the use of one of them is noticeably worse than if the other is used or if no
anelasticity correction is made. Independent of the corrections applied, the χ^2 of the fit is found to be smallest if the ellipticity e[s] of the solid inner core is taken as about half that of the
Preliminary Reference Earth Model (PREM). Independent estimates of the ellipticity e[f] of the fluid core and other basic parameters on which the p[i] depend cannot be obtained from the estimates of
the p[i] alone. Nevertheless, with certain assumptions that are less restrictive than those hitherto employed in the literature, we find e[f] to be 5.0% higher than the PREM value if the best-fit
value is assigned to e[s], and 4.7% higher if e[s] = e[s(PREM)]; these values for e[f] are in accord with the estimate of Gwinn et al. (J. Geophys. Res., 91: 4755-4765, 1986), and correspond to a
nonhydrostatic flattening of about 465 m and 435 m, respectively, of the core-mantle boundary. Our parameter estimates have implications for the value of the static part k^0 of the second-degree Love
number k which seem to be hard to reconcile with information from other sources. Observational estimates of the amplitudes of the 18.6 year nutations are also found to be not satisfactorily matched
with theoretical expectations. A careful re-examination of data analysis and theory is needed to resolve these problems.
Item Type: Article
Source: Copyright of this article belongs to Elsevier Science.
ID Code: 20516
Deposited On: 20 Nov 2010 14:23
Last Modified: 06 Jun 2011 08:55
I’m an external lecturer at the Hamburg University of Technology. Besides several guest contributions to lectures of the Institute of Fluid Mechanics and Ship Theory (FDS, M-8), I’m teaching two
classes on numerical simulation and Lattice Boltzmann methods.
Modellierung und Simulation maritimer Systeme (Lehrauftrag, ab SoSe 2018)
Modeling and simulation of maritime systems
In the scope of this lecture, students learn to model and solve selected maritime problems with the help of numerical software and scripts. First, basic concepts of computational modeling are
explained, from the physical modeling and discretization to the implementation and actual numerical solution of the problem. Then, available tools for the implementation and solution process are
discussed, including high-level compiled programming languages on the one hand, and interpreted programming languages and computer algebra systems on the other hand. In the second half of the class,
selected maritime problems will be discussed and subsequently solved numerically by the students.
Lattice-Boltzmann-Methoden für die Simulation von Strömungen mit freien Oberflächen (Lehrauftrag, ab WiSe 2018/19)
Lattice-Boltzmann methods for the simulation of free surface flows
This lecture addresses Lattice Boltzmann methods for the simulation of free-surface flows. After an introduction to the basic concepts of kinetic methods (LGCAs, LBM, …), recent LBM extensions for the simulation of free-surface flows are discussed. Parallel to the lecture, selected maritime free-surface flow problems are to be solved numerically.
Further information can be found in Stud.IP.
Past experience
Previous activities include teaching in the fields of partial differential equations, numerical methods, and thermodynamics at Braunschweig University of Technology in the group of Prof. Krafczyk (2007-2010), and teaching in the fields of fluid mechanics, numerical methods, and high performance computing at Hamburg University of Technology in the group of Prof. Rung (2012-2017).
Digital Electronics – A Book of Videos
Introduction to Digital Electronics
This is very much a work in progress and is currently incomplete
This is a book of video chapters that describe digital electronics and analogue electronics concepts in order to provide an introduction to electronics. Every part of this course is firmly rooted in
practice, allowing for theoretical concepts to be practiced through the use of video tutorials and video supported laboratories.
We are very comfortable with the decimal number system – in part because we have ten fingers. Unfortunately, decimal is not a good system of numeration for digital computers and circuits, which are firmly based in binary – a system that works with two states, on and off. This video chapter looks at how we can bridge the gap between our decimal habits, the need to represent negative numbers, and the binary nature of digital circuits.
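The decimal-to-binary bridge described above can be sketched in a few lines of code. The following is an illustrative sketch (not part of the course materials; function names are my own) showing how a signed decimal number can be represented in, and recovered from, two's-complement binary — the convention most digital circuits use for negative numbers:

```python
def to_twos_complement(value, bits=8):
    # Represent a signed integer as a bits-wide two's-complement bit string.
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for this many bits")
    # Masking with 2^bits - 1 wraps negative values into the upper half.
    return format(value & ((1 << bits) - 1), f"0{bits}b")

def from_twos_complement(bit_string):
    # Recover the signed decimal value from a two's-complement bit string.
    bits = len(bit_string)
    n = int(bit_string, 2)
    return n - (1 << bits) if bit_string[0] == "1" else n
```

For example, `to_twos_complement(-5)` gives `"11111011"`, and `from_twos_complement("11111011")` recovers `-5`.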
Chapter 2: Boolean Algebra
Boolean algebra was developed in 1854 by George Boole (while he was working in University College Cork, Ireland). It is a branch of mathematical algebra where everything is evaluated as either true
or false (1 or 0). Little did he know how important his work would be to modern day digital computer systems where the condition of being on or off, true or false, 0 or 1 is so important to the ways
that we design computers and work with digital data. This video chapter looks at Boolean Algebra as a concept and how we can minimize logic expressions
Coming soon!
Chapter 3: Digital Logic Gates
Once we understand Boolean algebra we would like to apply it to build circuits that use gates such as AND, OR, NOT gates. In this chapter we are going to look at the basic principles of how digital
logic gates work and how we can apply them to build complex systems. We also look at logic minimization.
Link to the Chapter
Chapter 4: Combinational Logic
Chapter 5: Sequential Logic
The clock – 555 Timers
Shift Registers
Chapter 6: Digital Applications
Analogue to Digital Conversion
Further Reading:
From this point I would recommend you continue reading materials on different microcontrollers. I have pages on several microcontrollers. If you are getting started, your first stop should be the Arduino, but if you are looking for a bit of a challenge have a look at the others:
• The Arduino
• AVR Programming
The difference between Markup and Margin (and how to calculate it on the fly)
Calculating profit margin is one of the most critical skills you’ll need as a business owner. Unfortunately, most independent marketers don’t have a clue what it is, confuse it with markup, and even if they know it, can’t calculate it quickly when needed. So I’m going to solve all that for you today!
#1 Why’s margin so important?
Well, profit margin (often referred to simply as “margin”) is a number that can almost instantly gauge the potential of success for your business. Low-margin businesses RARELY succeed in the long term unless they are extremely systemized. It is very, very difficult to run a business over the long haul when it has low margins because the slightest things that go wrong will end up gobbling profit like an animal.
One of the most useful things you can use margin for is for pricing. As marketers, we’re often selling customized solutions rather than commodities, so our prices aren’t always set like a retail
environment would be. Understanding margin means you can estimate what to charge quickly and accurately. It’s a tool that helps you run your business as a real business that’s designed for success.
Enough about what it can do, here’s what it is and what it’s not …
#2 What exactly is margin and how’s it different from markup?
Margin is the percentage of a selling price that’s profit. For example, if you sell something for $120 and it costs you $60, you made a 50% margin (because 50% of $120 is $60).
Now here’s where most people get tripped up … it’s easy to confuse margin with markup because they sound and seem so similar … but they’re entirely different. Markup is a percentage of cost, whereas margin is a percentage of selling price. I see people all the time base their pricing on markup and end up really screwing themselves over as a result, for example:
If something costs you $50 and you add a 50% markup, it means your sales price is $75. You can do it on your phone’s calculator by keying 50 + 50% and it will equal 75.
—-> But adding 50% to a price does not mean you make 50% profit. <—–
Your cost was $50 and your selling price was $75 (a $25 profit) …. which means your margin (the profit percentage) is only 33%, not 50%.
Where this becomes a huge problem is when someone thinks “I’ll bank $100k of profit this year if I sell $200k worth of business so long as I mark everything up 50%.”
To probably 80% or more marketers, that statement above makes complete sense, but it’s way off base because marking everything up 50% only gives you a 33.3% profit margin, not 50%. If you had $100k
in costs and marked up everything 50%, you’d only profit $50k …. HALF of what you thought you would!
Hopefully that all makes sense.
How Do You Calculate Margin Quickly?
The problem once you understand margin is that it’s not as easy to calculate on the fly. I carry an HP calculator around everywhere I go that has cost, price, and margin buttons but I can also do it
anywhere with a regular calculator like my phone’s too, using my rudimentary formula that I’ve been using forever without fail. All you have to do is:
1. Convert your desired profit margin to a decimal, then subtract it from one (i.e., find the number that makes it total a whole 100%). So say your desired margin is 30%: convert it to a decimal (.3), then subtract from 1 to get .7. For example, if you want to make sure you’re making 35% margin, you’d first turn it into .35 and then flip it to .65, since .35 and .65 together make a whole number of one (100%).
2. Take your cost, then divide it by the number you got in step 1. So if you need a 30% profit margin and your cost is $350, you would punch into your calculator: 350 divided by .7, and it will equal $500. Exactly 30% margin!
While this formula may seem a little bit confusing at first, it’s REALLY simple, especially when you know what margin you always have to get. If you know your business needs to make 40% profit, then
just remember that you always have to divide any cost by .6 in order to get the selling price. It’s that simple!
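If you’d rather script it than punch a calculator, here’s a small illustrative sketch of the two calculations above (the function names are my own, not from any standard library):

```python
def price_from_margin(cost, margin):
    # Selling price that yields the desired profit margin,
    # where margin is a fraction of the selling price (0.30 for 30%).
    return cost / (1 - margin)

def margin_from_markup(markup):
    # Profit margin implied by a given markup (both expressed as fractions).
    return markup / (1 + markup)
```

`price_from_margin(350, 0.30)` gives 500 (up to floating-point rounding), and `margin_from_markup(0.50)` gives one third — confirming that a 50% markup is only a 33.3% margin.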
Loss Reserving Models: Granular and Machine Learning Forms
School of Risk and Actuarial Studies, University of New South Wales, Kensington, NSW 2052, Australia
Submission received: 10 May 2019 / Revised: 12 June 2019 / Accepted: 18 June 2019 / Published: 19 July 2019
The purpose of this paper is to survey recent developments in granular models and machine learning models for loss reserving, and to compare the two families with a view to assessment of their
potential for future development. This is best understood against the context of the evolution of these models from their predecessors, and the early sections recount relevant archaeological
vignettes from the history of loss reserving. However, the larger part of the paper is concerned with the granular models and machine learning models. Their relative merits are discussed, as are the
factors governing the choice between them and the older, more primitive models. Concluding sections briefly consider the possible further development of these models in the future.
1. Background
The history of loss reserving models, spanning 50-odd years, displays a general trend toward ever-increasing complexity and data-intensity. The objectives of this development have been broadly
two-fold, both drawing on increased richness of the data. One objective has been increased predictive power; the other the enablement of modelling of the micro-mechanisms of the claim process (which
may also enhance predictive power).
Two families of model that have undergone development within this context over the past decade are granular models (GMs) and machine learning models (MLMs). The first of these, also known as
micro-models, is aimed at the second objective above. As the complexity of model structures increases, feature selection and parameter estimation also become more complex, time-consuming and
expensive. MLMs are sometimes seen as a suitable means of redress of these difficulties.
The purpose of the present paper is to survey the history of loss reserving models, and how that history has led to the most recent types of model, granular forms and machine learning forms. History
has not yet resolved whether one of these forms is superior to the other, or whether they can coexist in harmony. To some extent, therefore, they are currently in competition with each other.
Claim models may be developed for purposes other than loss reserving, with different imperatives. For example, pricing will require differentiation between individual risks, which loss reserving may
or may not require. Here, emphasis will be placed on loss reserving applications throughout. The performance of the models considered here might be evaluated differently in relation to other purposes.
Much of the historical development of loss reserving models has been, if not driven, at least enabled by the extraordinary increase in computing capacity that has occurred over the past 50 years or
so. This has encouraged the analysis of more extensive data and the inclusion of more features in models.
Some of the resulting innovations have been of obvious benefit. However, the advantages and disadvantages of each historical model innovation will be discussed here, and this will create a
perspective from which one may attempt to anticipate whether one of the two model forms is likely to gain ascendancy over the other in the near future.
Section 3, Section 4, Section 5 and Section 6 proceed through the archaeology of loss reserving models. Archaeological ages are identified, marking fundamental breaks in model evolution. These sections proceed roughly chronologically, discussing many of the families of models contained in the literature, identifying their relative advantages and disadvantages.
These historical notes sharpen one’s perspective on the issues associated with the more modern GMs and MLMs. They expose the strengths and weaknesses of earlier models, and place in focus those areas where the GMs and MLMs might have potential for improved methodology.
Against this background, Section 7 discusses the criteria for model selection, and Section 8 concentrates on the predictive efficiency of GMs and MLMs. Section 8 also discusses one or two aspects of MLMs that probably require resolution before those models will be widely accepted, and Section 9 and Section 10 draw the discussion of the previous sections together to reach some conclusions and conjectures about the future.
It is not the purpose of this paper to provide a summary of an existing methodology. This is provided by various texts. The real purpose is set out in the preceding paragraph, and the discussion of
historical model forms other than GMs or MLMs is introduced only to provide relevant context to the GM–MLM comparison.
Thus, a number of models will be introduced without, or with only brief, description. It is assumed that the reader is either familiar with the relevant detail or can obtain it from the cited references.
2. Notation and Terminology
This paper will consider numerous models, with differing data requirements. The present section will establish a relatively general data framework that will serve for most of these models. All but the most modern of these are covered to some degree in the standard loss reserving texts (Taylor 2000; Wüthrich and Merz 2008).
Claim data may relate to individual or aggregate claims, but will often be labelled by accident period and development period. These periods are not assumed to be years, but it is assumed that they
are all of equal duration, e.g., accident quarter and development quarter. Other cases are possible, e.g., accident year and development quarter, but add to the notational complexity while adding
little insight to the discussion.
Let $Y_{ij}^{[n]}$ denote claim payments in development period $j$ in respect of claim $n$, which was incurred in accident period $i$. The couple $(i, j)$ will be referred to as a cell. Also, define the total claim payments associated with the $(i, j)$ cell as
$Y_{ij} = \sum_n Y_{ij}^{[n]}$
Usually, $Y_{ij}^{[n]}$ will be considered to be a random variable, and a realization of it will be denoted by $y_{ij}^{[n]}$. Likewise, a realisation of $Y_{ij}$ will be denoted by $y_{ij}$. As a matter of notation, $E[Y_{ij}^{[n]}] = \mu_{ij}^{[n]}$, $\mathrm{Var}[Y_{ij}^{[n]}] = \sigma_{ij}^{2[n]}$ and $E[Y_{ij}] = \mu_{ij}$, $\mathrm{Var}[Y_{ij}] = \sigma_{ij}^2$.
Many simple claim models use the conventional data triangle, in which cells exist for $i = 1, 2, \dots, I$ and $j = 1, 2, \dots, I - i + 1$, which may be represented in triangular form with $i$ and $j$ indexing rows and columns, respectively, as illustrated in Figure 1.
It is useful to note at this early stage that the $(i, j)$ cell falls on the $(i + j - 1)$-th diagonal of the triangle. Payments occurring anywhere along this diagonal are made in the same calendar period, and accordingly diagonals are referred to as calendar periods or payment periods.
It will be useful, for some purposes, to define cumulative claim payments. For claim $n$, from accident period $i$, the cumulative claim payments to the end of development period $j$ are defined as
$X_{ij}^{[n]} = \sum_{k=1}^{j} Y_{ik}^{[n]}$
and the definition is extended in the obvious way to $X_{ij}$, the aggregate, for all claims incurred in accident period $i$, of cumulative claim payments to the end of development period $j$.
A quantity of interest later is the operational time (OT) at the finalisation of a claim. OT was introduced to the loss reserving literature by Reid (1978), and is discussed by Taylor and McGuire, among others.
Let the OT for claim $n$ be denoted $\tau^{[n]}$, defined as follows. Suppose that claim $n$ belongs to accident period $i[n]$, and that $\hat{N}_{i[n]}$ is an estimator of the number of claims incurred in this accident period. Let $F_{i[n]}^{[n]}$ denote the number of claims from the accident period finalised up to and including claim $n$. Then $\tau^{[n]} = F_{i[n]}^{[n]} / \hat{N}_{i[n]}$. In other words, $\tau^{[n]}$ is the proportion of claims from the same accident period as claim $n$ that are finalised up to and including claim $n$.
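As a concrete illustration of the definition of $\tau^{[n]}$, the following sketch (hypothetical data and function name, not from any cited source) computes operational times from a sequence of finalisations and estimated ultimate claim counts per accident period:

```python
def operational_times(finalisation_order, n_hat):
    # finalisation_order: accident period of each claim, in order of finalisation
    # n_hat: dict mapping accident period -> estimated number of claims incurred
    counts = {}
    taus = []
    for acc in finalisation_order:
        counts[acc] = counts.get(acc, 0) + 1          # F_{i[n]}^{[n]}
        taus.append(counts[acc] / n_hat[acc])         # tau^{[n]} = F / N_hat
    return taus
```

For example, with finalisations in accident periods [1, 1, 2, 1] and estimated counts {1: 4, 2: 2}, the operational times are 0.25, 0.5, 0.5 and 0.75.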
3. The Jurassic Period
The earliest models date generally from the late 1960s. These include the chain ladder and the separation method, and all their derivatives, such as Bornhuetter–Ferguson and Cape Cod. They are
discussed in
) and
Wüthrich and Merz
). The chain ladder’s provenance seems unclear, but it may well have preceded the 1960s.
These models were based on the notion of “development” of an aggregate of claims over time, i.e., the tendency for the total payments made in respect of those claims to increase over time in
accordance with some recognisable pattern. They therefore fall squarely in the class of phenomenological, or non-causal, models, in which attention is given to only mathematical patterns in the data
rather than the mechanics of the claim process or any causal factors.
Figure 2 is a slightly enhanced version of Figure 1, illustrating the workings of the chain ladder. It is assumed that a cell $(i, j)$ develops to its successor $(i, j+1)$ in accordance with the rule
$x_{i,j+1} = f_j x_{ij}$ (1)
where $f_j$ is a parameter describing development, and referred to as a development factor or an age-to-age factor.
Forecasts are made according to this rule. The trick is to estimate the factors $f_j$ from past experience, and in practice they were typically estimated by some kind of averaging of past observations on these factors, i.e., observed values of $x_{i,j+1} / x_{ij}$.
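As a concrete sketch of this rule (with a made-up cumulative triangle and my own function names), the development factors can be estimated as volume-weighted averages of the observed ratios, and the triangle completed by repeated application of the rule:

```python
def development_factors(cum):
    # cum: cumulative triangle as a list of rows; row i (0-indexed) has I - i entries
    I = len(cum)
    f = []
    for j in range(I - 1):
        # volume-weighted average of the observed ratios x_{i,j+1} / x_{ij}
        num = sum(row[j + 1] for row in cum if len(row) > j + 1)
        den = sum(row[j] for row in cum if len(row) > j + 1)
        f.append(num / den)
    return f

def complete_triangle(cum, f):
    # apply x_{i,j+1} = f_j * x_{ij} to fill in the unobserved future cells
    out = [list(row) for row in cum]
    for row in out:
        while len(row) < len(cum):
            row.append(row[-1] * f[len(row) - 1])
    return out
```

For the illustrative triangle [[100, 160, 200], [110, 176], [120]] this gives factors [1.6, 1.25] and completes the last row to [120, 192, 240].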
Models of this type are very simple, but their most interesting quality is that they are not, in fact, models at all. The original versions of these models were not stochastic, as is apparent from (1). Nor is (1) even true over the totality of past experience; it is not the case for a typical data set that $x_{i,j+1} / x_{ij} = f_j$, constant for fixed $j$ but varying $i$. So, the “models” in this group are actually algorithms rather than models in the true sense.
Of course, this fault has been rectified over the subsequent years, with (1) replaced by the genuine model defined by the following conditions:
(a) Each row of the triangle is a Markov chain.
(b) Distinct rows of the triangle are stochastically independent.
(c) $X_{i,j+1} \mid X_{ij}$ is subject to some defined distribution for which $E[X_{i,j+1} \mid X_{ij}] = f_j X_{ij}$, where $f_j$ is a parameter to be estimated from data.
A model of this sort was proposed by Mack (1993) (“the Mack model”), and much development of it has followed, though the earliest stochastic formulation of the chain ladder (Hachemeister and Stanard 1975) should also be credited.
While the formulation of a genuine chain ladder model was immensely useful, the fundamental structure of the model retains some shortcomings. First, in statistical parlance, it is a multiplicative row-and-column effect model. This is a very simple structure, in which all rows are just, in expectation, scalar multiples of one another. This lacks the complexity to match much real-life claim experience.
For example, a diagonal effect might be present, e.g., $E[X_{i,j+1} \mid X_{ij}] = f_j g_{i+j-1} X_{ij}$ in (c), where $g_{i+j-1}$ is a parameter specific to diagonal $i + j - 1$. A variable inflationary effect would appear in this form, but cannot be accommodated in the chain ladder model formulated immediately above. One can add such parameters to the model, but this will exacerbate the over-parameterisation problem described in the next main dot point.
Rates of claim settlement might vary from one row to another, causing variation in the factors $f_j$ (Fisher and Lange 1973). Again, one can include additional effects in the models, but at the expense of additional parameters.
Second, even with this simple form, it is at risk of over-parameterisation. The model of an $n \times n$ triangle and the associated forecast are characterised by $2(n-1)$ parameters, $f_1, \dots, f_{I-1}, X_{2,I-1}, X_{3,I-2}, \dots, X_{I,1}$ (actually, the last $n-1$ of these are conditioning observations, but they function essentially as parameters in the forecast). For example, a $10 \times 10$ triangle would contain 55 observations, would forecast 45 cells, and would require 18 parameters. Over-parameterisation can increase forecast error.
The Jurassic continued through the 1970s and into the 1980s, during which time it spawned mainly non-stochastic models. It did, however, produce some notably advanced creatures. Hachemeister and Stanard (1975) has already been mentioned. A stochastic model of claim development by curve fitting was also introduced in this period, and Reid (1978) constructed a stochastic model of individual claim development.
4. The Cretaceous Period—Seed-Bearing Organisms Appear
The so-called models of the Jurassic period assumed the general form:
$Y_{ij} = g(Y, \alpha)$ (2)
where $g$ is some real-valued function, $Y$ is the vector containing the entire set of observations as its components, and $\alpha$ is some set of parameters, either exogenous or estimated from $Y$. The case of the chain ladder represented by (1) is an example in which $\alpha = \{ f_1, \dots, f_{I-1} \}$.
Although (2) is not a stochastic model, it may be converted to one by the simple addition of a stochastic error:
$Y_{ij} = g(Y, \alpha) + \varepsilon_{ij}, \quad E[\varepsilon_{ij}] = 0$ (3)
Note that the Mack model of Section 3 is an example. In addition, with some limitation of $g$, (3) becomes a Generalised Linear Model (GLM) (McCullagh and Nelder 1989), specified as follows:
(a) $Y_{ij} \sim F(\mu_{ij}, \varphi / w_{ij})$, where $\mu_{ij} = E[Y_{ij}]$ and $F$ is a distribution contained in the exponential dispersion family (EDF) (Nelder and Wedderburn 1972) with dispersion parameter $\varphi$ and weights $w_{ij}$;
(b) $\mu_{ij}$ takes the parametric form
$h(\mu_{ij}) = x_{ij}^T \beta$ (4)
for some one–one function $h$ (called the link function), and where $x_{ij}$ is a vector of covariates associated with the $(i, j)$ cell and $\beta$ the corresponding parameter vector.
Again, the chain ladder provides an example. The choices
$h = \ln, \quad x_{i,j+1}^T = [0, \dots, 0, X_{ij}, 0, \dots, 0], \quad \beta = [f_1, f_2, \dots]^T$
yield the Mack model of Section 3.
The Cretaceous period consisted of such models. The history of actuarial GLMs is longer than is sometimes realised. Its chronology is as follows:
• in 1972, the concept was introduced by Nelder and Wedderburn;
• in 1977, modelling software called GLIM was introduced;
• in 1984, the Tweedie family of distributions was introduced (Tweedie 1984), simplifying the modelling software;
GLMs were not widely used in an actuarial context until 1990, and to some extent this reflected the limitations of earlier years’ computing power. It should be noted that their actuarial introduction to domestic lines pricing occurred as early as 1979 (Baxter et al. 1980). I might be permitted to add here a personal note that they were heavily used for loss reserving in all the consultancies with which I was associated from the early 1980s.
The range of GLM loss reserving applications has expanded considerably since 1990.
It is of note that chain ladder model structures may be regarded as special cases of the GLM. Indeed, these chain ladder formulations may be found in the literature (Taylor 2011; Taylor and McGuire 2016; Wüthrich and Merz 2008). However, these form a small subset of all GLM claim models.
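One well-known instance of the chain ladder as a GLM is the cross-classified model with $E[Y_{ij}] = a_i b_j$ and (over-dispersed) Poisson errors, whose maximum likelihood forecasts reproduce the chain ladder forecasts. The sketch below (made-up triangle, my own function names) fits that model by alternately solving the Poisson ML conditions — fitted row and column totals must match the observed ones — and checks the resulting reserve against the chain ladder:

```python
def chain_ladder_reserve(tri):
    # tri: incremental triangle (list of rows; row i, 0-indexed, has I - i entries)
    I = len(tri)
    cum = [[sum(row[: j + 1]) for j in range(len(row))] for row in tri]
    f = [sum(r[j + 1] for r in cum if len(r) > j + 1) /
         sum(r[j] for r in cum if len(r) > j + 1) for j in range(I - 1)]
    reserve = 0.0
    for row in cum:
        latest = row[-1]
        for j in range(len(row) - 1, I - 1):
            reserve += latest * (f[j] - 1)   # incremental forecast for cell (i, j+1)
            latest *= f[j]
    return reserve

def odp_cross_classified_reserve(tri, n_iter=2000):
    # Fit E[Y_ij] = a_i * b_j by alternately matching observed row and column
    # totals over the observed cells (the Poisson ML conditions).
    I = len(tri)
    row_tot = [sum(row) for row in tri]
    col_tot = [sum(tri[i][j] for i in range(I) if j < len(tri[i])) for j in range(I)]
    a, b = [1.0] * I, [1.0] * I
    for _ in range(n_iter):
        for i in range(I):
            a[i] = row_tot[i] / sum(b[j] for j in range(len(tri[i])))
        for j in range(I):
            b[j] = col_tot[j] / sum(a[i] for i in range(I) if j < len(tri[i]))
    # forecast of the unobserved future cells
    return sum(a[i] * b[j] for i in range(I) for j in range(len(tri[i]), I))
```

For the illustrative triangle [[100, 60, 40], [110, 66], [120]], both functions give a reserve of 164, illustrating the equivalence.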
5. The Paleogene—Increased Diversity in the Higher Forms
5.1. Adaptation of Species—Evolutionary Models
Recall the general form of GLM set out in Section 4, and note that the parameter vector $\beta$ is constant over time. It is possible, of course, that it might change.
Consider, for example, the Mack model of Section 3. One might wish to adopt such a model but with parameters $f_1, \dots, f_{I-1}$ varying stochastically from one row to the next. This type of modelling can be achieved by a simple extension of the GLM framework defined in Section 4. The resulting model is the following.
Evolutionary (or adaptive) GLM. For brevity here, adopt the notation $t = i + j - 1$, so that $t$ indexes payment period. Let the observations $Y_{ij}$ satisfy the conditions:
(a) $Y_{ij} \sim F(\mu_{ij}^{(t)}, \varphi / w_{ij})$, where $\mu_{ij}^{(t)} = E[Y_{ij}]$;
(b) $\mu_{ij}^{(t)}$ takes the parametric form $h(\mu_{ij}^{(t)}) = x_{ij}^T \beta^{(t)}$, where the parameter vector is now $\beta^{(t)}$ in payment period $t$; and
(c) the vector $\beta^{(t)}$ is now random: $\beta^{(t)} \sim P(\,\cdot\,; \beta^{(t-1)}, \psi)$, which is a distribution that is a natural conjugate of $F(\,\cdot\,,\,\cdot\,)$ with its own dispersion parameter $\psi$.
If this is compared with the static GLM of Section 4, then the earlier model can be seen to have been adjusted in the following ways:
• all parameters have been superscripted with a time index;
• the fundamental parameter vector $\beta^{(t)}$ is now randomised, with a prior distribution that is conditioned by $\beta^{(t-1)}$, the parameter vector at the preceding epoch.
The model parameters evolve thus through time, allowing the model to adapt to changing data trends. A specific example of the evolution (c) would be a stationary random walk in which $\beta^{(t)} = \beta^{(t-1)} + \eta^{(t)}$ with $\eta^{(t)} \sim P^*(\,\cdot\,; \psi)$, where $P^*$ is now a prior on $\eta^{(t)}$, subject to $E[\eta^{(t)}] = 0$.
The mathematics of evolutionary models were investigated by Taylor (2008), and numerical applications were given by Taylor and McGuire (2009). Their structure is reminiscent of the Kalman filter (Harvey 1989), but with the important difference that the Kalman filter is the evolutionary form of a general linear model, whereas the model described here is the evolutionary form of a GLM. Specifically:
• the Kalman filter requires a linear relation between observation means and parameter vectors, whereas the present model admits nonlinearity through the link function;
• the Kalman filter requires Gaussian error terms in respect of both observations and priors, whereas the present model admits non-Gaussian within the EDF.
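As a minimal illustration of the Gaussian special case — in which the evolutionary model collapses to the Kalman filter — the sketch below tracks a single development factor that follows a random walk, with made-up variances and observations (function name my own):

```python
def kalman_track(observations, q, r, m0, p0):
    # Scalar local-level (random walk) model:
    #   state:       f_t = f_{t-1} + eta_t,  eta_t ~ N(0, q)
    #   observation: d_t = f_t + eps_t,      eps_t ~ N(0, r)
    m, p = m0, p0
    path = []
    for d in observations:
        p = p + q                # predict: uncertainty grows by the walk variance
        k = p / (p + r)          # Kalman gain: credibility given to the new datum
        m = m + k * (d - m)      # update the estimate toward the observation
        p = (1 - k) * p          # posterior variance after the update
        path.append(m)
    return path
```

Each step is a credibility-weighted compromise between the prior estimate and the new observation; with a non-Gaussian EDF response, this exact closed-form recursion is lost, which is precisely the conjugacy difficulty noted in the text.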
One difficulty arising within this type of model is that the admission of nonlinearity often causes the posterior of $\beta^{(t)}$ in (c) to lie outside the family of conjugate priors of $F$ at the next step of the evolution, where $\beta^{(t)}$ evolves to $\beta^{(t+1)}$. This adds greatly to the complexity of its implementation.
The references cited earlier (Taylor 2008; Taylor and McGuire 2009) proceed by replacing the posterior for $\beta^{(t)}$, which forms the prior for $\beta^{(t+1)}$, by the natural conjugate of $F$ that has the same mean and covariance structure as the actual posterior. This is reported to work reasonably well, though with occasional stability problems in the conversion of iterates to parameter estimates.
5.2. Miniaturisation of Species—Parameter Reduction
The Jurassic models were lumbering, with overblown parameter sets. The GLMs of Section 4 were more efficient in limiting the size of the parameter set, but without much systematic attention to the issue. A more recent approach that brings the issue into focus is regularised regression, and specifically the least absolute shrinkage and selection operator (LASSO) model (Tibshirani 1996).
Consider the GLM defined by (a) and (4) in Section 4. At this point, let the data set be quite general in form. It might consist of the $Y_{ij}$, as in (3); or of the $Y_{ij}^{[n]}$ defined in Section 2; or, indeed, of any other observations capable of forming the independent variable of a GLM. Let this general data set be denoted by $Y$.
The parameter vector $\beta$ of the GLM is typically estimated by maximum likelihood estimation. For this purpose, the negative log-likelihood (actually, negative log-quasi-likelihood) of the observations $Y$ given $\beta$ is calculated. This is otherwise known as the scaled deviance, and will be denoted $D(Y; \beta)$. The estimate of $\beta$ is then
$\hat{\beta} = \operatorname{argmin}_{\beta} D(Y; \beta)$
Here, the deviance operates as a loss function. Consider the following extension of this loss function:
$L(Y; \beta) = D(Y; \beta) + \lambda \lVert \beta \rVert_p$ (5)
where $\lVert \cdot \rVert_p$ denotes the $L_p$ norm and $\lambda > 0$ is a constant, to be discussed further below.
This inclusion of the additional member in (5) converts the earlier GLM to a regularised GLM. In parallel with (4), its estimate of $\beta$ is
$\hat{\beta} = \operatorname{argmin}_{\beta} L(Y; \beta)$ (6)
Certain special cases of regularised regression are common in the literature, as summarised in Table 1.
The case of particular interest here is the lasso. According to (5), the loss function is
$L(Y; \beta) = D(Y; \beta) + \lambda \lVert \beta \rVert_1 = D(Y; \beta) + \lambda \sum_k | \beta_k |$ (7)
where the $\beta_k$ are the components of $\beta$.
A property of this form of loss function is that it can force many components of $\hat{\beta}$ to zero, rendering the lasso an effective tool for elimination of covariates from a large set of candidates. The term $\lambda \lVert \beta \rVert_1$ in (7) may be viewed as a penalty for every parameter included in the model. Evidently, the penalty increases with increasing $\lambda$, with the two extreme cases recognisable:
• $\lambda \to 0$: no elimination of covariates (ordinary GLM—see also Table 1);
• $\lambda \to \infty$: elimination of all covariates (trivial regression).
Thus, the application of the lasso may consist of defining a GLM in terms of a very large number of candidate covariates, and then calibrating by means of the lasso, which has the effect of selecting a subset of these candidates for inclusion in the model.
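A minimal sketch of the lasso mechanics — here with a squared-error loss in place of a full GLM deviance, and with made-up data — is coordinate descent with soft-thresholding, which is what produces the exact zeros:

```python
def soft_threshold(z, t):
    # shrink z toward zero by t; this is what sets coefficients exactly to zero
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_fit(X, y, lam, n_sweeps=200):
    # coordinate descent for 0.5 * ||y - X beta||^2 + lam * ||beta||_1
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_sweeps):
        for k in range(p):
            # residual with feature k excluded from the fit
            r = [y[i] - sum(X[i][m] * beta[m] for m in range(p) if m != k)
                 for i in range(n)]
            rho = sum(X[i][k] * r[i] for i in range(n))
            z = sum(X[i][k] ** 2 for i in range(n))
            beta[k] = soft_threshold(rho, lam) / z
    return beta
```

With a response generated exactly as twice the first covariate, a zero penalty recovers that coefficient and leaves the irrelevant one at zero, while a large penalty zeroes both — the two extremes noted above.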
The prediction accuracy of any model produced by the lasso is evaluated by cross-validation, which consists of the following steps:
(a) Randomly delete one $n$-th of the data set, as a test sample;
(b) Fit the model to the remainder of the data set (the training set);
(c) Generate fitted values for the test sample;
(d) Compute a defined measure of error (e.g., the sum of squared differences) between the test sample and the values fitted to it;
(e) Repeat steps (a) to (d) a large number of times, and take the average of the error measures, calling this the cross-validation error (CV error).
The process just described pre-supposes a data set sufficiently large for dissection into a training set and a test sample. Small claim triangles (e.g., a 10 × 10 triangle contains only 55
observations) are not adapted to this. So, cross-validation is a model performance measure suited to large data sets, such as are analysed by GMs and MLMs.
One possible form of calibration (e.g., McGuire et al.) proceeds as follows. A sequence of models is examined with increasing $\lambda$, and therefore with the number of covariates decreasing. The models with small $\lambda$ tend to be over-parameterised, leading to poor predictive performance; those with large $\lambda$ tend to be under-parameterised, again leading to poor predictive performance. The optimal model is chosen to minimise CV error.
It is evident that, by the nature of this calibration, the lasso will be expected to lead to high forecast efficiency.
Figure 3
provides a numerical example of the variation of CV error with the number of parameters used to model a particular data set.
The lasso is a relatively recent addition to the actuarial literature, but a number of applications have already been made. Li et al. and Venter and Şahın used it to model mortality. Gao and Meng constructed a loss reserving lasso, modelling a 10 × 10 aggregate claim triangle and using a model broadly related to the chain ladder. McGuire et al. also constructed a loss reserving lasso, but modelling a large data set of individual claims containing a number of complex data features, some of which will be described in Section 6.
5.3. Granular (or Micro-) Models
Granular models, sometimes referred to as micro-models, are not especially well-defined. The general idea is that they endeavour to extend modelling into some of the detail that underlies the
aggregate data in a claim triangle. For example, a granular model may endeavour to model individual claims in terms of the detail of the claim process.
One such individual claim model has already been mentioned. The early statistical case estimation models used in industry were also granular. See, for example, Taylor and Campbell for a model of workers compensation claims in which claimants move between "active" and "incapacitated" states, receiving benefits for incapacity and other associated benefits, such as medical expenses.
The history of granular models is generally regarded as having commenced with two early papers, whose authors represented individual claims by a model that tracked a claim process through a sequence of key dates, namely accident date, notification date, partial payment date, …, partial payment date, final payment date, and closure date. The process is a marked process in the sense that each payment date is tagged with a payment amount (or mark).
Distinction is sometimes made between aggregate and granular models, but it is debatable. The literature contains models with more extensive data inputs than just claim payment triangles. For example, the payment triangle might be supplemented by a claim count triangle, as in the Payments per Claim Incurred model, or in the Double Chain Ladder of Miranda et al.
These models certainly use more extensive data than a simple claim amount triangle, but the data are still aggregated. It is more appropriate to regard claim models as forming a spectrum that varies
from a small amount of conditioning data at one end (e.g., a chain ladder) to a very large amount at the other (e.g., the individual claim models of Pigeon, Antonio and Denuit).
6. The Anthropocene—Intelligent Beings Intervene
6.1. Artificial Neural Networks in General
By implication, the present section will be concerned with the application of machine learning (ML) to loss reserving. Once again, the classification of specific models as MLMs or not may be
ambiguous. If ML is regarded as algorithmic investigation of patterns and structure in data with minimal human intervention, then the lasso of
Section 5.2
might be regarded as an MLM.
There are other contenders, such as regression trees, random forests, support vector machines, and clustering (Wüthrich and Buser 2017), but the form of ML that has found greatest application to loss reserving is the artificial neural network (ANN), and this section will concentrate on these.
Just a brief word on the architecture of a (feed-forward) ANN, since it will be relevant to the discussion in
Section 8.3
. Using standard notation, let the ANN input be a vector $x$. Suppose there are $L - 1$ ($\geq 1$) hidden layers of neurons, each layer a vector, with values denoted by $h^{[1]}, \ldots, h^{[L-1]}$; a vector output layer, with a value denoted by $h^{[L]}$; and a vector prediction $\hat{y}$ of some target quantity $y$. Let the components of $h^{[\ell]}$ be denoted by $h_j^{[\ell]}$. The relevant computational relations are
$h_j^{[\ell]} = g^{[\ell]}(z_j^{[\ell]}), \quad \ell = 1, 2, \ldots, L$ (8)
$z^{[\ell]} = (w^{[\ell]})^T h^{[\ell - 1]} + b^{[\ell]}, \quad \ell = 1, 2, \ldots, L, \text{ with the convention } h^{[0]} = x$ (9)
where $z^{[\ell]}$ is a vector with components $z_j^{[\ell]}$, the $g^{[\ell]}$ are prescribed activation functions, the $h_j^{[\ell]}$ are called activations, $w^{[\ell]}$ is a vector of weights, and $b^{[\ell]}$ is a vector of biases. The weights and biases are selected by the ANN to maximise the accuracy of the prediction.
The hidden layers need not be of equal length. The activation functions will usually be nonlinear.
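Relations (8) and (9) amount to the following forward pass, sketched here in plain Python with invented weights (ReLU hidden activation and an identity output activation are my illustrative choices; the text prescribes no particular $g^{[\ell]}$):

```python
def relu(v):
    """A common choice of hidden-layer activation g."""
    return [max(0.0, z) for z in v]

def identity(v):
    """Output-layer activation."""
    return v

def matvec_T(W, h):
    """(w)^T h for a weight array W stored as one row per input component."""
    return [sum(W[i][j] * h[i] for i in range(len(h))) for j in range(len(W[0]))]

def forward(x, weights, biases, activations):
    """h[0] = x; z[l] = (w[l])^T h[l-1] + b[l]; h[l] = g[l](z[l])."""
    h = x
    for W, b, g in zip(weights, biases, activations):
        z = [zj + bj for zj, bj in zip(matvec_T(W, h), b)]
        h = g(z)
    return h  # h[L], the output layer

# Tiny illustrative network: 2 inputs -> 3 hidden neurons -> 1 output.
weights = [
    [[1.0, -1.0, 0.5], [0.0, 1.0, -0.5]],  # w[1], 2 x 3
    [[1.0], [1.0], [1.0]],                 # w[2], 3 x 1
]
biases = [[0.0, 0.0, 0.0], [0.5]]
y_hat = forward([1.0, 2.0], weights, biases, [relu, identity])
```

The hidden layers here have length 3 and the output length 1, illustrating that layer lengths need not be equal.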
An early application of an ANN was given in a study that modelled an earlier version of the data set used by McGuire et al. in Section 5.2. This consisted of a unit record file in respect of about 60,000 Auto Bodily Injury finalised claims, each tagged with its accident quarter, development quarter of finalisation, calendar quarter of finalisation, OT at finalisation and season of finalisation (quarter).
Prior GLM analysis of the data set over an extended period had been carried out by Taylor and McGuire, as described in Section 4, and they found that claim costs were affected in a complex manner by the factors listed there. The ANN was able to identify these effects. For example, it identified:
• an accident quarter effect corresponding to the legislative change that occurred in the midst of the data; and
• SI that varied with both finalisation quarter and OT.
Although the ANN and GLM produced similar models, the ANN’s goodness-of-fit was somewhat superior to that of the GLM.
Interest in and experimentation with ANNs has accelerated in recent years.
Harej et al. reported on an International Actuarial Association Working Group on individual claim development with machine learning. Their model was a somewhat "under-powered" ANN that assumed separate chain ladder models for paid and incurred costs, respectively, for individual claims, and simply estimated the age-to-age factors.
However, since both paid and incurred amounts were included as input information in both models, they managed to differentiate age-to-age factors for different claims, e.g., claims with small amounts
paid but large amounts incurred showed higher development of payments.
A follow-up study, with a similar restriction of ANN form, namely pre-supposed chain ladder structure, was published subsequently. Jamal et al. carried out reserving with a deep learning ANN, i.e., with multiple hidden layers. In this case, no model structure was pre-supposed. The ANN was applied to 200 claim triangles (50 insurers, each with four lines of business) from the data set of Meyers and Shi, and its results were compared with those generated by five other models, including the chain ladder and several others.
The ANN out-performed all contenders most of the time and, in other cases, was only slightly inferior to them. This is an encouraging demonstration of the power of the ANN, but the small triangles of
aggregate data do not exploit the potential of the ANN, which can be expected to perform well on large data sets that conceal complex structures.
6.2. The Interpretability Problem
GMs and MLMs can greatly improve modelling power in cases of data containing complex patterns. GMs can delve deeply into the data and provide valuable detail of the claim process. Their formulation
can, however, be subject to great, even insurmountable, difficulties. MLMs, on the other hand, for the most part provide little understanding, but may be able to bypass the difficulties encountered
by GMs. They may also be cost-effective in shifting modelling effort from the actuary to the algorithm (e.g., lasso).
MLMs’ greatest obstacle to useful implementation is the interpretability problem. Some recent applications of ANNs have sought to address this. For example,
Vaughan et al. introduce their explainable neural network (xNN), in which the ANN architecture (8) to (10) is restricted in such a way that
$\hat{y} = \mu + \sum_{k=1}^{K} \gamma_k f_k(\beta_k^T x)$
for scalar constants $\mu$ and $\gamma_1, \ldots, \gamma_K$, vector constants $\beta_1, \ldots, \beta_K$, and real-valued functions $f_k$.
This formulation is an attempt to bring known structure to the prediction $\hat{y}$. It is similar to the use of basis functions in the lasso implementation of McGuire et al. The use of xNNs is as yet in its infancy but offers promise.
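The restricted architecture can be sketched directly from the displayed formula (an illustration with invented constants and ridge functions, not taken from Vaughan et al.):

```python
import math

def xnn_predict(x, mu, gammas, betas, fs):
    """Additive-ridge prediction: y_hat = mu + sum_k gamma_k * f_k(beta_k^T x)."""
    y_hat = mu
    for gamma, beta, f in zip(gammas, betas, fs):
        proj = sum(b * xi for b, xi in zip(beta, x))  # the projection beta_k^T x
        y_hat += gamma * f(proj)
    return y_hat

# Hypothetical two-ridge model: a linear trend plus a saturating nonlinearity.
mu = 1.0
gammas = [2.0, 0.5]
betas = [[1.0, 0.0], [0.0, 1.0]]
fs = [lambda t: t, lambda t: math.tanh(t)]

y_hat = xnn_predict([3.0, 0.0], mu, gammas, betas, fs)
```

Each ridge function $f_k$ acts on a single projection of the covariates, which is what makes the fitted structure inspectable, unlike a free-form ANN.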
7. Model Assessment
The assessment of a specific loss reserving model needs to consider two main factors:
• the model’s predictive efficiency; and
• its fitness for purpose.
7.1. Prediction Error
Let $R$ denote the quantum of total liability represented by the loss reserve, and $\hat{R}$ the statistical estimate of it. Both quantities are viewed as random variables, and the forecast error is $R - \hat{R}$, also a random variable.
Loss reserving requires some knowledge of the statistical properties of $\hat{R}$. Obviously, the mean $E[\hat{R}]$ is required as the central estimate. Depending on the purpose of the reserving exercise, one may also require certain quantiles of $\hat{R}$ for the establishment of risk margins and/or capital margins, but an important statistic will be the estimate of forecast error.
One such estimate is the mean square error of prediction (MSEP), defined as
$MSEP[R - \hat{R}] = E[(R - \hat{R})^2].$ (11)
The smaller the MSEP, the greater the predictive efficiency of $\hat{R}$, so a reasonable choice of model would often be that which minimises the MSEP (maximises prediction efficiency). As long as one is not concerned with quantiles other than moderate, e.g., 75%, this conclusion will hold. If there is a major focus on extreme quantiles, e.g., 99%, the criterion for model selection might shift to the tail properties of the distribution of $\hat{R}$.
It may often be assumed that $\hat{R}$ is unbiased, i.e., $E[R - \hat{R}] = 0$, but (11) may remain a reasonable measure of forecast error in the absence of this condition.
The structure of MSEP is discussed at some length in Taylor (2000, sec. 6.6) and Taylor and McGuire (2016, chp. 4). Suffice to say here that it consists of three additive components, identified as:
• parameter error;
• process error; and
• model error.
As discussed in the cited references, model error is often problematic and, for the purpose of the present subsection, MSEP will be taken to be the sum of just parameter and process errors.
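This decomposition can be illustrated by simulation (a sketch with invented numbers: an unbiased estimator whose error is independent of the liability's own randomness, so that the MSEP is just process variance plus parameter variance):

```python
import random

rng = random.Random(1)
m = 100.0        # true mean liability
proc_sd = 5.0    # process error: randomness in the liability R itself
par_sd = 3.0     # parameter error: randomness in the estimate R_hat

n = 200_000
sq_sum = 0.0
for _ in range(n):
    R = rng.gauss(m, proc_sd)       # realised liability
    R_hat = rng.gauss(m, par_sd)    # unbiased estimate subject to parameter error
    sq_sum += (R - R_hat) ** 2
msep = sq_sum / n                    # empirical E[(R - R_hat)^2]

# With independent, unbiased errors: MSEP ~ proc_sd^2 + par_sd^2 = 25 + 9 = 34.
```

Introducing a nonzero bias, or a misspecified model, would add the remaining (model error) component that the simulation deliberately omits.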
In one or two cases, MSEP may be obtained analytically, most notably in the case of the Mack model, for which it is set out in detail in the literature. The MSEP of a GLM forecast may be approximated by the delta method, discussed in Taylor and McGuire (2016, sec. 5.2). However, generally, for non-approximative estimates, two methods are available.
7.2. Fitness for Purpose
In certain circumstances, forecasts of ultimate claim cost may be required at an individual level. Suppose, for example, a self-insurer adopts a system of devolving claim cost to cost centres, but
has not the wherewithal to formulate physical estimates of those costs. Then, a GM or MLM at the level of individual claims will be required.
If a loss reserving model is required not only for the simple purpose of entering a loss reserve in a corporate account, but also to provide some understanding of the claims experience that might be
helpful to operations, then a more elaborate model than the simplest, such as chain ladder, would be justified.
Such considerations will determine the subset of all available models that are fit for purpose. Within this subset, one would, in principle, still usually choose that with the maximum predictive efficiency.
8. Predictive Efficiency
The purpose of the present section is to consider the predictive efficiency of GMs and MLMs. It will be helpful to preface this with a discussion of cascaded models.
8.1. Cascaded Models
A cascaded model consists of a number of sub-models with the output of at least one of these providing input to another. An example is the Payments per Claim Finalized model discussed in the literature. This consists of three sub-models, as follows:
• claim notification counts;
• claim finalisation counts; and
• claim finalisation amounts.
The sub-models are configured as in Figure 4.
By contrast, the chain ladder consists of just a single model of claim amounts.
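For concreteness, the single-model chain ladder can be sketched as follows (volume-weighted age-to-age factors applied to an invented 3 × 3 cumulative triangle; the text does not prescribe this particular estimator of the factors):

```python
def chain_ladder(triangle):
    """Complete a cumulative run-off triangle using volume-weighted
    age-to-age (development) factors."""
    n = len(triangle)
    factors = []
    for j in range(n - 1):
        # Rows for which both development periods j and j+1 are observed.
        rows = range(n - 1 - j)
        factors.append(sum(triangle[i][j + 1] for i in rows) /
                       sum(triangle[i][j] for i in rows))
    # Project the unobserved lower-right cells of the triangle.
    full = [row[:] for row in triangle]
    for i in range(1, n):
        for j in range(n - i, n):
            full[i].append(full[i][j - 1] * factors[j - 1])
    return full, factors

# Invented 3 x 3 cumulative paid-claims triangle (rows: accident periods).
tri = [
    [100.0, 150.0, 165.0],
    [110.0, 165.0],
    [120.0],
]
full, factors = chain_ladder(tri)
```

The outstanding liability for each accident period is then the projected ultimate (the last cell of each completed row) less the latest observed cumulative amount.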
It is evident that increasing the number of sub-models within a model must add to the number of parameters, and it is well-known that, although too few parameters will lead to a poor model due to
bias in forecasts, an increase in the number of parameters beyond a certain threshold will lead to poor predictive efficiency (over-parameterisation).
A cascaded model of $n$ sub-models would typically generate less biased forecasts than one of $n − 1$ sub-models. However, the increased number of parameters might degrade predictive efficiency to
the point where the more parsimonious model, even with its increased bias, is to be preferred.
It follows that the addition of a further sub-model will be justified only if the bias arising from its exclusion is sufficiently severe. This is illustrated in the empirical study by Taylor and Xu of many triangles from the data set of Meyers and Shi. They find that many of them are consistent with the assumptions of the chain ladder, in which case that model out-performs more elaborate cascaded models. However, there are also cases in which the chain ladder is a poor representation of the data, calling for a more elaborate model. In such cases, the cascaded models produce the superior performance.
8.2. Granular Models
The discussion of
Section 8.1
perhaps sounds a cautionary note in relation to GMs. These are, by their nature, cascaded, e.g., a sub-model for the notification process, a sub-model for the partial payment process, etc. They may,
in fact, be very elaborate, in which case the possibility of over-parameterisation becomes a concern.
A salutary remark in the consideration of GMs is that the (aggregate) chain ladder has minimum variance for over-dispersed Poisson observations (
Taylor 2011
). So, regardless of how one expands the scope of the input data (e.g., more precise accident and notification dates, individual claim data, etc.), the forecast of future claim counts will not be
improved as long as the chain ladder assumptions are valid.
The GM literature is rather bereft of demonstration that a GM has out-performed less elaborate contenders. It is true that Huang et al. make this claim in relation to the data considered by them. However, a closer inspection reveals that their GM is essentially none other than the Payments per Claim Finalized model discussed in Section 8.1.
The model posits individual claim data, and generates individual claim loss reserves. However, the parameters controlling these individual reserves are not individual-claim-specific. So, the model
appears to lie somewhere between an individual claim model and an aggregate model.
This does not appear to be a case of a GM producing predictive efficiency superior to that of an aggregate model. Rather, it is a case of a cascaded model producing efficiency superior to that of
uncascaded models.
There is one other major characteristic of GMs that requires consideration. A couple of examples illustrate.
Example 1.
Recall Antonio and Plat (2014), whose model is of the type mentioned in Section 5.3, tracing individual claims through the process of occurrence, notification, partial payments and closure. Claim
payments occur according to a distribution of delays from notification but, conditional on these, the severities of individual payments in respect of an individual claim are equi-distributed and
stochastically independent.
In some lines of business, perhaps most but especially in Liability lines, this assumption will not withstand scrutiny. The payments of a medium-to-large claim typically tend to resemble the
following profile: a series of relatively small payments (fees for incident reports, preliminary medical expenses), a payment of dominant size (settlement of agreed liability), followed possibly by a
smaller final payment (completion of legal expenses).
Consequently, if a large payment (say $500 K) is made, the probability of another of anywhere near the same magnitude is remote. In other words, the model requires recognition of dependency between the sizes of an individual claim's successive payments.
Example 2.
(From Taylor et al. (2008)). Consider a GM of development of case estimates over time. Suppose an estimate of ultimate liability in respect of an individual claim increases 10-fold, from $5 K to $50
K, over a particular period. Then, typically, the probability of a further 10-fold increase, from $50 K to $500 K, in the next period will be low.
The reason is that the first increase signifies the emergence of information critical to the quantum of the claim, and it is unusual that further information of the same importance would emerge
separately in the following period. Again, the random variables describing the development of a claim cannot be assumed to be stochastically independent.
Taylor et al. suggest an estimation procedure that allows for any such dependency without the need for its explicit measurement.
The essential point to emerge from this discussion is that the detail of a claim process usually involves a number of intricate dependencies. One ignores these at one’s peril, but taking account of
them may well be problematic, since it opens the way to a hideously complex model with many dependency parameters. This, in turn, raises the spectre of over-parameterisation, and its attendant
degradation of predictive efficiency, not to mention possible difficulty in the estimation of the dependency parameters.
This by no means condemns GMs, but it appears to me that the jury is still out on them; they have yet to prove their case.
8.3. Artificial Neural Networks
ANNs are effective tools for taking account of obscure or complex data structures. Recall the data set used by the ANN in Section 6, which had been previously modelled with a GLM. It is evident from the description of the results that the GLM would have required a number of interactions:
• for the legislative effect, interaction between accident quarter and OT;
• for SI, interaction between finalisation quarter and OT.
The seeking out of such effects in GLM modelling (feature selection) can be difficult, time-consuming and expensive. This point is made by McGuire et al. in favour of the lasso, which is intended to automate feature selection.
The ANN is an alternative form of automation. As can be seen from the model form set out in (8) to (10), no explicit feature selection is attempted. The modelling is essentially an exercise in nonlinear curve-fitting, the nonlinearity arising from the activation functions. The number of parameters in the model can be controlled by cross-validation, as described in Section 5.2.
To some extent ANNs provide a rejoinder to the dependency issues raised in
Section 8.2
. Identification of dependencies becomes a mere special case of feature selection, and is captured obscurely by (8) to (10).
On the other hand, the abstract curve-fitting nature of ANNs renders them dangerously susceptible to extrapolation errors. Consider SI, for example. In the forecast of a loss reserve, one needs to
make some assumption for the future. A GLM will have estimated past SI, and while this might not be blindly extrapolated into the future, it can provide valuable information, perhaps to be merged
with collateral information, leading to a reasoned forecast.
In the case of an ANN, any past SI will have been “modelled” in the sense that the model may include one or more functions that vary over calendar quarter, but these curves may interact with other
covariates, as mentioned above, and the extraction of all this information in an organised and comprehensible form may present difficulties.
The literature alludes to this issue.
All actuaries are familiar with text-book examples of curves (e.g., polynomials) that fit well to past data points, but produce wild extrapolations into the future. Blind extrapolation of ANNs can,
on occasion, produce such howlers. Suffice to say that care and, possibly, skill is required in their use for forecasting.
9. The Watchmaker and the Oracle
The tendency of GMs (watchmaking) is to increase the number of cascaded models (relative to aggregate models), first to individual claim modelling, then perhaps to individual transaction modelling, to dissect the available data in ever greater detail, to increase the number of model components and the complexity of their connections, and then assemble an integrated model from all the tiny pieces.
If this can be achieved, it will provide powerful understanding of the claim process in question. However, as indicated in
Section 8.2
, the process is fraught with difficulty. The final model may be over-simplified and over-parameterised, with unfavourable implications for predictive efficiency. In addition, the issue of modelling
complex stochastic dependencies may be difficult, or even impossible, to surmount.
One may even discover that all sub-models pass goodness-of-fit tests, and yet the integrated model, when assembled, does not. This can arise because of inappropriate connections between the
sub-models or overlooked dependencies.
An example of this can occur in the workers compensation framework mentioned in
Section 5.3
. One might successfully model persistence in the active state as a survival process, and persistence in the incapacitated state as a separate survival process, and then combine the two to forecast a
worker’s future incapacity experience.
However, the active survival intensities may not be independent of the worker's history. A worker who has recently recovered from incapacity may be less likely to return to it over the following few days than
a worker who has never been incapacitated. Failure to allow for this dependency (and possibly other similar ones) will lead to unrealistic forecasts of future experience.
The behaviour of the ANN is Oracle-like. It is presented with a question. It surveys the available information, taking account of all its complexities, and delivers an answer, with little trace of its reasoning.
It confers the benefit of bypassing many of the challenges of granular modelling, but the price to be paid for this is an opaque model. This is the interpretability problem. Individual data features
remain hidden within the model. They may also be sometimes poorly measured without the human assistance given to more structured models. For example, diagonal effects might be inaccurately measured,
but compensated for by measured, but actually nonexistent, row effects. Similar criticisms can be levelled at some other MLMs, e.g., lasso.
The ANN might be difficult to validate. Cross-validation might ensure a suitably small MSEP overall. However, if a poor fit is found in relation to some subset of the data, one’s recourse is unclear.
The abstract nature of the model does not lend itself easily to spot-correction.
10. Conclusions
Aggregate models have a long track record. They are demonstrably adequate in some situations, and dubious to unsuitable in others. Cases may easily be identified in which a model as simple as the
chain ladder works perfectly, and no other approach is likely to improve forecasting with respect to either bias or precision.
However, these simple models are characterised by very simple assumptions and, when a data set does not conform to these assumptions, the performance of the simple models may be seriously disrupted.
Archetypal deviations from the simple model structures are the existence of variable SI, structural breaks in the sequence of average claim sizes over accident periods, or variable claim settlement rates (see e.g., Section 4).
When disturbances of this sort occur, great flexibility in model structure may be required. For a few decades, GLMs have provided this (see
Section 4
). GLMs continue to be applicable and useful. However, the fitting of these models requires considerable time and skill, and is therefore laborious and costly.
One possible response to this is the use of regularised regression, and the lasso in particular (
Section 5.2
). This latter model may be viewed as a form of MLM in that it automates model selection. This retains all the advantages of a GLM’s flexibility, but with the reduced time and cost of calibration
flowing from automation, and also provides a powerful guard against over-parameterisation.
The GMs of
Section 5.3
are not a competitor of the GLM. Rather, they attempt to deconstruct the claim process into a number of components and model each of these. GLMs may well be used for the component modelling.
This approach may extract valuable information about the claim process that would otherwise be unavailable. However, as pointed out in
Section 8.2
, there will often be considerable difficulty in modelling some dependencies in the data, and failure to do so may be calamitous for predictive accuracy.
Most GMs are also cascaded models and, indeed, some are extreme cases of these.
Section 8.1
points out that the complexity of cascaded models, largely reflected in the number of sub-models, comes with a cost in terms of enlarged predictive error (MSEP). They are therefore useful only when
the failure to consider sub-models would cause the introduction of prediction bias worse than the increase in prediction error caused by their inclusion.
The increased computing power of recent years has enabled the recruitment of larger data sets, with a greater number of explanatory variables for loss reserving, or lower-level, such as individual
claim, data. This can create difficulties for GMs and GLMs. The greater volume of data may suggest greater model complexity. It may, for example, necessitate an increase in the number of sub-models
within a GLM.
If a manually constructed GLM were to be used, the challenges of model design would be increased. It is true, as noted above, that these are mitigated by the use of a lasso (or possibly other
regularisation), but not eliminated.
Automation of such a model requires a selection of the basis functions mentioned in Section 6.2. It is necessary that the choice allow for interactions of all orders to be recognised in the model. As the number of potential covariates in the model increases, the number of interactions can mount very rapidly, possibly to the point of unworkability. This will sometimes necessitate the selection of interaction basis functions by the modeller, at which point erosion of the benefits of automated model design begins.
ANNs endeavour to address this situation. Their very general structure (see (8) to (10)) renders them sufficiently flexible to fit a data set usually as well as a GLM, and to identify and model
dependencies in the data. They represent the ultimate in automation, since the user has little opportunity to intervene in feature selection.
However, this flexibility comes at a price. The output function of the ANN, from which the model values are fitted to data points, becomes abstract and inscrutable. While providing a forecast, the
ANN may provide the user with little or no understanding of the data. This can be dangerous, as the user may lack control over extrapolation into the future (outside the span of the data) required
for prediction.
The literature contains some recent attempts to improve on this situation with xNNs, which endeavour to provide some shape for the network's output function, and so render it physically meaningful.
For example, the output function may be expressed in terms of basis functions parallel to those used for a lasso. However, experience with this form of lasso indicates that effort may still be
required for interpretation of the model output expressed in this form.
In summary, the case is still to be made for both GMs and MLMs. Particular difficulties are embedded in GMs that may prove insurmountable. MLMs hold great promise but possibly require further
development if they are to be fully domesticated and realise their loss-reserving potential.
A tantalising prospect is the combination of GMs and ANNs to yield the best of both worlds. To the author’s knowledge, no such model has yet been formulated, but the vision might be the definition of
a cascaded GM with one or more ANNs used to fit the sub-models or the connections between them, or both.
This research was funded by the Australian Research Council’s Linkage Projects funding scheme, project number LP130100723.
Conflicts of Interest
The author declares no conflict of interest in the production of this research.
Table 1. Special cases of regularised regression.
$\lambda$    $p$    Special Case
0            -      GLM
>0           1      Lasso
>0           2      Ridge regression
© 2019 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://
Taylor, G. Loss Reserving Models: Granular and Machine Learning Forms. Risks 2019, 7, 82. https://doi.org/10.3390/risks7030082
How Do You Find the Greatest Common Factor If There are No Common Factors? Instructional Video for 6th - 9th Grade
Curated and Reviewed by Lesson Planet
Use prime factorization to find the greatest common factor of three given values. Okay. But what if there doesn't seem to be any integer that is common in all three given values? Well, then, the
greatest common factor must be 1. Not sure what that means? Watch this video to clarify.
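The rule in the video can be checked in a few lines of Python (the example numbers are mine): the greatest common factor of several values falls back to 1 exactly when the values share no prime factor.

```python
from functools import reduce
from math import gcd

def greatest_common_factor(values):
    """Fold the pairwise GCD across a list of integers."""
    return reduce(gcd, values)

shared = greatest_common_factor([12, 18, 30])  # 2 and 3 divide all three
coprime = greatest_common_factor([4, 9, 25])   # 2^2, 3^2, 5^2 share nothing
```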
Experimental Design
In the book Encyclopedia of Behavioral Medicine, Turner describes experimental design from a clinical perspective "as an experiment with a series of observations made under conditions in which the research scientist controls the influences of interest" (Turner, 2013). The randomized trial is a classic example of experimental design: the data is randomized to one of two or more experimental groups, and then the significant differences between the groups are analyzed (Turner, 2013). The Completely Randomized Design, Randomized Block Design, and Factorial Design are three experimental designs, and analysis of variance (ANOVA) will tell us whether or not there is a difference between the means of one or more independent categorical groups. The ANOVA is an easily calculated method we can use in R.
Experimental design is important because it lets hypotheses be tested quickly, and a sound design gives a good starting point for confidence in the data before moving on to deeper analysis. With a well-designed experiment we can:
• Maximize insight into a data set
• Uncover underlying structure
• Extract important variables
• Detect outliers and anomalies
• Test assumptions
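The one-way ANOVA mentioned above reduces to a simple ratio: the between-group mean square divided by the within-group mean square. A minimal sketch (in Python rather than R, with made-up groups; the function name is illustrative):

```python
def one_way_anova_F(groups):
    """F statistic for one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    msb = ssb / (k - 1)                  # between-group mean square
    msw = ssw / (n - k)                  # within-group mean square
    return msb / msw

# hypothetical data: three groups of three observations each
print(one_way_anova_F([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # 3.0
```

A large F relative to the F distribution with (k − 1, n − k) degrees of freedom is evidence that the group means differ.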
NIST/SEMATECH. (n.d.). One-Way ANOVA. Retrieved from http://www.itl.nist.gov/div898/handbook/ppc/section2/ppc231.htm
Turner, R. J. (2013). Encyclopedia of Behavioral Medicine - Experimental Designs. https://doi.org/10.1007/978-1-4419-1005-9
Yau, C. (2013). R Tutorial with Bayesian Statistics Using OpenBUGS. Retrieved from: http://www.r-tutor.com/content/r-tutorial-ebook.
From New World Encyclopedia
In mathematics, the logarithm (or log) of a number x in base b is the power (n) to which the base b must be raised to obtain the number x. For example, the logarithm of 1000 to the base 10 is the
number 3, because 10 raised to the power of 3 is 1000. Or, the logarithm of 81 to the base 3 is 4, because 3 raised to the power of 4 is 81.
In general terms, if x = b^n, then the logarithm of x in base b is usually written as
${\displaystyle \log _{b}(x)=n.\,}$
(The base b must be positive and not equal to 1.)
A useful way of remembering this concept is by asking: "b to what power (n) equals x?" When x and b are restricted to positive real numbers, the logarithm is a unique real number.
Using one of the examples noted above, 3 raised to the power of 4 is usually written as
${\displaystyle 3^{4}=3\times 3\times 3\times 3=81\,}$
In logarithmic terms, one would write this as
${\displaystyle \log _{3}(81)=4\,}$
In words, the base-3 logarithm of 81 is 4; or the log base-3 of 81 is 4.
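The defining relationship can be checked directly (a small Python sketch; `math.log` takes the base as its optional second argument):

```python
import math

# log_b(x) = n exactly when b**n == x
assert 3 ** 4 == 81
print(math.log(81, 3))     # the base-3 logarithm of 81, very close to 4
print(math.log(1000, 10))  # the base-10 logarithm of 1000, very close to 3
```

The printed values may differ from the exact integers by a rounding error in the last decimal place, since the computation is done in floating point.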
The most widely used bases for logarithms are 10, the mathematical constant e (approximately equal to 2.71828), and 2. The term common logarithm is used when the base is 10; the term natural
logarithm is used when the base is e.
The method of logarithms simplifies certain calculations and is used in expressing various quantities in science. For example, before the advent of calculators and computers, the method of logarithms
was very useful for the advance of astronomy, and for navigation and surveying. Number sequences written on logarithmic scales continue to be used by scientists in various disciplines. Examples of
logarithmic scales include the pH scale, to measure acidity (or basicity) in chemistry; the Richter scale, to measure earthquake intensity; and the scale expressing the apparent magnitude of stars,
to indicate their brightness.
The inverse of the logarithmic function is called the antilogarithm function. It is written as antilog[b](n), and it means the same as ${\displaystyle b^{n}}$.
The method of logarithms was first publicly propounded in 1614, in a book titled Mirifici Logarithmorum Canonis Descriptio, by John Napier,^[1] Baron of Merchiston in Scotland. (Joost Bürgi,
independently discovered logarithms, but he did not publish his discovery until four years after Napier.)
This method contributed to the advance of science, and especially of astronomy, by making some difficult calculations possible. Prior to the advent of calculators and computers, it was used
constantly in surveying, navigation, and other branches of practical mathematics. It supplanted the more involved method of prosthaphaeresis, which relied on trigonometric identities as a quick
method of computing products. Besides their usefulness in computation, logarithms also fill an important place in higher theoretical mathematics.
At first, Napier called logarithms "artificial numbers" and antilogarithms "natural numbers." Later, he formed the word logarithm to mean a number that indicates a ratio: λόγος (logos) meaning
proportion, and ἀριθμός (arithmos) meaning number. Napier chose that because the difference of two logarithms determines the ratio of the numbers for which they stand, so that an arithmetic series of
logarithms corresponds to a geometric series of numbers. The term "antilogarithm" was introduced in the late seventeenth century and, while never used extensively in mathematics, persisted in
collections of tables until they fell into disuse.
Napier did not use a base as we now understand it, but his logarithms were, up to a scaling factor, effectively to base 1/e. For interpolation purposes and ease of calculation, it is useful to make the ratio r in the geometric series close to 1. Napier chose r = 1 − 10^−7 = 0.9999999 (Bürgi chose r = 1 + 10^−4 = 1.0001). Napier's original logarithms did not have log 1 = 0 but rather log 10^7 = 0. Thus if N is a number and L is its logarithm as calculated by Napier, N = 10^7(1 − 10^−7)^L. Since (1 − 10^−7)^(10^7) is approximately 1/e, this makes L/10^7 approximately equal to log[1/e] N/10^7.
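Napier's relation can be verified numerically (a Python sketch): taking L = 10^7 gives N/10^7 = (1 − 10^−7)^(10^7), which is approximately 1/e, so log[1/e](N/10^7) = −ln(N/10^7) is approximately 1 = L/10^7.

```python
import math

L = 10 ** 7
ratio = (1 - 1e-7) ** L    # N / 10**7 in Napier's relation, with L = 10**7
print(ratio)               # close to 1/e ≈ 0.3678794
print(-math.log(ratio))    # log base 1/e of the ratio; close to L / 10**7 = 1
```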
Tables of logarithms
Prior to the advent of computers and calculators, using logarithms meant using tables of logarithms, which had to be created manually. Base-10 logarithms are useful in computations when electronic
means are not available.
In 1617, Henry Briggs published the first installment of his own table of common logarithms, containing the logarithms of all integers below 1000 to eight decimal places. This he followed, in 1624,
with his Arithmetica Logarithmica, containing the logarithms of all integers from 1 to 20,000 and from 90,000 to 100,000 to fourteen places of decimals, together with a learned introduction, in which
the theory and use of logarithms were fully developed.
The interval from 20,000 to 90,000 was filled by Adriaan Vlacq, a Dutch mathematician; but in his table, which appeared in 1628, the logarithms were given to only ten places of decimals. Vlacq's table was later found to contain 603 errors, but "this cannot be regarded as a great number, when it is considered that the table was the result of an original calculation, and that more than 2,100,000 printed figures are liable to error."^[3] An edition of Vlacq's work, containing many corrections, was issued at Leipzig in 1794, under the title Thesaurus Logarithmorum Completus, by Jurij Vega.
François Callet's seven-place table (Paris, 1795), instead of stopping at 100,000, gave the eight-place logarithms of the numbers between 100,000 and 108,000, in order to diminish the errors of interpolation, which were greatest in the early part of the table; and this addition was generally included in seven-place tables. The only important published extension of Vlacq's table was made by Mr. Sang in 1871, whose table contained the seven-place logarithms of all numbers below 200,000.
Briggs and Vlacq also published original tables of the logarithms of the trigonometric functions.
Besides the tables mentioned above, a great collection, called Tables du Cadastre, was constructed under the direction of Gaspard de Prony, by an original computation, under the auspices of the
French republican government of the 1700s. This work, which contained the logarithms of all numbers up to 100,000 to nineteen places, and of the numbers between 100,000 and 200,000 to twenty-four
places, exists only in manuscript, "in seventeen enormous folios," at the Observatory of Paris. It was begun in 1792; and "the whole of the calculations, which to secure greater accuracy were
performed in duplicate, and the two manuscripts subsequently collated with care, were completed in the short space of two years."^[4] Cubic interpolation could be used to find the logarithm of any
number to a similar accuracy.
The logarithm as a function
The function log[b](x) depends on both b and x, but the term logarithm function (or logarithmic function) in standard usage refers to a function of the form log[b](x) in which the base b is fixed and
so the only argument is x. Thus there is one logarithm function for each value of the base b (which must be positive and must differ from 1). Viewed in this way, the base-b logarithm function is the
inverse function of the exponential function b^x. The word "logarithm" is often used to refer to a logarithm function itself as well as to particular values of this function.
Graphical interpretation
The natural logarithm of a is the area under the curve y = 1/x between the x values 1 and a.
For integers b and x > 1, the number log[b](x) is irrational (that is, not a quotient of two integers) if either b or x has a prime factor which the other does not. In certain cases this fact can be
proved very quickly: for example, if log[2]3 were rational, we would have log[2]3 = n/m for some positive integers n and m, thus implying 2^n = 3^m. But this last identity is impossible, since 2^n is
even and 3^m is odd. Much stronger results are known. See Lindemann–Weierstrass theorem.
Integer and non-integer exponents
If n is a positive integer, b^n signifies the product of n factors equal to b:
${\displaystyle \underbrace {b\times b\times \cdots \times b} _{n}.}$
However, if b is a positive real number not equal to 1, this definition can be extended to any real number n in a field (see exponentiation). Similarly, the logarithm function can be defined for any
positive real number. For each positive base b not equal to 1, there is one logarithm function and one exponential function, which are inverses of each other.
Logarithms can reduce multiplication operations to addition, division to subtraction, exponentiation to multiplication, and roots to division. Therefore, logarithms are useful for making lengthy
numerical operations easier to perform and, before the advent of electronic computers, they were widely used for this purpose in fields such as astronomy, engineering, navigation, and cartography.
They have important mathematical properties and are still widely used today.
The most widely used bases for logarithms are 10, the mathematical constant e ≈ 2.71828… and 2. When "log" is written without a base (b missing from log[b]), the intent can usually be determined from the context:
• Natural logarithm (log[e], ln, log, or Ln) in mathematical analysis
• Common logarithm (log[10] or simply log) in engineering and when logarithm tables are used to simplify hand calculations
• Binary logarithm (log[2]) in information theory and musical intervals
• Indefinite logarithm when the base is irrelevant, for example, in complexity theory when describing the asymptotic behavior of algorithms in big O notation.
To avoid confusion, it is best to specify the base if there is any chance of misinterpretation.
Other notations
The notation "ln(x)" invariably means log[e](x), that is, the natural logarithm of x, but the implied base for "log(x)" varies by discipline:
• Mathematicians generally understand both "ln(x)" and "log(x)" to mean log[e](x) and write "log[10](x)" when the base-10 logarithm of x is intended.
• Many engineers, biologists, astronomers, and some others write only "ln(x)" or "log[e](x)" when they mean the natural logarithm of x, and take "log(x)" to mean log[10](x) or, sometimes in the
context of computing, log[2](x).
• On most calculators, the LOG button is log[10](x) and LN is log[e](x).
• In most commonly used computer programming languages, including C, C++, Java, Fortran, Ruby, and BASIC, the "log" function returns the natural logarithm. The base-10 function, if it is available,
is generally "log10."
• Some people use Log(x) (capital L) to mean log[10](x), and use log(x) with a lowercase l to mean log[e](x).
• The notation Log(x) is also used by mathematicians to denote the principal branch of the (natural) logarithm function.
• A notation frequently used in some European countries is the notation ^blog(x) instead of log[b](x).
This chaos, historically, originates from the fact that the natural logarithm has nice mathematical properties (such as its derivative being 1/x, and having a simple definition), while the base 10
logarithms, or decimal logarithms, were more convenient for speeding calculations (back when they were used for that purpose). Thus, natural logarithms were only extensively used in fields like
calculus while decimal logarithms were widely used elsewhere.
As recently as 1984, Paul Halmos in his "automathography" I Want to Be a Mathematician heaped contempt on what he considered the childish "ln" notation, which he said no mathematician had ever used.
(The notation was in fact invented in 1893 by Irving Stringham, professor of mathematics at Berkeley.) As of 2005, many mathematicians have adopted the "ln" notation, but most use "log."
In computer science, the base 2 logarithm is sometimes written as lg(x) to avoid confusion. This usage was suggested by Edward Reingold and popularized by Donald Knuth. However, in Russian literature, the notation lg(x) is generally used for the base 10 logarithm, so even this usage is not without its perils.^[5] In German, lg(x) also denotes the base 10 logarithm, while sometimes ld(x) or lb(x) is used for the base 2 logarithm.^[2]
Change of base
While there are several useful identities, the most important for calculator use lets one find logarithms with bases other than those built into the calculator (usually log[e] and log[10]). To find a
logarithm with base b, using any other base k:
${\displaystyle \log _{b}(x)={\frac {\log _{k}(x)}{\log _{k}(b)}}.}$
Moreover, this result implies that all logarithm functions (whatever the base) are similar to each other. So to calculate the log with base 2 of the number 16 with your calculator:
${\displaystyle \log _{2}(16)={\frac {\log(16)}{\log(2)}}.}$
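In code, the change of base is just a division of two calls to the built-in natural logarithm (a Python sketch; the function name is illustrative):

```python
import math

def log_base(x, b):
    """Change of base: log_b(x) = log_k(x) / log_k(b) for any base k (here k = e)."""
    return math.log(x) / math.log(b)

print(log_base(16, 2))  # base-2 logarithm of 16; very close to 4
```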
Uses of logarithms
Logarithms are useful in solving equations in which exponents are unknown. They have simple derivatives, so they are often used in the solution of integrals. The logarithm is one of three closely
related functions. In the equation b^n = x, b can be determined with radicals, n with logarithms, and x with exponentials. See logarithmic identities for several rules governing the logarithm
functions. For a discussion of some additional aspects of logarithms see additional logarithm topics.
Science and engineering
Various quantities in science are expressed as logarithms of other quantities.
• The negative of the base-10 logarithm is used in chemistry, where it expresses the concentration of hydronium ions (H[3]O^+, the form H^+ takes in water), in the measure known as pH. The
concentration of hydronium ions in neutral water is 10^−7 mol/L at 25 °C, hence a pH of 7.
• The bel (symbol B) is a unit of measure that is the base-10 logarithm of ratios, such as power levels and voltage levels. It is mostly used in telecommunication, electronics, and acoustics. It is
used, in part, because the ear responds logarithmically to acoustic power. The Bel is named after telecommunications pioneer Alexander Graham Bell. The decibel (dB), equal to 0.1 bel, is more
commonly used. The neper is a similar unit which uses the natural logarithm of a ratio.
• The Richter scale measures earthquake intensity on a base-10 logarithmic scale.
• In spectrometry and optics, the absorbance unit used to measure optical density is equivalent to −1 B.
• In astronomy, the apparent magnitude measures the brightness of stars logarithmically, since the eye also responds logarithmically to brightness.
• In psychophysics, the Weber–Fechner law proposes a logarithmic relationship between stimulus and sensation.
• In computer science, logarithms often appear in bounds for computational complexity. For example, to sort N items using comparison can require time proportional to N log N.
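Several of the items above are one-line logarithms in practice (a Python sketch with illustrative values):

```python
import math

# pH: negative base-10 logarithm of the hydronium-ion concentration (mol/L)
print(-math.log10(1e-7))       # neutral water: pH close to 7

# decibels: ten times the base-10 logarithm of a power ratio
print(10 * math.log10(100.0))  # a 100-fold power increase is 20 dB
```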
Exponential functions
The natural exponential function exp(x), also written ${\displaystyle e^{x}}$ is defined as the inverse of the natural logarithm. It is positive for every real argument x.
The operation of "raising b to a power p" for positive arguments ${\displaystyle b}$ and all real exponents ${\displaystyle p}$ is defined by
${\displaystyle b^{p}=\exp({p\ln b}).\,}$
The antilogarithm function is another name for the inverse of the logarithmic function. It is written antilog[b](n) and means the same as ${\displaystyle b^{n}}$.
Easier computations
Logarithms switch the focus from normal numbers to exponents. As long as the same base is used, this makes certain operations easier:
Operation with numbers Operation with exponents Logarithmic identity
${\displaystyle \!\,ab}$ ${\displaystyle \!\,A+B}$ ${\displaystyle \!\,\log(ab)=\log(a)+\log(b)}$
${\displaystyle \!\,a/b}$ ${\displaystyle \!\,A-B}$ ${\displaystyle \!\,\log(a/b)=\log(a)-\log(b)}$
${\displaystyle \!\,a^{b}}$ ${\displaystyle \!\,Ab}$ ${\displaystyle \!\,\log(a^{b})=b\log(a)}$
${\displaystyle \!\,{\sqrt[{b}]{a}}}$ ${\displaystyle \!\,A/b}$ ${\displaystyle \!\,\log({\sqrt[{b}]{a}})={\frac {\log(a)}{b}}}$
These relations made such operations on two numbers much faster and the proper use of logarithms was an essential skill before multiplying calculators became available.
The ${\displaystyle \log(ab)=\log(a)+\log(b)}$ equation is fundamental (it implies effectively the other three relations in a field) because it describes an isomorphism between the additive group and
the multiplicative group of the field.
To multiply two numbers, one found the logarithms of both numbers on a table of common logarithms, added them, and then looked up the result in the table to find the product. This is faster than
multiplying them by hand, provided that more than two decimal figures are needed in the result. The table needed to get an accuracy of seven decimals could be fit in a big book, and the table for
nine decimals occupied a few shelves.
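The table-lookup procedure just described amounts to the following (a Python sketch, with `math.log10` standing in for the printed table of common logarithms):

```python
import math

a, b = 123.4, 56.78  # hypothetical factors

# "look up" the common logarithms, add them, then "look up" the antilogarithm
log_sum = math.log10(a) + math.log10(b)
product = 10 ** log_sum

print(product)  # agrees with a * b to within rounding error
print(a * b)    # direct multiplication, for comparison
```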
The discovery of logarithms just before Newton's era had an impact in the scientific world which can be compared with the invention of the computer in the twentieth century, because many calculations
which were too laborious became feasible.
When the chronometer was invented in the eighteenth century, logarithms allowed all calculations needed for astronomical navigation to be reduced to just additions, speeding the process by one or two orders of magnitude. A table of logarithms with five decimals, plus logarithms of trigonometric functions, was enough for most astronomical navigation calculations, and those tables fit in a small book.
To compute powers or roots of a number, the common logarithm of that number was looked up and multiplied or divided by the radix. Interpolation could be used for still higher precision. Slide rules
used logarithms to perform the same operations more rapidly, but with much less precision than using tables. Other tools for performing multiplications before the invention of the calculator include
Napier's bones and mechanical calculators: see history of computing hardware.
The derivative of the natural logarithm function is
${\displaystyle {\frac {d}{dx}}\ln(x)={\frac {1}{x}}.}$ (A proof is shown below.)
By applying the change-of-base rule, the derivative for other bases is
${\displaystyle {\frac {d}{dx}}\log _{b}(x)={\frac {d}{dx}}{\frac {\ln(x)}{\ln(b)}}={\frac {1}{x\ln(b)}}={\frac {\log _{b}(e)}{x}}.}$
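A quick numerical check of this derivative (a Python sketch using a central difference; the values of b, x, and h are arbitrary):

```python
import math

b, x, h = 10.0, 3.0, 1e-6

# central-difference approximation to d/dx log_b(x)
numeric = (math.log(x + h, b) - math.log(x - h, b)) / (2 * h)
exact = 1 / (x * math.log(b))

print(numeric, exact)  # the two values agree to many decimal places
```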
The antiderivative of the logarithm is
${\displaystyle \int \log _{b}(x)\,dx=x\log _{b}(x)-{\frac {x}{\ln(b)}}+C=x\log _{b}\left({\frac {x}{e}}\right)+C.}$
See also: table of limits of logarithmic functions, list of integrals of logarithmic functions.
Proof of the derivative
The derivative of the natural logarithm function is easily found via the inverse function rule. Since the inverse of the logarithm function is the exponential function, we have ${\displaystyle \ln '
(x)={\frac {1}{\exp '(\ln(x))}}}$. Since the derivative of the exponential function is itself, the right side of the equation simplifies to ${\displaystyle {\frac {1}{\exp(\ln(x))}}={\frac {1}{x}}}$,
the exponential canceling out the logarithm.
When considering computers, the usual case is that the argument and result of the ${\displaystyle \ln(x)}$ function are some form of floating point data type. Note that most computer languages use ${\displaystyle \log(x)}$ for this function, while ${\displaystyle \log _{10}(x)}$ is typically denoted log10(x).
As the argument is floating point, it can be useful to consider the following:
A floating point value x is represented by a mantissa m and exponent n to form
${\displaystyle x=m2^{n}.\,}$
${\displaystyle \ln(x)=\ln(m)+n\ln(2).\,}$
Thus, instead of computing ${\displaystyle \ln(x)}$ we compute ${\displaystyle \ln(m)}$ for some m such that ${\displaystyle 1\leq m<2}$. Having ${\displaystyle m}$ in this range means that the value ${\displaystyle u={\frac {m-1}{m+1}}}$ is always in the range ${\displaystyle 0\leq u<{\frac {1}{3}}}$. Some machines use the mantissa in the range ${\displaystyle 0.5\leq m<1}$, and in that case the value for u will be in the range ${\displaystyle -{\frac {1}{3}}<u\leq 0}$. In either case, the series ${\displaystyle \ln(m)=2\left(u+{\frac {u^{3}}{3}}+{\frac {u^{5}}{5}}+\cdots \right)}$ converges quickly and is easy to compute.
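A sketch of this scheme in Python (`math.frexp` yields the second mantissa convention, 0.5 ≤ m < 1; the function name and tolerance are illustrative):

```python
import math

def ln(x):
    """Natural log via mantissa/exponent decomposition: x = m * 2**n."""
    m, n = math.frexp(x)            # 0.5 <= m < 1 for positive x
    u = (m - 1.0) / (m + 1.0)       # u lies in (-1/3, 0], so the series converges fast
    # ln(m) = 2 * (u + u**3/3 + u**5/5 + ...)
    term, total, k = u, 0.0, 1
    while abs(term) > 1e-17:
        total += term / k
        term *= u * u
        k += 2
    return 2.0 * total + n * math.log(2.0)

print(ln(10.0), math.log(10.0))  # the two values agree to many decimal places
```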
The ordinary logarithm of positive reals generalizes to negative and complex arguments, though it is a multivalued function that needs a branch cut terminating at the branch point at 0 to make an
ordinary function or principal branch. The logarithm (to base e) of a complex number z is the complex number ln(|z|) + i arg(z), where |z| is the modulus of z, arg(z) is the argument, and i is the
imaginary unit.
The discrete logarithm is a related notion in the theory of finite groups. It involves solving the equation b^n = x, where b and x are elements of the group, and n is an integer specifying a power in
the group operation. For some finite groups, it is believed that the discrete logarithm is very hard to calculate, whereas discrete exponentials are quite easy. This asymmetry has applications in
public key cryptography.
The logarithm of a matrix is the inverse of the matrix exponential.
A double logarithm, ${\displaystyle \ln(\ln(x))}$, is the inverse function of the double exponential function. A super-logarithm or hyper-logarithm is the inverse function of the super-exponential
function. The super-logarithm of x grows even more slowly than the double logarithm for large x.
For each positive b not equal to 1, the function log[b](x) is an isomorphism from the group of positive real numbers under multiplication to the group of (all) real numbers under addition. They are the only such isomorphisms that are continuous. The logarithm function can be extended to a Haar measure in the topological group of positive real numbers under multiplication.
• Great Britain Institute of Actuaries, Journal of the Institute of Actuaries and Assurance Magazine, 1873, Vol. 17. Forgotten Books, 2018. 978-0366971244
• Knight, Charles. The English Cyclopaedia, Vol. 4. Forgotten Books, 2012.
• Peirce, James Mills. The Elements of Logarithms with an Explanation of the Three and Four Place Tables of Logarithmic and Trigonometric Functions. Andesite Press, 2015. ISBN 978-1297495465
• Przeworska-Rolewicz, D. Logarithms and Antilogarithms: An Algebraic Analysis Approach with an appendix by Zbigniew Binderman (Mathematics and Its Applications). New York, NY: Springer, 1998. ISBN
• REA. Math Made Nice & Easy #2: Percentages, Exponents, Radicals, Logarithms and Algebra Basics (Math Made Nice & Easy). Piscataway, NJ: Research & Education Association, 1999. ISBN 0878912010.
• Ryffel, Henry, Robert Green, Holbrook Horton, and Edward Messal. Mathematics at Work. New York, NY: Industrial Press, Inc., 1999. ISBN 0831130830.
External links
All links retrieved November 3, 2022.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons
CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia
contributors and the selfless volunteer contributors of the Wikimedia Foundation. To cite this article, click here for a list of acceptable citing formats. The history of earlier contributions by wikipedians, and the history of this article since it was imported to New World Encyclopedia, are accessible to researchers.
Note: Some restrictions may apply to use of individual images which are separately licensed.
The distribution of financial returns made simple | Portfolio Probe | Generate random portfolios. Fund management software by Burns Statistics
Why returns have a stable distribution
As “A tale of two returns” points out, the log return of a long period of time is the sum of the log returns of the shorter periods within the long period.
The log return over a year is the sum of the daily log returns in the year. The log return over an hour is the sum of the minute log returns within the hour.
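This additivity is easy to verify (a Python sketch with made-up prices): the log return over the whole period equals the sum of the per-period log returns.

```python
import math

prices = [100.0, 101.5, 99.8, 103.2, 104.0]  # hypothetical daily closes

daily_log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]
total_from_parts = sum(daily_log_returns)
total_direct = math.log(prices[-1] / prices[0])

print(total_from_parts, total_direct)  # the two totals agree
```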
Returns have some distribution. The set of distributions where their sums still have the same distribution are called stable distributions. So log returns have a stable distribution.
Why returns have a normal distribution
There is a special distribution within the class of stable distributions called the normal distribution. It is the only one that has a finite variance.
The Central Limit Theorem tells us conditions when the distribution of a sum is normal (to a good approximation). Actually there is more than one Central Limit Theorem. Figure 1 shows the single
theorem idea, while Figure 2 shows the actual case.
Figure 1: Sketch of The Central Limit Theorem.
Figure 2: Sketch of The Central Limit Theorems.
The commonality of the assumptions in Figure 2 can be summarized as:
• No giants among the peons
• Not too much dependence between the peons
Returns obey these criteria. Therefore log returns have a normal distribution.
That applies to individual assets. The returns of an index — which is a weighted average of a number of assets — have even more reason to be normal. Even if the returns of the individual assets were not normal, the averaging over assets would mean that the index returns would be normal.
The previous two sections are exquisitely reasoned. So imagine my disappointment when people try to say that returns are not normally distributed.
Figure 3 shows the daily log returns of the S&P 500 over about six decades in the form of a normal QQplot. Based on the variability of the middle half of the data, the most extreme return we should have seen during that period is about 3%. We're not far off.
Figure 3: Normal QQplot of 6 decades of daily S&P 500 log returns.

A reason that the distribution would not be exactly normal is volatility clustering — that some periods have higher volatility than others. We can look at the residuals from a garch model to remove that effect. This is done in Figure 4.
Figure 4: Normal QQplot of 6 decades of daily GARCH residuals from S&P 500 log returns.

Now that's better, isn't it? If you think that volatility clustering negates the logic that leads us to the stable distribution conclusion, then, well, uh … think something else.
We don’t have to rely on pictures, we can do a statistical test. Jarque-Bera tests normality by looking at the skewness and kurtosis. The p-value for the test on the garch residuals is bigger than
10 to the minus 2800. Somewhat smaller than the probability of winning a lottery — but people win lotteries all the time.
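For reference, the Jarque-Bera statistic itself is simple to compute from the sample skewness S and kurtosis K (a self-contained Python sketch, not the post's R code; the toy data are illustrative):

```python
def jarque_bera(x):
    """Jarque-Bera statistic: n/6 * (S**2 + (K - 3)**2 / 4),
    where S is sample skewness and K is sample kurtosis."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    S = m3 / m2 ** 1.5
    K = m4 / m2 ** 2
    return n / 6 * (S ** 2 + (K - 3) ** 2 / 4)

print(jarque_bera([-1, 0, 1]))  # symmetric data, so the skewness term is 0
```

Under the null hypothesis of normality, the statistic is approximately chi-squared with 2 degrees of freedom.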
If we were to consider the hypothetical possibility that returns are not normally distributed, how might that happen?
One way would be if returns across periods did depend on each other. Perhaps if enough people did momentum trades in which they buy because the price has gone up, and sell because the price has gone down.
But of course markets don’t work like that. People trade based on real information (and they evaluate that information without regard to how others value it). News arrives and the market quickly
adjusts to that new information.
If data seems to contradict logic, the only civilized thing to do is to stick to logic.
and if you think that you can tell a bigger tale
I swear to God you’d have to tell a lie…
from “Swordfishtrombone” by Tom Waits
Appendix R
A simple version of Figure 3 is:
qqnorm(spxret)
qqline(spxret, col="gold")
garch estimate
The GARCH(1,1) model was estimated via:
library(tseries)  # garch() comes from the tseries package
spxgar <- garch(spxret)
The QQplot for the residuals was created with:
qqnorm(spxgar$resid[-1])
qqline(spxgar$resid[-1], col="gold")
The square brackets with negative 1 inside them removes the first element of the residual vector (because it is a missing value).
normality test
The tseries package also has the jarque.bera.test function.
> jarque.bera.test(spxgar$resid[-1])
Jarque Bera Test
data: spxgar$resid[-1]
X-squared = 12721.34, df = 2, p-value < 2.2e-16
We get the real p-value (as opposed to the wimpy cop-out of being less than 2.2e-16) with a slight bit of computing:
> pchisq(12721.34, df=2, lower.tail=FALSE, log.p=TRUE) / log(10)
[1] -2762.404
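The shortcut behind that R call: a chi-squared distribution with 2 degrees of freedom has survival function exp(−x/2), so the base-10 log of the p-value is simply −x/(2 ln 10). Checked here in Python rather than R:

```python
import math

x = 12721.34                       # the Jarque-Bera statistic from the test above
log10_p = -x / (2 * math.log(10))  # chi-squared with df = 2: p = exp(-x/2)
print(log10_p)                     # about -2762.4, matching the R output
```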
26 Responses to The distribution of financial returns made simple
1. I can hear Nassim Taleb foaming at the mouth…
2. “But of course markets don’t work like that. People trade based on real information (and they evaluate that information without regard to how others value it). News arrives and the market quickly adjusts to that new information.” It seems as though your livelihood has never depended upon management or performance fees. Talk to portfolio managers and ask them if they ever made a trade or allocation that was contrary to “real information” but instead was based on avoiding making too big of a mistake (career convexity). You may believe that you have the science of investing mastered, but humans – their motivations and incentives – are much too nuanced to fit your rigid ideas. Note that you use 6 decades of data; but also note that no portfolio managers use 6 decades of data – they use current data, are affected to a great deal by the latest, best conventional wisdom, and often get things wrong. Your insistence on normal distributions does not align with the disaster of Long Term Capital Management. They failed because they modeled risk on a normal distribution and woefully underestimated exposure when precision was required to properly manage their leverage.
3. Pat, I am sorry to say that, but this post is very shallow. I would recommend you to read Nassim Taleb for deeper thoughts. After all, there’s a reason why he is popular nowadays.
4. Hi,
there is very good work on the distribution of returns by Chris Rogers and Liang Zhang (“Understanding asset returns”): http://www.statslab.cam.ac.uk/~chris/papers/UAR.pdf
I am developing some R code on that but I am not satisfied with the calibration yet.
Best regards,
□ Richard,
Thanks for the link, and good luck with the R code. I can believe that it’s a bit tricky to get it to behave.
5. It is not clear that returns at different time scales should have the same distribution. It could merely be the case that at high frequencies, returns are fat-tailed but have finite variance, and
then at low frequencies they are approximately normal. (In fact, barring conditional heteroskedasticity issues, monthly returns do look pretty normal.)
Also, technically speaking, since there is a non-zero probability that a stock’s price may go to zero, there is a non-zero probability that the log return will be (negative) infinity, and thus
the variance (and the mean, for that matter) are not well defined. Typically we sweep that possibility under the rug either by survivorship-biasing our sample of stocks or just getting lucky.
□ Dear Shabby,
Thanks for the comments.
Yes, I think the major problem with the stable distribution idea is that time is infinitely divisible in this context. Once you’re at a time scale on a par with tick data, the distribution is
discrete (essentially) with a lot of mass at zero. So it doesn’t look much at all like lower frequency distributions.
I’ll dispute that monthly returns look normal. I just did the Jarque-Bera test on 777 20-day returns of the S&P 500 (the same history as the daily data in the post). It gets a statistic of
about 400, which gives a p-value of about 10 to the minus 87. So not as extreme as the daily data but still many orders of magnitude smaller than winning a lottery.
I find it intriguing that skewness is much more in evidence at the monthly scale.
Good point about the final value. I think we inherently think about the distribution conditional on survival. That seems reasonably rational to me.
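The Jarque-Bera comparison in this reply is easy to reproduce on simulated data. Below is an illustrative pure-Python sketch (the statistic's standard formula, not the blog's actual code, and a synthetic fat-tailed sample rather than S&P data):

```python
import random

def jarque_bera(x):
    """Jarque-Bera statistic: JB = n/6 * (S^2 + K^2/4),
    where S is sample skewness and K is excess kurtosis."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    S = m3 / m2 ** 1.5          # skewness
    K = m4 / m2 ** 2 - 3.0      # excess kurtosis
    return n / 6.0 * (S ** 2 + K ** 2 / 4.0)

random.seed(1)
normal = [random.gauss(0, 1) for _ in range(777)]  # 777 points, as in the comment
# A crude fat-tailed sample: occasional high-volatility draws (a normal mixture)
fat = [random.gauss(0, 5 if random.random() < 0.05 else 1) for _ in range(777)]

# The fat-tailed sample gets a dramatically larger statistic:
print(jarque_bera(normal) < jarque_bera(fat))  # True
```

Under the null of normality JB is roughly chi-squared with 2 degrees of freedom, so values in the hundreds (let alone the ~400 quoted above) correspond to astronomically small p-values.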
6. Pingback: A practical introduction to garch modeling | (R news & tutorials) | modelingexcellent.info
7. This specific blog post, “The distribution of financial
returns made simple | Portfolio Probe | Generate random portfolios.
Fund management software by Burns Statistics” was remarkable.
I’m generating out a duplicate to present to my personal close friends.
Thank you,Galen
8. Pingback: Popular posts 2012 May | Portfolio Probe | Generate random portfolios. Fund management software by Burns Statistics
9. Pingback: The US market will absolutely positively definitely go up in 2012 | Portfolio Probe | Generate random portfolios. Fund management software by Burns Statistics
10. Pingback: A slice of S&P 500 kurtosis history | Portfolio Probe | Generate random portfolios. Fund management software by Burns Statistics
11. Pingback: A practical introduction to garch modeling | Portfolio Probe | Generate random portfolios. Fund management software by Burns Statistics
12. Pingback: Popular posts 2012 July | Portfolio Probe | Generate random portfolios. Fund management software by Burns Statistics
13. Pingback: garch and long tails | Portfolio Probe | Generate random portfolios. Fund management software by Burns Statistics
14. Pingback: Popular posts 2012 June | Portfolio Probe | Generate random portfolios. Fund management software by Burns Statistics
15. I am still confused about the returns distribution. Please share any link for a basic understanding of this topic, or could anyone help me with it?
16. Under the assumption of (relatively) constant market factors, I also suggest that returns are normally distributed. You can find a further neat explanation here: http://www.insight-things.com/
17. This is a clear case of showing proof for a thing, and then saying it proves the opposite. I have a hard time believing it is not a troll post.
18. “One way would be if returns across periods did depend on each other. Perhaps if enough people did momentum trades in which they buy because the price has gone up, and sell because the price has
gone down.” Yea, it’s called herding, and, at times of a market crash, it’s called “panic.”
“But of course markets don’t work like that. People trade based on real information (and they evaluate that information without regard to how others value it). News arrives and the market quickly
adjusts to that new information.” Everybody values news differently, sometimes jumping to the opposite conclusion. People are rational in the sense that what they do is aimed at winning, not at being
right about their decisions.
“If data seems to contradict logic, the only civilized thing to do is to stick to logic.” Please tell me it’s a joke. I know it’s a joke. Right?
19. You may find updated information about stock returns distributions in 3-minute video published here:
20. if log return has normal distribution, then return has normal distribution too?
□ No, if log returns have a normal distribution, then simple returns would have a lognormal distribution.
21. If P2/P1 = ratio of prices at end and beginning of period, then
return = (P2/P1) – 1 = (P2 – P1)/P1
which is usually < 0.10 for daily and monthly periods.
And since log(1 + x) ≈ x for |x| < 0.10,
log(P2/P1) = log[1 + (P2/P1 – 1)] ≈ P2/P1 – 1 = return.
I.e. logs of price ratios ≈ simple returns.
So there is little need to take logs of the price ratios. Using simple returns is sufficient and amounts to nearly the same thing.
Simple returns having a normal distribution is then approximately the same thing as price ratios having a lognormal distribution.
And this creates the same technical theoretical problem of a simple return of –100% or worse having a non-zero probability. This should not be a problem because it is a model valid only within
certain limits. Any results involving returns of –100% or worse are simply nil.
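A quick numerical check of this approximation (Python, with made-up prices):

```python
import math

p1, p2 = 100.0, 103.0              # prices at the start and end of a period
simple = p2 / p1 - 1               # simple return: 0.03
log_ret = math.log(p2 / p1)        # log return: about 0.02956

# For small moves the two nearly coincide, since log(1 + x) ~ x:
print(round(simple - log_ret, 4))  # 0.0004

# For large moves they diverge noticeably, so the approximation has limits:
big_simple = 150.0 / 100.0 - 1     # 0.5
big_log = math.log(150.0 / 100.0)  # about 0.405
print(round(big_simple - big_log, 3))  # 0.095
```

So for typical daily or monthly moves the two measures are nearly interchangeable, exactly as the comment argues, while for large moves (crashes, multi-period returns) the distinction matters.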
This entry was posted in Quant finance, R language and tagged central limit theorem, log return, normally distributed returns, stable distribution. Bookmark the permalink.
Puzzle! 8 pieces, over 2500 ways to solve. How is it possible
There is a puzzle with 8 pieces and one single board to put them.
The board is labeled with months and dates.
Your goal is to place these 8 pieces in a way that leaves only 2 squares open.
Those 2 squares indicate the date you have solved.
You can place them on the board and solve every possible date, meaning 366 solves in total.
Also, for one date there are multiple ways to solve it. I have even found 44 solves for 25 Jun!
How is that possible? Would it work with different pieces or a different board? Is there some way to prove that is solvable or that it is not? Is there any topology approach we could use?
I have spent hours on this, as you can see, and solved all 366 dates, so I can say for sure that it works, but HOW!? http://prntscr.com/21lumhp
You can currently find the app only on the Play Store; it is called 365 Days Puzzle.
• Your offer is way too low dude!
• I don't even understand the question. Is it possible to prove that all positions are solvable? Well, yes, you just did by solving all positions.
Relating word and tree automata
In the automata-theoretic approach to verification, we translate specifications to automata. Complexity considerations motivate the distinction between different types of automata. Already in the
60s, it was known that deterministic Büchi word automata are less expressive than nondeterministic Büchi word automata. The proof is easy and can be stated in a few lines. In the late 60s, Rabin
proved that Büchi tree automata are less expressive than Rabin tree automata. This proof is much harder. In this work we relate the expressiveness gap between deterministic and nondeterministic Büchi
word automata and the expressiveness gap between Büchi and Rabin tree automata. We consider tree automata that recognize derived languages. For a word language L, the derived language of L, denoted
LΔ, is the set of all trees all of whose paths are in L. Since often we want to specify that all the computations of the program satisfy some property, the interest in derived languages is clear. Our
main result shows that L is recognizable by a nondeterministic Büchi word automaton but not by a deterministic Büchi word automaton iff LΔ is recognizable by a Rabin tree automaton and not by a Büchi
tree automaton. Our result provides a simple explanation for the expressiveness gap between Büchi and Rabin tree automata. Since the gap between deterministic and nondeterministic Büchi word automata
is well understood, our result also provides a characterization of derived languages that can be recognized by Büchi tree automata. Finally, it also provides an exponential determinization of Büchi
tree automata that recognize derived languages.
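The derived-language definition can be illustrated concretely for finite trees. The sketch below (Python, with a toy encoding of my own; the paper itself concerns infinite trees and ω-regular word languages) just mirrors the "all paths are in L" condition:

```python
def paths(tree):
    """Enumerate root-to-leaf label sequences of a tree given as (label, [children])."""
    label, children = tree
    if not children:
        yield (label,)
    for child in children:
        for p in paths(child):
            yield (label,) + p

def in_derived_language(tree, word_lang):
    """True iff every path of the tree is a word in L (the L-delta condition)."""
    return all(word_lang(p) for p in paths(tree))

# L = words that never contain the label 'b' (a toy safety property)
L = lambda word: all(x != "b" for x in word)

t_good = ("a", [("a", []), ("c", [("a", [])])])  # paths: (a,a) and (a,c,a)
t_bad = ("a", [("b", []), ("c", [])])            # paths: (a,b) and (a,c)
print(in_derived_language(t_good, L), in_derived_language(t_bad, L))  # True False
```

In the paper's setting the quantification is over the infinitely many infinite paths of an infinite tree, which is exactly why tree automata, rather than a path-by-path check, are needed.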
Funders and funder numbers:
• Texas ATP: 003604-0058-2003
• National Science Foundation: CCR-9988322, CCR-0311326, CCR-0124077, IIS-9908435, IIS-9978135, ANI-0216467, EIA-0086264
• Bonfils-Stanton Foundation: 9800096
• Intel Corporation
• Expressive power
• Tree automata
• Word automata
Interference from two wide slits
We have analyzed what happens when waves from two point sources combine. At different points in space the result may be a strengthening of what each source would do by itself (constructive
interference) or a weakening of what each source would do by itself (destructive interference), depending on whether the two waves are in phase with each other or out of phase with each other (
Two-slit interference). We've also considered what happens if a single slit is wide enough that the interference from different parts of the slit have to be taken into account (Diffraction). This is
what the set-ups and patterns look like in the two cases:
Now our question is: What happens if we have two slits that are each wide enough so that the width of the slit has to be taken into account?
Thinking about it conceptually
If you think of what is happening physically in term of wavelets from the sources at the slits, it is fairly straightforward to see what's happening. If we are at a particular position, y, on the
screen up (if positive) or down (if negative) from the center line, we receive oscillating signals from each of the slits. These signals are the result of all the wavelets coming out of that slit
adding together. Now we have to add those two results together -- with an additional phase shift as a result of the extra path difference between the two slits. This will produce the interference
pattern on the left (the two-slit pattern) multiplied by the pattern on the right (the one-slit pattern). We are getting interference of the waves from the two slits, but the waves from each of the
slits have a non-uniform strength as y changes due to the one-slit pattern.
In particular, at the position on the screen where the one-slit pattern cancels (the "squash point"), the total interference pattern goes to zero as well. You are interfering waves from
two slits, but waves of 0 intensity -- so the total result is zero. The regular fringes produced by the two-slit pattern are "squashed" by the one-slit pattern.
Doing the math
These are a lot of ideas to keep in one's head at once. One of the things math is good for in physics is in showing how the conceptual ideas work, a step at a time and then letting you scan over it
to hold the entire picture in your head at once. Let's give it a try.
1. From our analysis of the interference produced by two very narrow slits, we found that the result as a function of $y$ (the position along the screen) was
$$I = A_0^2 \cos^2\bigg({\frac{2\pi ay}{\lambda L}\bigg)}$$
where $a$ is the separation of the slits, $\lambda$ is the wavelength, and $L$ is the distance to the screen, and $A_0$ is the amplitude of the oscillating light. The graph of this is shown in figure
A in the set of graphs below. It's an oscillation that peaks at $y=0$ (the midpoint of the screen, where the distance from each slit is the same) and oscillates with a constant amplitude.
2. From our analysis of the variation in the intensity of the light produced by a single slit, we found that the amplitude received at the screen from each slit is not constant, but rather is given by
$$A^2 = A_0^2 \bigg(\frac{\sin{(\pi d y / \lambda L)}}{\pi d y / \lambda L}\bigg)^2$$
where $d$ is the width of the slit. This graph is shown in graph B below. It has a broad central peak. After the first 0 on either side of the central peak it continues to oscillate, but the peaks
get smaller and smaller, as a result of the $1/y^2$ factor in the equation.
3. Now we need to put these together. Since the result we found in part 2 is providing the actual (non-constant) amplitude, we should replace the $A_0$ in the equation in part 1 by the $A$ we found
in part 2. This gives a total amplitude
$$I = A_0^2 \bigg(\frac{\sin{(\pi d y / \lambda L)}}{\pi d y / \lambda L}\bigg)^2 \cos^2\bigg({\frac{2\pi ay}{\lambda L}\bigg)} $$
This result is shown in graph C below. The uniformly oscillating pattern we saw in A is modified by multiplying by the function shown in B. For the parameters shown below, the 4th peaks from the
center that would be expected from A are in the same place as the 0 of function B, so they don't appear in the product. They are "squashed" by the fact that although they are at a place where the two
waves add constructively, each wave itself is actually 0 due to the interference from within each slit.
All three of the graphs are shown overlapping in figure D. It's a little hard to read, but it shows how they coordinate.
The equations above are pretty messy. We can make better sense of them if we focus on the parameters we are interested in: the separation of the slits, $a$, and the width of the slits, $d$. We'll use a
repackaging trick to make these equations look more like math. Since we're really only interested in the functional dependence — seeing how changing $a$ and $d$ changes our graph — let's temporarily
replace the constants $\pi /\lambda L$ by $1$.
The result looks like this:
$$I = \bigg[A_0^2 \bigg(\frac{\sin{(d y)}}{d y}\bigg)^2\bigg]\bigg[ \cos^2({2 ay)}\bigg] $$
So we can see that if we decrease $a$, the $\cos^2$ term will wiggle more slowly, moving the fringes farther out. If we decrease $d$, the $(\sin{y} / y)^2$ term will oscillate more slowly, moving the
squash point farther out.
Both the separation between the slits and the width of the slits get "turned inside out" when they get to the screen. Making the slits narrower widens the squash pattern and making the slits farther
apart pushes the interference fringes closer together. You can explore this using a sim in the workout link at the bottom of this page.
The overall result looks like this:
The top shows the graph of the total intensity (in red) "squashed" by the one-slit diffraction pattern (in blue). Below that is shown what the pattern actually would look like on the screen, and
below that a schematic of the laser beam approaching the slits.
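The combined formula is easy to evaluate numerically. Here is a small Python sketch using the simplified units from above ($\pi/\lambda L$ set to 1), with the illustrative choice $a = 2$ and $d = 1$ so that the 4th fringe peak lands exactly on the envelope's first zero:

```python
import math

def intensity(y, a=2.0, d=1.0, A0=1.0):
    """Two wide slits: one-slit envelope (sinc^2) times two-slit fringes (cos^2),
    in the post's simplified units where pi/(lambda*L) = 1."""
    envelope = (math.sin(d * y) / (d * y)) ** 2 if y != 0 else 1.0
    fringes = math.cos(2 * a * y) ** 2
    return A0 ** 2 * envelope * fringes

# Fringe peaks sit at y = k*pi/(2a); the envelope's first zero is at y = pi/d.
# With a = 2 and d = 1 the 4th peak coincides with that zero and is "squashed":
for k in range(5):
    y = k * math.pi / (2 * 2.0)
    print(k, round(intensity(y), 4))  # k = 4 prints 0.0: that fringe is squashed
```

Decreasing $a$ spreads the fringe peaks out, and decreasing $d$ pushes the squash point out, reproducing the "turned inside out" behavior described above.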
Workout: Interference from two wide slits
Joe Redish 4/28/12 and 7/8/19
Article 718
Last Modified: July 9, 2019
varying: Fast Check of Variation in Data in SebKrantz/collapse: Advanced and Fast Data Transformation
varying is a generic function that (column-wise) checks for variation in the values of x, (optionally) within the groups g (e.g. a panel-identifier).
varying(x, ...)

## Default S3 method:
varying(x, g = NULL, any_group = TRUE, use.g.names = TRUE, ...)

## S3 method for class 'matrix'
varying(x, g = NULL, any_group = TRUE, use.g.names = TRUE, drop = TRUE, ...)

## S3 method for class 'data.frame'
varying(x, by = NULL, cols = NULL, any_group = TRUE, use.g.names = TRUE, drop = TRUE, ...)

# Methods for indexed data / compatibility with plm:

## S3 method for class 'pseries'
varying(x, effect = 1L, any_group = TRUE, use.g.names = TRUE, ...)

## S3 method for class 'pdata.frame'
varying(x, effect = 1L, cols = NULL, any_group = TRUE, use.g.names = TRUE, drop = TRUE, ...)

# Methods for grouped data frame / compatibility with dplyr:

## S3 method for class 'grouped_df'
varying(x, any_group = TRUE, use.g.names = FALSE, drop = TRUE, keep.group_vars = TRUE, ...)

# Methods for grouped data frame / compatibility with sf:

## S3 method for class 'sf'
varying(x, by = NULL, cols = NULL, any_group = TRUE, use.g.names = TRUE, drop = TRUE, ...)
x: a vector, matrix, data frame, 'indexed_series' ('pseries'), 'indexed_frame' ('pdata.frame') or grouped data frame ('grouped_df'). Data need not be numeric.
g: a factor, GRP object, atomic vector (internally converted to factor) or a list of vectors / factors (internally converted to a GRP object) used to group x.
by: same as g, but also allows one- or two-sided formulas i.e. ~ group1 + group2 or var1 + var2 ~ group1 + group2. See Examples.
any_group: logical. If !is.null(g), FALSE will check and report variation in all groups, whereas the default TRUE only checks if there is variation within any group. See Examples.
cols: select columns using column names, indices or a function (e.g. is.numeric). Two-sided formulas passed to by overwrite cols.
use.g.names: logical. Make group-names and add to the result as names (default method) or row-names (matrix and data frame methods). No row-names are generated for data.table's.
drop: matrix and data.frame methods: logical. TRUE drops dimensions and returns an atomic vector if the result is 1-dimensional.
effect: plm methods: select the panel identifier by which variation in the data should be examined. 1L takes the first variable in the index, 2L the second etc. Index variables can also be called by name. More than one index variable can be supplied, which will be interacted.
keep.group_vars: grouped_df method: logical. FALSE removes grouping variables after computation.
...: arguments to be passed to or from other methods.
Without groups passed to g, varying simply checks if there is any variation in the columns of x and returns TRUE for each column where this is the case and FALSE otherwise. A set of data points is
defined as varying if it contains at least 2 distinct non-missing values (such that a non-0 standard deviation can be computed on numeric data). varying checks for variation in both numeric and
non-numeric data.
If groups are supplied to g (or alternatively a grouped_df to x), varying can operate in one of 2 modes:
If any_group = TRUE (the default), varying checks each column for variation in any of the groups defined by g, and returns TRUE if such within-variation was detected and FALSE otherwise. Thus only
one logical value is returned for each column and the computation on each column is terminated as soon as any variation within any group was found.
If any_group = FALSE, varying runs through the entire data checking each group for variation and returns, for each column in x, a logical vector reporting the variation check for all groups. If a
group contains only missing values, a NA is returned for that group.
A logical vector or (if !is.null(g) and any_group = FALSE), a matrix or data frame of logical vectors indicating whether the data vary (over the dimension supplied by g).
## Checks overall variation in all columns
varying(wlddev)

## Checks whether data are time-variant i.e. vary within country
varying(wlddev, ~ country)

## Same as above but done for each country individually, countries without data are coded NA
head(varying(wlddev, ~ country, any_group = FALSE))
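For intuition, the semantics of varying can be sketched in plain Python (an illustrative analogue, not part of the collapse package; the function name, arguments, and data below are made up):

```python
from collections import defaultdict

def varying(records, cols, by=None, any_group=True):
    """For each column, check whether it has >= 2 distinct non-missing values,
    optionally within groups defined by the `by` column."""
    def varies(values):
        distinct = {v for v in values if v is not None}
        return len(distinct) > 1

    if by is None:
        return {c: varies(r[c] for r in records) for c in cols}

    groups = defaultdict(list)
    for r in records:
        groups[r[by]].append(r)
    per_group = {g: {c: varies(r[c] for r in rows) for c in cols}
                 for g, rows in groups.items()}
    if any_group:
        # Report TRUE if the column varies within *any* group
        return {c: any(per_group[g][c] for g in per_group) for c in cols}
    return per_group  # one set of results per group

data = [
    {"country": "A", "gdp": 1.0, "region": "x"},
    {"country": "A", "gdp": 2.0, "region": "x"},
    {"country": "B", "gdp": 3.0, "region": "y"},
    {"country": "B", "gdp": 3.0, "region": "y"},
]
print(varying(data, cols=["gdp", "region"], by="country"))
# -> {'gdp': True, 'region': False}: gdp is time-variant, region is not
```

This mirrors the two modes described under Details: with `any_group=True` a single logical per column, with `any_group=False` a full per-group report.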
Excel: Calculate the Number of Months Between Dates | Online Tutorials Library List | Tutoraspire.com
Excel: Calculate the Number of Months Between Dates
by Tutor Aspire
You can use the following formulas to calculate the number of months between two dates in Excel:
Formula 1: Calculate Full Months Between Two Dates
=DATEDIF(A2, B2, "M")
Formula 2: Calculate Fractional Months Between Two Dates
=DATEDIF(A2, B2, "M") + DATEDIF(A2, B2, "MD")/(365/12)
Both formulas assume that cell A2 contains the start date and cell B2 contains the end date.
The following examples show how to use each formula in practice.
Example 1: Calculate Full Months Between Two Dates
The following screenshot shows how to calculate the number of full months between a list of start and end dates in Excel:
Here’s how to interpret the output:
• There is 1 full month between 1/1/2022 and 2/4/2022.
• There are 4 full months between 1/7/2022 and 5/29/2022.
• There are 0 full months between 1/20/2022 and 2/5/2022.
And so on.
Example 2: Calculate Fractional Months Between Two Dates
The following screenshot shows how to calculate the number of fractional months between a list of start and end dates in Excel:
Here’s how to interpret the output:
• There are approximately 1.099 months between 1/1/2022 and 2/4/2022.
• There are approximately 4.723 months between 1/7/2022 and 5/29/2022.
• There are approximately 0.526 months between 1/20/2022 and 2/5/2022.
And so on.
Note #1: This formula uses 365/12 to approximate the number of days in a month. You can replace this value with 30 if you’d like to simplify the formula and make the assumption that a typical month
has 30 days.
Note #2: You can find the complete documentation for the DATEDIF function in Excel here.
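The same calculations can be reproduced outside Excel. Here is a hedged Python sketch (the leftover-days step is a simplified stand-in for DATEDIF's "MD" unit, which Excel computes slightly differently in some edge cases):

```python
from datetime import date

def add_months(d, n):
    """Shift a date forward n months, clamping the day-of-month when needed (helper)."""
    y, m = divmod(d.month - 1 + n, 12)
    y, m = d.year + y, m + 1
    leap = y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    days_in = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][m - 1]
    return date(y, m, min(d.day, days_in))

def full_months_between(start, end):
    """Complete months between two dates, like Excel's DATEDIF(start, end, "M")."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    if end.day < start.day:   # the last month is not yet complete
        months -= 1
    return months

def fractional_months_between(start, end):
    """Full months plus leftover days scaled by an average month of 365/12 days,
    mirroring the second Excel formula above."""
    full = full_months_between(start, end)
    leftover_days = (end - add_months(start, full)).days
    return full + leftover_days / (365 / 12)

print(full_months_between(date(2022, 1, 1), date(2022, 2, 4)))                  # 1
print(round(fractional_months_between(date(2022, 1, 1), date(2022, 2, 4)), 3))  # 1.099
```

Both printed values match the first row of the examples above.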
Additional Resources
The following tutorials explain how to perform other common operations in Excel:
How to Count by Month in Excel
How to Calculate Average by Month in Excel
How to Convert Date to Month and Year Format in Excel
Measures of Association and Effects in Epidemiology
In descriptive epidemiology, the measures of disease frequency, association, and effects are used to study the occurrence and distribution of disease in a defined population in a specific period. In
analytic epidemiology, measures are used to investigate the causes of the health outcome and the associated risk factors.
Measures of association and effects are based on an appropriate comparison between exposed and unexposed population groups, used to test hypotheses about an exposure-outcome relationship.
In this context, exposure is a condition in which the population groups are subjected to different environmental conditions, insects, people with communicable diseases or sexually transmitted
diseases, toxic chemicals as well as the characteristics of people including age, race, sex, immune status, marital status, occupation, etc. or surroundings in they live including access to health
services and socioeconomic status.
Measures of association are used to assess and quantify the strength of relationships between the exposure i.e., risk factors, and health outcomes i.e., disease, in both relative and absolute terms.
These are the statistics that estimate the magnitude and direction of relationships between variables in clinical medicine and epidemiological research. It relies on the data collection method, study
design, and the statistical approach used for analysis.
To identify factors that may play an etiological role in the onset of health outcomes, these measures are used to describe the comparison between two or more populations, particularly those with
varying exposure or status of health outcomes. Measures of association such as risk ratio and odds ratio are calculated as relative terms while attributable risk and attributable rate are calculated
as absolute terms.
Relative Risk (Risk Ratio)
The ratio of the probability of occurrence of a health outcome in a population group with exposure to the risk of occurrence of the outcome in a non-exposed group is referred to as relative risk or
risk ratio.
Relative Risk (RR) = Risk of the outcome in the exposed group / Risk of the outcome in the unexposed group
If the value of the risk ratio is greater than 1, it indicates an increased risk for the exposed group. Similarly, if the risk ratio is less than 1, it indicates a decreased risk for the group in the
numerator suggesting that exposure protects against disease occurrence. If the risk ratio is 1, it indicates an equal risk of disease occurrence in both groups.
For example, assuming that 20% of the population with high blood pressure (exposed group) suffer from stroke and 1% with normal blood pressure (non-exposed group) suffer from stroke, then the
relative risk of stroke in people with high blood pressure versus people with low blood pressure is calculated as 20%/1% = 20. This suggests that the risk of suffering from the disease is 20 times
greater in the exposed group as compared to the non-exposed group. Similarly, the probability of developing lung cancer in smokers when divided by the likelihood of lung cancer in non-smokers, the
relative risk of lung cancer is obtained.
Often used in cohort studies, relative risk provides comparative information on the probability of an event based on the exposure in populations with differing disease prevalence. However, it gives a
relative value and is not an absolute measure of the occurrence of an event.
Rate Ratio
The ratio that compares the rates (incidence/person-to-time/mortality) of two different groups is known as the rate ratio. The incidence rate ratio is obtained when the incidence rate of the event of
interest in the exposed group is divided by the incidence rate of the same event in the unexposed group.
Rate Ratio = Rate of the event in the exposed group / Rate of the event in the unexposed group
The value of the rate ratio is interpreted in the same way as the risk ratio.
Odds Ratio (OR)
Used in case-control or cross-sectional studies, the odds ratio is defined as the ratio of odds of exposure in the case group to the odds of exposure in the control group. It measures the association
between exposure to two categories and health outcomes.
Odds Ratio (OR) = Odds of exposure among cases / Odds of exposure among controls
If the value of the odds ratio is equal to 1, it indicates no association between exposure and the health outcome in the given population. If the value is less than 1, it indicates a negative
association, meaning the exposure is protective and disease occurrence is less likely among the exposed. Similarly, if the value is greater than 1, it indicates a positive
association, meaning the exposure is a risk factor and disease occurrence is more likely among the exposed.
In some situations when the prevalence of a health outcome is low (<10%) or rare, the odds ratio closely approximates the risk ratio or rate ratio. However, in instances where the prevalence of a
health outcome is common, the OR provides more extreme estimates than the RR.
Attributable Risk (Risk Difference)
An absolute measure of effect, risk difference, or attributable risk gives information about the association of effects with the given treatment. It is obtained by calculating the difference in risk
of a health outcome in the exposed group and the risk of the outcome in the unexposed group.
Attributable Risk (AR) = Risk of disease occurrence in the exposed group – Risk of disease occurrence in the unexposed group
The idea of attributable risk provides a valuable method for epidemiological studies that examine the influence of exposure factors on a population level concerning the risk of a health outcome. It
is a function of Relative Risk and the prevalence of exposure, therefore, AR can arise as a result of common exposure with low relative risk, or a rare exposure with high relative risk.
Attributable Fraction
The proportion of an event as a result of exposure in the exposed group is expressed as an Attributable Fraction (AF). It is obtained by dividing the attributable risk by the probability of
occurrence of an event in the exposed group.
Attributable Fraction (AF) = Attributable Risk / Risk of the outcome in the exposed group
Population Attributable Risk (PAR)
The proportion of the risk of a health outcome in the total population (exposed and unexposed combined) that is due to exposure is called Population Attributable Risk (PAR). It is obtained by
subtracting the incidence of the health outcome in the unexposed population from the incidence in the total population.
PAR = Incidence of a health outcome in the total population – Incidence in the unexposed population
Population Attributable Fraction (PAF)
Similarly, Population Attributable Fraction (PAF) refers to the proportion of cases of a specific health outcome in a defined population that can be attributed to a particular risk factor. It is
calculated by dividing the population-attributable risk by the incidence of a health outcome in the total population.
Population Attributable Fraction (PAF) = Population Attributable Risk / Incidence of the health outcome in the total population
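To make the definitions concrete, here is an illustrative Python sketch using a standard 2×2 table (a = exposed cases, b = exposed non-cases, c = unexposed cases, d = unexposed non-cases); the counts are hypothetical, chosen to match the stroke example's 20% vs 1% risks:

```python
def risk(cases, non_cases):
    return cases / (cases + non_cases)

def risk_ratio(a, b, c, d):
    """RR: risk among exposed divided by risk among unexposed."""
    return risk(a, b) / risk(c, d)

def odds_ratio(a, b, c, d):
    """OR: cross-product ratio of the 2x2 table."""
    return (a * d) / (b * c)

def attributable_risk(a, b, c, d):
    """AR (risk difference): risk in exposed minus risk in unexposed."""
    return risk(a, b) - risk(c, d)

def population_attributable_fraction(a, b, c, d):
    """PAF = (incidence in total population - incidence in unexposed) / incidence in total."""
    i_total = (a + c) / (a + b + c + d)
    return (i_total - risk(c, d)) / i_total

a, b, c, d = 20, 80, 1, 99  # 20% risk among exposed, 1% among unexposed
print(round(risk_ratio(a, b, c, d)))            # 20, as in the stroke example
print(round(attributable_risk(a, b, c, d), 2))  # 0.19
```

Note that the OR for this table (24.75) is more extreme than the RR (20), illustrating the point above about the two diverging when the outcome is not rare.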
A general formula for partial fractions
Out of the blue I wrote down a rather confusing mass of indices and summations on the board a few days ago. Writing this down at the last minute was perhaps a bad idea, but I wanted to show what the
general form for expanding a fraction into partial fractions was. Here I’m just motivating it a little more. It’s not something that you will need to use, but it’s often good to write things down in
as general a form as you can.
Let’s say that we have an expression of the form:

$\frac{P(x)}{(x-1)(x-2)^2}$

where P(x) is some polynomial of degree less than 3 (because the denominator is degree 3). We can write this as:

$\frac{A}{x-1}+\frac{B}{x-2}+\frac{C}{(x-2)^2}$
To find A, B and C, you cross-multiply, and then match coefficients of powers of x with those in P(x). If you have an irreducible quadratic in the denominator you will have terms of the form:

$\frac{Bx+C}{x^2+bx+c}$

in your partial fraction, and of course if it's an irreducible quadratic to an integer power greater than one, you will have multiple terms, just as you do for the (x-2) expression in the example above. Thus, you will have terms of the form:

$\frac{A}{(x+a)^k}$

and terms of the form:

$\frac{Bx+C}{(x^2+bx+c)^l}$
where A, B and C are found by cross-multiplying and matching with the original expression, and the constants a, b, c and l are fixed by the original expression. Because we will have a number of
different terms in our partial fraction expression, rather than writing new letters each time, we can simply put an index on each letter to say that it’s coming from a new expression. So, the most
general form of our partial fraction is going to be:
$\sum_{i=1}^m \frac{A_i}{(x+a_i)^{k_i}}+\sum_{i=1}^n\frac{B_ix+C_i}{(x^2+b_ix+c_i)^{l_i}}$
where the $a_i,b_i,c_i$ all come directly from the original expression. The $k_i, l_i$ are integers found simply by having multiple terms for each higher power of linear and irreducible quadratic
terms in the original expression (a term cubed will lead to a cubic, square and first order term in the partial fraction) and $A_i, B_i, C_i$ are found again by cross-multiplying and matching
coefficients in the numerators. The limits of the sum m and n will depend on the powers in the original expression.
The expression I wrote down on the board was simply this last expression above which gives the most general form for a partial fraction.
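For distinct linear factors, the coefficients $A_i$ can also be computed numerically as residues, rather than by cross-multiplying and matching coefficients. Here is a small sketch of that (the function name and example fraction are mine, not from the lecture):

```python
def partial_fractions(P, roots):
    """Coefficients A_i such that P(x)/prod(x - r_i) = sum A_i/(x - r_i).
    Valid only for distinct roots r_i (the residue / cover-up method)."""
    coeffs = []
    for i, ri in enumerate(roots):
        denom = 1.0
        for j, rj in enumerate(roots):
            if j != i:
                denom *= (ri - rj)  # product of (r_i - r_j) over j != i
        coeffs.append(P(ri) / denom)
    return coeffs

# x / ((x-1)(x-2)) = -1/(x-1) + 2/(x-2)
print(partial_fractions(lambda x: x, [1, 2]))  # -> [-1.0, 2.0]
```

Repeated or quadratic factors need the more general matching described above.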
Elliptic Curve Signature Algorithms - Decred Documentation
Although secp256k1 is widely considered to have a secure choice of elliptic curve parameters, some questions about the origin of the curve remain. For example, the selection of the Koblitz curve,
(y^2 + xy = x^3 + ax^2 + b, where a = a^2 and b = b^2, i.e. a = 0 or 1, and b != 0, so b = 1)
is typically done by enumerating the binary extension Galois fields GF(2^m) where m is a prime in the range {0, …, higher limit} and x,y ∈ GF(2^m)^1. For 128-bit security, m is required to be ⩾257
and typically the smallest prime possible in this range to facilitate fast calculation. In this case, the obvious choice for m is 277, a = 0; despite the existence of this appropriate m value being
known by the curators of the curve parameters^2 and the fact that it was the most computationally efficient solution, the parameters m = 283 and a = 0 were selected out of three possible options:
(m = 277, a = 0; m = 283, a = 0; m = 283, a = 1)
For all other Koblitz curve specifications, the most obvious m value is selected. Although this is curious, there are no known attacks that can be applied by using a slightly larger m value for the
Galois field. Other objections to the parameters used by secp256k1 have also been raised^3.
Another extremely popular digital signature algorithm (DSA) with 128-bits of security is Ed25519^4. This uses the EdDSA signing algorithm over a curve birationally equivalent to Curve25519 and is
widely employed today. Unlike secp256k1’s ECDSA, Ed25519 uses simpler Schnorr signatures that are provably secure in a random oracle model (See: Schnorr Signatures).
Schnorr signatures have also been proposed for Bitcoin^5. However, instead of using an OP code exclusive to Schnorr signatures utilizing the curve parameters for secp256k1, Decred instead uses a new
OP code OP_CHECKSIGALT to verify an unlimited number of new signature schemes. In the current implementation, both secp256k1 Schnorr signatures and Ed25519 signatures are available to supplement
secp256k1 ECDSA signatures. In the future, it is trivial to add new signature schemes in a soft fork, such as those that are quantum secure. Having these two Schnorr suites available also allows for
the generation of simple group signatures occupying the same space of a normal signature^6, which for both is implemented. In the future, threshold signatures using dealerless secret sharing will
also enable t-of-n threshold signatures occupying the same amount of space^7.
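To make the Schnorr construction mentioned above concrete, here is a toy sign/verify sketch over a tiny multiplicative subgroup. The parameters (p = 23, q = 11, g = 4) and the hash-to-challenge encoding are my own illustrative choices and are absurdly small; real schemes such as Ed25519 or secp256k1 Schnorr work over elliptic-curve groups of ~2^252 order with careful nonce handling.

```python
import hashlib

# Toy Schnorr signature in the order-11 subgroup of Z_23* generated by g = 4.
p, q, g = 23, 11, 4

def challenge(R, msg):
    # Fiat-Shamir challenge e = H(R || msg) mod q
    h = hashlib.sha256(f"{R}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % q

def sign(x, msg, k):
    R = pow(g, k, p)          # public nonce (commitment)
    e = challenge(R, msg)
    s = (k + e * x) % q       # response
    return R, s

def verify(y, msg, sig):
    R, s = sig
    e = challenge(R, msg)
    # g^s = g^(k + e*x) = R * y^e must hold for a valid signature
    return pow(g, s, p) == (R * pow(y, e, p)) % p

x = 7                          # secret key
y = pow(g, x, p)               # public key y = g^x
sig = sign(x, "hello", k=5)    # k must be random and secret in practice
print(verify(y, "hello", sig)) # -> True
```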
ggassoc_phiplot {descriptio} R Documentation
Bar plot of measures of local association of a crosstabulation
For a cross-tabulation, plots the measures of local association with bars of varying height, using ggplot2.
ggassoc_phiplot(data, mapping, measure = "phi",
limit = NULL, sort = "none",
na.rm = FALSE, na.value = "NA")
Arguments

data: dataset to use for plot

mapping: aesthetics being used. x and y are required, weight can also be specified.

measure: character. The measure of association used for filling the rectangles. Can be "phi" for phi coefficient (default), "or" for odds ratios, "std.residuals" for standardized residuals, "adj.residuals" for adjusted standardized residuals or "pem" for local percentages of maximum deviation from independence.

limit: numeric value, specifying the upper limit of the scale for the height of the bars, i.e. for the measures of association (the lower limit is set to 0-limit). It corresponds to the maximum absolute value of association one wants to represent in the plot. If NULL (default), the limit is automatically adjusted to the data.

sort: character. If "both", rows and columns are sorted according to the first factor of a correspondence analysis of the contingency table. If "x", only rows are sorted. If "y", only columns are sorted. If "none" (default), no sorting is done.

na.rm: logical, indicating whether NA values should be silently removed before the computation proceeds. If FALSE (default), an additional level is added to the variables (see na.value argument).

na.value: character. Name of the level for NA category. Default is "NA". Only used if na.rm = FALSE.
The measure of association measures how much each combination of categories of x and y is over/under-represented. The bars vary in width according to the number of observations in the categories of
the column variable. They vary in height according to the measure of association. Bars are black if the association is positive and white if it is negative.
The genuine version of this plot (see Cibois, 2004) uses the measure of association called "pem", i.e. the local percentages of maximum deviation from independence.
This function can be used as a high-level plot with ggduo and ggpairs functions of the GGally package.
a ggplot object
Nicolas Robette
Cibois Philippe, 2004, Les écarts à l'indépendance. Techniques simples pour analyser des données d'enquêtes, Collection "Méthodes quantitatives pour les sciences sociales"
See Also
assoc.twocat, phi.table, catdesc, assoc.yx, darma, ggassoc_crosstab, ggpairs
ggassoc_phiplot(data=Movies, mapping=ggplot2::aes(Country, Genre))
version 1.3
A multifunctional matching algorithm for sample design in agricultural plots
Collection of accurate and representative data from agricultural fields is required for efficient crop management. Since growers have limited available resources, there is a need for advanced methods
to select representative points within a field in order to best satisfy sampling or sensing objectives. The main purpose of this work was to develop a data-driven method for selecting locations
across an agricultural field given observations of some covariates at every point in the field. These chosen locations should be representative of the distribution of the covariates in the entire
population and represent the spatial variability in the field. They can then be used to sample an unknown target feature whose sampling is expensive and cannot be realistically done at the population
scale. An algorithm for determining these optimal sampling locations, namely the multifunctional matching (MFM) criterion, was based on matching of moments (functionals) between sample and
population. The selected functionals in this study were standard deviation, mean, and Kendall's tau. An additional algorithm defined the minimal number of observations that could represent the
population according to a desired level of accuracy. The MFM was applied to datasets from two agricultural plots: a vineyard and a peach orchard. The data from the plots included measured values of
slope, topographic wetness index, normalized difference vegetation index, and apparent soil electrical conductivity. The MFM algorithm selected the number of sampling points according to a
representation accuracy of 90% and determined the optimal location of these points. The algorithm was validated against values of vine or tree water status measured as crop water stress index (CWSI).
Algorithm performance was then compared to two other sampling methods: the conditioned Latin hypercube sampling (cLHS) model and a uniform random sample with spatial constraints. Comparison among
sampling methods was based on measures of similarity between the target variable population distribution and the distribution of the selected sample. MFM represented CWSI distribution better than the
cLHS and the uniform random sampling, and the selected locations showed smaller deviations from the mean and standard deviation of the entire population. The MFM functioned better in the vineyard,
where spatial variability was larger than in the orchard. In both plots, the spatial pattern of the selected samples captured the spatial variability of CWSI. MFM can be adjusted and applied using
other moments/functionals and may be adopted by other disciplines, particularly in cases where small sample sizes are desired.
Hello Xiss,
maybe the examples on sparse fill can help you with that. Take a look at: http://www.guwi17.de/ublas/matrix_sparse_usage.html. It is probable that using a generalized vector of coordinate matrices to
intitally assemble the matrix (by basically changing the type you have) and then use that to fill (by using push_back) a compressed matrix, will make it a lot faster. If you don't need the structure
of the compressed matrix you can neglect the last step.
If you are sort in memory, another idea that might work (although it is algorithmically harder and I haven't really tested it) is to define a vector of matrix blocks (each matrix being maybe 2*band x
2*band), that you use them to assemble the stiffness matrix and then progressively push them back in the compressed matrix like,
[B1 C1 0 0 ]
[A2 B2 C2 0 ]
[0 A3 B3 C3]
[0 0 A4 B4]
neglecting the zero entries. "Bs" will be diagonals, "As "will be at least upper triangular and "Cs" will be at least lower triangular (the least means that their diagonal elements maybe zero). This
would need some custom algorithms to navigate in the blocks though.
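The "assemble in coordinate form, then compress" pattern suggested above can be sketched in pure Python; this is only an illustration of the idea (a `coordinate_matrix`-like accumulator converted to CSR arrays), not uBLAS itself:

```python
def coo_to_csr(n_rows, triples):
    """triples: iterable of (row, col, value); duplicate entries are summed.
    Returns CSR arrays (indptr, indices, data)."""
    # accumulate duplicates first (cheap inserts, like coordinate_matrix)
    acc = {}
    for r, c, v in triples:
        acc[(r, c)] = acc.get((r, c), 0.0) + v
    # sort once, then build the compressed row structure
    indptr = [0] * (n_rows + 1)
    indices, data = [], []
    for (r, c), v in sorted(acc.items()):
        indices.append(c)
        data.append(v)
        indptr[r + 1] += 1
    for r in range(n_rows):          # prefix-sum row counts
        indptr[r + 1] += indptr[r]
    return indptr, indices, data

triples = [(0, 0, 1.0), (0, 0, 2.0), (1, 2, 5.0)]  # duplicate (0,0) entries
print(coo_to_csr(2, triples))  # -> ([0, 1, 2], [0, 2], [3.0, 5.0])
```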
Date: Mon, 19 Apr 2010 00:14:36 -0300
From: xissburg_at_[hidden]
To: ublas_at_[hidden]
Subject: [ublas] Element-wise operations are really slow
In my algorithms I have to read/write from/to each individual element of a matrix, and this is making my application really slow. More specifically, I'm assembling a stiffness matrix in Finite
Element Method. The code is like this:
for(int i=0; i<m_tetrahedrons.size(); ++i)
{
    btTetrahedron* t = m_tetrahedrons[i];
    t->computeCorotatedStiffness();

    for(unsigned int j=0; j<4; ++j)
        for(unsigned int k=0; k<4; ++k)
        {
            unsigned int jj = t->getNodeIndex(j);
            unsigned int kk = t->getNodeIndex(k);

            for(unsigned int r=0; r<3; ++r)
                for(unsigned int s=0; s<3; ++s)
                {
                    m_RKR_1(3*jj+r, 3*kk+s) += t->getCorotatedStiffness0(3*j+r, 3*k+s);
                    m_RK(3*jj+r, 3*kk+s) += t->getCorotatedStiffness1(3*j+r, 3*k+s);
                }
        }
}
Where m_RKR_1 and m_RK are both compressed_matrix<float>, t->getCorotatedStiffness0/1 just returns the (i,j) element of a 12x12 compressed_matrix<float>. If I don't compute the co-rotated matrices
the simulation still works but incorrectly (linear strain only), but very fast (basically, by commenting the element wise operations in that code). Whenever I turn the co-rotational stuff on, it gets
damn slow, and those element wise operations are guilty.
What am I doing wrong? Is there any faster technique to do that? Well, there must be one...
Thanks in advance.
5G Flexible Transceiver for Physical Layer Evaluation
Basic overview
The aim of this proposal is to extend the flexible PHY software component made available by the eWINE project with MIMO technologies. The goal is to utilize the second independent RF chain of the USRP SDR to employ Alamouti space-time coding (STC) combined with GFDM. This technique, first introduced for OFDM-based wireless transmission systems, increases the reliability of the data transfer by transmitting multiple copies of the data stream. However, it has not yet been applied to the GFDM flexible waveform generation framework and, since GFDM is a nonorthogonal waveform, its combination with STC is not a straightforward approach.
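As a hedged illustration of the Alamouti 2x1 scheme in its classical flat-channel form (the channel gains and symbols below are invented, and the GFDM modulation and all noise handling are omitted):

```python
# Toy 2x1 Alamouti space-time block code over a noiseless flat channel.
def alamouti_encode(s1, s2):
    # rows = time slots, columns = the two transmit antennas
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def alamouti_combine(y1, y2, h1, h2):
    # y1, y2: samples received in the two slots; h1, h2: channel gains
    g = abs(h1) ** 2 + abs(h2) ** 2                     # diversity gain
    s1_hat = (h1.conjugate() * y1 + h2 * y2.conjugate()) / g
    s2_hat = (h2.conjugate() * y1 - h1 * y2.conjugate()) / g
    return s1_hat, s2_hat

h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j     # assumed channel gains
s1, s2 = 1 + 1j, -1 + 1j             # two QPSK symbols
(t1a, t1b), (t2a, t2b) = alamouti_encode(s1, s2)
y1 = h1 * t1a + h2 * t1b             # slot 1, noiseless
y2 = h1 * t2a + h2 * t2b             # slot 2, noiseless
est = alamouti_combine(y1, y2, h1, h2)
print(est)                           # recovers (s1, s2)
```

The combining step cancels the cross terms, leaving each estimate scaled by |h1|^2 + |h2|^2, which is where the diversity gain comes from.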
GFDM Library – How to use
In this library, only basic functions for GFDM transmission, signal generation, and naive reception are included, nothing fancy like channel estimation/equalization, iterative receivers, etc. The
functions are grouped into several parts:
• get_* These functions return objects that are commonly used for GFDM simulations.
• calc_* These functions calculate (not only) performance values based on the passed objects.
• do_* These more complex functions do actual simulation steps.
Remark: To use the library functions, you must include the library folder and subfolders to the MATLAB path.
The GFDM library has a folder with examples of use for the functions (GFDM_library/examples). For the purpose of illustration, an example file (GFDM_library/examples/ser_in_awgn.m) for the generation of a GFDM SER (Symbol Error Rate) curve is detailed below.
For the GFDM configuration, a parameter structure p contains the configuration of the GFDM system.
% GFDM configuration
p = get_defaultGFDM('BER');
p.pulse = 'rc_fd';
p.a = 0.5;
GFDM Transmitter
1. Create random data symbols
% create symbols
s = get_random_symbols(p);
The function get_random_symbols(p) returns a sequence of integers in the range 0 ⋯ 2^μ − 1, which are the transmitted data symbols (μ = p.mu is the number of bits per QAM symbol).
2. QAM-Modulate the symbols to QAM symbols and map the symbol stream to the data matrix
% map them to qam and to the D matrix
D = do_map(p, do_qammodulate(s, p.mu));
Map the integers in the range 0 ⋯ 2^μ − 1 to a quadratic QAM modulation using the MATLAB function qammod. The one-dimensional symbol stream is mapped to the data matrix D according to the sets of allocated subcarriers and subsymbols. Empty subcarriers and subsymbols in D are set to zero. Afterwards there is the possibility to insert specific pilots or other information in the slots where no data is present.
3. GFDM-Modulate the data matrix
x = do_modulate(p, D);
The matrix D is processed with the GFDM modulation scheme to produce a time domain signal of length MK that can be processed further. The transmitted signal is applied to an AWGN channel using the function do_channel. A channel object from MATLAB can be used as an input parameter of this function for different channels.
% channel -> AWGN
xch = do_channel(x, 1, snr(si));
The basics for building a MMM model in GSheets | CLICKTRUST
In the rapidly evolving landscape of data-driven marketing decisions, we’re unveiling the transformative potential of Marketing Mix Modelling (MMM). Whether you’re a seasoned marketer or just
starting out, this article serves as your compass to navigate the intricate world of MMM, utilizing the familiar territory of Google Sheets.
Within this article, we walk you through the journey of building a Marketing Mix Model in Google Sheets. From understanding methodology to interpreting statistical results, we’ve got you covered.
While the realm of MMM can be intricate, our aim is to simplify the process, ensuring you’re equipped to make informed decisions.
But first things first: To follow along with our article, you can request a link to our template below:
What is Marketing Mix Modelling used for?
Marketing Mix Modelling (MMM), is a tool that helps organizations understand how their marketing efforts influence customer behavior, and how changes to the marketing mix impact business performance.
More concretely, MMM can be used to:
• Optimize marketing mix: Determine the optimal allocation of marketing spend across different marketing channels, products, and regions.
• Evaluate the impact of past marketing initiatives: Assess the effectiveness of past marketing efforts and how they influenced sales or market share.
• Forecast future performance: Predict how changes in the marketing mix will impact sales, market share, and other key performance indicators.
• Allocate budget: Determine the optimal budget allocation for marketing initiatives, taking into account the expected return on investment.
Why Marketing Mix Modelling (MMM)?
As digital advertising and advanced technologies such as micro-targeting and cookies grew in popularity, methods like MMM appeared outdated compared to directly tracking individual users through to purchase. Nevertheless, with the increased use of ad blockers, the implementation of consumer privacy laws like GDPR in Europe or the CCPA in California, and restrictions on third-party cookies and unique identifiers, there is a shift towards more privacy-sensitive marketing attribution methods. This is why MMM has started to regain its popularity.
What causes fluctuations in our sales?
Although we cannot draw a solid conclusion from our imaginary graph below, we can see fluctuations in sales over time, such as between week 11 and 14.
It is in our interest to understand what caused these spikes. Is it external factors? Is it the new awareness campaign that we launched on YT? By better understanding the relation of different
variables to sales, we can better evaluate the different channels we are using and allocate our resources accordingly.
If we use the simple decomposition graph below, we can already see that our first spike in Week 9 was caused by Holidays, whereas the second one was a combination of holidays, radio and social ads.
A little forenote: please bear in mind that this is a simplified approach to the infinitely more complicated process of building a solid MMM model. With that said, do not be discouraged and follow along. This will give you the very basics of how it works, so that when your colleagues start talking about MMM (and trust me, they will) you will not be completely clueless on the subject.
To read about limitations please visit the ‘limitations’ section.
What we are going to be using to build our model in Google Sheets is Ordinary Least Squares (OLS), the most popular method for fitting a linear regression model. In effect, it draws a line of best fit through the data points you have, minimizing the difference between where the line runs and each point on the chart. The ordinary least-squares method is a simple place to begin with Marketing Mix Modeling, as it can easily be done in a spreadsheet, making it more accessible than more advanced machine learning or Bayesian models.
The purpose of the model is to make the most precise prediction of the dependent variable (such as revenue or conversions) based on the available data of the independent variables, such as
advertising spend by channel and organic factors like seasonality or national holidays. Once the model is developed and validated, it can be utilized to calculate the impact of each media channel
(how many sales would not have happened without advertising?), as well as to predict future advertising budget allocation (what will happen if you double your advertising budget?).
A simple linear regression formula for a model with just one media channel and one organic variable appears as follows:
Y = B2X2 + B1X1 + B0
• Y represents the revenue from sales or number of conversions.
• B2 represents the coefficient for advertising, meaning how many sales are generated per dollar spent.
• X2 represents the amount of money spent on advertising.
• B1 represents the coefficient for an organic variable, such as Google search trends in the industry.
• X1 represents the value of an organic variable, like an index of Google search trends ranging from 0 to 100.
• B0 represents the intercept, or the number of sales that would occur if no money was spent on advertising and there was no contribution from organic variables.
As previously stated, this is a simplified version of more complex models. If you are interested in building a more accurate and complex model, you can use open-source libraries like Robyn by Meta,
or LightweightMMM by Google, but you’ll likely need a Data Scientist. In case you need help with building and interpreting your MMM model, you can refer to our sister company LYKTA.
How can we do Media Mix Modelling in Google Sheets?
1) Get your data
Below you will find a simplified version of the data that you will need:
• Column 1: Time variable (Days or weeks)
• Column 2: Variable that you are trying to predict (generally revenue or sales)
Following columns, the variables that might have had an impact on your business e.g. Media Spending, Holidays, Price, Promotions.
Nevertheless, obtaining reliable data, particularly media spending, is not always a straightforward process.
As a media company, we can attest that obtaining net media spending is not as simple as it may seem. There are various factors to consider, such as how to deal with sponsoring costs or packages, such
as Auvio or RTL Play. These costs can be challenging to isolate from the overall media spend and can skew the results of the modeling if not accounted for accurately.
Furthermore, traditional media channels such as print and Out of Home (OOH) advertising can be tricky to split per day. This poses a challenge for businesses that want to understand the effect of
their advertising spend on sales on a daily basis.
Despite these challenges, obtaining accurate data is critical to the success of MMM. Without reliable data, the modeling results may be misleading and result in ineffective marketing strategies.
Not all variables need to be your regular media channels, as your sales may also be related to external factors. This is why we have added a column called Holidays. Ideally, this will help us build a better model if holidays do have an impact on the business.
2) Check your data
The second step of the process involves data checking, which includes identifying missing values and managing them appropriately. It is crucial to note that a value of zero does not necessarily mean
that the data is unavailable. Thus, it is essential to differentiate between zero values and missing data.
Moreover, to ensure accurate results, it is essential to check for correlations between the variables. If two variables are highly correlated, it can lead to misleading results, as the effect of one
variable cannot be isolated from the other. For example, if a business invests in Facebook and TV advertising simultaneously, it becomes challenging to separate their effects on the outcome.
Furthermore, businesses can derive additional data by incorporating seasonality data into the analysis. This helps account for any seasonal changes that may affect the outcome.
In conclusion, the second step of the market mix modelling process involves thorough data checking, including managing missing values, checking for correlations between variables, and incorporating
seasonality data. By ensuring the quality of the data, businesses can achieve accurate results that inform their marketing strategies effectively.
3) Multi-Linear regression
The correlation coefficient and R^2 (coefficient of determination) are important measures in linear regression analysis, as they help to evaluate the strength and direction of the linear relationship
between the independent and dependent variables (More info in the ‘Statistical Verification’ section).
The correlation coefficient (r) indicates the strength and direction of the linear relationship between two variables, and it ranges from -1 to 1. A value of -1 indicates a perfect negative linear
relationship, a value of 0 indicates no linear relationship, and a value of 1 indicates a perfect positive linear relationship. The correlation coefficient can help identify whether the relationship
between the variables is strong or weak and whether it is a positive or negative relationship.
The coefficient of determination (R^2) is a measure of how well the regression line (or model) fits the data. It represents the proportion of the variation in the dependent variable that is explained
by the independent variable. It ranges from 0 to 1, with a value of 1 indicating that the model explains all the variation in the dependent variable. R^2 can help evaluate how well the linear
regression model fits the data and how much of the variation in the dependent variable is accounted for by the independent variable.
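The R^2 definition above (share of variance explained) can be computed directly from actual and predicted values; here is a minimal sketch with invented weekly sales figures:

```python
# R^2 = 1 - SS_residual / SS_total
def r_squared(actual, predicted):
    mean_y = sum(actual) / len(actual)
    ss_tot = sum((y - mean_y) ** 2 for y in actual)          # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    return 1 - ss_res / ss_tot

actual    = [10.0, 12.0, 9.0, 14.0, 11.0]   # e.g. weekly sales
predicted = [10.5, 11.5, 9.5, 13.0, 11.5]   # model predictions
print(round(r_squared(actual, predicted), 3))  # -> 0.865
```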
In order to obtain our statistical metrics, such as the regression coefficients or R^2, we will use the following function:

=LINEST(known_data_y, known_data_x, [calculate_b], [verbose])
By inputting 0 (FALSE) as verbose, we only obtain the regression coefficients of the different variables as well as the intercept, which we will then use to build our model. Be aware that the coefficients come out in reverse order, so our last variable, Holiday, is now the first one.
Now that we have the regression coefficients, as per the formula Y = B2X2 + B1X1 + B0, we need to:
• Multiply each variable by its coefficient
• Sum them together
• Add the intercept
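The three steps above (multiply by coefficients, sum, add the intercept) boil down to a dot product plus a constant; a tiny sketch with placeholder numbers of my own:

```python
# Y = B0 + sum(B_i * X_i): the spreadsheet prediction as a function.
def predict(intercept, coefficients, variables):
    return intercept + sum(b * x for b, x in zip(coefficients, variables))

coeffs = [0.8, 1.5, 2.1]          # e.g. Holiday, Radio, Social (hypothetical)
week   = [1.0, 120.0, 300.0]      # values of those variables for one week
print(round(predict(50.0, coeffs, week), 1))  # -> 860.8
```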
4) Checking the verbose output of our statistics
As you see below, if we input 1 as a verbose output, Google Sheets provides us with several statistical metrics that we can then use to test the validity of our model (more details in the
‘Statistical Verification’ section).
What we are interested in for now, besides the coefficients that we already have, is the R^2. This number shows to what extent our model explains the variance that we see in sales, which in this case is 0.79 (please refer to the 'Statistical Verification' section to find out how to interpret statistical metrics). If you are wondering why we have N/A values, they are expected, as there is no data to be shown in those cells.
5) Calculating Adstock
Adstock is a term used in advertising to quantify the residual impact of previous advertising efforts. For instance, if a company launches an advertising campaign in week 1, there will be a portion
of that level that remains in week 2. In week 3, there will be a portion of the level from week 2. Essentially, Adstock represents the gradual decrease in the impact of advertising over time,
expressed as a percentage.
The formula is:
At = Xt + Adstock rate * At-1
• At-1 = the previously calculated Adstock
• Xt = Advertising spent
• Adstock rate = Percentage of residual impact of previous Adstock
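Applied to a spend series, the recursive formula above looks like this (the spend numbers are invented):

```python
# A_t = X_t + rate * A_{t-1}, applied over a whole spend series.
def adstock(spend, rate):
    out, carry = [], 0.0
    for x in spend:
        carry = x + rate * carry   # decayed carry-over plus current spend
        out.append(carry)
    return out

print(adstock([100, 0, 0, 50], 0.5))  # -> [100.0, 50.0, 25.0, 62.5]
```

With a rate of 0.5, half of each week's effective advertising pressure carries into the next week.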
Using this formula, we are now able to calculate each channel's Adstock. Using the LINEST function, we can then calculate the coefficients for our new variables, i.e. AS Social, AS Radio and AS YT, against the transactions, and create new predictions. If we look at our new R^2, we see that it has increased from 79.12% to 80.07%. Depending on the model and the variables you use, Adstock can have a different impact, or no impact at all.
6) Calculating the ideal Adstock rate
If you are wondering how to calculate the optimal Adstock rate which maximizes your R^2, you can either play around with it to see how it impacts your R^2, or simply use Excel. Unfortunately, as far
as I know this function does not exist in Google Sheets – if you know of one, please feel free to leave it in the comments below.
Let’s see how this can be done in Excel. First, download the file and open it in Excel. Navigate to Data → Solver.
A pop up window will open where you can set the parameters to maximize your R^2.
• As an objective, select the cell where your R^2 is.
• Click on “Max” as this is the metric you want to maximize.
• By Changing Variable Cells: Select the cell where your Adstock rate is (multiple cells if you have multiple Adstock rates).
• Subject to constraints → Add → Select your cells with your Adstock rate and select lower than 1 and higher than 0.
• Click on Solve.
Excel will change the Adstock rate until it has found the one that optimizes your R^2 and therefore your model accuracy. When using more advanced machine learning tools like we do at LYKTA, the rate
is automatically calculated for you.
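Outside the spreadsheet, a brute-force grid search does the same job as Solver. A sketch in Python (the helper functions are my own illustration, not part of the template):

```python
def adstock(spend, rate):
    """Geometric adstock series for a given carry-over rate."""
    out, prev = [], 0.0
    for x in spend:
        prev = x + rate * prev
        out.append(prev)
    return out

def pearson_r2(x, y):
    """Squared Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def best_adstock_rate(spend, response, steps=100):
    """Try rates 0.00, 0.01, ..., 0.99 and keep the one maximising R^2."""
    rates = [i / steps for i in range(steps)]
    return max(rates, key=lambda r: pearson_r2(adstock(spend, r), response))
```

If the response series was itself generated with a carry-over of 0.5, the search recovers that rate.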
7) Calculating Diminishing Returns & Saturation
Diminishing Returns refers to the idea that the marginal benefit of advertising decreases as more is spent. In other words, doubling the amount spent on advertising does not necessarily double the
sales. This is because there is a point of saturation, where additional advertising has little to no impact on sales. For this we use the power function formula:
y = α⋅x^B ; 0 < B ≤ 1
• y represents the revenue from sales or number of conversions.
• α represents the correlation coefficient.
• x represents your spend.
• B represents your “saturation rate”.
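As a quick illustration of the power transform (Python; the α and β values here are made up for demonstration):

```python
def saturate(spend, alpha=1.0, beta=0.5):
    """Diminishing returns: y = alpha * x**beta, with 0 < beta <= 1."""
    return alpha * spend ** beta

# Doubling spend from 4 to 8 does not double the response:
print(saturate(4))  # 2.0
print(saturate(8))  # ~2.83, not 4.0
```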
Using this formula, we can now calculate the individual channels’ saturation. Using the LINEST function we can then calculate the coefficient between our new variables and the transactions to create
new predictions.
The image below graphically shows the concept of Diminishing Returns. With the first 2K spent we generated 10 conversions, whereas to generate just 1 incremental conversion we had to spend 1K extra.
8) Transformed Linear Regression
Using Diminishing Return and Adstock combined we can try to build a more accurate model. We can use the formula
Y= (Xt + Adstock rate * At-1)^β ; 0 < β ≤ 1.
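A sketch of the combined transform (illustrative Python; parameter names are my own):

```python
def transform(spend, adstock_rate, beta):
    """Adstock first, then the saturation exponent: (X_t + rate * A_{t-1}) ** beta."""
    out, prev = [], 0.0
    for x in spend:
        prev = x + adstock_rate * prev   # carry-over is applied to the raw adstock
        out.append(prev ** beta)         # the exponent is applied to the output only
    return out
```

The transformed series, rather than raw spend, is then what gets fed into LINEST.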
Always remember to evaluate your model for accuracy and statistical significance. (Visit the ‘Statistical Verification’ section for more info.)
9) Prediction and Budget Allocation
If you are satisfied with your model’s accuracy and your variables are statistically significant, you can now make predictions and see how your budget allocation influences your transactions and CPA.
Using the previously mentioned formula, assign different budgets to the channels and see the effects for yourself:
If you are presenting this to someone, remember that a graph can go a long way:
10) Statistical Verification
The following can be used to verify the statistical validity and accuracy of your model:
• The LINEST function.
• The difference between your actual sales (or conversions) and your predictions.
According to the outcome of your statistical verification you can then modify the model accordingly by excluding the variables that are not significant or adding new variables.
By using the LINEST function, you will be able to calculate your P-value and establish the statistical validity of your variables. The difference between predictions and actual sales, called the
error, will allow you to calculate important metrics such as MAPE and NRMSE. Please check out the template to see the formulas in detail.
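The two error metrics can be written out as follows (illustrative Python; note that NRMSE normalisation conventions vary, and normalising by the range of the actuals is used here as one common choice):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    n = len(actual)
    return 100 / n * sum(abs(a - p) / a for a, p in zip(actual, predicted))

def nrmse(actual, predicted):
    """Root Mean Square Error, normalised by the range of the actual values."""
    n = len(actual)
    rmse = (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5
    return rmse / (max(actual) - min(actual))

print(mape([100, 200], [110, 190]))   # ~7.5
print(nrmse([100, 200], [110, 190]))  # ~0.1
```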
Bear in mind that to keep things simple we only use one data set in our template; however, if the goal is to predict future sales accurately, it is crucial to split the data into three categories:
train data, validation, and test. If you are interested in finding out more about how to build a model I suggest having a look at How to Build your MMM at Vexpower.com.
The train data is used to build the model, while the validation data is used to fine-tune the model and make any necessary adjustments. The test data is then used to evaluate the accuracy of the final model.
Splitting the data into these categories is a best practice for MMM as it helps avoid overfitting, which occurs when the model is too closely tailored to the training data and does not generalize
well to new data. Overfitting can lead to misleading results and ineffective marketing strategies.
By evaluating the model’s accuracy on the test set, businesses can ensure that the model is not just effective for the analyzed period but can also generalize well to future periods.
Another important statistical measurement is the RSSD, used to calculate the “plausibility” of your model. To keep things simple, we did not cover this in the model.
Interpreting your statistical results
R^2 – Coefficient of Determination
• R^2 < 0.8 = Not good
• 0.8 < R^2 < 0.9 = Acceptable
• R^2 > 0.9 = Great
NRMSE – Normalized Root Mean Square Error
• NRMSE > 0.1 = Not good
• 0.05 < NRMSE < 0.1 = Acceptable
• NRMSE < 0.05 = Great
MAPE – Mean Absolute Percentage Error
• MAPE > 10% = Not good
• 5% < MAPE < 10% = Acceptable
• MAPE < 5% = Great
In hypothesis testing, the p-value is used to assess the evidence against the null hypothesis (H0). If the p-value is less than a predetermined level of significance (often represented by alpha,
denoted as α), it is considered statistically significant and suggests that the null hypothesis should be rejected. In this case, you would accept the alternative hypothesis (H1).
In other words, if the p-value is less than alpha (p < α), it means that the results are unlikely to have occurred by chance and provide evidence against the null hypothesis. This means that you can
conclude that the alternative hypothesis is supported by the data and is likely to be true.
For example, if alpha is set at 0.05 and the p-value is 0.03, there is only a 3% probability of observing results at least this extreme if the null hypothesis were true. Since 0.03 is below the 0.05
threshold, you would reject the null hypothesis and accept the alternative hypothesis.
To sum it up:
For α=0.05
• P-Value > 0.05 = Not significant
• P-Value < 0.05 = Significant at the 95% confidence level
11) Limitations
Simple linear regression makes the assumption that the effectiveness of marketing efforts does not change over time, which can lead to inaccurate results such as attributing negative sales to certain
channels. In these cases, it is recommended to use more advanced techniques.
It is possible that you may not have access to all the necessary data to create a precise and reliable model. This is a common issue, especially if you haven’t spent enough time collecting data,
which can take several weeks or even months depending on the size and complexity of your business, or if some of the variables that affect your marketing performance are difficult to measure.
It’s essential to be careful when building a model, as the wrong model can lead to costly decisions. It’s always a good idea to consult with a statistician or business analyst who has econometrics
and data science expertise, or to learn more about MMM before making any important and irreversible decisions using this method. As a final note, I want to give credit to Michael Taylor, as this model
was developed thanks to his course on MMM model building on Vexpower.
If you are interested in a more elaborate MMM model in GSheet, you can check this out.
If you are interested in having your model built by a professional do not hesitate to get in contact with LYKTA.
Hashing Techniques in Data Structure
Collision Resolution Techniques-
In Hashing, collision resolution techniques are classified as-
1. Separate Chaining
2. Open Addressing
In this article, we will compare separate chaining and open addressing.
Separate Chaining Vs Open Addressing-
│Separate Chaining                                                                            │Open Addressing                                                                              │
│Keys are stored inside the hash table as well as outside the hash table.                     │All the keys are stored only inside the hash table. No key is present outside the hash table.│
│The number of keys to be stored in the hash table can even exceed the size of the hash table.│The number of keys to be stored in the hash table can never exceed the size of the hash table.│
│Deletion is easier.                                                                          │Deletion is difficult.                                                                       │
│Extra space is required for the pointers to store the keys outside the hash table.           │No extra space is required.                                                                  │
│Cache performance is poor, because of the linked lists which store keys outside the table.   │Cache performance is better, because no linked lists are used.                               │
│Some buckets of the hash table are never used, which leads to wastage of space.              │Buckets may be used even if no key maps to those particular buckets.                         │
Which is the Preferred Technique?
The performance of both the techniques depend on the kind of operations that are required to be performed on the keys stored in the hash table-
Separate Chaining-
Separate Chaining is advantageous when it is required to perform all the following operations on the keys stored in the hash table-
• Insertion Operation
• Deletion Operation
• Searching Operation
• Deletion is easier in separate chaining.
• This is because deleting a key from the hash table does not affect the other keys stored in the hash table.
Open Addressing-
Open addressing is advantageous when it is required to perform only the following operations on the keys stored in the hash table-
• Insertion Operation
• Searching Operation
• Deletion is difficult in open addressing.
• This is because deleting a key from the hash table requires some extra efforts.
• After deleting a key, certain keys have to be rearranged.
Get more notes and other study material of Data Structures.
Watch video lectures by visiting our YouTube channel LearnVidFun.
Open Addressing | Linear Probing | Collision
Collision Resolution Techniques-
Before you go through this article, make sure that you have gone through the previous article on Collision Resolution Techniques.
We have discussed-
• Hashing is a well-known searching technique.
• Collision occurs when hash value of the new key maps to an occupied bucket of the hash table.
• Collision resolution techniques are classified as separate chaining and open addressing.
In this article, we will discuss about Open Addressing.
Open Addressing-
In open addressing,
• Unlike separate chaining, all the keys are stored inside the hash table.
• No key is stored outside the hash table.
Techniques used for open addressing are-
• Linear Probing
• Quadratic Probing
• Double Hashing
Operations in Open Addressing-
Let us discuss how operations are performed in open addressing-
Insert Operation-
• Hash function is used to compute the hash value for a key to be inserted.
• Hash value is then used as an index to store the key in the hash table.
In case of collision,
• Probing is performed until an empty bucket is found.
• Once an empty bucket is found, the key is inserted.
• Probing is performed in accordance with the technique used for open addressing.
Search Operation-
To search any particular key,
• Its hash value is obtained using the hash function used.
• Using the hash value, that bucket of the hash table is checked.
• If the required key is found in that bucket, the search is successful.
• Otherwise, the subsequent buckets are checked until the required key or an empty bucket is found.
• The empty bucket indicates that the key is not present in the hash table.
Delete Operation-
• The key is first searched and then deleted.
• After deleting the key, that particular bucket is marked as “deleted”.
• During insertion, the buckets marked as “deleted” are treated like any other empty bucket.
• During searching, the search is not terminated on encountering the bucket marked as “deleted”.
• The search terminates only after the required key or an empty bucket is found.
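The insert, search, and delete operations described above, including the “deleted” markers, can be sketched as follows (illustrative Python using linear probing and the ‘key mod table size’ hash function used later in this article):

```python
EMPTY, DELETED = object(), object()  # sentinel markers for bucket states

class OpenAddressingTable:
    """Linear-probing hash table with 'deleted' tombstones (a sketch, not production code)."""

    def __init__(self, size):
        self.size = size
        self.slots = [EMPTY] * size

    def _probe(self, key):
        # Probe sequence for linear probing: h, h+1, h+2, ... (mod table size)
        h = key % self.size
        for i in range(self.size):
            yield (h + i) % self.size

    def insert(self, key):
        for idx in self._probe(key):
            # A 'deleted' bucket is treated like any other empty bucket
            if self.slots[idx] is EMPTY or self.slots[idx] is DELETED:
                self.slots[idx] = key
                return idx
        raise RuntimeError("hash table is full")

    def search(self, key):
        for idx in self._probe(key):
            slot = self.slots[idx]
            if slot is EMPTY:      # only a truly empty bucket terminates the search
                return None
            if slot == key:        # 'deleted' markers are skipped, not stopped at
                return idx
        return None

    def delete(self, key):
        idx = self.search(key)
        if idx is not None:
            self.slots[idx] = DELETED  # mark the bucket, don't just empty it
        return idx
```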
Open Addressing Techniques-
Techniques used for open addressing are-
1. Linear Probing-
In linear probing,
• When collision occurs, we linearly probe for the next bucket.
• We keep probing until an empty bucket is found.
• It is easy to compute.
• The main problem with linear probing is clustering.
• Many consecutive elements form groups.
• Then, it takes time to search an element or to find an empty bucket.
Time Complexity-
│Worst-case time to search an element in linear probing is O(table size).│
This is because-
• Even if only one element is present and all other elements have been deleted, the “deleted” markers left in the hash table force the search to scan the entire table.
2. Quadratic Probing-
In quadratic probing,
• When collision occurs, we probe for i^2‘th bucket in i^th iteration.
• We keep probing until an empty bucket is found.
3. Double Hashing-
In double hashing,
• We use another hash function hash2(x) and look for i * hash2(x) bucket in i^th iteration.
• It requires more computation time as two hash functions need to be computed.
Comparison of Open Addressing Techniques-
│                                             │Linear Probing│Quadratic Probing   │Double Hashing│
│Primary Clustering                           │Yes           │No                  │No            │
│Secondary Clustering                         │Yes           │Yes                 │No            │
│Number of Probe Sequences (m = size of table)│m             │m                   │m^2           │
│Cache performance                            │Best          │Lies between the two│Poor          │
• Linear Probing has the best cache performance but suffers from clustering.
• Quadratic probing lies between the two in terms of cache performance and clustering.
• Double hashing has poor cache performance but no clustering.
Load Factor (α)-
Load factor (α) is defined as-
│Load Factor (α) = Number of keys stored in the hash table / Size of the hash table│
In open addressing, the value of load factor always lie between 0 and 1.
This is because-
• In open addressing, all the keys are stored inside the hash table.
• So, size of the table is always greater or at least equal to the number of keys stored in the table.
Using the hash function ‘key mod 7’, insert the following sequence of keys in the hash table-
50, 700, 76, 85, 92, 73 and 101
Use linear probing technique for collision resolution.
The given sequence of keys will be inserted in the hash table as-
• Draw an empty hash table.
• For the given hash function, the possible range of hash values is [0, 6].
• So, draw an empty hash table consisting of 7 buckets as-
• Insert the given keys in the hash table one by one.
• The first key to be inserted in the hash table = 50.
• Bucket of the hash table to which key 50 maps = 50 mod 7 = 1.
• So, key 50 will be inserted in bucket-1 of the hash table as-
• The next key to be inserted in the hash table = 700.
• Bucket of the hash table to which key 700 maps = 700 mod 7 = 0.
• So, key 700 will be inserted in bucket-0 of the hash table as-
• The next key to be inserted in the hash table = 76.
• Bucket of the hash table to which key 76 maps = 76 mod 7 = 6.
• So, key 76 will be inserted in bucket-6 of the hash table as-
• The next key to be inserted in the hash table = 85.
• Bucket of the hash table to which key 85 maps = 85 mod 7 = 1.
• Since bucket-1 is already occupied, so collision occurs.
• To handle the collision, linear probing technique keeps probing linearly until an empty bucket is found.
• The first empty bucket is bucket-2.
• So, key 85 will be inserted in bucket-2 of the hash table as-
• The next key to be inserted in the hash table = 92.
• Bucket of the hash table to which key 92 maps = 92 mod 7 = 1.
• Since bucket-1 is already occupied, so collision occurs.
• To handle the collision, linear probing technique keeps probing linearly until an empty bucket is found.
• The first empty bucket is bucket-3.
• So, key 92 will be inserted in bucket-3 of the hash table as-
• The next key to be inserted in the hash table = 73.
• Bucket of the hash table to which key 73 maps = 73 mod 7 = 3.
• Since bucket-3 is already occupied, so collision occurs.
• To handle the collision, linear probing technique keeps probing linearly until an empty bucket is found.
• The first empty bucket is bucket-4.
• So, key 73 will be inserted in bucket-4 of the hash table as-
• The next key to be inserted in the hash table = 101.
• Bucket of the hash table to which key 101 maps = 101 mod 7 = 3.
• Since bucket-3 is already occupied, so collision occurs.
• To handle the collision, linear probing technique keeps probing linearly until an empty bucket is found.
• The first empty bucket is bucket-5.
• So, key 101 will be inserted in bucket-5 of the hash table as-
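This whole worked example can be reproduced in a few lines (illustrative Python):

```python
def linear_probe_insert(keys, m=7):
    """Insert keys using hash 'key mod m' with linear probing on collision."""
    table = [None] * m
    for k in keys:
        idx = k % m
        while table[idx] is not None:   # collision: probe linearly for an empty bucket
            idx = (idx + 1) % m
        table[idx] = k
    return table

print(linear_probe_insert([50, 700, 76, 85, 92, 73, 101]))
# [700, 50, 85, 92, 73, 101, 76]  (buckets 0..6, matching the steps above)
```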
Separate Chaining | Collision Resolution Techniques
Hashing in Data Structure-
Before you go through this article, make sure that you have gone through the previous article on Hashing.
We have discussed-
• Hashing is a well-known searching technique.
• It minimizes the number of comparisons while performing the search.
• It completes the search with constant time complexity O(1).
In this article, we will discuss about Collisions in Hashing.
Collision in Hashing-
In hashing,
• Hash function is used to compute the hash value for a key.
• Hash value is then used as an index to store the key in the hash table.
• Hash function may return the same hash value for two or more keys.
│When the hash value of a key maps to an already occupied bucket of the hash table,│
│it is called a Collision.                                                         │
Collision Resolution Techniques-
│Collision Resolution Techniques are the techniques used for resolving or handling the collision.│
Collision resolution techniques are classified as-
1. Separate Chaining
2. Open Addressing
In this article, we will discuss about separate chaining.
Separate Chaining-
To handle the collision,
• This technique creates a linked list to the slot for which collision occurs.
• The new key is then inserted in the linked list.
• These linked lists to the slots appear like chains.
• That is why this technique is called separate chaining.
Time Complexity-
For Searching-
• In worst case, all the keys might map to the same bucket of the hash table.
• In such a case, all the keys will be present in a single linked list.
• Sequential search will have to be performed on the linked list to perform the search.
• So, time taken for searching in worst case is O(n).
For Deletion-
• In worst case, the key might have to be searched first and then deleted.
• In worst case, time taken for searching is O(n).
• So, time taken for deletion in worst case is O(n).
Load Factor (α)-
Load factor (α) is defined as-
│Load Factor (α) = Number of keys stored in the hash table / Size of the hash table│
If Load factor (α) = constant, then time complexity of Insert, Search, Delete = Θ(1)
Using the hash function ‘key mod 7’, insert the following sequence of keys in the hash table-
50, 700, 76, 85, 92, 73 and 101
Use separate chaining technique for collision resolution.
The given sequence of keys will be inserted in the hash table as-
• Draw an empty hash table.
• For the given hash function, the possible range of hash values is [0, 6].
• So, draw an empty hash table consisting of 7 buckets as-
• Insert the given keys in the hash table one by one.
• The first key to be inserted in the hash table = 50.
• Bucket of the hash table to which key 50 maps = 50 mod 7 = 1.
• So, key 50 will be inserted in bucket-1 of the hash table as-
• The next key to be inserted in the hash table = 700.
• Bucket of the hash table to which key 700 maps = 700 mod 7 = 0.
• So, key 700 will be inserted in bucket-0 of the hash table as-
• The next key to be inserted in the hash table = 76.
• Bucket of the hash table to which key 76 maps = 76 mod 7 = 6.
• So, key 76 will be inserted in bucket-6 of the hash table as-
• The next key to be inserted in the hash table = 85.
• Bucket of the hash table to which key 85 maps = 85 mod 7 = 1.
• Since bucket-1 is already occupied, so collision occurs.
• Separate chaining handles the collision by creating a linked list to bucket-1.
• So, key 85 will be inserted in bucket-1 of the hash table as-
• The next key to be inserted in the hash table = 92.
• Bucket of the hash table to which key 92 maps = 92 mod 7 = 1.
• Since bucket-1 is already occupied, so collision occurs.
• Separate chaining handles the collision by creating a linked list to bucket-1.
• So, key 92 will be inserted in bucket-1 of the hash table as-
• The next key to be inserted in the hash table = 73.
• Bucket of the hash table to which key 73 maps = 73 mod 7 = 3.
• So, key 73 will be inserted in bucket-3 of the hash table as-
• The next key to be inserted in the hash table = 101.
• Bucket of the hash table to which key 101 maps = 101 mod 7 = 3.
• Since bucket-3 is already occupied, so collision occurs.
• Separate chaining handles the collision by creating a linked list to bucket-3.
• So, key 101 will be inserted in bucket-3 of the hash table as-
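The worked example above can be reproduced in a few lines (illustrative Python):

```python
def separate_chaining_insert(keys, m=7):
    """Insert keys using hash 'key mod m'; each bucket holds a chain of keys."""
    buckets = [[] for _ in range(m)]
    for k in keys:
        buckets[k % m].append(k)   # a collision simply extends the bucket's chain
    return buckets

print(separate_chaining_insert([50, 700, 76, 85, 92, 73, 101]))
# [[700], [50, 85, 92], [], [73, 101], [], [], [76]]
```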
Hashing in Data Structure | Hash Functions
Searching Techniques-
In data structures,
• There are several searching techniques like linear search, binary search, search trees etc.
• In these techniques, time taken to search any particular element depends on the total number of elements.
• Linear Search takes O(n) time to perform the search in unsorted arrays consisting of n elements.
• Binary Search takes O(logn) time to perform the search in sorted arrays consisting of n elements.
• It takes O(logn) time to perform the search in Binary Search Tree consisting of n elements.
The main drawback of these techniques is-
• As the number of elements increases, time taken to perform the search also increases.
• This becomes problematic when total number of elements become too large.
Hashing in Data Structure-
In data structures,
• Hashing is a well-known technique to search any particular element among several elements.
• It minimizes the number of comparisons while performing the search.
Unlike other searching techniques,
• Hashing is extremely efficient.
• The time taken by it to perform the search does not depend upon the total number of elements.
• It completes the search with constant time complexity O(1).
Hashing Mechanism-
In hashing,
• An array data structure called a hash table is used to store the data items.
• Based on the hash key value, data items are inserted into the hash table.
Hash Key Value-
• Hash key value is a special value that serves as an index for a data item.
• It indicates where the data item should be stored in the hash table.
• Hash key value is generated using a hash function.
Hash Function-
│Hash function is a function that maps any big number or string to a small integer value.│
• Hash function takes the data item as an input and returns a small integer value as an output.
• The small integer value is called a hash value.
• Hash value of the data item is then used as an index for storing it into the hash table.
Types of Hash Functions-
There are various types of hash functions available such as-
1. Mid Square Hash Function
2. Division Hash Function
3. Folding Hash Function etc
It is up to the user which hash function to use.
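For illustration, here are minimal versions of the division and folding methods (Python; exact formulations of these functions vary between textbooks, so treat these as sketches):

```python
def division_hash(key, m):
    """Division method: the remainder after dividing the key by the table size."""
    return key % m

def folding_hash(key, part_size=2, m=100):
    """Folding method: split the key's digits into parts and add the parts."""
    digits = str(key)
    parts = [int(digits[i:i + part_size]) for i in range(0, len(digits), part_size)]
    return sum(parts) % m

print(division_hash(50, 7))  # 1
print(folding_hash(123456))  # (12 + 34 + 56) % 100 = 2
```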
Properties of Hash Function-
The properties of a good hash function are-
• It is efficiently computable.
• It minimizes the number of collisions.
• It distributes the keys uniformly over the table.
Weights are used by most aggregation methods to optionally alter the contribution of each indicator in an aggregation group, as well as by aggregates themselves if they are further aggregated.
Weighting is therefore part of aggregation, but this vignette deals with it separately because there are a few special tools for weighting in COINr.
First, let’s see what weights look like in practice. When a coin is built using new_coin(), the iMeta data frame (an input to new_coin()) has a “Weight” column, which is also required. Therefore,
every coin should have a set of weights in it by default, which you had to specify as part of its construction. Sets of weights are stored in the .$Meta$Weights sub-list. Each set of weights is
stored as a data frame with a name. The set of weights created when calling new_coin() is called “Original”. We can see this by building the example coin and accessing the “Original” set directly:
# build example coin
coin <- build_example_coin(up_to = "Normalise", quietly = TRUE)
# view weights
#> iCode Level Weight
#> 9 Goods 1 1
#> 10 Services 1 1
#> 11 FDI 1 1
#> 12 PRemit 1 1
#> 13 ForPort 1 1
#> 31 Renew 1 1
The weight set simply has the indicator code, Level, and the weight itself. Notice that the indicator codes also include aggregate codes, up to the index:
# view rows not in level 1
coin$Meta$Weights$Original[coin$Meta$Weights$Original$Level != 1, ]
#> iCode Level Weight
#> 50 Physical 2 1
#> 51 ConEcFin 2 1
#> 52 Political 2 1
#> 53 Instit 2 1
#> 54 P2P 2 1
#> 55 Environ 2 1
#> 56 Social 2 1
#> 57 SusEcFin 2 1
#> 58 Conn 3 1
#> 59 Sust 3 1
#> 60 Index 4 1
And that the index itself doesn’t have a weight because it is not used in an aggregation. Notice also that weights can be specified relative to one another. When an aggregation group is aggregated,
the weights within that group are first scaled to sum to 1. This means that weights are relative within groups, but not between groups.
Manual re-weighting
To change weights, one way is to simply go back to the original iMeta data frame that you used to build the coin, and edit it. If you don’t want to do that, you can also create a new weight set. This
simply involves:
1. Making a copy of the existing set of weights
2. Changing the weights of the copy
3. Putting the new set of weights in the coin
For example, if we want to change the weighting of the “Conn” and “Sust” sub-indices, we could do this:
# copy original weights
w1 <- coin$Meta$Weights$Original
# modify weights of Conn and Sust to 0.3 and 0.7 respectively
w1$Weight[w1$iCode == "Conn"] <- 0.3
w1$Weight[w1$iCode == "Sust"] <- 0.7
# put weight set back with new name
coin$Meta$Weights$MyFavouriteWeights <- w1
Now, to actually use these weights in aggregation, we have to direct the Aggregate() function to find them. When weights are stored in the “Weights” sub-list as we have done here, this is easy
because we only have to pass the name of the weights to Aggregate():
coin <- Aggregate(coin, dset = "Normalised", w = "MyFavouriteWeights")
#> Written data set to .$Data$Aggregated
Alternatively, we can pass the data frame itself to Aggregate() if we don’t want to store it in the coin for some reason:
coin <- Aggregate(coin, dset = "Normalised", w = w1)
#> Written data set to .$Data$Aggregated
#> (overwritten existing data set)
When altering weights we may wish to compare the outcomes of alternative sets of weights. See the Adjustments and comparisons vignette for details on how to do this.
Effective weights
COINr has some statistical tools for adjusting weights as explained in the next sections. Before that, it is also interesting to look at “effective weights”. At the index level, the weighting of an
indicator is not due just to its own weight, but also to the weights of each aggregation that it is involved in, plus the number of indicators/aggregates in each group. This means that the final
weighting, at the index level, of each indicator, is slightly complex to understand. COINr has a built in function to get these “effective weights”:
w_eff <- get_eff_weights(coin, out2 = "df")
#> iCode Level Weight EffWeight
#> 9 Goods 1 1 0.02000000
#> 10 Services 1 1 0.02000000
#> 11 FDI 1 1 0.02000000
#> 12 PRemit 1 1 0.02000000
#> 13 ForPort 1 1 0.02000000
#> 31 Renew 1 1 0.03333333
The “EffWeight” column is the effective weight of each component at the highest level of aggregation (the index). These weights sum to 1 for each level:
# get sum of effective weights for each level
tapply(w_eff$EffWeight, w_eff$Level, sum)
#> 1 2 3 4
#> 1 1 1 1
The effective weights can also be viewed using the plot_framework() function, where the angle of each indicator/aggregate is proportional to its effective weight.
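The logic behind effective weights can be illustrated on a toy hierarchy (Python, mirroring the idea but not COINr's actual implementation): within each group the weights are rescaled to sum to one, and an indicator's effective weight is the product of these rescaled weights along its path up to the index.

```python
# Toy structure: Index splits into groups A and B; A has two indicators, B has one.
tree = {
    "Index": {"A": 1.0, "B": 1.0},
    "A": {"i1": 1.0, "i2": 1.0},
    "B": {"i3": 1.0},
}

def effective_weights(tree, node="Index", acc=1.0, out=None):
    out = {} if out is None else out
    children = tree.get(node)
    if children is None:                 # leaf indicator: record the accumulated product
        out[node] = acc
        return out
    total = sum(children.values())       # rescale group weights to sum to 1
    for child, w in children.items():
        effective_weights(tree, child, acc * w / total, out)
    return out

print(effective_weights(tree))
# {'i1': 0.25, 'i2': 0.25, 'i3': 0.5} -- equal nominal weights, unequal effective ones
```

Note how i3 ends up with twice the effective weight of i1 and i2, purely because its group contains fewer indicators.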
PCA weights
The get_PCA() function can be used to return a set of weights which maximises the explained variance within aggregation groups. This function is already discussed in the Analysis vignette, so we will
only focus on the weighting aspect here.
First of all, PCA weights come with a number of caveats which need to be mentioned (this is also detailed in the get_PCA() function help). First, what constitutes “PCA weights” in composite
indicators is not very well-defined. In COINr, a simple option is adopted. That is, the loadings of the first principal component are taken as the weights. The logic here is that these loadings
should maximise the explained variance - the implication being that if we use these as weights in an aggregation, we should maximise the explained variance and hence the information passed from the
indicators to the aggregate value. This is a nice property in a composite indicator, where one of the aims is to represent many indicators by a single composite. See here for a discussion on this.
But. The weights that result from PCA have a number of downsides. First, they can often include negative weights which can be hard to justify. Also PCA may arbitrarily flip the axes (since from a
variance point of view the direction is not important). In the quest for maximum variance, PCA will also weight the strongest-correlating indicators the highest, which means that other indicators may
be neglected. In short, it often results in a very unbalanced set of weights. Moreover, PCA can only be performed on one level at a time.
The result is that PCA weights should be used carefully. All that said, let’s see how to get PCA weights. We simply run the get_PCA() function with out2 = "coin" and specifying the name of the
weights to use. Here, we will calculate PCA weights at level 2, i.e. at the first level of aggregation. To do this, we need to use the “Aggregated” data set because the PCA needs to have the level 2
scores to work with:
coin <- get_PCA(coin, dset = "Aggregated", Level = 2,
weights_to = "PCAwtsLev2", out2 = "coin")
#> Weights written to .$Meta$Weights$PCAwtsLev2
This stores the new set of weights in the Weights sub-list, with the name we gave it. Let’s have a look at the resulting weights. The only weights that have changed are at level 2, so we look at those:
coin$Meta$Weights$PCAwtsLev2[coin$Meta$Weights$PCAwtsLev2$Level == 2, ]
#> iCode Level Weight
#> 50 Physical 2 0.5117970
#> 51 ConEcFin 2 0.3049926
#> 52 Political 2 0.3547671
#> 53 Instit 2 0.5081540
#> 54 P2P 2 0.5108455
#> 55 Environ 2 0.6513188
#> 56 Social 2 -0.7443677
#> 57 SusEcFin 2 0.1473108
This shows the nature of PCA weights: in this case it is not too severe, but the Social dimension is negatively weighted because it is negatively correlated with the other components in its group. In any case, the weights can sometimes be "strange" to look at, and that may or may not be a problem. As explained above, to actually use these weights we can pass them by name when calling Aggregate().
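The negative-weight and axis-flipping issues are easy to see with a minimal sketch of the underlying idea — plain NumPy here, not COINr's actual implementation, and the data and correlation structure are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data: 20 units x 4 indicators; the fourth indicator is negatively
# correlated with the other three (like the Social dimension above).
signal = rng.normal(size=(20, 1))
noise = lambda: 0.3 * rng.normal(size=(20, 1))
X = np.hstack([signal + noise(), signal + noise(),
               signal + noise(), -signal + noise()])

# Standardise, then take the first principal component's loadings as weights.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
weights = Vt[0]

# With this correlation structure, the fourth loading takes the opposite
# sign to the other three; the overall sign of the vector is arbitrary.
print(weights)
```

Because the SVD only determines each component up to sign, re-running PCA on slightly different data can flip the whole loading vector at once — the axis-flipping problem mentioned above.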
Optimised weights
While PCA is based on linear algebra, another way to statistically weight indicators is via numerical optimisation. Optimisation is a numerical search that finds the set of values maximising or minimising some criterion, called the "objective function".
In composite indicators, different objectives are conceivable. The get_opt_weights() function gives two options in this respect - either to look for the set of weights that “balances” the indicators,
or the set that maximises the information transferred (see here). This is done by looking at the correlations between indicators and the index. This needs a little explanation.
If weights are chosen to match the opinions of experts, or indeed your own opinion, there is a catch that is not very obvious: weights do not directly translate into importance.
To understand why, we must first define what "importance" means. There is more than one way to look at this, but one possible measure is the (possibly nonlinear) correlation between each indicator and the overall index. If the correlation is high, the indicator is well reflected in the index scores, and vice versa.
If we accept this definition of importance, it is important to realise that this correlation is affected not only by the weight attached to each indicator, but also by the correlations between indicators. This means that these correlations must be accounted for when choosing weights that agree with the budgets assigned by the group of experts.
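A small simulated sketch (hypothetical data, not from the example coin) shows the effect: three indicators receive equal weights, but the two correlated ones end up far more "important" by the correlation definition:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# x1 and x2 share a common factor z; x3 is independent of both.
z = rng.normal(size=n)
x1 = z + 0.5 * rng.normal(size=n)
x2 = z + 0.5 * rng.normal(size=n)
x3 = rng.normal(size=n)

# Equal-weighted arithmetic mean, as in a simple composite indicator.
index = (x1 + x2 + x3) / 3

for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    print(name, round(float(np.corrcoef(x, index)[0, 1]), 2))
```

Despite identical weights, x3 correlates much less strongly with the index than x1 and x2 do, so by this definition it is less important.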
In fact, it is possible to reverse-engineer the weights either analytically using a linear solution or numerically using a nonlinear solution. While the former method is far quicker than a nonlinear
optimisation, it is only applicable in the case of a single level of aggregation, with an arithmetic mean, and using linear correlation as a measure. Therefore in COINr, the second method is used.
Let’s now see how to use get_opt_weights() in practice. Like with PCA weights, we can only optimise one level at a time. We also need to say what kind of optimisation to perform. Here, we will search
for the set of weights that results in equal influence of the sub-indexes (level 3) on the index. We need a coin with an aggregated data set already present, because the function needs to know which
kind of aggregation method you are using. Just before doing that, we will first check what the correlations look like between level 3 and the index, using equal weighting:
# build example coin
coin <- build_example_coin(quietly = TRUE)
# check correlations between level 3 and index
get_corr(coin, dset = "Aggregated", Levels = c(3, 4))
#> Var1 Var2 Correlation
#> 1 Index Conn 0.9397805
#> 2 Index Sust 0.8382873
This shows that the correlations are similar but not the same. Now let’s run the optimisation:
# optimise weights at level 3
coin <- get_opt_weights(coin, itarg = "equal", dset = "Aggregated",
Level = 3, weights_to = "OptLev3", out2 = "coin")
#> iterating... objective function = -7.11287670895252
#> iterating... objective function = -6.75731482891423
#> iterating... objective function = -7.5563175412706
#> iterating... objective function = -8.21181051402935
#> iterating... objective function = -10.0802172796095
#> iterating... objective function = -13.3043247136273
#> iterating... objective function = -8.7011048855954
#> iterating... objective function = -7.93721550859392
#> iterating... objective function = -9.92111795779074
#> iterating... objective function = -8.57337082557942
#> iterating... objective function = -13.0490317878554
#> iterating... objective function = -10.1205749624737
#> iterating... objective function = -11.4698196057753
#> iterating... objective function = -11.5046209642509
#> iterating... objective function = -12.938292451273
#> Optimisation successful!
#> Optimised weights written to .$Meta$Weights$OptLev3
We can view the optimised weights (weights will only change at level 3):
coin$Meta$Weights$OptLev3[coin$Meta$Weights$OptLev3$Level == 3, ]
#> iCode Level Weight
#> 58 Conn 3 0.3902439
#> 59 Sust 3 0.6097561
To see if this was successful in balancing correlations, let’s re-aggregate using these weights and check correlations.
# re-aggregate
coin <- Aggregate(coin, dset = "Normalised", w = "OptLev3")
#> Written data set to .$Data$Aggregated
#> (overwritten existing data set)
# check correlations between level 3 and index
get_corr(coin, dset = "Aggregated", Levels = c(3, 4))
#> Var1 Var2 Correlation
#> 1 Index Conn 0.8971336
#> 2 Index Sust 0.8925119
This shows that indeed the correlations are now well-balanced - the optimisation has worked.
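The idea behind the optimisation can be sketched with a toy version: two correlated "sub-index" series and a brute-force search for the weight that equalises their correlations with the resulting index. This is illustrative only — the data are invented and COINr's get_opt_weights() uses a proper numerical optimiser rather than a grid search:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
conn = rng.normal(size=n)
sust = 0.4 * conn + rng.normal(size=n)  # correlated with conn, higher variance

def corr_gap(w):
    """Gap between each sub-index's correlation with the weighted index."""
    index = w * conn + (1 - w) * sust
    c_conn = np.corrcoef(conn, index)[0, 1]
    c_sust = np.corrcoef(sust, index)[0, 1]
    return abs(c_conn - c_sust)

# Brute-force "optimisation" over a grid of candidate weights for conn.
grid = np.linspace(0.01, 0.99, 197)
w_best = grid[int(np.argmin([corr_gap(w) for w in grid]))]
print(w_best, corr_gap(w_best))
```

Note that the balanced weights are not the equal weights, precisely because the two series are correlated and have different variances.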
We will not explore all the features of get_opt_weights() here, especially because optimisations can take a significant amount of CPU time. However, the main options include specifying a vector of
“importances” rather than aiming for equal importance, and optimising to maximise total correlation, rather than balancing. There are also some numerical optimisation parameters that could help if
the optimisation doesn’t converge. | {"url":"https://cran.ma.imperial.ac.uk/web/packages/COINr/vignettes/weights.html","timestamp":"2024-11-12T22:56:58Z","content_type":"text/html","content_length":"54297","record_id":"<urn:uuid:91ff337d-7682-4ddd-8b02-c7a361d910de>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00737.warc.gz"} |
Enhancement Of Mathematical Reasoning Ability At Senior High School By The Application Of Learning With Open Ended Approach
Yani, Ramdani (2011) Enhancement Of Mathematical Reasoning Ability At Senior High School By The Application Of Learning With Open Ended Approach. PROCEEDINGS International Seminar and the Fourth National Conference on Mathematics Education. ISSN 978-979-16353-7-0
P - 82.pdf
The objective of this research is to investigate the differences in students' enhancement of mathematical reasoning ability resulting from the application of learning with an open ended approach versus conventional learning. The population in this research was all students in high schools and Aliyah in Bandung. The sample was students in grade X. Two classes were randomly selected from each school, one as an experiment class (open-ended approach) and the other as a control class (conventional learning). The instruments used include a mathematical prior knowledge test, a mathematical reasoning test, and guidelines for observation. The results of the data analysis show that, viewed as a whole, the enhancement of mathematical reasoning of students taught with the open-ended approach was better than that of students who received regular instruction. There is an interaction between learning approach and school level with respect to students' enhancement of mathematical reasoning. There is no interaction between learning approach and initial mathematical ability with respect to students' enhancement of mathematical reasoning. Keywords: Open Ended Approach, Conventional, and Mathematical Reasoning
Math Is Fun Forum
Re: Fun with 0 / 0
After some thought, I'm convinced that what you are trying to do is define the quotient 0/0 to be a set, i.e. 0/0 = ℝ. In that case, can you define a set of rules for which addition and multiplication of sets of real numbers by other sets of real numbers makes sense? For instance, which of these "multiplications" would you like to be true?
"multiplications" would you like to be true?
(These are not usually how products of sets are defined, by the way: the Cartesian product (the third one) is the norm.)
Whichever you choose (and there are probably other possibilities), you will need to give a detailed explanation of your theory. In particular, which theorems and algebraic laws are still valid? Which
ones aren't? And more importantly, is there any reason why we'd do this instead of leaving 0/0 to be indeterminate?
There are arithmetic constructions out there which do similar things: for instance, the so-called "interval arithmetic". Wheel Theory is an algebra in which 0/0 is used, for which there is a more
thorough treatment here. You've mentioned "going deeper" with this -- could you elaborate?
Last edited by zetafunc (2016-01-07 09:34:13)
Re: Fun with 0 / 0
Calligar wrote:
Here, let me make this easier; let's start off like this.
0×0 = 0, 1×0 = 0, 2×0 = 0, 3×0 = 0, etc.. Therefore, instead of listing the unlimited different answers, one just simply puts the variable "v". So it looks like this: v×0 = 0.
So v is some real number, i.e. v ∈ ℝ? Looks OK so far. (I assume you're working over ℝ, since you mentioned π.) Or is v a set, i.e. v ⊆ ℝ?
Calligar wrote:
Therefore, this reasoning looks at it in a way that if it were v×0=0, then 0÷0 = v.
But you can't make the deduction v×0 = 0 ⟹ 0÷0 = v. You haven't justified why you can perform this operation over the real numbers: this step requires an assumption that 0 has a multiplicative inverse on ℝ. It
doesn't: if it did, then the reals would cease to be a field. If you want to construct a set of axioms under which this operation is defined, then you'll have to take 0 = 1 (and hence, inductively, 1
= 2 = 3 = ...). I've provided a suitable construction (the trivial ring R = {0}) in the previous post. If you want another justification, then you'll need to construct your own set of axioms for the
real numbers under which the operations of addition and multiplication are well-defined and are consistent with 0 having a multiplicative inverse.
Calligar wrote:
The issue is, with this, you can not define "v" or it won't work. If you were to say 0÷0 = 1, then 0÷0 ≠ 2. That is why it remains "v" and stays undefined. However, I personally don't see this as
a solution either, and going deeper with this has its difficulties to say the least.
I don't think that's necessarily the issue: we've defined v as being either a real number or a set. The issue is the implication above.
Last edited by zetafunc (2016-01-06 02:52:50)
Re: Fun with 0 / 0
zetafunc wrote:
In other words, you start by assuming that 0/0 is undefined? I'm still having trouble understanding precisely what you mean: are we simply replacing "0/0" with "v", here? If not, can you give a
more precise definition of v?
Here, let me make this easier; let's start off like this.
0×0 = 0, 1×0 = 0, 2×0 = 0, 3×0 = 0, etc.. Therefore, instead of listing the unlimited different answers, one just simply puts the variable "v". So it looks like this: v×0 = 0. Therefore, this
reasoning looks at it in a way that if it were v×0=0, then 0÷0 = v. The issue is, with this, you can not define "v" or it won't work. If you were to say 0÷0 = 1, then 0÷0 ≠ 2. That is why it remains
"v" and stays undefined. However, I personally don't see this as a solution either, and going deeper with this has its difficulties to say the least.
Another thing I wanted to clarify:
Relentless wrote:
For example, you mentioned that the answer to 1/0 could be infinity.
I apologize if I left that impression. I'll make try to make this clear now. I do not believe 1/0 = ∞. Not even close actually. Not only do you have an issue of using infinity as a number, but it
just won't work, at least not that simply. Like I left in my earlier example, if 1/0 = ∞, then what is 2/0 = ?. Logically, one would start to conclude 2/0 = 2∞, however when dealing with infinity,
that doesn't seem to make sense. Putting this in this simple form, when you divide 1 by 0, what number do you get? When you divide 2 by 0, what number do you get? Surely you don't get the same number
for both problems, do you? Infinity is also not a number. And while it can at times be used similarly to one, it doesn't mean it is one. That's at least how I like to think about it (putting it
There are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Everything makes sense, one only needs to figure out how. -[unknown]
Re: Fun with 0 / 0
Calligar wrote:
Okay, just for fun. Let's say that what my friend said is true (which I'm not saying it is). In that example, v (or any variable), and not just any number would have to be equal to 0 / 0. What
you are trying to do is get to the answer 0 / 0. As it stands, if you were to multiply 0 by any number, you would get 0. Therefore, using that, 0 / 0 does not make 1, nor 2, nor 3. It remains an
undefined variable unless you were using it in some defined manner.
In other words, you start by assuming that 0/0 is undefined? I'm still having trouble understanding precisely what you mean: are we simply replacing "0/0" with "v", here? If not, can you give a more
precise definition of v?
Calligar wrote:
Now how can the variable possibly be defined? So I'll make up a problem to try to give an example. So the problem would go something like this: 0÷0+b = a-1. If b is 3, show that 0(a) = 0(4).
0÷0+b = a-1
0÷0+b+1 = a
0(0÷0+b+1) = 0(a)
0[(0÷0)+(+b+1)] = 0(a)
0(0÷0)+0(b+1) = 0(a)
0+0(b+1) = 0(a)
0(v+1) = 0(a)
filling in 3 for b....
0(3+1) = 0(a)
0(4) = 0(a)
0 = 0
Now after doing all that, my point arises with the specific variable being defined, while 0÷0 still existing. 0÷0 still however remains undefined in this way. That's the first part of what I was
talking about...
By starting with the expression 0÷0+b = a-1, aren't you assuming that 0/0 is defined to begin with? Moreover, isn't the identity 0*a = 0*4 true for all a, independent of your expression?
Calligar wrote:
Yet if you use an undefined variable, like in the example I gave where "a" never gets defined (but "v" does). Even in step 2 where 0÷0+b+1 = "a", that's as much of a definition as you're going to
get because you still can't determine what 0÷0 is, making a an even different undefined variable showing how this all may seem simple, but it can very quickly get more complicated. So I repeat
again, there's reason this remains undefined. Comparing it to a variable is something interesting, but if you actually want to go deeper in understanding it, it will get more and more difficult
as you advance.
But why, then, are you allowed to manipulate an expression involving terms which are undefined?
There are other reasons why 0/0 is undefined: for instance, if 0 were its own multiplicative inverse, then the real numbers would then fail to be a field, since we would then end up with 0 = 1, which
contradicts the crucial field axiom that 0 ≠ 1 (since the real numbers are a complete, ordered field). If you want 0/0 to be defined, then you'll have to make the assumption that 1 = 0.
A simple construction in which 0/0 can be defined is to consider the ring R = {0} -- called the "trivial ring" -- in which addition (+) and multiplication (*) are defined such that 0 + 0 = 0 and 0*0
= 0. Then 0 serves as its own multiplicative inverse, and in particular, 0/0 = 0, as desired. This ring is unique in the sense that it is the only ring consisting of one element (up to isomorphism).
Re: Fun with 0 / 0
I understood that
In particular, if you graph a few functions, you will get answers for what functions approach when various things get arbitrarily close to 0, and they will not be 3.9425. I stated in the thread "What
is 0^0?" what these limits are.
For example, you mentioned that the answer to 1/0 could be infinity. But if you graph 1/x, you will find that it could be infinity ... or it could be minus infinity, depending on which direction you
approach x=0 from. The same holds true for any nonzero real number in the numerator.
As for 0/0, if you graph x/x, it is always 1. If you graph 0/x, it is always 0 (although x has to be nonzero). So, the answer to 0/0 could be either 1 or 0, or plus or minus infinity, in a sense.
But it doesn't work unless there is one consistent answer.
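The same one-sided behaviour is easy to check numerically, for example in Python (a quick sketch, not a proof):

```python
# As x shrinks towards 0: x/x stays at 1, 0/x stays at 0,
# and 1/x blows up with a sign depending on the side of approach.
for x in [0.1, 0.001, 0.00001]:
    print(x / x, 0 / x, 1 / x, 1 / -x)
```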
Last edited by Relentless (2016-01-02 15:24:27)
Re: Fun with 0 / 0
Okay, just for fun. Let's say that what my friend said is true (which I'm not saying it is). In that example, v (or any variable), and not just any number would have to be equal to 0 / 0. What you
are trying to do is get to the answer 0 / 0. As it stands, if you were to multiply 0 by any number, you would get 0. Therefore, using that, 0 / 0 does not make 1, nor 2, nor 3. It remains an
undefined variable unless you were using it in some defined manner.
Now how can the variable possibly be defined? So I'll make up a problem to try to give an example. So the problem would go something like this: 0÷0+b = a-1. If b is 3, show that 0(a) = 0(4).
0÷0+b = a-1
0÷0+b+1 = a
0(0÷0+b+1) = 0(a)
0[(0÷0)+(+b+1)] = 0(a)
0(0÷0)+0(b+1) = 0(a)
0+0(b+1) = 0(a)
*0(b+1) = 0(a)
filling in 3 for b....
0(3+1) = 0(a)
0(4) = 0(a)
0 = 0
Now after doing all that, my point arises with the specific variable being defined, while 0÷0 still existing. 0÷0 still however remains undefined in this way. That's the first part of what I was
talking about...
Calligar wrote:
1 ≠ 0/0, nor does 2. v = 0/0, and it will remain undefined unless there's something else that will allow it to be defined. In other words, v may = 1, v may = 2, but unless it is defined, you
can't simply say it is 2.
The second part...
However, if v = 0/0, and for some reason v were defined as 3 in this case. Then only 3 = 0/0, and not 2. It would have to be specific to that 0/0.
Making this very simple, if 0/0 = v, and v = 3, then 0/0 = 3. 0/0 = v, but it also = 4, or 5, or 6. Since v is now defined, v is just 1 of infinite things 0/0 may be. Yet if you use an undefined
variable, like in the example I gave where "a" never gets defined (but "v" does). Even in step 2 where 0÷0+b+1 = "a", that's as much of a definition as you're going to get because you still can't
determine what 0÷0 is, making a an even different undefined variable showing how this all may seem simple, but it can very quickly get more complicated. So I repeat again, there's reason this remains
undefined. Comparing it to a variable is something interesting, but if you actually want to go deeper in understanding it, it will get more and more difficult as you advance.
I would also like to add that I made a bit of a mistake, saying "only 3 = 0/0, and not 2", like what I was saying above, is it's specific to the variable "v". If 3 is defined as "v", then v ≠ 2, and
0/0 = um...let's say "w" which remains undefined, unless you define that as well, then you must use another variable as long as it remains undefined.
Anyway, let me make my case clear now, 0/0 ≠ v. It was an interesting comparison, comparing the answer to a variable, but it just isn't as simple as that. I am still interested in what people have to
comment on it because I still find the comparison really interesting as it reminds me of something a kid would do. But I want to make sure everyone understands I don't actually think this is how it
* *edit* I changed "v" to "b" because I noticed I put "v" by mistake....
Last edited by Calligar (2016-01-05 18:03:01)
There are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Everything makes sense, one only needs to figure out how. -[unknown]
Re: Fun with 0 / 0
Calligar wrote:
1 ≠ 0/0, nor does 2. v = 0/0, and it will remain undefined unless there's something else that will allow it to be defined. In other words, v may = 1, v may = 2, but unless it is defined, you
can't simply say it is 2. However, if v = 0/0, and for some reason v were defined as 3 in this case. Then only 3 = 0/0, and not 2. It would have to be specific to that 0/0.
I'm finding it rather difficult to make sense of this. It seems as though you're merely replacing "0/0" with the symbol "v". In particular, you say that:
"...unless there's something else that will allow it to be defined."
"...for some reason v were defined as 3 in this case."
What would this "something else" be that would allow the undefined quantity 0/0 to be well-defined? You will need to exhibit your own framework in which 0/0 is well-defined. Moreover, if you make the
choice to assign 0/0 with a value of 3 (which, in doing so, you implicitly assume that 0/0 is not undefined), then immediately you're faced with the same contradiction as before: it would not "be
specific to that 0/0" (which is a statement I also can't make sense of).
Re: Fun with 0 / 0
That's what Relentless also pointed out...
Relentless wrote:
One of the fundamental issues with dividing by zero is that if you allow it, then just as you get 1*0=0, 3.9425*0=0, you also get 1 = 3.9425
It is an issue with how you are looking at it. If I were to try to argue the case, I'd say it's a bit flawed. 1 ≠ 0/0, nor does 2. v = 0/0, and it will remain undefined unless there's something else
that will allow it to be defined. In other words, v may = 1, v may = 2, but unless it is defined, you can't simply say it is 2. However, if v = 0/0, and for some reason v were defined as 3 in this
case. Then only 3 = 0/0, and not 2. It would have to be specific to that 0/0. However, please don't try to look at it this way. Anything divided by 0 is undefined or the like, and for a reason. I'm not
trying to argue that 0 / 0 should equal v. I only was pointing out something interesting that helps you look at 0 / 0 that I found interesting (which doesn't mean it is correct).
Last edited by Calligar (2016-01-02 09:40:09)
There are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Everything makes sense, one only needs to figure out how. -[unknown]
Re: Fun with 0 / 0
Calligar wrote:
So for example, 0 / 0 = v, because v * 0 = 0.
Why? If such an operation were allowed, then we'd be able to construct arguments such as:
"1*0 = 0
2*0 = 0
dividing through by zero yields
1 = 0/0 = 2
Therefore, 1 = 2."
0/0 is often called an 'indeterminate form' since one can obtain numerous different answers depending on how it is approached.
Re: Fun with 0 / 0
Relentless wrote:
One of the fundamental issues with dividing by zero is that if you allow it, then just as you get 1*0=0, 3.9425*0=0, you also get 1 = 3.9425
Interesting point (it took me a little bit to realize what you were saying). It is at least one of the issues calling it v. But again, I never listed it saying it was correct, I just thought that
such a simple thing like that was worth mentioning out of interest.
Relentless wrote:
Personally, I think rather than saying that x/0 is equal to some undefined variable v, it is best to say that it is a question that does not make sense. When you think about 20 divided by 5, you
are thinking about how many times you subtract 5 from 20 to get 0. So how many times do you subtract 0 from 1 to get 0? Well, you never will. How many times do you subtract 0 from 0 to get 0? As
many as you like. They are really nonsense questions from that perspective.
Unfortunately dividing any other number by 0 creates a lot more issues. I've worked with it before with my friend. 1 / 0 can not equal v like 0 / 0 can. That point I made only works with 0 / 0. As
you pointed out, if you divide 1 / 0, you can not reach a number. There isn't a number in existence (that I'm aware of) that can answer that question. Some may argue the answer is ∞. But then comes
an issue with using ∞ as a number, which it isn't (or at least not exactly in that way). Plus there are issues to that. If say 1 / 0 = ∞, what does 2 / 0 = ? Does it also equal ∞, does it equal 2∞?
How do you even begin to make sense of it? You can make sense of it, but I don't know anything in mathematics that will answer it except for maybe...1 / 0 = undefined (or something of that nature).
Probably already realized all this, but at least interesting to point out.
There are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Everything makes sense, one only needs to figure out how. -[unknown]
Re: Fun with 0 / 0
There are frameworks in which division by zero is defined. For instance, the extended complex plane (also called the Riemann sphere), given by ℂ ∪ {∞}, adopts arithmetic conventions in which addition,
multiplication and division by 0 or infinity are defined (although certain expressions such as ∞ - ∞ or ∞/∞ are left undefined). We can then identify the Riemann sphere with the unit 2-sphere using
an isomorphism, which is called stereographic projection -- one of the foundations of spherical geometry.
Relentless wrote:
Personally, I think rather than saying that x/0 is equal to some undefined variable v, it is best to say that it is a question that does not make sense. When you think about 20 divided by 5, you
are thinking about how many times you subtract 5 from 20 to get 0. So how many times do you subtract 0 from 1 to get 0? Well, you never will. How many times do you subtract 0 from 0 to get 0? As
many as you like. They are really nonsense questions from that perspective.
I think we have to be careful when we decide whether or not a question makes sense: many results in mathematics don't necessarily have to coincide with our natural human intuition. For instance, if
we assume the axiom of choice, then we end up with the Banach-Tarski paradox, which challenges our geometric intuition. It also doesn't make intuitive sense to consider the factorial of a
non-integer, nor the Riemann zeta function with complex argument -- but they can be made sense of via a process called analytic continuation, the latter of which gives rise to the Riemann hypothesis,
perhaps the most famous unsolved problem in mathematics.
Last edited by zetafunc (2016-01-02 00:46:00)
Re: Fun with 0 / 0
Yes! It is a neat implication, one of many when working with 0. Your thinking is that the solution to 0/0 is v in v*0=0; but then v=0/0 can be literally anything. One of the fundamental issues with
dividing by zero is that if you allow it, then just as you get 1*0=0, 3.9425*0=0, you also get 1 = 3.9425
Personally, I think rather than saying that x/0 is equal to some undefined variable v, it is best to say that it is a question that does not make sense. When you think about 20 divided by 5, you are
thinking about how many times you subtract 5 from 20 to get 0. So how many times do you subtract 0 from 1 to get 0? Well, you never will. How many times do you subtract 0 from 0 to get 0? As many as
you like. They are really nonsense questions from that perspective.
Last edited by Relentless (2016-01-01 21:23:39)
Fun with 0 / 0
One of the things I thought was pretty interesting was something my friend showed me some time back. Very basic and easy, but amazingly I had never thought about it before. 0 / 0 = undefined, why?
Well, that's been answered here before and there are LOTS of reasons for that already. But when you start thinking anything * 0, you get 0.
So 1 * 0 = 0. 2 * 0 = 0. 3.9425 * 0 = 0. (pi) * 0 = 0. So then if you divide by 0, you'd think you'd get the reverse. If you divided by 0, you could get any real number, no? My friend (I think
jokingly) brought up 0 / 0 = (a variable). So for example, 0 / 0 = v, because v * 0 = 0. And while that may not be correct, I actually thought that was pretty cool and wanted to share it with other
people. Any thoughts?
There are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Everything makes sense, one only needs to figure out how. -[unknown]
Okay, I just want to make sure you have caught this...
Calligar wrote:
Anyway, let me make my case clear now, 0/0 ≠ v. It was an interesting comparison, comparing the answer to a variable, but it just isn't as simple as that. I am still interested in what people
have to comment on it because I still find the comparison really interesting as it reminds me of something a kid would do. But I want to make sure everyone understands I don't actually think this
is how it works...
Basically, all this time, I wasn't trying to prove it, unless I was making arguments for the fun of it, which aren't entirely correct. I was merely trying to explain that there was an interesting
comparison my friend made. The logic (and possibly one of the flaws too) is that when dealing with multiplication, division is the opposite. Therefore, whatever you multiply, you can turn around to
divide. So 4×2 = 8 and 2×4=8. So, 8÷2 = 4 and 8÷4 = 2. Using that logic, comes the 0÷0, the problem is. Since v×0 = 0 and 0×v = 0, then 0÷0 = v and 0÷v = 0. We already know that 0 divided by anything
= 0 (unlike anything divided by 0), therefore, there will be less argument against that. The problem is saying 0÷0, as you start running into more issues.
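Incidentally, programming languages face the same choice and typically refuse to pick a value; in Python, for example, 0/0 simply raises an error (a small illustrative snippet):

```python
# Python assigns no value at all to 0/0 -- it raises ZeroDivisionError.
try:
    result = 0 / 0
except ZeroDivisionError as err:
    print("0/0 is undefined:", err)
```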
Now another one of the issues with what I did, which is pretty much wrong, was with this example:
Calligar wrote:
0÷0+b = a-1
0÷0+b+1 = a
0(0÷0+b+1) = 0(a)
0[(0÷0)+(+b+1)] = 0(a)
0(0÷0)+0(b+1) = 0(a)
0+0(b+1) = 0(a)
*0(b+1) = 0(a)
filling in 3 for b....
0(3+1) = 0(a)
0(4) = 0(a)
0 = 0
Let me do something quick that will make it seem wrong.
2 ≠ 3
0(2) ≠ 0(3)
0 ≠ 0
Talking to my friend recently about this, he will still argue it's correct, actually making the argument both 0s are not equal. In the example he gave, he would have taken it further and reversed the
whole thing as well. He also would not have done the last thing I did where I said 0 = 0. So I think that's the end of me trying to make arguments for his case
zetafunc wrote:
You've mentioned "going deeper" with this -- could you elaborate?
I may have given a brief example of taking it a little deeper, but to be honest, I am not really sure I want to go too much deeper into that (at least at this time), because that's beginning to go
into things that don't exist (at least within my knowledge) and also complicated things that already exist, expanding on rules or changing other ones. And I'm not even the one that came up with it,
making it more difficult to explain something my own friend did (which is frustrating especially when I make certain mistakes). I'd prefer to not use things like 0÷0 at all as other than just working
on the math of it as I don't really have any use for that (so just going into the pure math for the fun of it I guess).
The reason I brought it up here is not to argue what this said is correct, just a curious example that may get you to think about it some more (especially those who aren't familiar with why 0÷0 is
undetermined). I know a few years ago (I think I even brought it up on this forum), I honestly thought 0÷0 = only 0. Knowing that 1÷0 = undefined, but specifically was wondering why that was the case
with 0÷0. I figured it out later, but when I talked to my friend about it, and he had this whole thing set up for it already that used pretty much basic algebra to try to show it. I thought it was a
very curious example, wrong or not. Basically, I was curious about other peoples reactions to that, because I honestly thought it was pretty cool the first time I saw it (even if I don't really
believe that to be the case).
Also, I'm not ignoring all the cases you gave, zetafunc; I'm honestly still looking into those. The wheel theory I find pretty interesting. But I don't know how much further I can actually answer your questions, to be honest, at least at this time. Also, I'm not familiar enough with some of the things you're saying. For instance, I'm honestly not sure what you mean each time you're using ∈, nor am I the most familiar with set theory (which I have done very little work with; better just to say I don't know set theory), though I believe I understand the rest of it (could be mistaken).
Note: Also, saying 0÷0 = v = ℝ is probably fine, though I'm unsure if my friend would put it the same way, as I'm not sure if he would restrict it to only real numbers or possibly include even more.
There are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Everything makes sense, one only needs to figure out how. -[unknown]
From: Bumpkinland
Registered: 2009-04-12
Posts: 109,606
Re: Fun with 0 / 0
I'd prefer to not use things like 0÷0 at all as other than just working on the math of it as I don't really have any use for that
What people find useful is very time-relevant; the Calligar of the future will certainly find uses for 0^0 = 1.
In mathematics, you don't understand things. You just get used to them.
If it ain't broke, fix it until it is.
Always satisfy the Prime Directive of getting the right answer above all else.
Re: Fun with 0 / 0
0^0 = 1 is actually quite interesting. It seems to go against what one would think. Let's see, how does the one proof go again...? 1 = A^n/A^n = A^(n-n) = A^0...something like that...
Oh, rofl, I see! That would be 0^0/0^0. I was just about to post and caught that.
There are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Everything makes sense, one only needs to figure out how. -[unknown]
Re: Fun with 0 / 0
0^0 is equivalent to 0/0, as far as I know. Both of them, especially 0^0, have strong arguments for being 1. Unfortunately, they both have solid arguments for being 0 as well. Several people have
mentioned recently that it is useful to assume 0^0 = 1 for the purpose of many formulas (in fact, I saw this just yesterday when deriving the Kelly formula). That is what bobbym is referring to.
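Incidentally, most programming languages adopt the 0^0 = 1 convention for exactly this reason: it keeps power series and binomial formulas clean. A quick check in Python, just as an illustration:

```python
import math

# Both the integer power operator and the C-library pow follow the
# 0**0 == 1 convention; this is a convention, not a theorem.
print(0 ** 0)          # 1
print(math.pow(0, 0))  # 1.0
```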
Re: Fun with 0 / 0
Oh yeah, I realize 0/0 is also used for that purpose out of convenience. One of the things that actually gets me interested in these kinds of things is seeing how something may potentially be other than what you think it is when you look at it more. So in arithmetic, you learn 0 / 0 = (undefined), but then you begin to think, wait a second, what does that even mean exactly? Why is it even undefined? You learn more math later on, as you move on to algebra, calculus, um...set theory, also things you just simply study on your own, etc., which may provide you with even more reasoning for why it is like that. But then my friend tells me of something like this he concocts, and you just kind of laugh and say, this is seemingly quite simple and appears to make sense, while also finding it quite clever (well, I do anyway). Even if it isn't true and doesn't work, I find the whole thing very interesting and still wonder if it does work with more explanation. I still think on some level it could fit in, but there's so much I don't know, I'm not even able to say that with confidence.
There are always other variables. -[unknown]
But Nature flies from the infinite, for the infinite is unending or imperfect, and Nature ever seeks an end. -Aristotle
Everything makes sense, one only needs to figure out how. -[unknown]
Re: Fun with 0 / 0
If somebody starts with zero assets and goes on multiplying his assets, he will never become rich. For that, he has to add assets. Of course, once he adds, then he can go on multiplying.
Last edited by thickhead (2016-04-21 19:43:03)
{1}Vasudhaiva Kutumakam.{The whole Universe is a family.}
(2)Yatra naaryasthu poojyanthe Ramanthe tatra Devataha
{Gods rejoice at those places where ladies are respected.}
Re: Fun with 0 / 0
Last edited by thickhead (2016-04-24 23:08:13)
{1}Vasudhaiva Kutumakam.{The whole Universe is a family.}
(2)Yatra naaryasthu poojyanthe Ramanthe tatra Devataha
{Gods rejoice at those places where ladies are respected.}
Registered: 2016-05-26
Posts: 17
Re: Fun with 0 / 0
Calligar wrote:
As you pointed out, if you divide 1 / 0, you cannot reach a number. There isn't a number in existence (that I'm aware of) that can answer that question. Some may argue the answer is ∞. But then comes an issue with using ∞ as a number, which it isn't (or at least not exactly in that way). Plus there are issues with that. If, say, 1 / 0 = ∞, what does 2 / 0 equal? Does it also equal ∞, or does it equal 2∞? How do you even begin to make sense of it? You can try to make sense of it, but I don't know anything in mathematics that will answer it except for maybe...1 / 0 = undefined (or something of that nature). You probably already realized all this, but it's at least interesting to point out.
As far as I know, ∞ = 2∞. Think of it this way:
Say we have a hotel that has ∞ floors, with 1 room on each floor. Let's say that the hotel is full. Then, a party of ∞ comes along. How do they make space for them? By moving the person on floor #1 to floor #2, the person on floor #2 to floor #4, and in general the person on floor #n to floor #2n, which frees up all the odd-numbered floors for the party.
Or, in math language (aka Engrish), if we have a set that has ∞ terms, we can multiply each term by 2 to fit in an additional ∞ terms.
So, writing that out in mathematical terms, ∞ = 2∞.
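For what it's worth, the room-doubling trick can be sanity-checked on any finite prefix of the hotel. A small Python sketch (illustrative only, since the real hotel is infinite):

```python
# Check the n -> 2n reassignment on the first N floors: existing guests
# land on even floors, the new party takes the odd floors, no collisions.
N = 1000
old_guests = {2 * n for n in range(1, N + 1)}     # guest on floor n moves to 2n
newcomers = {2 * n - 1 for n in range(1, N + 1)}  # party member n takes 2n - 1

assert old_guests.isdisjoint(newcomers)
assert len(old_guests | newcomers) == 2 * N
print("no collisions among", 2 * N, "rooms")
```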
Last edited by rileywkong (2016-06-21 07:04:04)
Be nice.
"Life is like a bicycle; to keep your balance, you have to move." - Albert Einstein
Registered: 2012-01-30
Posts: 266
Re: Fun with 0 / 0
If you have the equation y = x, it always passes through the coordinate (0,0). Since y/x = 1 everywhere along that line, it concludes that 0/0 = 1 :)
Re: Fun with 0 / 0
Well, if y = 2x, then 0/0 = 2. Actually, you can assign a value to 0/0 so that your function has no discontinuity; the value depends on the function.
{1}Vasudhaiva Kutumakam.{The whole Universe is a family.}
(2)Yatra naaryasthu poojyanthe Ramanthe tatra Devataha
{Gods rejoice at those places where ladies are respected.}
Registered: 2012-01-30
Posts: 266
Re: Fun with 0 / 0
Basically 0/0 can be anything. Unless we go beyond the normal understanding of infinity, we can actually get 0/0 = 1. For example, let y = 0/0. Taking the natural log on both sides yields ln y = ln 0 − ln 0. If we could consider the two "undefined" terms cancelling each other, leaving zero, then ln y = 0 and so y = 1.
We can argue like forever with this thing, but this is how mathematics progresses.
Re: Fun with 0 / 0
I will be making a video about this discussion within the next week or so.
Spaces and fpqc coverings
Lemma 115.15.1. Let $S$ be a scheme. Let $X$ be an algebraic space over $S$. Let $\{ f_ i : T_ i \to T\} _{i \in I}$ be a fpqc covering of schemes over $S$. Then the map
\[ \mathop{\mathrm{Mor}}\nolimits _ S(T, X) \longrightarrow \prod \nolimits _{i \in I} \mathop{\mathrm{Mor}}\nolimits _ S(T_ i, X) \]
is injective.
Data Navigator for Schools
Example utilities costs plot
For general information on how to read a benchmark histogram, see here.
This plot shows the distributions of utilities costs with the option to look at:
Dropdown control
1. Total amount, which is the utilities costs, calculated from survey question 39. Running costs: Utilities.
2. Total amount per pupil, calculated as utilities costs divided by the total number of pupils. The total number of pupils is calculated from survey question 13. Number of pupils by boarding type as
at 31 August.
3. Percentage (%) of net fees, calculated as utilities costs divided by Net fee income.
4. Percentage (%) of total expenditure, calculated as utilities costs divided by the total expenditure.
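The derived measures above are simple ratios. As an illustration only (the figures and variable names below are hypothetical, not taken from the survey schema):

```python
# Hypothetical figures for one school; the names are illustrative only.
utilities_cost = 120_000.0     # Q39 running costs: utilities
total_pupils = 800             # Q13 pupils as at 31 August
net_fee_income = 4_000_000.0
total_expenditure = 3_500_000.0

per_pupil = utilities_cost / total_pupils
pct_net_fees = 100 * utilities_cost / net_fee_income
pct_expenditure = 100 * utilities_cost / total_expenditure

print(per_pupil)                  # 150.0
print(pct_net_fees)               # 3.0
print(round(pct_expenditure, 2))  # 3.43
```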
How is right triangle trigonometry used in real life?
Trigonometry can be used to roof a house, to make the roof inclined (in the case of single individual bungalows), and to set the height of the roof in buildings. It is used in the naval and aviation industries, and in cartography (the creation of maps). Trigonometry also has applications in satellite systems.
How do we apply trigonometry in real life?
Other Uses of Trigonometry
1. Calculus is based on trigonometry and algebra.
2. The fundamental trigonometric functions like sine and cosine are used to describe the sound and light waves.
3. Trigonometry is used in oceanography to calculate heights of waves and tides in oceans.
4. It used in the creation of maps.
What is the formula for a right angle triangle?
Right Triangles and the Pythagorean Theorem. The Pythagorean Theorem, a² + b² = c², can be used to find the length of any side of a right triangle.
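As a quick sketch of the theorem in use (assuming only the standard library):

```python
import math

def hypotenuse(a: float, b: float) -> float:
    return math.hypot(a, b)        # sqrt(a**2 + b**2)

def missing_leg(c: float, a: float) -> float:
    return math.sqrt(c**2 - a**2)  # the other leg, given hypotenuse c

print(hypotenuse(3, 4))   # 5.0
print(missing_leg(5, 3))  # 4.0
```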
How do doctors use trigonometry?
Trigonometry helps doctors study and understand waves such as radiation waves, X-ray waves, ultraviolet waves, and water waves. These all have a great effect on living things such as humans and animals.
Where is calculus used in real life?
Calculus is the language of engineers, scientists, and economists. The work of these professionals has a huge impact on our daily life – from your microwaves, cell phones, TV, and car to medicine,
economy, and national defense.
What is the basis of trigonometry?
There are three basic functions in trigonometry, each of which is one side of a right-angled triangle divided by another. You may find it helpful to remember Sine, Cosine and Tangent as SOH CAH TOA.
What are the formulas for triangles?
The area of each triangle is one-half the area of the rectangle. So, the area A of a triangle is given by the formula A = (1/2)bh, where b is the base and h is the height of the triangle. Example: Find the area of the triangle.
How many types of right angle triangles are there?
The sides adjacent to the 90-degree (right) angle in the triangle are known as the base and perpendicular of the triangle. When you draw the perpendicular from the right angle of the triangle and join it to the hypotenuse, you always get three similar triangles.
Where do you see right triangles in real life?
Let’s explore the real-life examples of the triangle:
• Bermuda Triangle.
• Traffic Signs.
• Pyramids.
• Truss Bridges.
• Sailing Boat.
• Roof.
• Staircase and ladder.
• Buildings, Monuments, and Towers.
Do doctors use trigonometry?
Doctors use trig specifically to understand waves (radiation, X-ray, ultraviolet, and water). Trigonometry is also vital to understanding calculus.
What are the angles of a right angle triangle?
A right triangle has one angle equal to 90 degrees. A right triangle can also be an isosceles triangle–which means that it has two sides that are equal. A right isosceles triangle has a 90-degree
angle and two 45-degree angles.
How do you find the angle of a triangle in trigonometry?
Draw a line from one of the other angles in the triangle so that it intersects the opposite side at a right angle. Measure the side of the right triangle between the right angle and the angle you are
trying to find. This is called the adjacent side of the triangle.
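In the construction just described, the measured adjacent side and the hypotenuse of the small right triangle give the angle via cos θ = adjacent/hypotenuse. A minimal sketch:

```python
import math

def angle_degrees(adjacent: float, hypotenuse: float) -> float:
    # cos(theta) = adjacent / hypotenuse in the small right triangle
    return math.degrees(math.acos(adjacent / hypotenuse))

print(round(angle_degrees(1.0, 2.0), 6))  # 60.0
```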
How do you calculate perimeter of a right triangle?
Finding the Perimeter of Triangles: Set up the formula for finding the perimeter of a triangle. The formula is P = a + b + c, where the variables equal the three sides of the triangle. Find the perimeter of a right triangle with a missing side length.
Does trigonometry only work for right triangles?
The trigonometric functions are based on a measure of 90 degrees yes, but it is not restricted to work with only right triangles. You can use sine and cosine laws when the triangle isn’t a right | {"url":"https://wildpartyofficial.com/how-is-right-triangle-trigonometry-used-in-real-life/","timestamp":"2024-11-06T09:00:48Z","content_type":"text/html","content_length":"78602","record_id":"<urn:uuid:926c95f1-63d6-4d28-9710-777cad0b3c82>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00227.warc.gz"} |
Bond Pricing Formula | How to Calculate Bond Price? | Examples
Bond Pricing Formula
Last Updated: 21 Aug, 2024
Formula to Calculate Bond Price
The formula for bond pricing is the calculation of the present value of the probable future cash flows, which comprise the coupon payments and the par value, which is the redemption amount on maturity. The rate of interest used to discount the future cash flows is known as the yield to maturity (YTM).
Bond Price = ∑[i=1 to n] C/(1+r)^i + F/(1+r)^n
Bond Price = C × [1 − (1+r)^-n]/r + F/(1+r)^n
where C = Periodic coupon payment,
• F = Face / Par value of bond,
• r = Yield to maturity (YTM) and
• n = No. of periods till maturity
On the other, the bond valuation formula for deep discount bonds or zero-coupon bonds can be computed simply by discounting the par value to the present value, which is mathematically represented as,
Zero-Coupon Bond Price = F/(1+r)^n (as the name suggests, there are no coupon payments)
• The bond pricing formula involves calculating the present value of the anticipated future cash flows, including coupon payments and the par value, or the amount redeemed when the bond matures.
The interest rate used to discount the future cash flows is called the "yield to maturity" (YTM).
• Bonds are an essential component of the financial markets, making bond pricing particularly significant. Analysts and investors must thus comprehend how a bond's many features interact to assess
its underlying value.
• Bond pricing is integral to bond investing since it helps determine if an investment suits a portfolio.
Bond Pricing Calculation (Step by Step)
The formula for Bond Pricing calculation by using the following steps:
1. Firstly, the face value or par value of the bond issuance is determined as per the company's funding requirement. The par value is denoted by F.
2. Now, the coupon rate, which is analogous to the interest rate of the bond and the frequency of the coupon payment, is determined. The coupon payment during a period is calculated by multiplying
the coupon rate and the par value and then dividing the result by the frequency of the coupon payments in a year. The coupon payment is denoted by C.
C = Coupon rate * F / No. of coupon payments in a year
3. The total number of periods till maturity is computed by multiplying the years till maturity and the frequency of the coupon payments in a year. The number of periods till maturity is denoted by n.
n = No. of years till maturity * No. of coupon payments in a year
4. The YTM is the discounting factor, which is determined based on the current market return from an investment with a similar risk profile. The YTM is denoted by r.
5. Now, the present value of the first, second, third coupon payment, and so on and so forth, along with the present value of the par value to be redeemed after n periods, is derived as C/(1+r), C/(1+r)^2, ..., C/(1+r)^n and F/(1+r)^n respectively.
6. Finally, adding together the present values of all the coupon payments and the par value gives the bond price as below:
Bond Price = C/(1+r) + C/(1+r)^2 + ... + C/(1+r)^n + F/(1+r)^n
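The six steps above can be sketched directly in code. This is a minimal transcription, not the article's own implementation; the function name and signature are illustrative:

```python
# Price a coupon bond by discounting each cash flow at the per-period YTM.
def bond_price(face: float, coupon_rate: float, years: int,
               ytm: float, freq: int = 1) -> float:
    c = coupon_rate * face / freq     # step 2: coupon per period
    n = years * freq                  # step 3: number of periods
    r = ytm / freq                    # step 4: per-period discount rate
    coupons = sum(c / (1 + r) ** t for t in range(1, n + 1))  # step 5
    redemption = face / (1 + r) ** n  # step 5: PV of the par value
    return coupons + redemption       # step 6

# Example #3 from the text: zero-coupon, F = 100,000, 4 years, 10% YTM.
print(round(bond_price(100_000, 0.0, 4, 0.10), 2))  # 68301.35
```

Running it on Example #1 (7% annual coupon, 15 years, 9% YTM) gives about $83,879, below par, and on Example #2 (8% semi-annual coupon, 5 years, 7% YTM, `freq=2`) about $104,158, above par, consistent with the discount and premium observations in the text.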
Practical Examples (with Excel Template)
Example #1
Let us take an example of a bond with annual coupon payments. Let us assume a company XYZ Ltd has issued a bond having a face value of $100,000, carrying an annual coupon rate of 7% and maturing in
15 years. The prevailing market rate of interest is 9%.
• Given, F = $100,000
• C = 7% * $100,000 = $7,000
• n = 15
• r = 9%
The price of the bond, calculated using the above formula, is:
Bond Price = $7,000 × [1 − (1.09)^-15]/0.09 + $100,000/(1.09)^15 ≈ $83,879
Since the coupon rate is lower than the YTM, the bond price is less than the face value, and as such, the bond is said to be traded at a discount.
Example #2
Let us take an example of a bond with semi-annual coupon payments. Let us assume a company ABC Ltd has issued a bond having the face value of $100,000 carrying a coupon rate of 8% to be paid
semi-annually and maturing in 5 years. The prevailing market rate of interest is 7%.
Hence, the price of the bond, calculated using the above formula (with C = $4,000 per half-year, n = 10 periods and r = 3.5% per period), is:
Bond Price = $4,000 × [1 − (1.035)^-10]/0.035 + $100,000/(1.035)^10 ≈ $104,158
Since the coupon rate is higher than the YTM, the bond price is higher than the face value, and as such, the bond is said to be traded at a premium.
Example #3
Let us take the example of a zero-coupon bond. Let us assume a company QPR Ltd has issued a zero-coupon bond with a face value of $100,000 and matures in 4 years. The prevailing market rate of
interest is 10%.
Hence, the price of the bond, calculated using the above formula, is:
• Bond price = $100,000/(1.10)^4 = $68,301.35 ≈ $68,301
Use and Relevance
The concept of bond pricing is very important because bonds form an indispensable part of the capital markets. As such, investors and analysts must understand how a bond's different factors behave in order to calculate its intrinsic value. Similar to stock valuation, the pricing of a bond helps in understanding whether it is a suitable investment for a portfolio and consequently forms an integral part of bond investing.
Frequently Asked Questions (FAQs)
What is mid-swap in bond pricing?
The midway point between the purchase and sell prices (bid and offer prices) for currency or interest rate transactions is known as the mid-swap price.
What is a tick in bond pricing?
A tick is the smallest increase or decrease in a security's price. The price variation of a security from one trade to the next can also be referred to as a tick. The minimum tick size for equities
trading above $1 has been one cent since 2001, with the introduction of decimalization.
What is a bond pricing date?
The pricing date is when an issuance's interest rates, yields, and prices are decided. Depending on the situation, the length of the price period might be anything from a few hours to many days.
Steel Bracing in Braced Multi-storey Frames - STRUCTURES CENTRE
This article discusses the types of steel bracings required for ensuring lateral stability in braced multi-storey steel frames, the design considerations and the procedures required when providing
them within a steel frame.
With respect to their lateral stability, steel frames can be classified into two groups: moment-resisting frames and braced frames (Figure 1). In moment-resisting frames, the steel connections are very stiff; they allow the transfer of bending moment to the supporting column, and by virtue of this stiffness and the strength of all interconnected structural elements, lateral stability is ensured. However, in braced frames, the steel connections are relatively less stiff; they are composed of simple connections, so there is no moment transfer between elements, and lateral stability has to be provided through a different means. The means through which lateral stability is ensured in a braced steel frame is the use of 'steel bracings.'
Figure 1: Moment resisting frame and Braced frame
The vast majority of multi-storey steel frames today are designed as braced frames. The implication, as highlighted in the opening paragraph, is that the connections between beams and columns have to be nominally pinned, and resistance to lateral forces is to be provided by a core, if present, or more commonly by steel bracings. A further consequence of this is that steel beams can be designed as simply supported, carrying vertical loads only, and the steel columns for axial force and the nominal moment arising from eccentricity at the beam-column connections, bringing a simplification to the analysis and design process.
Bracing Systems in Steel Frames
Steel bracings within a braced multi-storey steel arrangement can be classified into two types: horizontal bracing and vertical bracing.
Horizontal Bracing
When lateral forces are applied to a braced frame, the forces are first received by the building cladding, then passed to the horizontal bracing, which in turn transfers them to the vertical bracing, from where they reach the foundation. The primary function of a horizontal bracing is to provide a route, or load path, through which all applied lateral forces reach the foundation. Thus, horizontal bracings are provided in the horizontal plane of each floor level.
There are two forms of horizontal bracing that can be provided within a multi-storey steel braced frame. These are:
• Diaphragm
• Triangulated Bracings
A diaphragm is the part of a structure that provides bracing in its plane. Usually, these are floor slabs or roof cladding, but they can also be in vertical cladding elements. In steel braced
frames, diaphragms are tied back to vertical bracings of the structure that provide lateral stability. In-situ concrete slabs, metal decks, and timber floors are all excellent forms of diaphragm,
albeit they may sometimes require some assessment for strength sufficiency in resisting the applied lateral loads.
Triangulated Bracing
In the absence of a sufficiently stiff diaphragm, horizontal bracing may be used in the form of steel bracing. The bracings are arranged in the form of a modified Warren truss or the modified Fink truss format as shown in Figure 2. The modified Fink truss is more suited to tension bracing. In both cases the floor beams act as part of the horizontal stability system and therefore must be designed for tension or compression in addition to flexure and shear.
Figure 2: Triangulated Bracing
Vertical Bracing
Vertical bracings comprise the diagonal bracing elements in the vertical plane of the structure, provided in both orthogonal directions of the steel frame. This part of the bracing system is responsible for transferring the applied lateral forces down to the foundation. They are usually provided by bracing two lines of columns from the ground up, just like a vertical cantilevering truss. In order for a vertical bracing to be effective, it must be provided at each floor level, throughout the entire height of the steel frame.
Just like concrete cores, it is always desirable to ensure that the disposition of vertical steel bracings within a steel frame is symmetrical. This ensures that the center of stiffness of the steel frame coincides with the point of application of lateral loads, so it can be conveniently assumed that all lateral forces are shared equally between the bracing systems in the orthogonal directions. Where a symmetrical arrangement is impossible, equal sharing of lateral forces cannot be assumed, and torsional effects are introduced.
There are several forms of vertical bracing available based on arrangement of the bracing elements. These are listed and briefly explained in the following sections.
• Single Diagonal Bracing
• X Bracing
• Chevron Bracing
Single Diagonal Bracing
Single diagonal bracing (Figure 3) is the simplest form of vertical bracing. It is created by introducing diagonal steel members between two lines of columns, thereby forming a vertical truss. In the single diagonal steel bracing system, the diagonal members are either in compression or tension as the case may be.
Figure 3: Single diagonal bracing
Single diagonal bracings can be considered relatively economical when compared to others, since they utilize a single diagonal member. However, because some members have to be designed for compression, buckling invariably leads to an increase in member sizing. Whether this increase in member sizing can be justified against the reduction in member quantity can only be answered through analysis.
X Bracing
The cross or X bracing utilizes two diagonal members intersecting each other (Figure 4). In an X bracing, one of the members acts as a redundant member, leaving the other member to be designed for tension only. Regardless of the direction of loading, an X brace would only resist tension. The implication, therefore, is that relative to the single diagonals, an X brace avoids sizing members for compression and might therefore be more economical.
Figure 4: X bracing
However, with regard to aesthetics, cross bracing might be undesirable as they frequently interrupt the position of openings such as windows and doors in the façade.
Chevron Bracing
A chevron bracing consists of diagonal members intersecting at the middle of a horizontal member to form a V shape (see Figure 5). Chevron bracings are known to offer very good elastic stiffness and strength. Unlike the X bracing, chevron bracings are architecturally functional because they allow for openings within the braced frame.
Figure 5: Chevron bracing
Designing a Bracing System
For most braced frames, the floor system is usually sufficient to act as a diaphragm without the need for additional horizontal bracing; hence this article will focus only on the design of the vertical bracing elements. The design of a vertical bracing has to consider the following.
• Wind Action
• Equivalent Horizontal Forces (EHF)
• Frame Stability
Wind Action
The primary lateral load on the bracing system comes from wind. Wind actions can be readily derived using the recommendations of BS EN 1991 (part 4). The derived wind actions are considered in combination with other actions. Typically, the load combination where wind is considered as the leading variable action would produce the more onerous analysis results in the design of a bracing system.
To apply wind pressure on a bracing system, the wind pressure is first converted to a force, which is shared amongst the braced bays resisting the wind. Then, for each bracing system, the wind force is applied at each floor level by proportioning the force based on storey height (see Figure 6).
Figure 6: Proportioning of wind load on a braced frame
Equivalent Horizontal Forces
BS EN 1993-1-1 requires global imperfections to be considered in the design of vertical steel bracings. This global imperfection can be considered through the application of equivalent horizontal forces, as it is much easier and more practical to introduce EHFs than to introduce a geometric imperfection into the model.
Equivalent horizontal forces, according to section 5.3.2(7), have the design value φ·N_Ed. The procedure for determining the value of the EHF has already been detailed in a previous article. See Application of Notional Loads on Structures.
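As a reminder of the shape of the calculation (a sketch based on EN 1993-1-1 clause 5.3.2, where the sway imperfection is φ = φ₀·α_h·α_m with φ₀ = 1/200; the function below is illustrative, not a design tool):

```python
import math

def sway_imperfection(h: float, m: int) -> float:
    """Global sway imperfection phi per EN 1993-1-1, 5.3.2 (sketch only)."""
    phi_0 = 1 / 200                                   # basic value
    alpha_h = min(max(2 / math.sqrt(h), 2 / 3), 1.0)  # height factor, 2/3 <= alpha_h <= 1.0
    alpha_m = math.sqrt(0.5 * (1 + 1 / m))            # factor for m columns in a row
    return phi_0 * alpha_h * alpha_m

# e.g. a 16 m tall frame with 4 columns in a row; the EHF at a level is phi * N_Ed
print(round(sway_imperfection(16.0, 4), 4))  # 0.0026
```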
Frame Stability
The sensitivity of the vertical bracing to sway must also be checked. Where the frame is 'sway sensitive,' the effects of the deformed geometry on the structure need to be allowed for within the analysis. This can be achieved through an amplification factor as provided in 5.2.2(2)B of BS EN 1993-1-1, or through a full-blown second-order analysis.
Steps in Designing a Bracing System
Prior to the design of a bracing system, the designer has to verify the reliability of the floor type in acting as a rigid diaphragm. Where an in-situ concrete slab, metal steel deck or timber floor is present, this can usually be assumed to satisfy the horizontal bracing requirement. In the absence of a rigid diaphragm, triangulated steel bracing must be provided in the floor plane.
Secondly, vertical bracings should be positioned within the structure so as to avoid torsional stresses. This can be achieved by ensuring that the center of stiffness of the steel frame coincides as much as possible with the point of application of the lateral loads.
The following design process is recommended when designing a bracing system within a braced multi-storey steel frame.
• Choose appropriate sizes for the steel beams and steel columns.
• Estimate the external lateral load from wind and the equivalent horizontal forces (EHF), sharing this appropriately between the bracing systems in the direction of the applied loads. For each vertical bracing, proportion the lateral load floor by floor, based on storey height.
• Calculate the total shear at the base of the braced frame and analyse the system as a cantilever truss.
• Size the bracing elements by designing members in compression for axial compression (See: Design of Steel Elements in Axial Compression to EC3) and members in tension for tension (See: Design of Steel Elements in Tension to EC3).
• Assess the frame stability, in terms of the parameter α_cr, using the combination of the wind load and EHF as the lateral forces on the frame, in conjunction with the vertical loads.
• Where the frame is found to be sensitive to sway, i.e. if α_cr < 10, an amplification factor is to be determined and all lateral forces are to be amplified. The bracing elements would then be rechecked or even resized for the amplified forces.
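The sway-sensitivity step can be sketched as follows (assuming the common amplification rule 1/(1 − 1/α_cr), which EN 1993-1-1 limits to α_cr ≥ 3; the function is illustrative only):

```python
def lateral_load_amplifier(alpha_cr: float) -> float:
    """Amplifier for first-order lateral forces (sketch of EN 1993-1-1 5.2.2)."""
    if alpha_cr >= 10:
        return 1.0                  # frame is non-sway; no amplification
    if alpha_cr < 3:
        # amplification approach not valid; a second-order analysis is needed
        raise ValueError("alpha_cr < 3: use a second-order analysis")
    return 1 / (1 - 1 / alpha_cr)

print(lateral_load_amplifier(12.0))           # 1.0
print(round(lateral_load_amplifier(5.0), 3))  # 1.25
```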
Worked Example
The figure shown below is the typical floor plan and braced frame of a 4-storey office building. The building is composed of a composite metal steel deck and steel beams, with steel columns designed in simple construction. Lateral stability is ensured through the disposition of two 6.0 m braced bays in each direction. Assuming the composite steel deck is sufficient to act as a rigid diaphragm, and using the data provided in Table 1, size the vertical bracings at base level only, when wind is acting on the 24 m side of the building.
Roof permanent actions: 1.5 kN/m²
Roof imposed actions: 0.6 kN/m²
Floor permanent actions: 4.5 kN/m²
Floor imposed actions: 3.0 kN/m²
Wind pressure: 1.8 kN/m²
Steel grade: S275
Table 1: Design Data
Design Value of Actions
When two or more variable actions act simultaneously, BS EN 1990 states that one of them must be considered the leading variable action at a time while the others are the accompanying variable actions. In this case we have two variable actions, the imposed loading and wind; hence two load combinations should be considered: one in which the floor imposed loading is taken as the leading variable action while wind is the accompanying action, and vice versa.
However, for most typical braced bays, the load combination where wind is the leading variable would usually produce more unfavourable results in the brace members; hence we can restrict ourselves to this alone. From Eqn. 6.10 of BS EN 1990, the design value of applied actions is given as:
n_s = 1.35g_k + 1.5\psi_0 q_k + 1.5w_k
Vertical Loads
@ Roof Level
n_{s,roof} = 1.35g_k + 1.5\psi_0 q_k = (1.35 \times 1.5) + (1.5 \times 0.5 \times 0.6) = 2.48\,kN/m^2
V_{Ed,roof} = n_{s,roof} A_{ref} = 2.48 \times (12 \times 24) = 714.2\,kN
@ Floor Level
n_{s,floor} = 1.35g_k + 1.5\psi_0 q_k = (1.35 \times 4.5) + (1.5 \times 0.7 \times 3.0) = 9.23\,kN/m^2
V_{Ed,floor} = n_{s,floor} A_{ref} = 9.23 \times (12 \times 24) = 2658.2\,kN
Wind Loads
w_d = 1.5w_k = 1.5 \times 1.8 = 2.7\,kN/m^2
The total wind force acting parallel to the length of the building:
F_d = w_d A_{ref} = 2.7 \times (24 \times 16) = 1036.8\,kN
This wind force is shared between the braced bays; since there are two braced bays, the wind force carried by each is one-half the total value (1036.8/2 = 518.4 kN).
The wind force can be distributed as point loads at floor levels in proportion to the storey height. In this case:
@Roof=\frac { 2 }{ 16 } \times 518.4=64.8kN
@Floors=\frac { 4 }{ 16 } \times 518.4=129.6kN
@Base=\frac { 2 }{ 16 } \times 518.4=64.8kN
At base level, the wind force can be assumed to be carried in shear by the ground floor slab, hence its value is ignored in the frame analysis.
Equivalent Horizontal Forces
Equivalent horizontal forces are determined using the expression φ[0]α[h]α[m]V[Ed], where φ[0] = 1/200. For simplicity, the reduction factors α[h] and α[m] are ignored here; this is conservative.
Equivalent horizontal forces shared between each braced bay:
@Roof=\frac { 1 }{ 200 } \times 714.2\times 0.5=1.79kN
@Floors=\frac { 1 }{ 200 } \times 2658.2\times 0.5=6.65kN
Analysis of Braced Bay
A computer analysis of the bracing system can be performed to obtain the member forces. Alternatively, hand calculations can be carried out by analysing the braced bay as a cantilever truss. However, since we are interested in sizing the bracing at base level only, simply resolving forces horizontally at ground level is sufficient to calculate the force in the lowest (most highly loaded) bracing member, as shown in Figure 8.
Figure 8: Loading of braced frame
Design value of base shear = 64.8+(3×129.6) + 1.79+(3×6.65) = 475.3kN
Resolving the forces into components, with the brace inclined at θ = tan⁻¹(4.0/6.0) = 33.7° to the horizontal, the axial force in the bracing member at base level is given as:
N_{Ed} = \frac{475.3}{\cos 33.7^{\circ}} = 571.3\,kN\,(C)
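The load takedown above can be reproduced as a short script, used here purely as a cross-check on the hand calculation; every number comes from the worked example itself:

```python
# Cross-check of the worked example's base shear and brace force.
import math

storey_height = 4.0   # m
bay_width = 6.0       # m (braced bay)
w_d = 1.5 * 1.8       # design wind pressure, kN/m^2

# Factored wind force on the 24 m face, shared between two braced bays
F_per_bay = w_d * (24 * 16) / 2            # 518.4 kN

# Distribute to levels in proportion to storey height (2/16 roof, 4/16 floors)
wind_roof = 2 / 16 * F_per_bay             # 64.8 kN
wind_floor = 4 / 16 * F_per_bay            # 129.6 kN

# Equivalent horizontal forces (phi = 1/200, reduction factors ignored), per bay
ehf_roof = 714.2 / 200 / 2                 # 1.79 kN
ehf_floor = 2658.2 / 200 / 2               # 6.65 kN

# Base shear: roof level + three floor levels (base-level wind goes into the slab)
V_base = wind_roof + 3 * wind_floor + ehf_roof + 3 * ehf_floor

# Brace force: resolve along the diagonal, theta = atan(4/6)
theta = math.atan(storey_height / bay_width)
N_Ed = V_base / math.cos(theta)

print(round(V_base, 1), round(N_Ed, 1))   # → 475.3 571.3
```

The script reproduces the design base shear of 475.3 kN and the brace axial force of 571.3 kN.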
Member Verification
From the analysis, the axial force in the bracing member is compressive, so ordinarily this member would be designed for compression only. However, when wind is applied in the opposite direction, the same bracing member will be loaded in tension. The bracing member is therefore designed to withstand tensile and compressive forces of the same magnitude.
Try a 168.3 x 12.5 mm CHS section:
Outside diameter, d = 168.3 mm; thickness, t = 12.5 mm; radius of gyration about both axes, i[y] = i[z] = 5.53 cm; area of section, A = 61.2 cm^2
Tensile resistance
Verify: \frac{N_{Ed}}{N_{t,Rd}} \le 1.0
N_{t,Rd} = \frac{A f_y}{\gamma_{M0}} = \frac{6120 \times 275}{1.0} = 1683\,kN
\frac{N_{Ed}}{N_{t,Rd}} = \frac{571.3}{1683} = 0.34 < 1.0 \quad OK
Flexural buckling resistance
For design against compression, the flexural buckling resistance controls.
Verify: \frac{N_{Ed}}{N_{b,Rd}} \le 1.0
Assuming no intermediate restraint, the effective length of the brace member is taken as the actual length.
l_{cr} = \sqrt{4000^2 + 6000^2} = 7211\,mm
For a hot-finished CHS section, buckling curve a applies, hence α = 0.21.
\bar{\lambda} = \frac{l_{cr}}{i} \cdot \frac{1}{93.9\varepsilon} = \frac{7211}{55.3} \cdot \frac{1}{86.8} = 1.5
(where ε = √(235/275) = 0.92, so 93.9ε = 86.8)
\phi = 0.5\left[ 1 + \alpha\left( \bar{\lambda} - 0.2 \right) + \bar{\lambda}^2 \right] = 0.5\left[ 1 + 0.21\left( 1.5 - 0.2 \right) + 1.5^2 \right] = 1.76
\chi = \frac{1}{\phi + \sqrt{\phi^2 - \bar{\lambda}^2}} = \frac{1}{1.76 + \sqrt{1.76^2 - 1.5^2}} = 0.37
N_{b,Rd} = \chi \frac{A f_y}{\gamma_{M1}} = 0.37 \times \frac{6120 \times 275}{1.0} = 622.7\,kN
\frac{N_{Ed}}{N_{b,Rd}} = \frac{571.3}{622.7} = 0.92 < 1.0 \quad OK
Use a 168.3 x 12.5 Circular Hollow Section
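For completeness, the flexural buckling check can also be scripted. This is a sketch of the hand calculation above (buckling curve a, hot-finished CHS); the small difference from the quoted 622.7 kN comes only from rounding χ to 0.37 in the hand calculation:

```python
# EC3 flexural buckling check for the trial 168.3 x 12.5 CHS brace.
import math

N_Ed = 571.3          # design axial force, kN
A = 6120.0            # section area, mm^2
f_y = 275.0           # yield strength, N/mm^2
i = 55.3              # radius of gyration, mm
l_cr = math.hypot(4000.0, 6000.0)   # effective length = brace length, mm

eps = math.sqrt(235.0 / f_y)
lam = (l_cr / i) / (93.9 * eps)               # non-dimensional slenderness
alpha = 0.21                                  # imperfection factor, curve a
phi = 0.5 * (1 + alpha * (lam - 0.2) + lam**2)
chi = 1 / (phi + math.sqrt(phi**2 - lam**2))  # reduction factor

N_b_Rd = chi * A * f_y / 1.0 / 1000.0         # buckling resistance, kN
utilisation = N_Ed / N_b_Rd                   # ≈ 0.91 (0.92 with rounded hand values)
print(round(N_b_Rd, 1), round(utilisation, 2))
```

Keeping full precision throughout gives χ ≈ 0.371 and a utilisation just over 0.91, confirming the section is adequate.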
Frame Stability
Finally, having sized the brace section, a check of the frame's susceptibility to second-order effects must be carried out. If the frame is found to be sensitive to sway, the bracing members have to be resized for the increased lateral forces, either through the use of amplification factors or a full second-order analysis.
See: Design of Steel Trusses to Eurocode3
Sources & Citations
• D. G. Brown, D. C. Iles, E. Yandzio. Steel Building Design: Medium Rise Braced Frames, in accordance with Eurocodes and the UK National Annexes. Steel Construction Institute.
• M. E. Brettle, D. G. Brown. Steel Building Design: Worked Examples for Students, in accordance with Eurocodes and the UK National Annexes. Steel Construction Institute.
• BS EN 1993: Design of steel structures – Part 1-1: General rules and rules for buildings.
• UK National Annex to BS EN 1993: Design of steel structures – Part 1-1: General rules and rules for buildings.
How to use the TODAY function
What is the TODAY function?
The TODAY function returns the Excel date (serial number) of the current date.
Note! This function is volatile!
What is a volatile function?
The TODAY function is volatile: it recalculates every time the worksheet is recalculated. This may slow down your workbook considerably if many formulas depend on the TODAY function, or if you have many volatile functions in your worksheet or workbook.
When is the worksheet calculated?
Cells containing non-volatile functions are recalculated only when their inputs change or when you force a recalculation; volatile functions, however, are recalculated each time you type in a cell and press Enter.
Can you stop recalculating a worksheet?
Yes, you can change a setting to manual recalculations.
1. Go to tab "Formulas".
2. Press with left mouse button on the "Calculation Options" button, a popup menu appears.
3. Press with mouse on "Manual".
This stops the automatic recalculations.
How to force a recalculation?
Pressing F9 key will recalculate or refresh all the formulas and values in every worksheet of every workbook you have open.
Pressing Shift+F9 will only recalculate the formulas and values on the single worksheet you're currently viewing or active.
Pressing Ctrl+Alt+F9 is the quickest way to force a full recalculation of everything in all open workbooks, whether or not anything has changed: Excel ignores its dependency tracking and recomputes every formula.
Are there more volatile functions in Excel?
Yes. OFFSET, TODAY, RAND, NOW among others.
Function Syntax Description
OFFSET OFFSET(reference, rows, cols, [height], [width]) Returns a range offset from a reference cell.
NOW NOW() Returns the current time.
RAND RAND() Returns a random decimal between 0 and 1.
RANDARRAY RANDARRAY([rows], [columns], [min], [max], [whole_number]) Returns an array with random numbers.
RANDBETWEEN RANDBETWEEN(bottom, top) Returns a random whole number between bottom and top
Note that conditional formatting is super-volatile, meaning it is recalculated as you scroll through a worksheet.
What are dates in Excel?
Dates are stored numerically but formatted to display in human-readable date/time formats; this lets Excel use dates directly in calculations.
For example, dates are stored as sequential serial numbers, with 1 being January 1, 1900 by default. The integer part (whole number) represents the date; the decimal part represents the time.
This allows dates to be formatted to display in many date/time formats like mm/dd/yyyy or dd/mm/yyyy and still take part in calculations, as long as the date is stored numerically in the cell.
You can try this yourself: type 10000 in a cell, press CTRL + 1, change the cell's formatting to Date, and press OK. The cell now shows 5/18/1927.
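The serial-number arithmetic can be sketched in Python. This is a rough model: for serials of 61 and above, offsetting from December 30, 1899 absorbs Excel's historical quirk of treating 1900 as a leap year:

```python
from datetime import date, timedelta

# Excel's default date system: serial 1 = January 1, 1900. Excel wrongly
# treats 1900 as a leap year, so for serials >= 61 the true calendar date
# is obtained by offsetting from 1899-12-30.
EPOCH = date(1899, 12, 30)

def serial_to_date(serial: int) -> date:
    """Convert an Excel serial number (>= 61) to a calendar date."""
    return EPOCH + timedelta(days=serial)

def date_to_serial(d: date) -> int:
    """Convert a calendar date (after February 1900) to an Excel serial number."""
    return (d - EPOCH).days

print(serial_to_date(10000))              # → 1927-05-18, as in the article
print(date_to_serial(date(2018, 2, 2)))   # → 43133
```

Both values match the article: serial 10000 displays as 5/18/1927, and 2/2/2018 is serial 43133.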
TODAY Function Syntax
TODAY()
TODAY Function Arguments
The TODAY function has no arguments.
TODAY Function example
Formula in cell B4:
=TODAY()
What does the TODAY function return?
The Excel date the TODAY function returns is a serial number that Excel recognizes and can be filtered, sorted and used in other date calculations.
Excel dates are actually serial numbers formatted as dates, 1/1/1900 is 1 and 2/2/2018 is 43133. There are 43132 days between 2/2/2018 and 1/1/1900.
Filter the upcoming dates for next week
This example demonstrates a formula that extracts records from cell range B3:C18 if the dates fall within the next seven days. The formula updates automatically as the current date changes.
Excel 365 formula in cell E5:
=FILTER(B3:C18, (B3:B18<=TODAY()+7)*(B3:B18>=TODAY()))
This formula works only in Excel 365: the FILTER function may return multiple values that automatically spill to cells below.
Explaining formula
Step 1 - Add 7 days to today's date
The plus sign (+) lets you perform addition in an Excel formula. In this example TODAY() returns serial 45242 (11/12/2023), so TODAY()+7
equals 45249 (11/19/2023)
Step 2 - Check dates no later than today plus 7 days
The less than sign, greater than sign, and equal sign are comparison operators that let you create logical expressions. A logical expression returns a boolean value, TRUE or FALSE.
{45249; 45244; 45234; 45245; 45245; 45251; 45241; 45241; 45252; 45247; 45241; 45251; 45252; 45253; 45240; 45240}<=45249
and returns
{TRUE; TRUE; TRUE; TRUE; TRUE; FALSE; TRUE; TRUE; FALSE; TRUE; TRUE; FALSE; FALSE; FALSE; TRUE; TRUE}
Step 3 - Check dates later than or equal to today
{45249; 45244; 45234; 45245; 45245; 45251; 45241; 45241; 45252; 45247; 45241; 45251; 45252; 45253; 45240; 45240}>=45242
and returns
{TRUE; TRUE; FALSE; TRUE; TRUE; TRUE; FALSE; FALSE; TRUE; TRUE; FALSE; TRUE; TRUE; TRUE; FALSE; FALSE}
Step 4 - Multiply boolean values to perform AND logic
The asterisk character lets you multiply numbers and boolean values in an Excel formula. The numerical equivalent of the boolean value TRUE is 1 and of FALSE is 0 (zero).
{TRUE; TRUE; TRUE; TRUE; TRUE; FALSE; TRUE; TRUE; FALSE; TRUE; TRUE; FALSE; FALSE; FALSE; TRUE; TRUE} * {TRUE; TRUE; FALSE; TRUE; TRUE; TRUE; FALSE; FALSE; TRUE; TRUE; FALSE; TRUE; TRUE; TRUE; FALSE; FALSE}
and returns
{1; 1; 0; 1; 1; 0; 0; 0; 0; 1; 0; 0; 0; 0; 0; 0}
Step 5 - Filter records
The FILTER function extracts values/rows based on a condition or criteria.
Function syntax: FILTER(array, include, [if_empty])
The FILTER function keeps the rows where the mask is 1 and returns the following array in cell E5, which spills to cells below as far as needed.
{45249, "#001"; 45244, "#007"; 45245, "#016"; 45245, "#012"; 45247, "#009"}
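To see the whole pipeline at once, here is the same next-seven-days mask applied to the article's serial numbers in Python. Note the worked arrays are only internally consistent when TODAY() equals serial 45242, so that value is fixed here:

```python
# Simulating the FILTER next-seven-days logic on the article's serial dates.
serials = [45249, 45244, 45234, 45245, 45245, 45251, 45241, 45241,
           45252, 45247, 45241, 45251, 45252, 45253, 45240, 45240]

today = 45242  # TODAY() at the time the arrays were captured

# Element-wise AND of the two comparisons, exactly as the Excel multiplication does
mask = [(s <= today + 7) and (s >= today) for s in serials]
upcoming = [s for s, keep in zip(serials, mask) if keep]

print(upcoming)   # → [45249, 45244, 45245, 45245, 45247], the five kept rows
```

The five surviving serials correspond to the five records (#001, #007, #016, #012, #009) shown above.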
Functions in 'Date and Time' category
The TODAY function is one of 22 functions in the 'Date and Time' category.
Linear Measurement – Definition with Examples
Updated on January 9, 2024
Welcome to another intriguing session at Brighterly, the one-stop destination for children to fall in love with mathematics! Today, we unravel the concept of Linear Measurement. If you’ve ever
pondered about the length of your homework book, the width of your table tennis bat, or the distance you cover while cycling to the park, then you’ve already dabbled in linear measurement, perhaps
without even realizing it!
At its core, linear measurement is the method we employ to ascertain the distance between two points, usually along a straight line. It’s one of the most fundamental mathematical concepts, enabling
us to quantify our physical world in terms of lengths, heights, and distances. Through the concept of linear measurement, the abstract world of numbers and units intertwines with our tangible
reality, giving shape to our understanding of space.
What Is Linear Measurement?
Let’s start with an understanding of Linear Measurement. The term can sound a bit complicated, but it’s an integral part of our day-to-day lives. When we measure the length, width, or height of
anything, we are using linear measurement. This concept helps us understand the world around us, allowing us to precisely determine the distance between two points. Linear measurements are
everywhere: in the dimensions of your school book, the distance you travel to school, or even the length of your favorite toy.
In essence, linear measurement involves the process of determining the length – or linearity – of an object, which could be a physical entity, a geographical space, or a spatial relationship in
mathematics. It is called ‘linear’ because the measurement is along a straight line (a line in a linear direction) from one end to the other.
Definition of Linear Measurement
When we get into the definition of linear measurement, it can be described as the act of measuring the length of an object or distance between two points. It’s a fundamental aspect of spatial
calculation, crucial to various fields such as physics, engineering, architecture, astronomy, and of course, mathematics. It’s even vital in the world of sports, where the measure of distance covered
or the span of certain equipment can impact the game significantly.
Linear measurement doesn’t concern itself with volume, area, weight, or time, but purely with distance. For example, when you measure the length of a pencil using a ruler, you’re executing a linear
Definition of Units in Linear Measurement
Linear measurements aren’t complete without mentioning units of measurement. These are standard quantities used to express the amount of linear distance. Some of the commonly used units include
millimeters (mm), centimeters (cm), meters (m), kilometers (km), inches (in), feet (ft), and miles (mi).
Units can be categorized into two systems: the metric system (millimeters, centimeters, meters, kilometers) and the imperial system (inches, feet, yards, and miles). Depending on where you are in the
world, one system may be more commonly used than the other.
Properties of Linear Measurement
Let’s delve into some of the key properties of linear measurement. First, it’s a scalar quantity, which means it only has magnitude and no direction. Second, it is invariant, which means that
regardless of the position or orientation of the object, the measurement remains the same. Finally, linear measurement is transitive, implying that if point A is a certain distance from point B, and
point B is the same distance from point C, then point A is the same distance from point C.
These properties help us understand the basics of linear measurement, the foundation on which we measure the world around us.
Properties of Units in Linear Measurement
Just as linear measurements have properties, so too do the units in linear measurement. Units are standardized, meaning they have an agreed-upon value that does not change. This standardization
allows us to accurately communicate measurements.
Also, units are scalable. This means they can be multiplied or divided into larger or smaller units while still retaining their value. For instance, 1 meter can be broken down into 100 centimeters,
or scaled up to be a fraction of a kilometer. This scalability allows us to measure objects of different sizes with precision.
Differences Between Different Units of Linear Measurement
The differences between various units of linear measurement come down to their sizes and the systems from which they originate. A millimeter is smaller than a centimeter, which is smaller than a
meter, and so forth. This is true for both the metric and the imperial system, where, for instance, an inch is smaller than a foot, which is smaller than a yard.
Each system is suited to specific applications. The metric system, with its logical base-10 divisions, is used worldwide in scientific and everyday measurements. Conversely, the imperial system, with
its more varied conversions, is primarily used in the United States for day-to-day measurements.
Equations Involving Linear Measurements
Linear measurements aren’t only used for measuring objects; they’re also crucial in creating equations involving linear measurements. You’ve probably encountered these equations in your math classes,
such as calculating the perimeter of a square (P = 4a, where a is the length of one side) or finding the distance between two points on a graph (d = sqrt[(x2−x1)² + (y2−y1)²]).
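Both equations translate directly into code. Here is a minimal sketch (the function names are illustrative, not from any particular library):

```python
import math

def square_perimeter(a: float) -> float:
    """Perimeter of a square with side length a: P = 4a."""
    return 4 * a

def distance(x1: float, y1: float, x2: float, y2: float) -> float:
    """Distance between two points: d = sqrt((x2-x1)^2 + (y2-y1)^2)."""
    return math.hypot(x2 - x1, y2 - y1)

print(square_perimeter(5))    # → 20
print(distance(0, 0, 3, 4))   # → 5.0
```

The second call reproduces the familiar 3-4-5 right triangle: the straight-line distance from (0, 0) to (3, 4) is 5.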
Writing Equations Involving Linear Measurements
When writing equations involving linear measurements, it’s important to identify the variable representing the linear measurement and understand how it interacts with other variables in the equation.
These equations may describe a relationship between different lengths, such as in geometry, or represent a conversion between different units of length, like converting feet to miles.
Converting Between Different Units of Linear Measurement
Converting between different units of linear measurement involves understanding the relationship between the units. For instance, knowing that 1 kilometer equals 1,000 meters allows you to convert
between these units. Conversion is particularly important when dealing with measurements from different systems, like translating a length from meters (metric system) to feet (imperial system).
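Conversion boils down to multiplying by a known factor. A minimal sketch follows; the table uses the standard definitions (1 km = 1,000 m, 1 in = 2.54 cm, 1 ft = 0.3048 m), and the function name is illustrative:

```python
# Each factor expresses one unit in terms of metres.
TO_METERS = {
    "mm": 0.001,
    "cm": 0.01,
    "m": 1.0,
    "km": 1000.0,
    "in": 0.0254,
    "ft": 0.3048,
    "yd": 0.9144,
    "mi": 1609.344,
}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert a length by passing through metres as a common base."""
    return value * TO_METERS[from_unit] / TO_METERS[to_unit]

print(convert(1, "km", "m"))    # → 1000.0
print(convert(1, "in", "cm"))   # ≈ 2.54
print(convert(3, "ft", "in"))   # ≈ 36.0
```

Routing every conversion through a single base unit keeps the table small: n units need only n factors rather than n² pairwise conversions.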
Practice Problems on Linear Measurements
To reinforce your understanding of linear measurement, let’s dive into some practice problems on linear measurements:
1. If the length of a rectangle is 5 cm and the width is 3 cm, what is the perimeter?
2. If you run a 5K race, how many meters have you run?
3. If a pencil is 7.5 inches long, how long is it in feet?
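Once you have tried the problems yourself, each answer reduces to a one-line calculation. A sketch for checking your work:

```python
# 1. Perimeter of a 5 cm x 3 cm rectangle: P = 2(l + w)
perimeter = 2 * (5 + 3)          # 16 cm

# 2. A 5K race in metres: 1 km = 1,000 m
race_m = 5 * 1000                # 5000 m

# 3. A 7.5-inch pencil in feet: 1 ft = 12 in
pencil_ft = 7.5 / 12             # 0.625 ft

print(perimeter, race_m, pencil_ft)   # → 16 5000 0.625
```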
Navigating through the world of mathematics, especially the realm of linear measurements, may seem daunting. But, as we’ve seen, these concepts are deeply intertwined with our daily experiences. At
Brighterly, we strive to demystify these seemingly complex mathematical concepts, making them accessible, fun, and relatable. As you go about your day, remember that each step you take, every page of
your book, and even the screen on which you’re reading this – they all resonate with the pulse of linear measurements. So, the next time you measure something, think about the units you’re using, the
properties at play, and the conversions you might need to make. After all, the world around us is a vast mathematical playground waiting to be explored!
Frequently Asked Questions on Linear Measurement
Lastly, let’s answer some of the most Frequently Asked Questions on Linear Measurement:
What is linear measurement?
Linear measurement is a method used to determine the distance between two points. This measurement usually happens along a straight line and can be used to measure various things such as the length,
width, or height of an object.
What are the units of linear measurement?
The units of linear measurement are the standardized quantities we use to express the result of a measurement. The most common units are millimeters (mm), centimeters (cm), meters (m), and kilometers
(km) in the metric system, and inches (in), feet (ft), yards (yd), and miles (mi) in the imperial system.
How do you convert between different units of linear measurement?
Converting between different units of linear measurement involves understanding the relationship between these units. For instance, there are 100 centimeters in a meter, so to convert a measurement
in meters to centimeters, you would multiply the number of meters by 100. Converting between units in the metric system and the imperial system requires knowing the conversion factors, such as 1 inch
equals 2.54 centimeters.
So, there you have it! Linear measurement demystified, right here at Brighterly. For more such fun and insightful lessons, stay connected with us. After all, at Brighterly, we illuminate the path of learning!
The Nest of Students: MATLAB
Carmine Vittoria ... Publisher: CRC Press; 2nd edition (November, 2023) - Language: English.
Future microwave, wireless communication systems, computer chip designs, and sensor systems will require miniature fabrication processes in the order of nanometers or less as well as the fusion of
various material technologies to produce composites consisting of many different materials. This requires distinctly multidisciplinary collaborations, implying that specialized approaches will not be
able to address future world markets in communication, computer, and electronic miniaturized products.
Anticipating that many students lack specialized simultaneous training in magnetism and magnetics, as well as in other material technologies, Magnetics, Dielectrics, and Wave Propagation with MATLAB®
Codes avoids application-specific descriptions, opting for a general point of view of materials per se. Specifically, this book develops a general theory to show how a magnetic system of spins is
coupled to acoustic motions, magnetoelectric systems, and superconductors. Phenomenological approaches are connected to atomic-scale formulations that reduce complex calculations to essential forms
and address basic interactions at any scale of dimensionalities. With simple and clear coverage of everything from first principles to calculation tools, the book revisits fundamentals that govern
magnetic, acoustic, superconducting, and magnetoelectric motions at the atomic and macroscopic scales, including superlattices. Constitutive equations in Maxwell’s equations are introduced via
general free energy expressions which include magnetic parameters as well as acoustic, magnetoelectric, semiconductor, and superconducting parameters derived from first principles. More importantly,
this book facilitates the derivation of these parameters, as the dimensionality of materials is reduced toward the microscopic scale, thus introducing new concepts. The deposition of ferrite films at
the atomic scale complements the approach toward the understanding of the physics of miniaturized composites. Thus, a systematic formalism of deriving the permeability or the magnetoelectric coupling
tensors from first principles, rather than from an ad hoc approach, bridges the gap between microscopic and macroscopic principles as applied to wave propagation and other applications.
MATLAB R2024a: Math. Graphics. Programming.
Saturday, April 06, 2024 Data Analysis , MATLAB , Programming , Softwares
MATLAB R2024a v24.1.0.2537033 x64 [Size: 12.375 GB] ... MathWorks is best known for its flagship product, MATLAB (short for Matrix Laboratory): one of the most advanced environments for numerical computation, an advanced fourth-generation programming language, and a tool for visualizing and plotting functions and data. The MATLAB icon and logo, which also serve as the company's trademark, are derived from a solution of the wave equation on an L-shaped membrane, obtained using special functions. Competitors of MATLAB include Mathematica, Maple, and Mathcad.
Facilities and features of the software MathWorks MATLAB:
– Performs a wide range of heavy and complex mathematical calculations
– Development environment for managing code, files, and data
– Tools for exploring methods to reach a solution faster
– A rich set of mathematical functions for linear algebra, statistics, Fourier analysis, optimization, filtering, numerical integration, and more
– Two- and three-dimensional graphics for plotting functions and visualizing data
– Design and build user interfaces, with interfaces to programming languages such as C, C++, and Java
– Tools for testing and measuring functions and charts precisely
– Signal, communications, image, and video processing
– Toolboxes for many engineering disciplines and applications, such as telecommunications, control, fuzzy logic, estimation, statistics, data acquisition, system simulation, neural networks, probability, and more
– Support for biological computing ...
Mastering MATLAB: A Comprehensive Journey Through Coding and Analysis
Saturday, March 02, 2024 Books , Codes , MATLAB
Kameron Hussain; Frahaan Hussain ... 220 pages - Publisher: Independently Published; (January, 2024) - Language: English.
Dive into the world of MATLAB with "Mastering MATLAB: A Comprehensive Journey Through Coding and Analysis," a definitive guide designed for both beginners and experienced users. This book serves as
an invaluable resource for engineers, scientists, and anyone interested in harnessing the power of MATLAB for numerical computation, data analysis, and algorithm development. Starting with the
basics, the book introduces you to the MATLAB environment, guiding you through its user-friendly interface and powerful tools. You'll learn to write clean, efficient MATLAB code, with a focus on
understanding syntax, functions, and the extensive libraries available. Each chapter builds upon the last, ensuring a gradual and solid grasp of concepts.
"Mastering MATLAB" is more than just a programming guide; it's a practical handbook for real-world applications. Delve into chapters dedicated to data visualization, matrix manipulations, and
statistical analysis, all crucial for data-driven projects. You'll encounter detailed examples and exercises that demonstrate how MATLAB can solve complex problems in engineering, science, and
mathematics. For advanced readers, the book delves into sophisticated topics such as GUI development, machine learning applications, and integrating MATLAB with other programming languages. This
section is particularly beneficial for professionals seeking to elevate their coding prowess and integrate MATLAB into their workflow for more efficient problem-solving and research.
Every concept is explained in-depth, accompanied by illustrative examples, making complex ideas accessible. Whether you're a student needing a comprehensive academic resource, a professional aiming
to enhance your technical skillset, or a hobbyist eager to explore computational mathematics, "Mastering MATLAB" is your go-to guide. Embrace the journey of mastering MATLAB and unlock a world of
possibilities in coding and analysis.
Machine Learning for Decision Makers 2nd Edition
Patanjali Kashyap ... 674 pages - Language: English - Publisher: Apress; 2nd edition (December, 2023).
This new and updated edition takes you through the details of machine learning to give you an understanding of cognitive computing, IoT, big data, AI, quantum computing, and more. The book explains
how machine learning techniques are used to solve fundamental and complex societal and industry problems.This second edition builds upon the foundation of the first book, revises all of the chapters,
and updates the research, case studies, and practical examples to bring the book up to date with changes that have occurred in machine learning. A new chapter on quantum computers and machine
learning is included to prepare you for future challenges. Insights for decision makers will help you understand machine learning and associated technologies and make efficient, reliable, smart, and
efficient business decisions. All aspects of machine learning are covered, ranging from algorithms to industry applications. Wherever possible, required practical guidelines and best practices
related to machine learning and associated technologies are discussed. Also covered in this edition are hot-button topics such as ChatGPT, superposition, quantum machine learning, and reinforcement
learning from human feedback (RLHF) technology. Upon completing this book, you will understand machine learning, IoT, and cognitive computing and be prepared to cope with future challenges related to
machine learning.
What You Will Learn: Master the essentials of machine learning, AI, cloud, and the cognitive computing technology stack + Understand business and enterprise decision-making using machine learning +
Become familiar with machine learning best practices + Gain knowledge of quantum computing and quantum machine learning.
Fractal Patterns with MATLAB
Santo Banerjee, A. Gowrisankar, Komandla Mahipal Reddy ... 100 pages - Language: English - Publisher: Springer; (January, 2024).
This book presents the iterative beauty of fractals and fractal functions graphically with the aid of MATLAB programming. The fractal images generated using the MATLAB codes provide visual delight
and highly encourage the fractal lovers for creative thinking. The book compiles five cutting-edge research chapters, each with state-of-the art fractal illustrations. It starts with the fundamental
theory for the construction of fractal sets via the deterministic iteration algorithm. Incorporating the theoretical base, fractal illustrations of elementary fractal sets are provided with the
explicit MATLAB code. The book gives examples of MATLAB codes to present the fractal surfaces.
This book is intended for research beginners as well as professionals in the field of fractal analysis. Because it covers everything from basic fractals, such as the Sierpinski triangle, to advanced fractal functions, all with explicit MATLAB code, the illustrations should benefit even readers outside the field. It is a useful resource for anyone beginning research in fractals and related fields.
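The deterministic iteration algorithm mentioned above is easy to sketch outside MATLAB as well. The following Python snippet (an illustration, not the book's code) applies the three affine contractions of the Sierpinski triangle IFS to the whole point set at every iteration:

```python
import numpy as np

# Deterministic iteration for the Sierpinski triangle IFS: three
# contractions, each halving distances toward one triangle vertex.
VERTICES = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

def iterate(points, n_iter=6):
    """Apply every IFS map w_i(x) = (x + v_i) / 2 to every point, each iteration."""
    for _ in range(n_iter):
        points = np.concatenate([(points + v) / 2.0 for v in VERTICES])
    return points

# Starting from a single point, n iterations give 3**n points
# approximating the attractor (here 3**6 = 729 points).
attractor = iterate(np.array([[0.0, 0.0]]), n_iter=6)
```

Plotting `attractor` as a scatter plot reveals the familiar triangle-within-triangle pattern; deeper iteration counts sharpen the picture at the cost of exponentially more points.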
Applied Numerical Methods Using MATLAB
R. V. Dukkipati ... 1728 pages - Language: English - Publisher: Mercury Learning and Information; (April, 2023).
The book is designed to cover all major aspects of applied numerical methods, including numerical computations, solution of algebraic and transcendental equations, finite differences and
interpolation, curve fitting, correlation and regression, numerical differentiation and integration, matrices and linear system of equations, numerical solution of ordinary differential equations,
and numerical solution of partial differential equations. MATLAB is incorporated throughout the text and most of the problems are executed in MATLAB code. It uses a numerical problem-solving
orientation with numerous examples, figures, and end of chapter exercises. Presentations are limited to very basic topics to serve as an introduction to more advanced topics.
Features: Integrates MATLAB throughout the text + Includes over 600 fully-solved problems with step-by-step solutions + Limits presentations to basic concepts of solving numerical methods.
Table of Contents: 1: Numerical Computations. 2: Linear System of Equations. 3: Solution of Algebraic and Transcendental Equations. 4: Numerical Differentiation. 5: Finite Differences and
Interpolation. 6: Curve Fitting, Regression, and Correlation. 7: Numerical Integration. 8: Numerical Solution of Ordinary Differential Equations. 9: Direct Numerical Integration Methods. 10: MATLAB
Basics. 11: Optimization. 12: Partial Differential Equations. Appendices: A. Answers to Selected Problems. B. Bibliography. C. Partial Fraction Expansions. D. Basic Engineering Mathematics. E.
Cramer’s Rule. F. MATLAB Built-In Functions. G. MATLAB M-File Functions. Index.
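As a small taste of the numerical-integration material listed in the table of contents, here is a minimal composite trapezoidal rule (a Python sketch for illustration; the book itself works in MATLAB):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals on [a, b]."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2.0
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# Integral of sin over [0, pi] is exactly 2; the error shrinks as O(h^2).
approx = trapezoid(math.sin, 0.0, math.pi, 100)
```

With 100 subintervals the result agrees with the exact value 2 to better than three decimal places, consistent with the rule's second-order accuracy.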
Artificial Neural Network and Machine Learning using MATLAB
Saturday, September 02, 2023 Artificial Intelligence , Courses/Lessons , Machine Learning , MATLAB , Neural Networks
Teacher: Nastaran Reza Nazar Zadeh - Language: English - Videos: 50 - Duration: 4 hours and 11 minutes.
Artificial Neural Network and Machine Learning using MATLAB: This course is uniquely designed to suit both experienced developers seeking to make the jump to machine learning and complete beginners, teaching machine learning and artificial neural networks from the ground up. In this course, we introduce a comprehensive training on multilayer perceptron (MLP) neural networks
in MATLAB, in which, in addition to reviewing the theories related to MLP neural networks, the practical implementation of this type of network in MATLAB environment is also fully covered. MATLAB
offers specialized toolboxes and functions for working with Machine Learning and Artificial Neural Networks which makes it a lot easier and faster for you to develop a NN. At the end of this course,
you’ll be able to create a Neural Network for applications such as classification, clustering, pattern recognition, function approximation, control, prediction, and optimization.
What you’ll learn: Develop a multilayer perceptron (MLP) neural network in MATLAB using the toolbox + Apply Artificial Neural Networks in practice + Build an Artificial Neural Network model + Knowledge on
Fundamentals of Machine Learning and Artificial Neural Network + Understand Optimization methods + Understand the Mathematical Model of a Neural Network + Understand Function approximation
methodology + Make powerful analysis + Knowledge on Performance Functions + Knowledge on Training Methods for Machine Learning.
A Practical Guide to Error-Control Coding Using MATLAB
Yuan Jiang ... 281 pages - Publisher: Artech House; (September, 2010) - Language: English - ISBN-10: 1608070883 - ISBN-13: 978-1608070886.
This practical resource provides engineers with a comprehensive understanding of error control coding, an essential and widely applied area in modern digital communications. The goal of error control
coding is to encode information in such a way that even if the channel (or storage medium) introduces errors, the receiver can correct the errors and recover the original transmitted information.
This book covers the most useful modern and classic codes, including block, Reed-Solomon, convolutional, turbo, and LDPC codes. Professionals will find clear guidance on code construction, decoding algorithms, and error-correcting performance. Moreover, this unique book introduces computer simulations integrally to help readers master key concepts. Including a companion DVD with MATLAB
programs and supported with over 540 equations, this hands-on reference provides an in-depth treatment of a wide range of practical implementation issues. DVD is included! It contains carefully
designed MATLAB programs that practitioners can apply to their projects in the field.
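The core idea of error-control coding described above can be seen in miniature with a 3-fold repetition code. This Python sketch is illustrative only (the book's DVD programs are in MATLAB and cover far stronger codes); it corrects any single bit flip per transmitted group:

```python
def encode(bits):
    """Repetition code: transmit each information bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each group of three received bits."""
    out = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

msg = [1, 0, 1, 1]
codeword = encode(msg)
codeword[4] ^= 1                 # the channel flips one bit
assert decode(codeword) == msg   # a single error per group is corrected
```

Repetition coding is wasteful (rate 1/3 for single-error correction per group); the block, convolutional, turbo, and LDPC codes treated in the book achieve far better trade-offs between redundancy and error-correcting power.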
An Introduction to Finite Element Analysis Using Matlab Tools
Shuvra Das ... 313 pages - Language: English - Publisher: Springer (March, 2023) - AmazonSIN: B0BSF5M5RL.
This book is an attempt to develop a guide for the user who is interested in learning the method by doing. There is enough discussion of some of the basic theory so that the user can get a broad
understanding of the process. And there are many examples with step-by-step instructions for the user to quickly develop some proficiency in using FEA. We have used Matlab and its PDE toolbox for the
examples in this text. The syntax and the modeling process are easy to understand and a new user can become productive very quickly. The PDE toolbox, just like any other commercial software, can
solve certain classes of problems well but is not capable of solving every type of problem. For example, it can solve linear problems but is not capable of handling non-linear problems. Being aware
of the capabilities of any tool is an important lesson for the user and we have, with this book, tried to highlight that lesson as well.
MATLAB for Engineering and the Life Sciences 2nd Edition
Joseph Tranquillo ... 189 pages - Language: English - Publisher: Springer; 2nd edition (March, 2023) - AmazonSIN: B0BW3RK1Z8.
This book is a self-guided tour of MATLAB for engineers and life scientists. It introduces the most commonly used programming techniques through biologically inspired examples. Although the text is
written for undergraduates, graduate students and academics, as well as those in industry, will find value in learning MATLAB.
The book takes the emphasis off of learning syntax so that the reader can focus more on algorithmic thinking. Although it is not assumed that the reader has taken differential equations or a linear
algebra class, there are short introductions to many of these concepts. Following a short history of computing, the MATLAB environment is introduced. Next, vectors and matrices are discussed,
followed by matrix-vector operations. The core programming elements of MATLAB are introduced in three successive chapters on scripts, loops, and conditional logic. The last three chapters outline how
to manage the input and output of data, create professional-quality graphics, and find and use MATLAB toolboxes. Throughout, biomedical and life science examples are used to illustrate MATLAB's capabilities. The book focuses the reader on algorithmic thinking and not on the learning of syntax; emphasizes the foundations and the ability to efficiently integrate and operate on vectors and matrices; and demonstrates MATLAB as a powerful all-purpose application tool, concentrating on the idea of a script and then the concept of a function.
Application of Numerical Methods in Engineering Problems using MATLAB
M. S. H. Al-Furjan, M. Rabani Bidgoli, Reza Kolahchi, A. Farrokhian, M. R. Bayati ... 297 pages - Publisher: CRC Press; (January, 2023) - Language: English - ISBN-10: 1032393912 - ISBN-13:
Application of Numerical Methods in Engineering Problems using MATLAB(R) presents an analysis of structures using numerical methods and mathematical modeling. This structural analysis also includes
beam, plate, and pipe elements, and also examines deflection and frequency, or buckling loads. The various engineering theories of beams/plates/shells are comprehensively presented, and the
relationships between stress and strain and the governing equations of the structure are extracted. Numerical methods for solving the governing equations fall into two general types: methods based on derivatives and methods based on integrals. Derivative-based methods have the advantages of flexibility in modeling boundary conditions, low analysis time, and a very high degree of accuracy. Therefore,
the book explains numerical methods based on derivatives, especially the differential quadrature method. Features: Examines the application of numerical methods to obtain the deflection, frequency,
and buckling loads Discusses the application of numerical methods for solving motion equations Includes numerous practical and applicable examples throughout.
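The derivative-based approach described above (of which differential quadrature is a high-order example) can be illustrated with a simple differentiation matrix. This Python sketch (the grid size and stencil are our illustrative choices, not the book's) builds second-order central differences on a uniform grid:

```python
import numpy as np

# Derivative-based methods approximate d/dx with a differentiation matrix D,
# so that (D @ f) ≈ f'(x) at the grid points. Here: second-order central
# differences in the interior, one-sided differences at the two endpoints.
def diff_matrix(x):
    n, h = len(x), x[1] - x[0]
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -1 / (2 * h), 1 / (2 * h)
    D[0, 0], D[0, 1] = -1 / h, 1 / h          # forward difference
    D[-1, -2], D[-1, -1] = -1 / h, 1 / h      # backward difference
    return D

# Sanity check: differentiating sin should give (approximately) cos.
x = np.linspace(0.0, np.pi, 200)
err = float(np.max(np.abs(diff_matrix(x) @ np.sin(x) - np.cos(x))))
```

Differential quadrature follows the same pattern but computes the weights from a global polynomial basis, which is what gives it its very high accuracy on coarse grids.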
Getting Started in Mathematical Life Sciences
Sunday, April 09, 2023 Books , MATLAB , Programming
Makoto Sato ... 213 pages - Language: English - Publisher: Springer; (January, 2023) - ISBN-10: 9811982562 - ISBN-13: 978-9811982569.
This book helps the reader make use of the mathematical models of biological phenomena starting from the basics of programming and computer simulation. Computer simulations based on a mathematical
model enable us to find a novel biological mechanism and predict an unknown biological phenomenon. Mathematical biology could further expand the progress of modern life sciences. Although many
biologists are interested in mathematical biology, they do not have experience in mathematics and computer science. An educational course that combines biology, mathematics, and computer science is
very rare to date. Published books on mathematical biology usually explain the theories of established mathematical models, but they do not provide a practical explanation of how to solve the differential equations included in the models, or of how to establish such a model that fits a phenomenon of interest.
MATLAB is an ideal programming platform for beginners in computer science. This book starts from the very basics of how to write code in MATLAB (or Octave), explains how to solve ordinary and partial differential equations, and shows how to apply mathematical models to various biological phenomena such as diabetes, infectious diseases, and heartbeats. Some of the models are original, newly developed for this book. Because MATLAB codes are embedded and explained throughout the book, it is easy to follow along with the text. In the final chapter, the book focuses
on the mathematical model of the proneural wave, a phenomenon that guarantees the sequential differentiation of neurons in the brain. This model was published as a paper from the author’s lab (Sato
et al., PNAS 113, E5153, 2016), and was intensively explained in the book chapter “Notch Signaling in Embryology and Cancer”, published by Springer in 2020. This book provides the reader who has a
biological background with invaluable opportunities to learn and practice mathematical biology.
Intelligent Reliability Analysis Using MATLAB and AI
Sunday, January 15, 2023 Artificial Intelligence , Books , Machine Learning , MATLAB
Cherry Bhargava, Pardeep Kumar Sharma ... 198 pages - Language: English - Publisher: BPB Publications; (June, 2021).
Intelligent Reliability Analysis using MATLAB and AI lays out a roadmap for analyzing and predicting the future life and performance reliability of various electronic components. Authored by reliability experts, this book deepens the reader's understanding of reliability identification, its significance, preventive measures, and the relevant techniques. The book teaches how to predict the residual lifetime of active and passive components using an interesting use case on electronic waste, and demonstrates how the reusability of electronic components can give consumers the confidence to reuse a component successfully. It lists key attributes and shows how to design experiments using Taguchi's approach, based on various acceleration factors.
This book makes it easier for readers to understand reliability modeling of active and passive components using Artificial Neural Networks, Fuzzy Logic, and the Adaptive Neuro-Fuzzy Inference System (ANFIS). The book keeps you engaged with a systematic and detailed explanation of step-wise, MATLAB-based implementations for electronic components. These explanations and illustrations will help readers to predict faults and failures well ahead of time.
Advanced Engineering Mathematics: A Second Course with MATLAB
Saturday, January 07, 2023 Books , Maths , MATLAB
Dean G. Duffy ... 448 pages - Language: English - Publisher: Chapman and Hall/CRC; (March, 2022) - ISBN-10: 1032133422 - ISBN-13: 978-1032133423.
Through four previous editions of Advanced Engineering Mathematics with MATLAB, the author presented a wide variety of topics needed by today's engineers. The fifth edition of that book, available
now, has been broken into two parts: topics currently needed in mathematics courses and a new stand-alone volume presenting topics not often included in these courses and consequently unknown to
engineering students and many professionals. The overall structure of this new book consists of two parts: transform methods and random processes. Built upon a foundation of applied complex
variables, the first part covers advanced transform methods, as well as z-transforms and Hilbert transforms--transforms of particular interest to systems, communication, and electrical engineers.
This portion concludes with Green's function, a powerful method of analyzing systems.
The second portion presents random processes--processes that more accurately model physical and biological engineering. Of particular interest is the inclusion of stochastic calculus. The author
continues to offer a wealth of examples and applications from the scientific and engineering literature, a highlight of his previous books. As before, theory is presented first, then examples, and
then drill problems. Answers are given in the back of the book. This book is all about the future: its purpose is to educate not only the present generation of engineers but also the next. The main strength is that the text is written from an engineering perspective. The majority of my students are engineers. The physical examples are related to problems of interest to the engineering community.
MATLAB for Engineering Applications 5th Edition
William J. Palm III ... Language: English - Publisher: McGraw Hill; 5th edition (March, 2022) - ISBN-10: 1265139199 - ISBN-13: 978-1265139193.
MATLAB for Engineering Applications is a simple, concise book designed to be useful for beginners and to be kept as a reference. MATLAB is a globally available standard computational tool for
engineers and scientists. This text is intended as a stand-alone introduction to MATLAB and can be used in an introductory course, as a self-study text, or as a supplementary text. The text’s
material is based on the author’s experience in teaching a required two-credit semester course devoted to MATLAB for engineering freshmen.
Nonlinear Model Predictive Control Theory and Algorithms
Thursday, October 27, 2022 Algorithms , Books , Data Analysis , MATLAB
Lars Grüne, Jürgen Pannek ... 456 pages - Publisher: Springer; 2nd edition (November 11, 2016) - Language: English - ISBN-10: 3319460234 - ISBN-13: 978-3319460239.
This book offers readers a thorough and rigorous introduction to nonlinear model predictive control (NMPC) for discrete-time and sampled-data systems. NMPC schemes with and without stabilizing
terminal constraints are detailed, and intuitive examples illustrate the performance of different NMPC variants. NMPC is interpreted as an approximation of infinite-horizon optimal control so that
important properties like closed-loop stability, inverse optimality and suboptimality can be derived in a uniform manner. These results are complemented by discussions of feasibility and robustness.
An introduction to nonlinear optimal control algorithms yields essential insights into how the nonlinear optimization routine―the core of any nonlinear model predictive controller―works. Accompanying
software in MATLAB® and C++ (downloadable from extras.springer.com/), together with an explanatory appendix in the book itself, enables readers to perform computer experiments exploring the
possibilities and limitations of NMPC. The second edition has been substantially rewritten, edited and updated to reflect the significant advances that have been made since the publication of its
predecessor, including: • a new chapter on economic NMPC relaxing the assumption that the running cost penalizes the distance to a pre-defined equilibrium; • a new chapter on distributed NMPC
discussing methods which facilitate the control of large-scale systems by splitting up the optimization into smaller subproblems; • an extended discussion of stability and performance using
approximate updates rather than full optimization; • replacement of the pivotal sufficient condition for stability without stabilizing terminal conditions with a weaker alternative and inclusion of
an alternative and much simpler proof in the analysis; and • further variations and extensions in response to suggestions from readers of the first edition. Though primarily aimed at academic
researchers and practitioners working in control and optimization, the text is self-contained, featuring background material on infinite-horizon optimal control and Lyapunov stability theory that
also makes it accessible for graduate students in control engineering and applied mathematics.
Practical Optimization with MATLAB
Thursday, October 27, 2022 Books , Data Analysis , MATLAB , Optimization
Mircea Ancau ... 295 pages - Publisher: Cambridge Scholars Publishing; (November, 2019) ... Language: English - ISBN-10: 1527538494 - ISBN-13: 978-1527538498.
This book offers a comprehensive introduction to optimization, with a focus on practical algorithms for the design of engineering systems. The book approaches optimization from an engineering perspective, where the objective is to design a system that optimizes a set of metrics subject to constraints. Readers will
learn about computational approaches for a range of challenges, including searching high-dimensional spaces, handling problems where there are multiple competing objectives, and accommodating
uncertainty in the metrics. Figures, examples, and exercises convey the intuition behind the mathematical approaches. The text provides concrete implementations in the Julia programming language.
MATLAB R2022a Updated
Friday, March 11, 2022 Data Analysis , MATLAB , Softwares , Statistics
MathWorks MATLAB 2022a v9.12.0.1884302 [Size: 20.8 GB] ... MATLAB is a high-level language and interactive environment that is used by millions of engineers and scientists around the world. It allows
you to explore and visualize ideas and collaborate in various disciplines, including signal and image processing, communications, management systems and financial engineering. Whether you’re
analyzing data, developing algorithms, or creating models, MATLAB is designed for the way you think and the work you do. MATLAB toolboxes are professionally developed, rigorously tested, and fully
documented. MATLAB apps let you see how different algorithms work with your data. Iterate until you’ve got the results you want, then automatically generate a MATLAB program to reproduce or automate
your work. Scale your analyses to run on clusters, GPUs, and clouds with only minor code changes. There’s no need to rewrite your code or learn big data programming and out-of-memory techniques.
Features of MathWorks MATLAB: Perform a variety of complex and heavy mathematical calculations + Development environment for managing code, files, and data + Interactive tools for exploring ways to reach a solution + A variety of mathematical functions for linear algebra, statistics, Fourier analysis, optimization, filtering, numerical integration, and more + Two-dimensional and three-dimensional graphics functions for visualizing data + Design and construction of user interfaces in C++, C, or Java + The ability to test and refine functions and graphs + Signal, image, and video processing capabilities + Various engineering toolboxes for specific applications such as telecommunications, control, fuzzy logic, estimation, statistics, data acquisition, and systems simulation.
Functional Data Analysis with R and MATLAB
Thursday, January 13, 2022 Algorithms , Books , MATLAB , Statistics
James Ramsay, Giles Hooker, Spencer Graves ... 202 pages - Publisher: Springer; (April, 2010) - Language: English - ISBN-10: 0387981845 - ISBN-13: 978-0387981840.
The book provides an application-oriented overview of functional analysis, with extended and accessible presentations of key concepts such as spline basis functions, data smoothing, curve
registration, functional linear models, and dynamic systems. Functional data analysis is put to work in a wide range of applications, so that new problems are likely to find close analogues in this
book. The code in R and Matlab in the book has been designed to permit easy modification to adapt to new data structures and research problems.
From the BackCover: Scientists often collect samples of curves and other functional observations, and develop models where parameters are also functions. This volume in the UseR! Series is aimed at a
wide range of readers, and especially those who would like to apply these techniques to their research problems. It complements Functional Data Analysis, Second Edition and Applied Functional Data
Analysis: Methods and Case Studies by providing computer code in both the R and Matlab languages for a set of data analyses that showcase functional data analysis techniques. The authors make it easy
to get up and running in new applications by adapting the code for the examples, and by being able to access the details of key functions within these pages. This book is accompanied by additional
web-based support at http://www.functionaldata.org for applying existing functions and developing new ones in either language. The companion 'fda' package for R includes script files to reproduce
nearly all the examples in the book including all but one of the 76 figures.
Modeling and Computing for Geotechnical Engineering
M. S. Rahman, M. B. Can Ülker ... 506 pages - Publisher: CRC Press; (September, 2018) ... Language: English - ISBN-10: 149877167X - ISBN-13: 978-1498771672.
Modeling and computing is becoming an essential part of the analysis and design of an engineered system. This is also true of "geotechnical systems", such as soil foundations, earth dams and other
soil-structure systems. The general goal of modeling and computing is to predict and understand the behaviour of the system subjected to a variety of possible conditions/scenarios (with respect to
both external stimuli and system parameters), which provides the basis for a rational design of the system. The essence of this is to predict the response of the system to a set of external forces.
The modelling and computing essentially involve the following three phases: (a) Idealization of the actual physical problem, (b) Formulation of a mathematical model represented by a set of equations
governing the response of the system, and (c) Solution of the governing equations (often requiring numerical methods) and graphical representation of the numerical results. This book will introduce
these phases.
Electrical Imaging for Hydrogeology
4.1 The Goal of Inversion
Once electrical imaging data are collected, they are inverted to obtain a spatially discretized (i.e., gridded or meshed) distribution of the electrical properties of the subsurface. In the case of
ER measurements alone, it is just the electrical conductivity structure that is estimated by the inversion. When IP datasets are acquired, both the electrical conductivity and the intrinsic
chargeability (or the intrinsic phase) structure are estimated, and IP data cannot be inverted without ER data. With IP datasets, images can be presented in terms of the real and imaginary components
of the complex conductivity. Irrespective of what data have been acquired—whether ER data alone, or ER and IP data combined—the inversion of electrical imaging datasets involves a number of common
key steps/concepts. For simplicity, we describe the inversion process from the perspective of an ER dataset alone, but the mechanics and considerations introduced in this section apply equally to
combined ER and IP datasets.
The goal of inversion of an ER dataset is to recover a subsurface distribution of electrical conductivity (Figure 10), σ[x,y,z] (note: for frequency-domain IP this would be σ*[x,y,z]), that could have
produced the observed data (step 2 and 3 below). The general procedure for inverse modeling consists of the following steps:
1. Start with a distribution of electrical conductivity (typically a homogeneous starting model corresponding to the average apparent conductivity measured in the field);
2. Use a forward simulator which, for the given distribution of electrical conductivity, calculates predicted data using Equation 1 (or Equation 5 for frequency-domain IP, and time-domain IP with
caveats as noted below);
3. Calculate the misfit between the predicted and the observed data and also a measure of the complexity (e.g., roughness) of the electrical conductivity distribution; and
4. If the misfit is less than our stopping criteria, stop and accept the current subsurface distribution of electrical conductivity as the final result. If not, modify the model to improve the fit,
and return to step 2.
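The four numbered steps above can be sketched as a small regularized inversion loop. The following Python snippet is purely illustrative: a toy linear operator `G` stands in for the real PDE-based forward simulator, and the regularization weight and stopping threshold are arbitrary choices, not values from any particular software package.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy *linear* stand-in for the forward simulator: d = G @ m.
# (Real ER forward modeling solves a PDE; this only shows the loop's shape.)
G = rng.normal(size=(30, 20))
m_true = np.ones(20)
d_obs = G @ m_true + 0.01 * rng.normal(size=30)

m = np.zeros(20)          # step 1: a (homogeneous) starting model
alpha = 0.1               # regularization weight (illustrative)
for it in range(50):
    d_pred = G @ m                             # step 2: forward simulation
    misfit = float(np.linalg.norm(d_pred - d_obs))  # step 3: data misfit
    if misfit < 0.2:                           # step 4: stopping criterion
        break
    # step 4 (update): damped least-squares step toward a better model,
    # minimizing ||G m - d_obs||^2 + alpha * ||m - m_previous||^2
    m = np.linalg.solve(G.T @ G + alpha * np.eye(20),
                        G.T @ d_obs + alpha * m)
```

In a real code the update step also involves re-linearizing the (nonlinear) forward problem and a roughness-based regularization term rather than the simple damping used here.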
In inversion codes, forward models are called repeatedly within an optimization framework to calculate predicted data for comparison with observed data. The optimization iteratively updates the model
to improve the match between predictions and observations. Data processing and inversion can be performed with commercially available software packages such as AGI EarthImager (LaBrecque and Yang,
2001), Res2DInv (Loke and Barker, 1996), or MPT ERTLab, and public-domain or freeware packages such as RESINVM3D (Pidlisecky et al., 2007), E4D (Johnson et al., 2010), ResIPy (Blanchy et al., 2020),
pyGIMLi (Rücker et al., 2017), or SimPEG (Cockett et al., 2015). The majority of these codes also permit the inversion of IP datasets. It is not our goal to review or compare these codes, but most
follow similar inversion approaches, which are based on Gauss-Newton, quasi-Newton or steepest-descent algorithms (Tarantola, 1987), with some codes supporting use of multiple inversion algorithms.
Depending on selection of modeling and inversion parameters, these codes generally can be made to produce similar results. Default values differ greatly, however, and it is not always clear how
parameters are used within the inversion. Selection of many inversion settings can be somewhat subjective and should be guided by prior knowledge of the site geology or the nature of the targets. For
example, in a layered system, one might choose to apply anisotropic smoothing, which will result in a tomogram that has a layered character. For results to be reproducible, it is critical to (1)
report all parameter selections including default values, (2) document the algorithm used by the software, and (3) archive a copy of the software code or executable. Justifications of parameter
choices should also be documented. An example of how inversion settings affect tomograms is demonstrated in Section 5.2. The process for selecting inversion settings should be guided by prior
information, and many hydrogeologists will find it useful to work with a geophysicist while learning inverse techniques for these data.
Ideally, the inversion should result in the true distribution of electrical conductivities of the subsurface. In practice, this is impossible for several reasons. First, the electrical conductivity
estimates are commonly produced for blocks with dimensions of tens of centimeters to several meters on the side, while earth conductivity varies on much smaller scales. Thus, the best one can hope
for is to identify block conductivities that represent some sort of weighted, spatial averages. Additionally, there commonly is not enough information in the data to uniquely determine all the block
bulk conductivity parameter values. Unlike medical imaging, where it is possible to acquire a 360-degree view around the target, ER is usually limited to surface and borehole electrodes unless
working on soil cores or experimental tanks, which is partly why medical imaging offers higher resolution compared to geophysical imaging. The result of this limited available information is called
ill-conditioning, a property of matrix-inversion problems in which limited sensitivity of the data to the parameter values means that small uncertainties in the data (e.g., resistance measurements) can lead to large errors in the parameter estimates (e.g., electrical conductivity values). This ill-conditioning is typically rectified through model regularization, which imposes additional constraints on the
estimated model to get a unique set of stable solutions, as described subsequently. Additionally, as noted above, depending on the discretization of the finite-difference grid or finite-element mesh
used for numerical approximations of Equation 1 or 5, some quadripoles may model poorly; as a result, the inverse model would not be able to accurately match these data. Finally, accurate representation of data errors (for example, using the reciprocal measurements or stacking described above) is needed to ensure quality inversion results that do not over- or under-fit measurements.
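The effect of ill-conditioning, and of regularization as its remedy, can be seen on a toy linear system (a Python sketch; the matrix and damping value are illustrative, not drawn from any ER code):

```python
import numpy as np

# An ill-conditioned system: tiny data perturbations blow up the solution.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])             # exact solution is [1, 1]
b_noisy = b + np.array([0.0, 0.0005])   # small "measurement error"

x_plain = np.linalg.solve(A, b_noisy)   # wildly wrong: roughly [-4, 6]

# Tikhonov (damped) solution: minimize ||Ax - b||^2 + alpha * ||x||^2
alpha = 1e-4
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(2), A.T @ b_noisy)
# x_reg stays near [1, 1]: the damping suppresses the poorly
# constrained direction that amplified the noise.
```

This is the essence of regularized inversion: trade a small bias in the estimate for stability against errors in the data.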
Resistivity software packages that incorporate time-domain IP measurements model the distribution of the intrinsic chargeability in addition to electrical conductivity. In contrast, frequency-domain
measurements are processed with algorithms that model the complex electrical conductivity distribution per Equation 5. The electrical conductivity magnitude and phase, or the real and imaginary parts
of the complex electrical conductivity, can be imaged. The measured apparent chargeability and measured apparent phase are directly proportional to each other, although the proportionality constant
will vary depending on how a time-domain instrument is configured (Slater and Lesmes, 2002). Consequently, time-domain measurements can be modeled using Equation 5 if this proportionality constant is
defined (Mwakanyamale et al., 2012).
Hybrid k-Clustering: Blending k-Median and k-Center
We propose a novel clustering model encompassing two well-known clustering models: k-center clustering and k-median clustering. In the Hybrid k-Clustering problem, given a set P of points in R^d, an
integer k, and a non-negative real r, our objective is to position k closed balls of radius r to minimize the sum of distances from points not covered by the balls to their closest balls.
Equivalently, we seek an optimal L1-fitting of a union of k balls of radius r to a set of points in Euclidean space. When r = 0, this corresponds to k-median; when the minimum sum is zero, indicating complete coverage of all points, it is k-center. Our primary result is a bicriteria approximation algorithm that, for a given ε > 0, produces a hybrid k-clustering with balls of radius (1 + ε)r. This algorithm achieves a cost of at most (1 + ε) times the optimum, and it runs in time 2^((kd/ε)^O(1)) · n^O(1). Notably, considering the established lower bounds on k-center and k-median, our bicriteria approximation stands as the best possible result for Hybrid k-Clustering.
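The objective can be made concrete with a short Python sketch (hypothetical helper name; it assumes the distance from a point to a closed ball of radius r centred at c is max(0, dist(p, c) − r)):

```python
import math

def hybrid_cost(points, centers, r):
    """Cost of Hybrid k-Clustering: each point pays its distance to the
    nearest ball of radius r (zero if it lies inside some ball)."""
    total = 0.0
    for p in points:
        d = min(math.dist(p, c) for c in centers)
        total += max(0.0, d - r)  # distance to the ball, not to the center
    return total

points = [(0, 0), (1, 0), (10, 0)]
centers = [(0, 0), (10, 0)]
print(hybrid_cost(points, centers, r=0))  # k-median cost: 0 + 1 + 0 = 1.0
print(hybrid_cost(points, centers, r=1))  # all points covered -> 0.0
```

With r = 0 every point pays its full distance to the nearest centre (the k-median objective); a cost of zero certifies that the k balls cover all points (the k-center condition).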
Publication series
Name Leibniz International Proceedings in Informatics, LIPIcs
Volume 317
ISSN (Print) 1868-8969
Conference 27th International Conference on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2024 and the 28th International Conference on Randomization and Computation,
RANDOM 2024
Country/Territory United Kingdom
City London
Period 28/08/24 → 30/08/24
• Euclidean space
• clustering
• fpt approximation
• k-center
• k-median
Supported functions in Luminesce SQL queries
The following common functions are supported in Luminesce SQL queries:
We have also implemented the following custom functions:
See the full list of valid keywords (both inherited from SQLite and proprietary to FINBOURNE).
General custom functions
General custom function Explanation and examples
CHECK_ACCESS
Performs an access control check on any application in the FINBOURNE platform, for example:
@x = select
check_access('Feature/Honeycomb/Execute', 'Honeycomb:stable-provider-sys-file') x,
check_access('Insights', 'Feature/LUSID/Execute', 'Insights:accesslogs-view-all') y,
check_access('Insights', 'Feature/LUSID/Execute', 'Insights:accesslogs-view-xxx') z;
select * from @x
THROW
Can be used conditionally to throw an error message, for example:
select iif(abc is not null, abc, throw('abc cannot be null'))
Append the following argument to throw without an error message:
throw('abc cannot be null', 'Ignorable')
PRINT
Can be used to debug a query, give feedback to an end user running the query, or enter text into the logs. Takes the following arguments:
• The text to print (mandatory). This may contain 'structured result formatters', standard .NET format strings with named placeholders (see examples below).
• A severity, one of Progress (the default), Debug, Information, Warning, Error.
• A sender (defaults to PRINT).
• Up to 125 arguments that substitute into .NET format strings.
select print('*** Starting your process NOW!')
[INF] LuminesceCli: >> Progress >> PRINT >> *** Starting your process NOW!
select print('*** Starting your process NOW! {X:00000}', 'error', 'START!!!', 42)
[ERR] LuminesceCli: >> Progress >> START!!! >> *** Starting your process NOW! 00042
select print('*** Starting your process NOW! {X}', 'error', 'START!!!', 42)
[ERR] LuminesceCli: >> Progress >> START!!! >> *** Starting your process NOW! 42
DAY_COUNT_DISPLAY
Converts a fractional day number to a more intuitive count of days, hours, minutes, seconds and so on.
day_count_display(3.1234567) a,
day_count_display(0.00123) b,
day_count_display(0.00000123) c,
day_count_display(julianday('now', '+2 minutes') - julianday('now', '-1 hours', '+3 minutes', '-12 seconds')) d
│ a │ b │ c │ d │
│ 3d 2h 57m │ 1m 46s │ 106ms │ 59m 12s │
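The conversion underneath DAY_COUNT_DISPLAY can be sketched in Python (an illustrative analogue only; the exact truncation and formatting rules of the Luminesce function are assumptions here):

```python
def day_count_display(days):
    """Render a fractional day count as days/hours/minutes/seconds/ms,
    dropping any unit whose value is zero."""
    ms = round(days * 86_400_000)  # whole milliseconds in the interval
    parts = []
    for label, size in (("d", 86_400_000), ("h", 3_600_000),
                        ("m", 60_000), ("s", 1_000), ("ms", 1)):
        value, ms = divmod(ms, size)
        if value:
            parts.append(f"{value}{label}")
    return " ".join(parts) or "0ms"

print(day_count_display(0.00123))  # -> "1m 46s 272ms"
```

Note the documented output above ("1m 46s") keeps only the two most significant units; this sketch keeps them all.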
TO_DATE
Performs an explicit date conversion from a string in a non-standard format, for example:
select to_date('2022/29/03', 'yyyy/dd/MM') as aDate
IS_DATE
Returns 1 if a value is a date or datetime, otherwise 0. For example:
@x = select is_date('2024-01-19');
select * from @x
TO_UTC
Transforms a date and a TZ database name timezone to a UTC date, for example:
select to_utc('2023/01/30 10:30:11', 'US/Eastern') as aDate
See also these examples.
TO_ISO
Transforms a datetime to an ISO-formatted string, for example:
select to_iso(#2023-03-25#) as aDate
See also these examples.
FROM_UTC
Transforms a UTC date to a given TZ database name timezone, for example:
select from_utc('2023/01/30 10:30:11', 'US/Eastern') as aDate
CONVERT_TIMEZONE
Transforms a date from one TZ database name timezone to another, for example:
select convert_timezone('2023/01/30 10:30:11', 'US/Eastern', 'US/Pacific') as aDate
IS_NUMERIC
Returns 1 if a value is a long, int, double or decimal, otherwise 0. For example:
@x = select is_numeric('200.99');
select * from @x
IS_INTEGER
Returns 1 if a value is a long or an int, otherwise 0. For example:
@x = select is_integer('200');
select * from @x
EDIT_DISTANCE
Returns the Levenshtein distance between two strings, for example:
@x = select edit_distance('Microsoft', 'Google');
select * from @x
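For reference, the Levenshtein distance that EDIT_DISTANCE computes can be sketched with the classic dynamic-programming recurrence (a Python illustration, not the Luminesce engine's implementation):

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))  # distances from a[:0] to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

print(edit_distance("Microsoft", "Google"))  # -> 8
```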
GENERATE_HASH
Creates a hash string from one or more defined columns. The parameter syntax is as follows: (hashing-algorithm, column[, ...]). You can specify one of the following hashing algorithms when using this function:
• SHA512 (default if no hashing algorithm is specified)
• SHA1
• SHA256
• SHA384
• MD5
For example:
select generate_hash('SHA256', TradeAmount, TradePrice) from Lusid.Portfolio.Txn limit 20
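A Python analogue of this behaviour might look as follows (illustrative only; the separator used to join column values before hashing is an assumption, so digests will not match Luminesce's):

```python
import hashlib

def generate_hash(algorithm, *columns):
    """Concatenate the column values and digest them with the chosen
    algorithm (SHA512, SHA1, SHA256, SHA384, or MD5)."""
    payload = "|".join(str(c) for c in columns).encode("utf-8")
    return hashlib.new(algorithm.lower(), payload).hexdigest()

digest = generate_hash("SHA256", 1500.0, 101.25)
print(len(digest))  # a SHA-256 hex digest is 64 characters
```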
CONTAINS_TOKEN
Returns 1 if a string contains the specified token (that is, whole word), otherwise 0. For example:
@data = select 'The quick brown fox jumped over the lazy dog' as myValue;
@x = select contains_token(myValue, 'quick') from @data;
select * from @x
This function is optimised for filtering data retrieved from providers supplied by FINBOURNE. For more information on filtering, see this article.
CONTAINS_TOKEN_STARTING_WITH
Returns 1 if a string contains a word starting with the specified token, otherwise 0. For example:
@data = select 'The quick brown fox jumped over the lazy dog' as myValue;
@x = select contains_token_starting_with(myValue, 'quic') from @data;
select * from @x
This function is optimised for filtering data retrieved from providers supplied by FINBOURNE. For more information on filtering, see this article.
REGEXP
Enables filtering by regular expression matches. Note this function is used as an operator and so behaves differently to the other regular expression functions below. For example:
select * from x where abc REGEXP '([A-Z]*-[0-9]*)'
REGEXP_MATCH
Returns the first portion of the search string that matches the regular expression, or null if there are no matches. The parameter syntax is as follows: (search-string, regex [, name-of-capture-group-to-return]). For example:
select regexp_match('XXX: ABC-123 Something', '([A-Z]+-[0-9]*)')
-- returns: ABC-123
select regexp_match('XXX: ABC-123 Something', '([a-zA-Z]+-[0-9]*)')
-- returns: ABC-123
select regexp_match('XXX: ABC-123 Something', '([a-z]+-[0-9]*)')
-- returns: null (due to not matching on case)
select regexp_match('XXX: ABC-123 Something', '(?<project>[A-Z]+)-(?<id>[0-9]*)', 'project')
-- returns: ABC
REGEXP_MATCHES
Returns the portions of the search string that match the regular expression as a comma-separated list, or null if there are no matches. The parameter syntax is as follows: (search-string, regex). For example:
select regexp_matches('XXX: ABC-123 something QQQ-456 Something', '([A-Z]+-[0-9]*)')
-- returns: ABC-123,QQQ-456
REGEXP_MATCH_LOCATION
Returns the 1-based index of the first portion of the search string that matches the regular expression, or null if there are no matches. The parameter syntax is as follows: (search-string, regex [, name-of-capture-group-to-return]). For example:
select regexp_match_location('XXX: ABC-123 Something', '([A-Z]+-[0-9]*)')
-- returns: 6
select regexp_match_location('XXX: ABC-123 Something', '([a-zA-Z]+-[0-9]*)')
-- returns: 6
select regexp_match_location('XXX: ABC-123 Something', '(?<project>[A-Z]+)-(?<id>[0-9]*)')
-- returns: 6
select regexp_match_location('XXX: ABC-123 Something', '([a-z]+-[0-9]*)')
-- returns: null (due to not matching on case)
select regexp_match_location('XXX: ABC-123 Something', '(?<project>[A-Z]+)-(?<id>[0-9]*)', 'id')
-- returns: 10
REGEXP_REPLACE
Replaces the portions of the search string that match the regular expression with the replacement value. The parameter syntax is as follows: (search-string, regex, replacement-value). For example:
select regexp_replace('XXX: ABC-123 something QQQ-456 Something', '([A-Z]+-[0-9]*)', '?')
-- returns: XXX: ? something ? Something
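The four regular-expression helpers above correspond closely to standard regex operations; here is the same worked example in Python's `re` module (Luminesce's regex dialect may differ from Python's in edge cases):

```python
import re

s = "XXX: ABC-123 something QQQ-456 Something"

m = re.search(r"([A-Z]+-[0-9]*)", s)
print(m.group(1))                                   # REGEXP_MATCH          -> ABC-123
print(",".join(re.findall(r"([A-Z]+-[0-9]*)", s)))  # REGEXP_MATCHES        -> ABC-123,QQQ-456
print(m.start() + 1)                                # REGEXP_MATCH_LOCATION -> 6 (1-based)
print(re.sub(r"([A-Z]+-[0-9]*)", "?", s))           # REGEXP_REPLACE
```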
Statistical custom functions
Explanation Luminesce SQL syntax Equivalent Lumipy method
The coefficient of variation is the standard deviation scaled by the mean. It is a standardised measure of the
dispersion of a random variable so distributions of different scale can be compared [1] coefficient_of_variation([x]) x.stats.coef_of_variation()
x (numeric): column of values to compute the coefficient of variation over.
Covariance is a statistical measure of the joint variability of two random variables [1]
Inputs covariance([x], [y], ddof) x.stats.covariance(y, ddof)
x (numeric): the first column of values to compute the covariance over.
y (numeric): the second column of values to compute the covariance over.
ddof (int): delta degrees of freedom (defaults to 1). Use ddof = 0 for the population covariance and
ddof = 1 for the sample covariance.
The empirical CDF is the cumulative distribution function of a sample. It is a step function that jumps by 1/n at each
of the n data points.
This function returns the value of the empirical CDF at the given value [1] empirical_cdf([x], value) x.stats.empirical_cdf(value)
x (numeric): column of values to compute the empirical CDF over.
value (numeric): location in the domain of x to evaluate the empirical CDF at.
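The empirical CDF at a point is simply the fraction of sample values at or below it, as this sketch shows (illustrative Python, not the engine's implementation):

```python
def empirical_cdf(xs, value):
    """Fraction of sample points less than or equal to `value` —
    the empirical CDF of the sample evaluated at that point."""
    return sum(x <= value for x in xs) / len(xs)

print(empirical_cdf([1, 2, 3, 4], 2.5))  # 2 of 4 points are <= 2.5 -> 0.5
```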
The Shannon entropy measures the average amount of "surprise" in a sequence of values. It can be considered a measure of
variability [1]
It is calculated as
S = -sum(p_i * log(p_i)) entropy([x]) x.stats.entropy()
where p_i is the probability of the ith value occurring computed from the sample (n occurrences / sample size). This
function is equivalent to scipy.stats.entropy called with a single series and with the natural base [2]
x (numeric): column of values to compute the entropy over.
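The formula S = -sum(p_i * log(p_i)) translates directly to code; this sketch computes the natural-base Shannon entropy from empirical frequencies:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy S = -sum(p_i * ln(p_i)), with p_i the empirical
    frequency of each distinct value in the sample."""
    n = len(values)
    return -sum((c / n) * math.log(c / n) for c in Counter(values).values())

print(entropy([1, 1, 2, 2]))  # two equally likely values -> ln(2) ≈ 0.693
```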
The geometric mean is the multiplicative equivalent of the normal arithmetic mean. It multiplies a set of n-many numbers
together and then takes the n-th root of the result [1] geometric_mean([x]) x.stats.geometric_mean()
x (numeric): column of values to compute the geometric mean over.
The geometric standard deviation measures the variability of a set of numbers where the appropriate mean to use is the
geometric one (they are more appropriately combined by multiplication rather than addition) [1]
This is computed as the exponential of the standard deviation of the natural log of each element in the set exp(window_stdev(log([x]))) x.stats.geometric_stdev()
GSD = exp(stdev(log(x)))
x (numeric): column of values to compute the geometric standard deviation over.
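Both geometric statistics are conveniently computed through logarithms, mirroring the GSD = exp(stdev(log(x))) identity above (an illustrative Python sketch using the sample standard deviation, ddof = 1):

```python
import math

def geometric_mean(xs):
    """n-th root of the product, computed via logs for numerical safety."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def geometric_stdev(xs):
    """exp of the sample standard deviation of log(x): GSD = exp(stdev(log(x)))."""
    logs = [math.log(x) for x in xs]
    mean = sum(logs) / len(logs)
    var = sum((v - mean) ** 2 for v in logs) / (len(logs) - 1)  # ddof = 1
    return math.exp(math.sqrt(var))

print(geometric_mean([2, 8]))  # sqrt(2 * 8) ≈ 4.0
```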
The harmonic mean is the reciprocal of the mean of the individual reciprocals of the values in a set [1]
harmonic_mean([x]) x.stats.harmonic_mean()
x (numeric): column of values to compute the harmonic mean over.
The interquantile range is the difference between two different quantiles. This is a generalisation of the interquartile
range where q1=0.25 and q2=0.75.
The upper quantile (q2) value must be greater than the lower quantile (q1) value. interquantile_range([x], [q1], [q2]) x.stats.interquantile_range()
x (numeric): column of values to compute the interquantile range over.
q1 (float): the lower quantile value.
q2 (float): the upper quantile value.
The interquartile range is the difference between the upper and lower quartiles. It can be used as a robust measure of
the variability of a random variable [1] interquartile_range([x]) x.stats.interquartile_range()
x (numeric): column of values to compute the interquartile range over.
Kurtosis measures how much probability density is in the tails (extremes) of a sample's distribution [1]
This function corresponds to the Pearson Kurtosis measure and currently only supports sample kurtosis. kurtosis([x]) x.stats.kurtosis()
x (numeric): column of values to compute the kurtosis over.
The lower quartile is the value that bounds the lower quarter of a dataset [1]
It is equivalent to quantile 0.25 or the 25th percentile. quantile([x], 0.25) x.stats.lower_quartile()
x (numeric): column of values to compute the lower quartile over.
This is a convenience function for computing the mean divided by the standard deviation. This is used in multiple
financial statistics such as the Sharpe ratio and information ratio. mean_stdev_ratio([x]) x.stats.mean_stdev_ratio()
x (numeric): column of values to compute the mean standard deviation ratio over.
The median is the value that separates the top and bottom half of a dataset [1]
It is equivalent to quantile 0.5, or the 50th percentile. quantile([x], 0.5) x.stats.median()
x (numeric): column of values to compute the median over.
The median absolute deviation is a measure of the variability of a random variable. Unlike the standard deviation it is
robust to the presence of outliers [1] median_absolute_deviation([x]) x.stats.median_abs_deviation()
x (numeric): column of values to compute the median absolute deviation over.
Pearson's r is a measure of the linear correlation between two random variables [1]
pearson_correlation([x], [y]) x.stats.pearson_r(y)
x (numeric): the first series in the Pearson's r calculation.
y (numeric): the second series in the Pearson's r calculation.
The quantile function of a given random variable and q value finds the value x where the probability of observing a
value less than or equal to x is equal to q [1]
quantile([x], q) x.stats.quantile(q)
x (numeric): column of values to compute the quantile over.
q (float): the quantile value. Must be between 0 and 1.
The Root Mean Square (RMS) is the square root of the mean of the squared values of a set of values. It is a statistical
measure of the spread of a random variable [1] root_mean_square([x]) x.stats.root_mean_square(x)
x (numeric): column of values to compute the root mean square over.
Skewness measures the degree of asymmetry of a random variable around its mean [1]
This calculation currently only supports sample skewness. skewness([x]) x.stats.skewness(x)
x (numeric): column of values to compute the skewness over.
Spearman's rho measures how monotonic the relationship between two random variables is [1]
spearman_rank_correlation([x], [y]) x.stats.spearman_r(y)
x (numeric): the first series in the Spearman rank correlation calculation.
y (numeric): the second series in the Spearman rank correlation calculation.
An implementation of standard deviation that can be used in a window.
The standard deviation measures the dispersion of a set of values around the mean [1]
window_stdev([x]) x.stats.stdev()
This only calculates the sample standard deviation (delta degrees of freedom = 1)
x (numeric): column of values to compute the standard deviation over.
The upper quartile is the value that bounds the upper quarter of a dataset [1]
It is equivalent to quantile 0.75 or the 75th percentile. quantile([x], 0.75) x.stats.upper_quartile()
x (numeric): column of values to compute the upper quartile over.
Computes the y intercept (alpha) of a regression line fitted to the given data [1]
linear_regression_alpha([x], [y]) x.linreg.alpha(y)
x (numeric): an expression corresponding to the independent variable.
y (numeric): an expression corresponding to the dependent variable.
Computes the standard error of the y intercept (alpha) of a regression line fitted to the given data [1]
This assumes the residuals are normally distributed and is calculated according to [2] linear_regression_alpha_error([x], [y]) x.linreg.alpha_std_err(y)
x (numeric): an expression corresponding to the independent variable.
y (numeric): an expression corresponding to the dependent variable.
Computes the gradient of a regression line (beta) fitted to the given data [1]
linear_regression_beta([x], [y]) x.linreg.beta(y)
x (numeric): an expression corresponding to the independent variable.
y (numeric): an expression corresponding to the dependent variable.
Computes the standard error of the gradient (beta) of a regression line fitted to the given data [1]
This assumes the residuals are normally distributed and is calculated according to [2] linear_regression_beta_error([x], [y]) x.linreg.beta_std_err(y)
x (numeric): the column corresponding to the independent variable.
y (numeric): the column corresponding to the dependent variable.
The Bray-Curtis distance is the elementwise sum of absolute differences between elements divided by the absolute value
of their sum [1]
braycurtis_distance([x], [y]) x.metric.braycurtis_distance(y)
x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
The Canberra distance is the elementwise sum of absolute differences between elements divided by the sum of
their absolute values. It can be considered a weighted version of the Manhattan distance [1] canberra_distance([x], [y]) x.metric.canberra_distance(y)
x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
The Chebyshev distance is the greatest difference between dimension values of two vectors. It is equivalent to the
Minkowski distance as p → ∞ [1]
chebyshev_distance([x], [y]) x.metric.chebyshev_distance(y)
x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
The cosine distance is the cosine of the angle between two vectors subtracted from 1 [1]
cosine_distance([x], [y]) x.metric.cosine_distance(y)
x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
The Euclidean distance is the familiar 'as the crow flies' distance. It is the square root of the sum of squared
differences between the elements of two vectors [1]
euclidean_distance([x], [y]) x.metric.euclidean_distance(y)
x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
The F-score is a classifier performance metric which measures accuracy.
It is defined as the weighted harmonic mean of precision and recall scores. The beta parameter controls the relative
weighting of these two metrics.
The most common value of beta is 1: this is the F_1 score (aka balanced F-score). It weights precision and recall evenly. Values of beta greater than 1 weight recall higher than precision, and values less than 1 weight precision higher than recall [1] fbeta_score([x], [y], beta) x.metric.f_score(y, beta)
x (boolean): the column corresponding to the first series.
y (boolean): the column corresponding to the second series.
beta (float): the value of beta to use in the calculation.
The Manhattan distance (aka the taxicab distance) is the absolute sum of differences between the elements of two vectors.
It is the distance traced out by a taxicab moving along a city grid like Manhattan's, where the distance travelled is the sum of the sides of the squares [1] manhattan_distance([x], [y]) x.metric.manhattan_distance(y)
x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
The mean absolute error is the mean absolute elementwise difference between two series of values [1]
It is a common performance metric for regression models where one series is the predicted values and the other series is
the observed values. mean_absolute_error([x], [y]) x.metric.mean_absolute_error(y)
x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
The mean fractional absolute error is the mean absolute elementwise fractional difference between two series of values.
It is a scale-invariant version of the mean absolute error [1]
It is a common performance metric for regression models where one series is the predicted values and the other series is the observed values. mean_fractional_absolute_error([x], [y]) x.metric.mean_fractional_absolute_error(y)
x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
The mean squared error between two series of values is the mean of the squared elementwise differences [1]
It is a common performance metric for regression models. mean_squared_error([x], [y]) x.metric.mean_squared_error(y)
x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
The Minkowski distance is a generalisation of the Euclidean (p=2) or Manhattan (p=1) distance to other powers p [1]
Inputs minkowski_distance([x], [y], p) x.metric.minkowski_distance(y, p)
x (numeric): the column corresponding to the first series.
y (numeric): the column corresponding to the second series.
p (int): the order to use in the Minkowski distance calculation.
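The generalisation is a one-liner: raise the elementwise absolute differences to the power p, sum, and take the p-th root (illustrative Python, not the engine's implementation):

```python
def minkowski_distance(x, y, p):
    """Minkowski distance of order p between two equal-length vectors;
    p=1 gives the Manhattan distance, p=2 the Euclidean distance."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

print(minkowski_distance([0, 0], [3, 4], 1))  # Manhattan: 7.0
print(minkowski_distance([0, 0], [3, 4], 2))  # Euclidean: 5.0
```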
Precision is a classification performance metric which measures the fraction of true positive events in a set of events
that a classifier has predicted to be positive. It is calculated as follows
precision = tp / (tp + fp)
where tp is the number of true positives and fp is the number of false positives. precision_score([x], [y]) x.metric.precision_score(y)
Precision is a measure of the purity of the classifier's positive predictions. [1] It is also known as the positive
predictive value and purity.
x (boolean): the column corresponding to the first series.
y (boolean): the column corresponding to the second series.
Recall is a classification performance metric which measures the fraction of positive events that are successfully
predicted by a classifier. It is calculated as follows
recall = tp / (tp + fn)
where tp is the number of true positives and fn is the number of false negatives. recall_score([x], [y]) x.metric.recall_score(y)
Recall is a measure of the efficiency of the classifier at retrieving positive events. [1] It is also known as
sensitivity, hit rate, true positive rate (TPR) and efficiency.
x (boolean): the column corresponding to the first series.
y (boolean): the column corresponding to the second series.
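The two formulas above, precision = tp / (tp + fp) and recall = tp / (tp + fn), can be checked with a small sketch over boolean series (a hypothetical helper, not the Luminesce functions themselves):

```python
def precision_recall(predicted, actual):
    """Precision and recall for two equal-length boolean series."""
    tp = sum(p and a for p, a in zip(predicted, actual))      # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))  # false positives
    fn = sum(not p and a for p, a in zip(predicted, actual))  # false negatives
    return tp / (tp + fp), tp / (tp + fn)

pred = [True, True, False, True]
true = [True, False, True, True]
print(precision_recall(pred, true))  # tp=2, fp=1, fn=1 -> (2/3, 2/3)
```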
Apply an R squared calculation between two value series in this window. This is a measure of how well a regressor predicts
true values.
Returns a window column instance representing this calculation. r_squared(x, y) x.metric.r_squared(y) #column
w.metric.r_squared(x, y) #window
x (column): the first series in the calculation.
y (column): the second series in the calculation.
Apply an adjusted R squared calculation between two value series in this window. This is a measure of how well a regressor
predicts true values, but with a penalisation term for more predictors (inputs).
Returns a window column instance representing this calculation. adjusted_r_squared(x, y, n) x.metric.adjusted_r_squared(y, n) #column
w.metric.adjusted_r_squared(x, y, n) #window
Inputs
x (column): the first series in the calculation.
y (column): the second series in the calculation.
n (integer): the number of predictors that the model has.
Gain-loss ratio is the mean positive return of the series divided by the mean negative return of the series.
gain_loss_ratio([x]) x.finance.gain_loss_ratio()
x (numeric): column of return values to compute the gain-loss ratio over.
The information ratio is the mean excess return between a return series and a benchmark series divided by the standard
deviation of the excess return.
mean_stdev_ratio([x] - [y]) x.finance.information_ratio(y)
x (numeric): column of return values.
y (numeric): column of benchmark return values.
Drawdown is calculated as
dd_i = (x_h - x_i)/(x_h)
where x_h is high watermark (max) up to and including the point x_i [1]
Drawdown must be in a window. You can change the limits in the OVER statement to define how far back the calculation
goes. For example, the following looks back infinitely far:
drawdown([x]) OVER( ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW )
...whereas the following only looks back 90 rows:
drawdown([x]) OVER( ROWS BETWEEN 90 PRECEDING AND CURRENT ROW )
In Lumipy this would be:
w = lm.window(lower=90)
drawdown([x]) x.finance.drawdown()
x (numeric): column of price values to compute the drawdown over.
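The definition dd_i = (x_h - x_i) / x_h amounts to tracking a running maximum; this sketch shows the unbounded look-back case (illustrative Python, not the engine's windowed implementation):

```python
def drawdown(prices):
    """Running drawdown series dd_i = (x_h - x_i) / x_h, where x_h is the
    high-water mark (running max) up to and including x_i."""
    high = float("-inf")
    out = []
    for x in prices:
        high = max(high, x)
        out.append((high - x) / high)
    return out

print(max(drawdown([100, 120, 90, 110])))  # worst fall from a peak -> 0.25
```

Taking the max, mean, or length of runs of this series gives the max_drawdown, mean_drawdown, and drawdown-length aggregations described below.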
Drawdown length is calculated as the number of rows between the high watermark value and the current row. [1]
drawdown_length([x]) x.finance.drawdown_length()
x (numeric): column of price values to compute the drawdown length over.
Drawdown is calculated as
dd_i = (x_h - x_i)/(x_h)
where x_h is high watermark (max) up to and including the point x_i [1]
Max drawdown is then the maximum value of the drawdowns dd_i over the sequence of values. max_drawdown([x]) x.finance.max_drawdown()
This aggregation assumes that the column is in time order. This calculation will be applied before ORDER BY in SQL
syntax, so you should consider turning the table containing the data into a table variable, ordering that by the
time-equivalent column and then applying the method to the corresponding column in a select statement on the table variable.
x (numeric): column of price values to compute the max drawdown over.
Drawdown length is calculated as the number of rows between the high watermark value and the current row. [1]
The max drawdown length is then the maximum value of the drawdown length in the time period.
This aggregation assumes that the column is in time order. This calculation will be applied before ORDER BY in SQL syntax, so you should consider turning the table containing the data into a table variable, ordering that by the time-equivalent column and then applying the method to the corresponding column in a select statement on the table variable. max_drawdown_length([x]) x.finance.max_drawdown_length()
x (numeric): column of price values to compute the max drawdown length over.
Drawdown is calculated as
dd_i = (x_h - x_i)/(x_h)
where x_h is high watermark (max) up to and including the point x_i [1]
Mean drawdown is then the mean value of the drawdown over the sequence of values. mean_drawdown([x]) x.finance.mean_drawdown()
This aggregation assumes that the column is in time order. This calculation will be applied before ORDER BY in SQL
syntax, so you should consider turning the table containing the data into a table variable, ordering that by the
time-equivalent column and then applying the method to the corresponding column in a .select() on the table variable.
x (numeric): column of price values to compute the mean drawdown over.
Drawdown length is calculated as the number of rows between the high watermark value and the current row. [1]
The mean drawdown length is then the mean value of the drawdown length in the time period.
This aggregation assumes that the column is in time order. This calculation will be applied before ORDER BY in SQL mean_drawdown_length([x]) x.finance.mean_drawdown_length()
syntax, so you should consider turning the table containing the data into a table variable, ordering that by the
time-equivalent column and then applying the method to the corresponding column in a .select() on the table variable.
x (numeric): column of price values to compute the mean drawdown length over.
Compute a returns series from a prices series. Supports scaling by a simple scale factor or compounding. prices_to_returns([Price], 1, 1.0, 0) OVER( ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) x.finance.prices_to_returns(1, 1, False)
Inputs
x (numeric): column of price values to compute the returns from.
interval (int): the number of time steps to use when scaling.
time_factor (float): a time scale factor for annualisation.
compound (bool): whether the time scale factor should be applied as a simple scale factor r*T or as a compounded calculation (r+1)**T
Compute a price series from a returns series and an initial value. Supports scaling by a simple scale factor or compounding. returns_to_prices([rets], 400, 1.0, 0) OVER( ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW ) x.finance.returns_to_prices(400, 1, False)
Inputs
x (numeric): column of returns values to compute the prices from.
initial (float): the initial price to apply return factors to.
time_factor (float): a time scale factor for annualisation.
compound (bool): whether the time scale factor should be applied as a simple scale factor r*T or as a compounded calculation (r+1)**T
Semi-deviation is the standard deviation of values in a returns series below the mean return value.
semi_deviation([x]) x.finance.semi_deviation()
x (numeric): column of return values to compute the semi-deviation over.
The Sharpe ratio is calculated as the mean excess return over the risk-free rate divided by the standard deviation of
the excess return.
mean_stdev_ratio([x] - [r]) x.finance.sharpe_ratio(r)
x (numeric): column of return values to compute the Sharpe ratio for.
r (numeric): the risk-free rate of return. This can be a constant value (float) or a column.
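Since the Sharpe ratio is just mean_stdev_ratio applied to the excess return x - r, a direct sketch looks like this (illustrative Python using the sample standard deviation, ddof = 1):

```python
def sharpe_ratio(returns, risk_free):
    """Mean excess return over the risk-free rate divided by the sample
    standard deviation of the excess return."""
    excess = [x - risk_free for x in returns]
    mean = sum(excess) / len(excess)
    var = sum((e - mean) ** 2 for e in excess) / (len(excess) - 1)  # ddof = 1
    return mean / var ** 0.5

print(round(sharpe_ratio([0.02, 0.04, 0.03], 0.01), 4))  # mean 0.02 / stdev 0.01 -> 2.0
```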
The tracking error is the standard deviation of the difference between an index's return series and a benchmark.
window_stdev([x] - [y]) x.finance.tracking_error(y)
x (numeric): the index return series.
y (numeric): the benchmark return series.
The cumulative product is similar to cumulative sum except it multiplies rather than adds.
cumeprod(table.x) w.prod(table.x)
x (numeric): A column of numbers to multiply together.
Water quality reliability based on an improved entropy in a water distribution system
Flow entropy/Improved flow entropy
Water quality entropy/Improved water quality entropy
Water quality reliability
Case 1
Case 2
Case 3
Application to Case 1
Application to Case 2
Application to case 3
Nodal mean WQE and nodal mean IWQE
Sensitivity analysis
Neal D. Goldstein, PhD, MBI
Nov 6, 2018
A simple presentation of error rates, effect size, and sample size
In the introductory epidemiology class I teach, I've had this question posed to me several times: "Can you define the relationship between the error types and sample size?" Of course there is a
relationship between these, as well as the effect size, as those components are the required parameters in most sample size equations.
Lowering alpha or beta - and thus lowering the risk of type I or II errors - means you are asking for greater confidence in your answer. Requesting greater confidence (i.e., increasing precision) in your answer translates to more data points, which often means more people (or more measurements of those people). For example, going from a standard alpha of 0.05 to an alpha of 0.01 will increase the sample size, as will going from a beta of 0.20 (80% power) to a beta of 0.10 (90% power). Likewise, a weaker effect will also demand a larger sample.
I was planning to refer the student to previously published power and sample size curves, but having had difficulty finding them I decided to create a few graphs depicting this relationship. This
blog post serves as a way to share these graphs with others who may find this beneficial as well as share the R code I used to create these graphs. These were created by performing a sample size
calculation for Pearson correlations of two continuous variables across a range of effect sizes, alphas, and betas. Notes on each figure depict assumptions. All were calculated as two-sided
hypotheses. If you do find these useful, please cite this resource. Thanks and enjoy!
# Sample size curves
# Requires: the pwr package (for pwr.r.test)
# 11/6/18 -- Neal Goldstein

### FUNCTIONS ###

library(pwr)

### ALPHA (TYPE I ERROR) ###

#range of correlation coefficients
r = seq(0.2,0.9,0.1)

#range of alpha values
alpha = seq(0.001,0.20,0.001)

#initialize plot
plot(NULL, xlim=c(0,200), ylim=c(0,0.2), xlab="minimum sample required", ylab="alpha (type i error)", main="Sample vs. Alpha", sub="beta=0.20, power=0.80")

#calculate sample size over all correlations and alpha values
for (j in 1:length(r)) {
  sample_alpha = NA
  for (i in 1:length(alpha)) {
    sample_alpha = c(sample_alpha, pwr.r.test(n=NULL, r=r[j], sig.level=alpha[i], power=0.8, alternative="two.sided")$n)
  }
  sample_alpha = sample_alpha[-1]

  #add sample size curves to plot
  lines(sample_alpha, alpha, col=palette()[j])
}

#add legend
legend("topright", legend=r, lty=rep(1,8), col=palette(), cex=0.8)

### BETA (TYPE II ERROR) ###

#range of correlation coefficients
r = seq(0.2,0.9,0.1)

#range of beta values
beta = seq(0.01,0.30,0.001)

#initialize plot
plot(NULL, xlim=c(0,300), ylim=c(0,0.3), xlab="minimum sample required", ylab="beta (type ii error)", main="Sample vs. Beta", sub="alpha=0.05")

#add a secondary axis showing power = 1 - beta
axis(4, at=c(0,0.10,0.20,0.30), labels=(1-c(0,0.10,0.20,0.30)))

#calculate sample size over all correlations and beta values
for (j in 1:length(r)) {
  sample_beta = NA
  for (i in 1:length(beta)) {
    sample_beta = c(sample_beta, pwr.r.test(n=NULL, r=r[j], sig.level=0.05, power=(1-beta[i]), alternative="two.sided")$n)
  }
  sample_beta = sample_beta[-1]

  #add sample size curves to plot
  lines(sample_beta, beta, col=palette()[j])
}

#add legend
legend("topright", legend=r, lty=rep(1,8), col=palette(), cex=0.8)

### EFFECT SIZE ###

#range of correlation coefficients
r = seq(0.1,0.9,0.01)

#initialize plot
plot(NULL, xlim=c(0,300), ylim=c(0,1), xlab="minimum sample required", ylab="r", main="Sample vs. Effect Size", sub="alpha=0.05, beta=0.20, power=0.80")

#calculate sample size over all correlations
sample_r = NA
for (j in 1:length(r)) {
  sample_r = c(sample_r, pwr.r.test(n=NULL, r=r[j], sig.level=0.05, power=0.80, alternative="two.sided")$n)
}
sample_r = sample_r[-1]

#add the sample size curve to plot
lines(sample_r, r)
Cite: Goldstein ND. A simple presentation of error rates, effect size, and sample. Nov 6, 2018. DOI: 10.17918/goldsteinepi. | {"url":"https://goldsteinepi.com/blog/a-simple-presentation-of-error-rates-effect-size-and-sample/index.html","timestamp":"2024-11-07T09:32:24Z","content_type":"text/html","content_length":"6285","record_id":"<urn:uuid:d132bbb0-3446-4d88-ac67-40296673fb12>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00451.warc.gz"} |
Homotopy and homology of p-subgroup complexes
NOTE: Text or symbols not renderable in plain ASCII are indicated by [...]. Abstract is included in .pdf document.
In this thesis we analyzed the simple connectivity of the Quillen complex at [...] for the classical groups of Lie type. In light of the Solomon-Tits theorem, we focused on the case where [...] is
not the characteristic prime. Given (p,q) = 1, let dp(q) be the order of [...] in [...]. In this thesis we proved the following result:
Main Theorem. When (p,q) = 1 we have the following results about the simple connectivity of the Quillen complex at p, Ap(G), for the classical groups of Lie type:
1. If G = GLn(q), dp(q) > 2 and mp(G) > 2, then Ap[...](G) is simply connected.
2. If G = [...], then: (a) Ap(G) is Cohen-Macaulay of dimension n - 1 if dp(q) = 1. (b) If mp(G) > 2 and dp(q) is odd, then Ap(G) is simply connected.
3. If G = [...], then: (a) Ap(G) is Cohen-Macaulay of dimension n - 1 if [...] and dp(q) = 1. (b) If mp(G) > 2 and dp(q) is odd, then Ap(G) is simply connected. (c) If n [...] 3, q [...] 5 is odd,
and dp(q) = 2, then Ap(G)(> Z) is simply connected, where Z is the central subgroup of G of order p.
In the course of analyzing the [...]-subgroup complexes we developed new tools for studying relations between various simplicial complexes and generated results about the join of complexes and the
[...]-subgroup complexes of products of groups. For example we proved: Theorem A. Let [...] be a map of posets satisfying:
(1) [...] is strict; that is,[...] (2)[...] (3) [...]connected for all [...] with [...].
Then Y n-connected implies X is n-connected.
Theorem A provides us with a tool for studying [...] in terms of [...]. For example, we used this method to prove:
Theorem 8.6. Let G = [...] where [...] is solvable and S is a p-group of symplectic type. Then [...]spherical.
In this thesis we also generated a library of results about geometric complexes which do not arise as [...]-subgroup complexes. This library includes, but is not restricted to, the following:
(1) the poset of proper nondegenerate subspaces of a 2[...]-dimensional symplectic space - ordered by inclusion - is Cohen-Macaulay of dimension n-2. (2) If q is an odd prime power and n [...] (with
n [...] 5 if q = 3), then the poset of proper nondegenerate subspaces of an n-dimensional unitary space over Fq2 is simply connected.
Item Type: Thesis (Dissertation (Ph.D.))
Degree Grantor: California Institute of Technology
Division: Physics, Mathematics and Astronomy
Major Option: Mathematics
Thesis Availability: Public (worldwide access)
Research Advisor(s): • Aschbacher, Michael
Thesis Committee: • Aschbacher, Michael (chair)
• Wales, David B.
• Wilson, Richard M.
• Guralnick, Robert M.
• Shahriari, Shahriar
Defense Date: 12 April 1994
Record Number: CaltechETD:etd-06032004-143153
Persistent URL: https://resolver.caltech.edu/CaltechETD:etd-06032004-143153
DOI: 10.7907/GAWP-0T18
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 2418
Collection: CaltechTHESIS
Deposited By: Imported from ETD-db
Deposited On: 04 Jun 2004
Last Modified: 21 Dec 2019 02:57
Thesis Files
PDF (Das_km_1994.pdf) - Final Version
See Usage Policy.
Satoshi’s Focus on a “Transactional Currency”
Written in response to the following question:
In the bitcoin white paper, it sounds like Satoshi is talking about a transactional currency. And yet, it seems like the better application is bitcoin as a store of value. So why didn’t he argue
it more that way. It sounds more like he wanted to see a currency that could be used more day to day. So basically, if what I am saying is true, should be a concern that he didn’t really
understand what he built?
TL;DR: Investigations into who Satoshi is/was or what his original intentions were might be interesting from an academic or historical perspective, but they have no bearing on the current economics
or function of Bitcoin.
Does it Matter?
Does It Matter from an Economic Perspective?
Economics is solely a function of the appraisals of acting humans who appraise goods subjectively, based on what they believe the goods in question can help them achieve. What the inventor of a good
intended or how that good was created bears little on how those human actors will make use of and appraise the goods in question. Examples abound in everyday products on the shelves around us.
Incidentally, therefore, Economics is, in general, “past-agnostic”. For example, successful entrepreneurs use concepts such as “sunk cost”. Questions around how Bitcoin could have originally
“obtained value” and why “the first guy” accepted “worthless” Bitcoin as payment for a pizza might be interesting academic or historical questions (which I could expand upon at length), but actual
economic actors involved with Bitcoin, concerned with their future liquidity/portfolio-values focus on today’s exchanges, data and price quotes.
I therefore find it more useful in economic analysis to first examine the perceived nature of something, and then work back towards its value, rather than consider what something was ‘intended’ to
do. Yes, this can be rather difficult and complex (particularly when dealing with the value of a good, such as a money, that is appraised on its perceived value to others) due to second-order effects
and feedback loops. For example, as Bitcoin establishes a longer history and gradual price-rise due to increasing perceived value, its perceived value among the masses will rise.
Bitcoin provides us with the ability to independently verify that each transaction (a) is authorized and (b) didn’t create new Bitcoins. It begins and ends there. And so, its value proposition can
only be to serve as a “hard” money.
Does It Matter from a Technical Perspective?
No. Bitcoin has matured, been battle-tested and hardened over 11 years in the FLOSS wild. It is a different animal, is subjected to relentless peer-review and NASA-level, ultra-conservative coding
standards and practices. Users of Bitcoin fully rely on the technical competence of a tremendously decentralized network of open source developers — not the technical competence of Satoshi. For
better or worse (I think much better) — that’s how it is.
Satoshi: My Own Thoughts
I preface this part by reiterating my general lack of interest in this topic due to the lack of bearing it has on the economics or technicals of Bitcoin.
Do I Have Doubts About Satoshi’s Technical Competence?
No. In my opinion, as a programmer of over 30 years who has implemented Satoshi’s design in order to learn about it, I am dumbfounded by the genius of the design and most impressed by his ability to
get it (mostly) right in the first iteration. I’ll leave it at that.
What Is My Take on Satoshi’s Intentions (Hopes?) For Bitcoin’s Economic Role?
Here I believe that it is most useful to consider the fact that Satoshi was a Cypherpunk (and so he subscribes to Cypherpunk ethos) and, within that context, the message he sent by embedding the
infamous headline in the genesis block. This was him screaming loudly and clearly that Bitcoin was intended as a solution to central banks manipulating money supplies: a hard money (store of value).
Why Did Satoshi Talk about Transactions and Cash?
First, I think it’s crucial in understanding this to think about Bitcoin from the perspective of a programmer / designer whose biggest hurdle in creating this hard money is how to transact in such a
way that one can independently verify that the money supply was not monkeyed with by solving the “double spend” problem. Clearly if one could not transact with Bitcoin, it would not be able to
function as a hard money — or, indeed, any kind of money. From this perspective, of course the focus of the design and the code must be on the transactions, because this was the hurdle to overcome
and the most complicated piece of the design/implementation.
Second, I have seen endless confusion in the space around money and its roles as a “store of value”, “medium of exchange” and “unit of account” vis-à-vis Bitcoin. Is Bitcoin a “money” if it is not a
“medium of exchange”? How “transactional” does a money have to be to be considered a “medium of exchange”? (Does it matter?) I can also write at length about this but suffice it to say that a lot of
previously buried ideas have been brought to light and there has been a lot of intellectual development in this space on those topics in the last 10 years — precisely due to the invention of Bitcoin.
It should also be appreciated that there is an important temporal dimension or natural evolution/progression of a money as it is used differently through different stages of its development.
We would have to assume that Satoshi (1) was so crystal clear in his own thinking about the nuances of these concepts, (2) had an appreciation for the extent to which his whitepaper would come under
scrutiny to the point that his not being precise on these nuances would lead to confusion and (3) had an extra appreciation for the progression of Bitcoin through various stages and that by not
drafting his whitepaper in such a way so as to be relevant to each stage of development during that particular stage (particularly when read by someone without an appreciation for those nuances and
that progression), would lead to such confusion. To assume this of Satoshi would be to credit him with some sort of omniscience.
I don’t believe it is useful to split hairs on wording in his white paper which was, let’s not forget, written by a flesh-and-blood, fallible human being and probably intended for a small audience of
Cypherpunks. Perhaps, if given a second stab at it after 11 years (and in seeing how people have been fixated on his focus on transactions), he would have drafted it differently.
I’ll end by saying that if the greatest gift that Satoshi gave to the world is Bitcoin, then the second greatest gift he gave us was disappearing. People have a habit of looking for leaders and
figures to follow. Such a tendency is toxic to The Decentralized Technology.
Quantum Tunneling in Cosmology
Quantum tunneling in cosmology involves particles or even entire spacetime overcoming energy barriers that classical physics deems impassable. This process is crucial in explaining early universe
phenomena, such as phase transitions triggered by tunneling from a false vacuum state. It also provides potential explanations for how the universe could emerge from “nothing”.
Exploring the Concept
The realm of cosmology, the study of the universe on its grandest scales, is filled with phenomena that challenge our understanding of fundamental physics. Among these enigmatic concepts lies quantum
tunneling, a phenomenon originating from the peculiar rules of quantum mechanics. While quantum tunneling is commonly associated with microscopic particles, its application in cosmology introduces a
fascinating perspective on the universe's evolution and structure. In this article by Academic Block, we embark on a journey to unravel the mysteries of quantum tunneling in cosmology, exploring its
theoretical underpinnings, observational implications, and profound implications for our understanding of the cosmos.
Understanding Quantum Tunneling
To comprehend quantum tunneling in the context of cosmology, we must first grasp its fundamentals within the framework of quantum mechanics. In classical physics, particles follow predictable
trajectories governed by Newton's laws of motion. However, the microscopic realm of quantum mechanics introduces uncertainty and wave-particle duality, where particles can behave as both particles
and waves simultaneously.
One of the most intriguing consequences of quantum mechanics is the phenomenon of tunneling. Classically, a particle encountering a potential barrier higher than its energy level would be unable to
surmount it. However, according to quantum mechanics, there exists a finite probability that the particle can "tunnel" through the barrier, appearing on the other side despite lacking sufficient
energy to traverse it classically. This phenomenon defies classical intuition and has profound implications for various physical systems, from semiconductor devices to nuclear fusion reactions.
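The tunneling probability the analogy rests on can be made quantitative. As a hedged illustration (a standard textbook result, not specific to this article), the WKB approximation gives the transmission probability for a particle of mass m and energy E through a barrier V(x) as roughly:

```latex
T \approx \exp\!\left(-\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m\,\bigl(V(x)-E\bigr)}\;dx\right)
```

where x1 and x2 are the classical turning points at which V(x) = E. The probability is exponentially suppressed by the barrier's height and width, but it is never exactly zero — which is precisely why tunneling remains possible where classical physics forbids passage.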
Application to Cosmology
In the cosmic context, quantum tunneling finds application in scenarios where the universe undergoes phase transitions or quantum fluctuations. During the early stages of the universe's evolution,
immediately after the Big Bang, quantum fluctuations played a crucial role in seeding the cosmic structures we observe today, such as galaxies, clusters, and cosmic microwave background radiation.
Quantum tunneling becomes particularly intriguing when considering inflation, a hypothetical period of rapid expansion thought to have occurred moments after the Big Bang. Inflationary cosmology
posits that the universe underwent an exponential expansion, driven by the energy of a scalar field known as the inflaton. During inflation, quantum fluctuations in the inflaton field could lead to
the formation of regions with different energy densities, akin to potential barriers in the quantum tunneling analogy.
Inflationary Quantum Tunneling
The concept of inflationary quantum tunneling proposes that during the inflationary epoch, quantum fluctuations in the inflaton field could trigger the transition from a false vacuum state to a true
vacuum state in localized regions of the universe. This transition corresponds to the tunneling of the inflaton field through a potential barrier, resulting in the creation of a new inflationary
domain, or "bubble," within the existing inflating space-time.
Within these inflationary bubbles, the inflaton field settles into its true vacuum state, leading to the cessation of inflation and the onset of standard hot Big Bang expansion. Meanwhile, outside
the bubble, inflation continues unabated. This process generates a multiverse-like scenario, where different regions of space-time undergo distinct evolutionary trajectories, each encapsulated within
its own inflationary bubble.
Observable Consequences
While direct observation of inflationary quantum tunneling remains elusive, its theoretical predictions have profound implications for observational cosmology. One such consequence is the imprint of
inflationary bubbles on the cosmic microwave background (CMB) radiation, the relic radiation from the early universe. Inflationary bubbles leave characteristic signatures in the CMB, such as
temperature fluctuations and polarization patterns, which can potentially be detected by advanced cosmological observatories.
Furthermore, inflationary quantum tunneling can lead to the formation of primordial black holes (PBHs) within the inflationary bubbles. PBHs are hypothesized to have formed from the gravitational
collapse of overdense regions in the early universe. These black holes could have significant observational consequences, including gravitational lensing effects and the production of gravitational
waves, offering potential avenues for constraining inflationary models through astrophysical observations.
Challenges and Future Directions
Despite its theoretical appeal, the concept of inflationary quantum tunneling poses several challenges and open questions for cosmologists. One of the primary challenges is the lack of a concrete
theoretical framework for describing the dynamics of inflationary tunneling processes. Existing models often rely on simplified approximations and assumptions, making it difficult to make precise
predictions or testable hypotheses.
Additionally, the issue of eternal inflation complicates our understanding of inflationary quantum tunneling. In scenarios where inflation persists indefinitely, the formation of inflationary bubbles
becomes an ongoing process, leading to a fractal-like structure known as the "multiverse." Understanding the statistical properties of this multiverse and its observational consequences remains a
topic of active research in cosmology.
Future directions in the study of inflationary quantum tunneling involve the development of more sophisticated theoretical models and observational techniques. Advanced cosmological surveys and
experiments, such as the Atacama Cosmology Telescope and the Cosmic Microwave Background Stage 4 (CMB-S4) experiment, hold the promise of providing valuable insights into the inflationary paradigm
and the role of quantum tunneling in shaping the universe's large-scale structure.
Final Words
In conclusion, quantum tunneling in cosmology represents a fascinating intersection of quantum mechanics and cosmological theory, offering profound insights into the early universe's dynamics and
evolution. From the inflationary origins of the cosmos to the formation of cosmic structures, quantum tunneling leaves its indelible mark on the fabric of space-time, challenging our understanding of
the universe's fundamental principles. While many questions remain unanswered, ongoing research and observational efforts continue to shed light on the enigmatic phenomenon of quantum tunneling in
the cosmic tapestry of the universe. Please provide your views in the comment section to make this article better. Thanks for Reading!
This Article will answer your questions like:
What is quantum tunneling in cosmology?
Quantum tunneling in cosmology refers to the phenomenon where particles pass through energy barriers that they classically shouldn't be able to surmount. This occurs due to the probabilistic nature
of quantum mechanics, allowing particles to "tunnel" through barriers. In cosmology, it is crucial for understanding processes such as the formation of primordial black holes and the dynamics of the
early universe, where tunneling effects can influence the evolution of the cosmos.
How does quantum tunneling relate to the early universe?
In the early universe, quantum tunneling plays a role in the processes that led to cosmic inflation and the formation of the first structures. It allows particles and fields to escape from
high-energy states, facilitating transitions between different phases of the universe's evolution. This tunneling can lead to fluctuations in the energy density, influencing the rate of inflation and
the subsequent formation of cosmic structures.
What role does quantum tunneling play in the Big Bang theory?
Quantum tunneling is thought to play a significant role in the Big Bang theory by enabling transitions from a high-energy state to the lower-energy state observed in the early universe. This
tunneling process could have initiated the rapid expansion known as cosmic inflation, providing a mechanism for the universe's initial conditions and the distribution of matter and energy immediately
following the Big Bang.
How does quantum tunneling affect cosmic inflation?
Quantum tunneling can significantly impact cosmic inflation by allowing fields to tunnel through potential barriers, leading to rapid and exponential expansion of the universe. This tunneling
facilitates transitions between different energy states, which can drive the inflationary phase and influence the distribution of density fluctuations that seed the formation of large-scale
structures in the universe.
What is the significance of quantum tunneling for black hole formation?
Quantum tunneling is significant for black hole formation as it provides a mechanism for the creation of primordial black holes. In the early universe, tunneling through high-energy barriers could
have allowed for the formation of these black holes from quantum fluctuations. This process helps explain the formation of black holes in the early cosmos and their role in cosmic evolution.
How does quantum tunneling impact the formation of cosmic structures?
Quantum tunneling impacts cosmic structure formation by influencing the density fluctuations that lead to the growth of large-scale structures. Tunneling effects in the early universe could create
regions of varying density, which then collapse under gravity to form stars, galaxies, and clusters. This process contributes to the inhomogeneity observed in the cosmic structure today.
What are the implications of quantum tunneling for dark matter?
Quantum tunneling has implications for dark matter as it might affect its distribution and interactions. Tunneling could influence the formation and dynamics of dark matter structures in the early
universe. Understanding tunneling processes might provide insights into the nature of dark matter and its role in cosmic evolution, potentially offering clues to its composition and behavior.
How is quantum tunneling related to the multiverse theory?
Quantum tunneling is related to multiverse theory through the idea that different regions of space can transition between various quantum states, potentially creating multiple universes. Tunneling
could allow different regions of the universe to evolve into distinct, separate universes with varying physical laws and constants, supporting the concept of a multiverse where many parallel
universes exist.
What are the observable effects of quantum tunneling in cosmology?
Observable effects of quantum tunneling in cosmology include the potential signatures in cosmic microwave background (CMB) radiation and large-scale structure. Tunneling events may contribute to
fluctuations in the CMB and influence the formation of cosmic structures, though direct observations are challenging. Indirect effects are often studied through theoretical models and simulations to
understand their implications.
How does quantum tunneling influence the understanding of singularities?
Quantum tunneling influences the understanding of singularities by providing insights into how quantum effects might resolve or modify the behavior at singular points, such as those found in black
holes or the Big Bang. Tunneling processes could potentially alter the traditional view of singularities, suggesting that quantum mechanics might mitigate or alter their properties at extreme scales.
What experiments or observations test quantum tunneling in cosmology?
Testing quantum tunneling in cosmology often involves indirect methods such as analyzing cosmic microwave background (CMB) data, studying primordial black holes, and examining large-scale structure
formation. Observations of anomalies in these areas can provide evidence for tunneling effects. Additionally, theoretical models and simulations are used to predict and interpret potential tunneling
signatures in cosmological data.
How does quantum tunneling affect the concept of the cosmic landscape?
Quantum tunneling affects the concept of the cosmic landscape by allowing transitions between different regions of the landscape, which represents various possible states of the universe. Tunneling
can facilitate the exploration of different vacuum states, influencing the evolution of the universe and potentially leading to diverse cosmological outcomes and structures within the landscape of
possible universes.
What theoretical models incorporate quantum tunneling in cosmology?
Theoretical models incorporating quantum tunneling in cosmology include the inflationary universe model, which uses tunneling to explain the rapid expansion of the early universe. Other models
explore tunneling effects in the context of string theory, brane cosmology, and multiverse scenarios, where tunneling processes contribute to the dynamics and structure of the universe.
How does quantum tunneling relate to the formation of primordial black holes?
Quantum tunneling is related to the formation of primordial black holes by providing a mechanism for their creation in the early universe. Tunneling through high-energy barriers could have allowed
the formation of these black holes from quantum fluctuations. This process helps explain their presence and distribution in the primordial universe and their role in cosmic evolution.
What are the challenges in studying quantum tunneling in cosmological contexts?
Challenges in studying quantum tunneling in cosmological contexts include the difficulty of observing direct effects due to the extreme conditions involved. Theoretical models must be precise to make
accurate predictions, and observational evidence can be indirect or subtle. Additionally, integrating quantum tunneling with classical cosmological models requires complex calculations and
assumptions that can be difficult to test empirically.
Major discoveries because of Quantum Tunneling in Cosmology
Inflationary Cosmology: The concept of inflationary cosmology, which incorporates quantum tunneling as a mechanism for the rapid expansion of the early universe, has provided a framework for
explaining various observed phenomena, such as the large-scale homogeneity and isotropy of the universe, the flatness problem, and the origin of cosmic structures. While not a direct invention,
inflationary cosmology has revolutionized our understanding of the universe’s evolution and structure.
Cosmic Microwave Background (CMB) Radiation: Observations of the cosmic microwave background radiation have provided strong evidence in support of inflationary cosmology. The subtle temperature
fluctuations and polarization patterns observed in the CMB carry information about the early universe’s density fluctuations, which originated from quantum fluctuations during inflationary expansion,
including possible effects of quantum tunneling.
Primordial Black Holes (PBHs): While the formation of primordial black holes is not solely attributed to quantum tunneling, quantum fluctuations during inflationary periods are believed to contribute
to their formation. The study of PBHs has implications for various astrophysical phenomena, including dark matter, gravitational wave astronomy, and the evolution of galaxies.
Gravitational Waves: The detection of gravitational waves by experiments such as LIGO (Laser Interferometer Gravitational-Wave Observatory) and Virgo has provided direct evidence for the existence of
black hole mergers and neutron star collisions. Quantum mechanics plays a fundamental role in the generation of gravitational waves, including the quantum fluctuations that contribute to the
formation and evolution of black holes, potentially influenced by quantum tunneling processes during inflation.
Quantum Cosmology: The field of quantum cosmology explores the application of quantum mechanics to the entire universe, including its origin and evolution. While still in its infancy, quantum
cosmology aims to develop a unified framework that combines quantum mechanics and general relativity to describe the universe’s behavior at the most fundamental level, potentially shedding light on
phenomena such as the Big Bang singularity and the multiverse.
Advanced Cosmological Surveys: Quantum mechanics and its associated concepts, including quantum tunneling, inspire the development of advanced observational techniques and cosmological surveys aimed
at probing the early universe’s properties. Projects such as the Atacama Cosmology Telescope (ACT) and the Cosmic Microwave Background Stage 4 (CMB-S4) experiment seek to map the cosmic microwave
background with unprecedented precision, providing valuable insights into the universe’s early history and potentially confirming predictions of inflationary cosmology influenced by quantum tunneling.
Controversies related to Quantum Tunneling in Cosmology
Initial Conditions and Fine-Tuning: One controversy surrounds the initial conditions required for inflationary quantum tunneling to occur. Some critics argue that the precise initial conditions
necessary for inflation to initiate via quantum tunneling seem finely-tuned, raising questions about the robustness of inflationary cosmology as a whole.
Quantum Fluctuations and Observable Effects: While inflationary quantum tunneling is theorized to leave observable signatures in the cosmic microwave background (CMB), the specific effects of quantum
fluctuations on the CMB are still subject to debate. Some researchers argue that distinguishing between the effects of quantum fluctuations and other cosmological processes on the CMB remains
challenging, leading to uncertainties in interpreting observational data.
Predictive Power and Testability: Critics of inflationary quantum tunneling contend that the theory lacks predictive power and testability. Since inflationary models can accommodate a wide range of
parameters and initial conditions, some argue that the theory is overly flexible and can be adjusted to fit observational data retroactively, raising concerns about its scientific validity.
Eternal Inflation and the Multiverse: The concept of eternal inflation and the existence of a multiverse resulting from inflationary quantum tunneling are highly controversial within the scientific
community. Skeptics question the observational consequences of eternal inflation and argue that the multiverse hypothesis lacks empirical support, leading to debates about the testability and
verifiability of such theories.
Alternative Cosmological Models: Critics of inflationary quantum tunneling advocate for alternative cosmological models that do not rely on inflation or quantum tunneling to explain the large-scale
structure of the universe. These alternative models often propose different mechanisms for generating cosmic structures and seek to address perceived shortcomings of inflationary cosmology.
Philosophical Implications: The concept of a multiverse resulting from inflationary quantum tunneling raises profound philosophical questions about the nature of reality and the role of observation
in cosmology. Critics argue that the multiverse hypothesis blurs the line between science and metaphysics, challenging traditional notions of testability and falsifiability in scientific inquiry.
Interpretational Issues: The interpretation of quantum tunneling in the cosmological context is not without controversy. Some researchers question the applicability of quantum mechanics to the
universe as a whole, leading to debates about the appropriate theoretical framework for understanding quantum tunneling in a cosmological context.
Facts on Quantum Tunneling in Cosmology
Inflationary Universe Theory: Quantum tunneling plays a crucial role in the framework of inflationary cosmology, which proposes that the universe underwent a period of rapid expansion shortly after
the Big Bang. During inflation, quantum fluctuations in the inflaton field can lead to the formation of inflationary bubbles through tunneling processes.
Multiverse Hypothesis: Inflationary quantum tunneling can give rise to a multiverse scenario, where different regions of space-time undergo distinct evolutionary trajectories. Each inflationary
bubble represents a separate universe within the overarching multiverse structure.
Phase Transitions: Quantum tunneling can trigger phase transitions in the early universe, leading to changes in its energy state and the formation of new cosmic structures. These phase transitions
are crucial for understanding the dynamics of the universe during its earliest moments.
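The false-vacuum decay picture behind these phase transitions is commonly quantified by the semiclassical Coleman–De Luccia result (see reference 5 below). As a sketch of the standard formula (not a prediction of any particular inflationary model), the decay rate per unit volume of a metastable vacuum takes the form

```latex
\frac{\Gamma}{V} \simeq A\, e^{-B/\hbar},
\qquad
B = S_E[\phi_{\mathrm{bounce}}] - S_E[\phi_{\mathrm{false}}]
```

where S_E is the Euclidean action, the "bounce" is the tunneling field configuration interpolating between the false and true vacua, and the prefactor A depends on fluctuations around the bounce.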
Cosmic Microwave Background (CMB): Inflationary quantum tunneling leaves characteristic signatures in the cosmic microwave background radiation, such as temperature fluctuations and polarization
patterns. Observations of the CMB provide valuable insights into the early universe and help constrain inflationary models.
Primordial Black Holes (PBHs): Quantum fluctuations during inflationary tunneling processes can lead to the formation of primordial black holes within inflationary bubbles. These black holes have
potential observational consequences, including gravitational lensing effects and the production of gravitational waves.
Eternal Inflation: In scenarios of eternal inflation, where inflation persists indefinitely, the formation of inflationary bubbles becomes an ongoing process. This leads to the creation of a
fractal-like multiverse structure, with new universes continually emerging through quantum tunneling.
Challenges and Open Questions: Despite its theoretical appeal, inflationary quantum tunneling poses challenges for cosmologists, including the lack of a precise theoretical framework and the issue of
eternal inflation. Understanding the statistical properties of the multiverse and developing observational techniques to probe inflationary signatures remain areas of active research.
Academic References on Quantum Tunneling in Cosmology
1. Vilenkin, A. (2006). Many worlds in one: The search for other universes. Hill and Wang.: This book by Vilenkin explores the concept of a multiverse resulting from quantum tunneling and
inflationary cosmology.
2. Linde, A. (1990). Particle physics and inflationary cosmology. CRC Press.: Linde’s book looks into various aspects of inflationary cosmology, including quantum tunneling processes.
3. Guth, A. H. (1998). The inflationary universe: The quest for a new theory of cosmic origins. Vintage Books.: This book by Guth provides an in-depth exploration of inflationary cosmology, which
incorporates quantum tunneling as a key mechanism.
4. Hawking, S. W., & Penrose, R. (1996). The nature of space and time. Princeton University Press.: In this book, Hawking and Penrose discuss fundamental concepts in cosmology, including quantum
tunneling and its implications for the nature of space and time.
5. Coleman, S., & De Luccia, F. (1980). Gravitational effects on and of vacuum decay. Physical Review D, 21(12), 3305-3315.: This influential journal article by Coleman and De Luccia discusses
gravitational effects associated with vacuum decay, a process related to quantum tunneling in cosmology.
6. Hawking, S. W. (1982). The development of irregularities in a single bubble inflationary universe. Physics Letters B, 115(4), 295-297.: In this article, Hawking explores the formation of
irregularities in inflationary universes, which may arise from quantum tunneling processes.
7. Guth, A. H., & Weinberg, E. J. (1983). Could the universe have recovered from a slow first-order phase transition?. Nuclear Physics B, 212(2), 321-364.: Guth and Weinberg examine the possibility
of the universe recovering from a slow first-order phase transition, a scenario relevant to inflationary quantum tunneling.
8. Linde, A. D. (1982). Scalar field fluctuations in the expanding universe and the new inflationary universe scenario. Physics Letters B, 116(5-6), 335-339.: This article by Linde discusses scalar
field fluctuations in the expanding universe, a phenomenon relevant to inflationary cosmology and quantum tunneling.
9. Freese, K., Adams, F. C., & Frieman, J. A. (1985). Cosmology with decaying vacuum states. Nuclear Physics B, 287(4), 797-819.: Freese, Adams, and Frieman investigate cosmological scenarios
involving decaying vacuum states, which may be influenced by quantum tunneling processes.
10. Guth, A. H., & Pi, S. Y. (1982). Fluctuations in the new inflationary universe. Physical Review Letters, 49(15), 1110-1113.: This seminal article by Guth and Pi explores fluctuations in the new
inflationary universe, a topic closely tied to quantum tunneling in cosmology.
11. Hawking, S. W. (2000). The future of theoretical physics and cosmology: Celebrating Stephen Hawking’s 60th birthday. Cambridge University Press.: This book contains various essays and
contributions on theoretical physics and cosmology, including discussions on quantum tunneling.
Lines AC and BD intersect at point O. If m∠AOD = (10x − 7)° and m∠BOC = (7x + 11)°, what is m∠BOC?
Group of answer choices: 53°, 89°, 6°, 106°
Solution 1
To find the measure of ∠BOC, we first need to understand that lines AC and BD intersect at point O, forming vertical angles ∠AOD and ∠BOC. Vertical angles are always equal.
So, we can set the measures of these angles equal to each other and solve for x:
10x - 7 = 7x + 11
Subtract 7x from both sides: 3x − 7 = 11. Add 7 to both sides: 3x = 18, so x = 6. Substituting back, m∠BOC = 7(6) + 11 = 53°, and as a check, m∠AOD = 10(6) − 7 = 53° as well.
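As a quick sanity check, the linear equation from the solution above can be solved with a few lines of plain Python (no external libraries; the function name is ours):

```python
# Vertical angles are equal, so 10x - 7 = 7x + 11.
# Solve the linear equation and evaluate both angle expressions.

def solve_vertical_angles():
    x = (11 - (-7)) / (10 - 7)      # x = 18 / 3 = 6
    angle_aod = 10 * x - 7          # (10x - 7) degrees
    angle_boc = 7 * x + 11          # (7x + 11) degrees
    return x, angle_aod, angle_boc

x, aod, boc = solve_vertical_angles()
print(x, aod, boc)  # 6.0 53.0 53.0
```

Both angles come out to 53°, matching the first answer choice.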
Decision making in finance future value of an investment sheet 1 answers
Decision Making in Finance: Future Value of an Investment VI.A Student Activity Sheet 1: You Have to Get Money to Make Money 1. Kafi is considering three job offers in educational publishing. One is
a full-time position as an editor that pays a salary of $37,500 per year. Decision Making in Finance: Future Value of an Investment VI.A Student Activity Sheet 1: You Have to Get Money to Make Money
Charles A. Dana Center at The University of Texas at Austin Advanced Mathematical Decision Making (2010) Activity Sheet 1, 6 pages 3 8. Estimate the monthly take-home income in Row 13 of Job Summary
Table 1. 9.
Advanced Mathematical Decision Making with Ms. Bridges at New Hampstead High School Ms. Bridges AMDM Unit 6: Decision Making in Finance. Unit 6 Vocabulary Project. Review Sheet #1. Review Sheet #1
KEY. Future Value of an Investment.PDF (587k) Decision Making in Finance: Present Value of an Investment VI.B Student Activity Sheet 5: A Cool Tool! Vanessa is a financial planner specializing in
retirement savings. She realizes the importance of using mathematical formulas and the appropriate tools to help her clients understand the reasoning behind the advice she is giving. TCSS – Advanced
Mathematical Decision Making Unit 6 Concept 1: Future Value of an Investment (MAMDMA3 a and b) Essential Questions: How can students analyze which income opportunities are best for a given situation
based on type of income, type of employment, taxes, benefits, and financial goals? The concept of time value of money is important to financial decision making because A) it emphasizes earning a
return of interest on the money you invested. B) it recognizes that $1 today has more value than $1 received a year from now.
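The time-value-of-money idea in the passage above ($1 today has more value than $1 received a year from now) is captured by the compound-interest future value formula FV = PV × (1 + r)^n. A minimal sketch in Python (the function name and the 5% rate are illustrative, not taken from the activity sheets):

```python
# Future value of a single deposit with one compounding per period.

def future_value(pv, rate, periods):
    """FV = PV * (1 + rate) ** periods."""
    return pv * (1 + rate) ** periods

# $1.00 invested today at 5% per year is worth $1.05 in a year,
# which is why $1 today beats $1 received a year from now.
print(round(future_value(1.00, 0.05, 1), 2))  # 1.05
```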
TCSS – Advanced Mathematical Decision Making. Unit 6. Concept 1: Future Value of an Investment (MAMDMA3 a and b). Essential Questions: ▫ How can 26 Sep 2019 The future value function is available on
most spreadsheet programs, including Microsoft Let's run through the variables in the future value function, one-by- one: but your answers might be slightly off as the calculator now assumes monthly
before making any investment or personal finance decisions. 10 Dec 2019 Formula to Calculate Net Present Value (NPV) in Excel NPV encompasses many financial topics in one formula: cash flows, the
time value of money, comes to the rescue for financial decision making, provided the investments, estimates, There are two methods to calculate the NPV in the Excel sheet. 25 Jun 2019 Investing
Decisions. Fundamental analysis depends heavily on a company's balance sheet, its statement of cash flows and its income statement. The key aspects of financial decision-making relate to financing,
investment, One determination of the amount required for running of business and Evaluating the size, timing, and risk of future cash flows (both cash inflows Capital budgeting decisions determine
the fixed assets composition of a firm's Balance Sheet.
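The NPV calculation discussed above can be sketched in a few lines of Python; the cash-flow numbers below are made up for illustration and are not taken from the Excel methods the passage mentions:

```python
# Net present value: discount each future cash flow back to today.
# cashflows[0] is the time-0 flow (usually the negative initial outlay);
# cashflows[t] arrives at the end of period t.

def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Pay 1000 now, receive 400 at the end of each of the next three years:
print(round(npv(0.08, [-1000, 400, 400, 400]), 2))  # 30.84
```

A positive NPV at the chosen discount rate suggests the investment adds value; a negative NPV suggests it destroys value.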
Decision Making in Finance: Future Value of an Investment VI.A Student Activity Sheet 1: You Have to Get Money to Make Money Charles A. Dana Center at The University of Texas at Austin Advanced
Mathematical Decision Making (2010) Activity Sheet 1, 5 pages 2 Another consideration in comparing jobs is the benefits each provides, such as health Decision Making in Finance: Present Value of an
Investment VI.B Student Activity Sheet 4: Road to $1 Million In Student Activity Sheet 3, you analyzed the future value of an investment over time. You began with $2,600 invested in a savings account
for 30 years. After 30 years, your initial investment would be worth $9,062.70. Decision Making in Finance: Present Value of an Investment VI.B Student Activity Sheet 5: A Cool Tool! Charles A. Dana
Center at The University of Texas at Austin Advanced Mathematical Decision Making (2010) Activity Sheet 5, 9 pages Reginald wants to find the future value of an investment of $6,000 that earns 6.25%
Decision Making in Finance: Future Value of an Investment VI.A Student Activity Sheet 3: Time Value of Money Charles A. Dana Center at The University of Texas at Austin Advanced Mathematical Decision
Making (2010) Activity Sheet 3, 5 pages 11 The future value of an investment is the amount it will be worth after so many months or Decision Making in Finance: Future Value of an Investment VI.A
Student Activity Sheet 2: What Makes Money Work for You? Charles A. Dana Center at The University of Texas at Austin Advanced Mathematical Decision Making (2010) Activity Sheet 2, 2 pages 7 3. Based
on the processes you used to fill in the spreadsheet in Question 1, write a function
1 Apr 2011 Find out the future value of an investment with the Excel FV Function. For example if you're not making regular payments you can leave the pmt argument out. of the Excel functions for
personal financial decisions: the PV, FV, PMT, 1. calculate the compound interest up to the point in time where you 1. Investment environment and investment management process………………….. .7 analysis
and valuation for investment decision making, options pricing and common set of financial principles, such as the present value, the future value, the cost Markowitz portfolio theory answers this
question using efficient set. To answer this question, you can request your credit score (for which there is a Because different lenders have different criteria for making a loan, where you 3.307
percent interest for the loan.1 This means a monthly payment of $877. A history of prompt payments of at least the minimum amount due helps your score.
In finance, discounted cash flow (DCF) analysis is a method of valuing a project, company, or asset using the concepts of the time value of money. Discounted cash flow analysis is widely used in
investment finance and real estate. Thus the discounted present value (for one cash flow in one future period) is expressed as: DPV = FV / (1 + r)^n. 14 Feb 2019 Businesses consider the time value of money before making an
investment decision. They need to know what the future value is of their investment. Are you having a hard time deciding what path to take in the future? Students are to toss the coin with the
correct value to the target with the
Interest and compound interest are central in Finance: Firms borrow funds and
of modern finance, that is, making the best use of the organization's financial resources. Potential borrowers and investors need answers to questions like these. Exhibit 1: The FV formula in this
exhibit predicts investment future value (FV).
English Dictionary
amount /əˈmaʊnt/
1. Be tantamount or equivalent to (amount)
be tantamount or equivalent to
Her action amounted to a rebellion
2. Develop into (come, add up, amount)
develop into
This idea will never amount to anything
nothing came of his grandiose plans
3. Add up in number or quantity (add up, come, amount, number, total)
add up in number or quantity
The bills amounted to $2,000
The bill came to $2,000