During an investigation into the properties of the SpeedSmart, the following was found in 2022:
With a clean browser on a wired Linux desktop, the download speed and the time it takes to display the final download speed were measured five times. The results are:
1. 75.37 Mbps (26 seconds)
2. 75.08 Mbps (27 seconds)
3. 75.23 Mbps (27 seconds)
4. 75.53 Mbps (27 seconds)
5. 75.55 Mbps (27 seconds)
Accuracy and consistency
Based on the measurements above, the median is 75.37 Mbps and the sample standard deviation is 0.20 Mbps.
Note that the standard deviation is larger than the least significant digit of the measurements. Based on the standard deviation, the result should be presented as 75 * 10^0 Mbps.
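The two statistics can be reproduced directly with Python's standard library (a quick check of the figures above):

```python
import statistics

# The five measured download speeds (Mbps) from the runs above.
speeds = [75.37, 75.08, 75.23, 75.53, 75.55]

median = statistics.median(speeds)   # middle value of the sorted runs
stdev = statistics.stdev(speeds)     # sample standard deviation

print(median)            # 75.37
print(round(stdev, 2))   # 0.2
```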
Free to use
This speed test is free to use. The following ads are shown:
1. None, not applicable
Hence this speed test earns 1 point.
Easy to use
This speed test works fine in:
1. Edge on Windows
2. Chrome on Windows
3. Chrome on Android
4. Safari on iOS
5. Firefox on Linux
According to Google (whose mobile-friendly test has since been discontinued) and Bing, this speed test is mobile friendly.
The Tingtun Page Checker found the following barriers:
1. Use of pointing-device-specific only event handlers [2.1.1] (15)
2. Primary language of page [3.1.1] (1)
3. Define ids for elements [4.1.1] (4)
4. Provide role name for div/span with event handler [4.1.2] (15)
Hence this speed test earns 0 points.
When you use this speed test for the first time, 1 click is needed to start the test. A restart of the speed test requires 1 click. Hence this speed test earns 2 points.
Privacy friendly
This speed test requires the following cookies:
1. PHPSESSID
2. uniqueID
Hence this speed test earns 0 points.
This speed test gives the following information:
1. a progress bar
2. the upload speed
3. ping / latency / jitter
Hence this speed test earns 3 points.
The average of the measured time to complete the test is 27 seconds.
Other findings
An analysis of this webpage with ZOMDir's Page Inspector yielded the following findings:
1. There are one or more findings regarding the alternative texts for images
2. There are 6 external stylesheets
3. There are one or more deprecated elements
4. There are hyperlinks to 4 different domains
5. There are no email addresses available
Overall score
Based on the above findings and the mutual comparison between the different speed tests, SpeedSmart has earned 12 points.
This puts SpeedSmart in position 3 in this speed test test.
Shobo's mother's present age is six times Shobo's present age. Shobo's age five years from now will be one third of his mother's present age. What are their present ages?
Assume Shobo’s present age to be $x$. Then, according to the question, express the mother’s present age in terms of $x$. After that, apply the five-year condition and you will get the equation.
Simplify it and you will get the answer.
Complete step-by-step answer:
In Mathematics, an equation is a statement that asserts the equality of two expressions. The word equation and its cognates in other languages may have subtly different meanings; for example, in
French an equation is defined as containing one or more variables, while in English any equality is an equation.
Solving an equation containing variables consists of determining which values of the variables make the equality true. Variables are also called unknowns and the values of the unknowns that satisfy
the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variable. A conditional equation
is only true for particular values of the variables.
Single variable equation can be $x+2=0$.
An equation is written as two expressions, connected by an equals sign ("$=$"). The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation.
The most common type of equation is an algebraic equation, in which the two sides are algebraic expressions. Each side of an algebraic equation will contain one or more terms. For example, the equation $A{{x}^{2}}+Bx+C=y$
has left-hand side $A{{x}^{2}}+Bx+C$, which has three terms, and right-hand side $y$, consisting of just one term. The unknowns are $x$ and $y$ and the parameters are $A,$ $B,$ and $C$.
An equation is analogous to a scale into which weights are placed. When equal weights of something (grain for example) are placed into the two pans, the two weights cause the scale to be in balance
and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount of grain must be removed from the other pan to keep the scale in balance. Likewise, to keep an
equation in balance, the same operations of addition, subtraction, multiplication and division must be performed on both sides of an equation for it to remain true.
Let Shobo’s present age be $x$ years.
Then according to the question, the present age of his mother will be $6x$ years.
After five years,
Shobo’s age will be $(x+5)$ years.
It is given in the question that after five years, Shobo’s age will be one third of his mother's present age. Therefore,
$(x+5)=\dfrac{1}{3}\times 6x=2x$
Simplifying the above equation we get,
$\begin{align}
  & (x+5)=2x \\
 & 5=2x-x \\
 & x=5 \\
\end{align}$
Therefore, Shobo’s present age will be $5$ years and his mother’s present age will be $6x=6\times 5=30$ years.
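The same answer can be checked mechanically. The helper `solve_linear` below is our own illustration (not a library function); it solves the general linear equation $ax+b=cx+d$:

```python
from fractions import Fraction

def solve_linear(a, b, c, d):
    """Solve a*x + b = c*x + d for x, assuming a != c."""
    return Fraction(d - b, a - c)

# Shobo's age x satisfies x + 5 = (1/3)(6x) = 2x, i.e. 1*x + 5 = 2*x + 0.
x = solve_linear(1, 5, 2, 0)
print(x, 6 * x)   # 5 30
```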
Read the question carefully. Do not confuse yourself while simplifying. Also, you must understand the concept behind the question. Do not miss any term. Take care that no mistakes occur. Solve the
question in a step by step manner.
Equation of 137 part 2
• The star tetrahedron is encoded in the seed of life which is comprised of 7 circles and the star tetrahedron which is the hexagonal yod encoded in the tetractys is made out of the 7 sephiroth of
construction which corresponds to the 7 circles and the star tetrahedron encodes a cube(Cube of creation) which encodes 777 because 1+2+3+4+5+6=21=777(Gods number, 777=7:14=72) and the star
tetrahedron(6 points, 6 triangles and 6 points of the hexagon) is the number 666 and 1133+3311=4444(444), 1133=23:33=2:333=666 and this time number encodes 333 this shows that the star
tetrahedron which is God is also the Trinity Godhead, the star tetrahedron encodes the dodecahedron(E8 is comprised of dodecahedrons and the 64 tetrahedron grid is comprised of star tetrahedrons,
(infinite dimensional E8 lie group)=(infinite tetrahedron grid)=(infinite star dodecahedron fractal)) which is energy/light which is the multidimensional polygon 666 and the dodecahedron is based
off of the pentagon(Pentagon=pentagram=π) which is the multidimensional polygon 555(15) this shows the 6:5 connection
• The 5 tetractys which is 1+2+3+4+5=15 is very important because 15 is very important as shown in part 1 and the 3D form of the 5 tetractys the 5 tetrahedral tetractys constructs the 64
tetrahedron grid and 5 also constructs the E8 lie group which is the higher dimensional form of the 64 tetrahedron grid and 5 is encoded in the 10 tetractys(55) as shown below also 11 hours 33
mins(11:33) is 11.55 hours
• 19:19+15:36=34:55(Fibonacci time number and triangle time number), 10 tetractys=55 and tetractys=10/Tetragrammaton/Tree of life/10 superstring dimensions and the cosmic tree of life(91(91=13=
fruit of life) tree of lifes) is made up of 550(55=10=1) sephiroth and 10=1=singularity/zero-point energy/infinite god energy-consciousness
• The 777-tree of life/tetractys(10) connection to the star tetrahedron/seed of life and its connection to the 231 gates/E8 lie group/64 tetrahedron grid/metatrons cube/fruit of life connects it to
the infinite fractaling fruit of life/singularity which is encoded in the X and Y equation also the 7 star tetrahedron tetractys which is 1234567+7654321=8888888(infinite god energy) which
encodes 19:19(3311) and the 1133 and 8888888 connection to the fine structure constant is connected to the X and Y equation
• (3)×(1+1)×(4+7)×(1+3+9)×(2+5+3+1)×(5+4+9+7+9+7+1+8+4+4+9+1+9+1+7)×(1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1)=18451290
• 18451290/1765280≈10(tetractys/tetragrammaton/tree of life), π^7/π7≈137
• (2×23×499×19214759967251×55150662460749672076915609)×3=24324508537922225929610471973872677766597486×3=72973525613766677788831415921618033299792458, this number multiplied by 3 is the number encoded in the decimal, so 3333333333333333333333333333333333333333333333 multiplied by 3 is 9999999999999999999999999999999999999999999999, and this number is related to the infinite series of 9's produced by the Fibonacci numbers, but it is 46 nines so it's not a complete set; we have to add two 9's to get 4 sets of 24 repeating Fibonacci numbers(108×4=432) that reduce to 48 nines, but if we turn the middle number 9 in the number into an 8 and divide one by it we get 1/999999999999999999999998999999999999999999999999 which is equal to a decimal which encodes the first 24 Fibonacci numbers!
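The purely arithmetic claim at the end of this bullet can be verified directly: the decimal expansion of 1/(10^48 − 10^24 − 1) carries the Fibonacci numbers in 24-digit blocks, a consequence of the Fibonacci generating function x/(1−x−x²) evaluated at x = 10^-24 (a quick check):

```python
# 10**48 - 10**24 - 1 is exactly 48 nines with the middle 9 turned into an 8.
D = 10**48 - 10**24 - 1
assert str(D) == "9" * 23 + "8" + "9" * 24

# Take the first 30 blocks of 24 decimal digits of 1/D.
digits = str(10**(24 * 30) // D).zfill(24 * 30)
blocks = [int(digits[i:i + 24]) for i in range(0, len(digits), 24)]

# Block n holds the n-th Fibonacci number (until the numbers outgrow 24 digits).
fib = [0, 1]
while len(fib) < 25:
    fib.append(fib[-1] + fib[-2])
print(blocks[1:25] == fib[1:25])   # True
```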
• The number (2×23×499×19214759967251×55150662460749672076915609)/(3×11×47×139×2531×549797184491917×11111111111111111111111) can be turned into a fraction(a picture of the fraction is shown below)
that contains 22 ones; this corresponds to the 22 pathways of the tree of life and the 22 Hebrew letters, and only 21 of the ones in the fractions within the big fraction have numbers added to their
fractions, this corresponds to the 231 gates and 777 which corresponds to the cosmic tree of life
• If you add all the numbers that are added to the fractions up you get 231+1 this shows that the number its self encodes the geometry that constructs the universe and also the 231 gates is
constructed out of the 22 Hebrew letters which connects 22 and 21 and the extra one in 231+1 can represent the singularity and can be the circle around the 231 gates and it can also be added to
the 22 ones to form the number 11111111111111111111111 which is used in the equation
• (2×23×499×19214759967251×55150662460749672076915609)×3=72973525613766677788831415921618033299792458=222, if you add all those numbers up you get 222! and 222 is the double of the trinity 111
which is the Godhead and that means that 222 forms the star tetrahedron(duality) The number 37 is very important, as we have seen, 37×3=111, this shows that the trinity of singularities is
encoded in 37(3×7=21) and 37×21=777!, the 7 represents the 7 sephiroth of construction and the 3 represents the other sephiroth which are the top 3 sephiroth, the number 37 timesed by any
multiple of 3 below and including 27 produces these trinity numbers, the 15 tetractys which is 1+2+3+4+5+6+7+8+9+10+11+12+13+14+15=120 and the singularity(10=1) can be used to form the 15
tetractys: 10(37)×12=120(4D dodecahedron) and the 4D dodecahedron is used to construct the E8 lie group/64 tetrahedron grid(444) and when 10 is turned into its tree of life components and forms
37 this happens: 37×12=444! and 37×144(12×12)=5328 which encodes the xen particle and 222+222=444 and (222×222)/444=111, 222×222 symbolizes a star tetrahedron constructed out of star tetrahedron
(these being 2D shapes) and this makes a 2D 64 tetrahedron grid(444) and (222×222)/3311≈15
• The Hebrew alphabet is encoded in the star tetrahedron and this can be represented as a time number which is 18:22(star tetrahedron: Hebrew letters), and the number 1133 encodes the 18:22 value
as shown: 11:33=693 and 18:22=396=3311
• 18:22=231 gates/E8 lie group/64 tetrahedron grid/Metatrons cube/fruit of life/Cosmic tetractys/Cosmic tree of life. When the tree of life(the 22 pathways and 11 sephiroth) unites into one as
shown by 10=1 you get the 12th sephiroth which is the 231 gates and this forms the Kathara grid/12 tree grid, 1133 as shown many times encodes 4444 and 444=4:44=176 and when you divide 176 by 444
you get 176/444=0.396396396, this value encodes infinite 4's, Virgos multidimensional polygon is 444 her vibrational dimension is the 4th one but its actual value is 4444... (infinite 4's) this
shows the connection between 1133 and 444
• 22+11=33(1+1+2+3+5+8+13), this shows that 13(91) that constructs the fruit of life also constructs the tree of life this makes sense because the tree of life encodes 777 which constructs the CTOL
(550) which is the cosmic tetractys/64 tetrahedron grid, 55=5:5(5 tetractys:5 tetractys)=15:15=30=6:5, 33=3:3=3 cubed=27(1+2+4+8+7+5, fingerprint of god), seed of life/star tetrahedron=18(9)=666=
3 cubes=27
• 777=7:77=8:11=88(1+1+2+3+5+8+13+21+34), 88 symbolizes infinite god energy, 88:8(888)=512(512 tetrahedron grid), 88:8=64:8=72=777
• 1+2+3+4+5+6+7+8(64 tetrahedron grid)=36(fruit of life(13=91=Cosmic tree of life)+star tetrahedrons)=8 tetractys=9, the 8 tetractys encases the 7 tetractys, the numbers that construct and are
encoded in the sacred geometries/superstrings are 84, 168(3×7×8) and 336 these numbers are all in the 3 times tables(3 6 9) the 7 times tables and 8 times tables(minus 84) and these numbers
construct the universe also 336 has a connection to 1133, √1133=33.6
• 1+2+3+4+5+6+7+8+9+10+11+12(12 vibrational dimensions)=78(12 tetractys, 78=15(15 vibrational dimensions))=7:8=(7 tetractys):(8 tetractys)=28+36=64, when the cosmic tree of life's top tree of life
is a 12 tree grid/Kathara grid it is made up of 552 sephiroth and 552=12 which constructs the 12 vibrational dimensions, 78=512 tetrahedron grid
Effect of a Magnetic Field on an Atomic Orbital
1. Introduction
Typically, classical electromagnetism predicts a change in the magnetic dipole moment of an orbital electron when an external magnetic field is set up normal to the plane of the electron’s orbit [1,
2]. It is usually assumed that the speed of the electron changes but the radius remains unchanged during the time through which the magnetic field is changing [3]. This assumption is not consistent
with the effect of perturbation on classical orbits for Coulomb potentials. Griffiths [3] discussed the effect of a magnetic field on an atomic orbital and derived the change in the magnetic dipole
moment of the electron keeping the orbital radius unchanged. He also mentioned (without derivation) that if one assumes constant speed while the radius changes then he would get a change in the
magnetic moment which is twice its magnitude for the case of fixed radius and change in speed. When an atomic electron is subject to an external magnetic field, most authors attribute the speed
change to the additional magnetic force. This justification is confusing, since students have learned that magnetic forces cannot do work and are thus incapable of changing the electron’s speed. A more
thoughtful idea is to attribute the speed change to the modified centripetal force and to assert a fixed radius. Although there has been considerable interest in the effect of a magnetic field on
the motion of an electron [4-8], there has been no direct investigation of the above problem for the general case in which both the speed and the radius are allowed to change. Therefore, in the
present paper we discuss the effect of an external magnetic field on an atomic orbital electron. In particular, we derive the change in the electron’s magnetic moment for the general case in which
both the speed and the orbit radius are allowed to change during the time through which the magnetic field is increasing to its final value. Our treatment for the general case is motivated not only
by its fundamental importance but its close relation to the so-called satellite paradox which deals with the effect of atmospheric drag on a satellite orbit [9-11].
2. An Atomic Electron in a Magnetic Field
Consider an atomic electron that rotates counterclockwise with a speed
and thus its orbital dipole moment is
where we allow both the speed and the radius of the orbit to change from
which, with the help of Equation (3), can be written as
Using binomial expansion to first order in
which, with a further use of binomial expansion for the left hand side term, gives
Our result in Equation (8) is the general case which allows changes in the speed and the radius of the orbit. Before we proceed further, we need to consider the two special cases in the following subsections.
3. Special Cases
Our aim here is to utilize our result of Equation (8) for two special cases: the first deals with a fixed orbit radius and the second with a fixed speed.
3.1. The Fixed Orbit Radius Case
which corresponds to a change in the magnetic dipole moment predicted by Equation (2), namely
The above result is the same as that derived by Griffiths in ref. [3]. This change in magnetic moment corresponds to a change in the orbital angular momentum,
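The fixed-radius result referenced here is the standard textbook one; a sketch of that derivation, using conventional symbols ($e$ the elementary charge, $m_e$ the electron mass, $r$ the orbit radius, $B$ the final field value), runs:

```latex
% Faraday's law around the fixed circular orbit gives the induced electric field:
\oint \mathbf{E}\cdot d\boldsymbol{\ell} = -\frac{d\Phi_B}{dt}
\quad\Longrightarrow\quad
E = -\frac{r}{2}\frac{dB}{dt}.

% Integrating the tangential force on the electron (charge $-e$) while B ramps up:
m_e \frac{dv}{dt} = (-e)E = \frac{er}{2}\frac{dB}{dt}
\quad\Longrightarrow\quad
\Delta v = \frac{erB}{2m_e}.

% With orbital moment $\mu = -\tfrac{1}{2}evr$ and $r$ held fixed:
\Delta\mu = -\frac{1}{2}\,e\,r\,\Delta v = -\frac{e^{2}r^{2}}{4m_e}\,B.
```

The fixed-speed case of Sect. 3.2 then carries twice this magnitude, $\Delta\mu = -e^{2}r^{2}B/(2m_e)$.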
One may also derive
and thus the torque
which immediately gives the change in angular momentum, that is
Therefore, upon using
where we used for the average speed
Using the expression for
Alternatively, one may calculate the change in the kinetic energy of the electron,
which upon the substitution of
The above result for
3.2. The Fixed Speed Case
and in this case the change in the magnetic moment is given by
which upon using Equation (19) becomes
Obviously, our result shows that the magnitude of the change of the magnetic moment for fixed speed is twice its magnitude for the fixed radius case. The result in Equation (21) is what Griffiths
claimed in reference [3] but without any derivation. This change in the magnetic moment corresponds to a change in the angular momentum, that is
Going back to the general case in which both the speed and the radius are changing, one may rewrite Equation (8) in the form
In the present case, the variation in the magnetic moment is
which upon the use of Equation (23) it becomes
Note that the last term on the right hand-side looks like the same as for the fixed radius case. Therefore, using Equation (9), we get
The above result is exactly the same as that for the fixed radius case, and thus it corresponds to the same change in the angular momentum which is given in Equation (14).
4. Conclusions
In this work, the effect of an external magnetic field on an atomic electron has been examined. The usual treatment of this problem assumes (without justification) a fixed radius for the orbit but
allows the electron’s speed to change. In the present paper we considered the general case in which both the speed and the radius are allowed to change. We used binomial expansion to first order in
the change of speed and in the change of the radius. The two special cases for fixed orbit and for fixed speed were deduced from our general case and in each of these two cases the change in the
magnetic moment and the change in the angular momentum have been derived. Interestingly, our result for the general case yields a change in the magnetic moment which is the same as that for the fixed
radius case.
Transactions Online
1. Introduction
The multi-armed bandit problem is a decision-making problem where a player agent repeatedly chooses an action, referred to as an arm, in order to maximize its cumulative reward over a sequence of
trials. The challenge of the problem lies in balancing exploration of new arms and exploitation of known high-reward arms. Different algorithms have been developed to address this trade-off, such as
the upper confidence bound algorithm [1] and Thompson sampling [2]. These algorithms aim to find the optimal balance between exploration and exploitation, and are widely applied in fields such as
online advertising, recommendation systems, and clinical trials [3]-[5].
In some applications, one needs to consider the situations where an agent must deal with an opponent. This type of the bandit problem is called the adversarial multi-armed bandit problem [6], [7]. In
the adversarial multi-armed bandit problem, an agent faces a more challenging scenario, where the reward distributions of the arms are not fixed but can be adversarially chosen to disrupt agent’s
learning process. The Exp3 (Exponential-weight algorithm for Exploration and Exploitation) policy is a widely used approach for solving the adversarial bandit problem [8]. The Exp3 algorithm balances
exploration and exploitation by assigning probabilities to each arm based on their past rewards.
Recently, the need for a distributed approach has increased in many large-scale machine learning and optimization problems, where the datasets or models are too large to be processed by a single
processor [9]-[16]. In a distributed bandit algorithm, multiple agents cooperatively learn the optimal arm by communicating with each other over a network [17]-[20]. Each agent cooperatively makes a
decision by combining the own estimation of the rewards of the arms with those of the nearby agents. The advantage of the distributed multi-armed bandit algorithm over the centralized one is that it
can achieve a smaller upper bound on the regret. This is because in a distributed setting, agents explore the arms more effectively, which allows for faster exploration and more efficient learning.
For the cooperative adversarial multi-armed bandit problem, Cesa-Bianchi et al. proposed the Exp3-Coop algorithm with communication delay and analyzed the impact of delays between players’ decisions
and the potential benefits of cooperation [21]. Bar-On and Mansour proposed a distributed algorithm for the nonstochastic bandit problem, which allows agents to learn independently of one another
while still achieving a cooperative goal [22]. Alatur et al. addressed the multi-armed bandit problem of multiple players competing for limited resources based on an adaptation of the Exp3 policy
[23]. Yi and Vojnović proposed a decentralized follow-the-regularizer-leader algorithm with communication delays [24]. Although these methods with the adversarial settings assume undirected or static
communication graphs, considering the case with directed and time-varying communication graphs is particularly important because agents are limited to sending messages in specific directions in many
networked systems.
To relax such a limitation on the network topology, this paper focuses on a cooperative adversarial multi-armed bandit problem, in which multiple agents work together on directed and time-varying
communication graphs to maximize the collective reward. We propose a distributed Exp3 policy, in which the learning process is distributed across multiple player agents. Each agent maintains a local
estimate of the reward distribution of arms. These estimations are combined by the consensus algorithm [25]-[27] to update the probability distribution used for arm selection. As opposed to the
existing work, such as [21]-[24], we do not make the assumption of omnidirectionality of communication. Thus, the proposed algorithm can be used in a wider range of applications.
The remainder of this paper is organized as follows. Section 2 presents the distributed Exp3 policy for the adversarial multi-armed bandit problem. The regret analysis of the proposed policy is
conducted in Sect.3. The numerical example of the proposed method is shown in Sect.4. Finally, concluding remarks are given in Sect.5.
2. Distributed Exp3 Policy
In the distributed multi-armed bandit problem, a group of agents works together to learn the best arm to maximize a reward. Each agent can only observe the reward for the arm it chooses. Thus, agents
share information about the rewards with other agents over a communication network. In this paper, we model the communication network as a time-varying directed graph without a self-loop \(\mathcal
{G}(t)=(\mathcal{V}, \mathcal{E}(t))\), where \(\mathcal{V}=\{1,2,\dots,N\}\) and \(\mathcal{E}(t)\subset \mathcal{V}\times \mathcal{V}\) are the sets of agents and communication links at time \(t\in
\mathcal{T}=\{1,2,\dots,T-1\}\). We consider an adversarial multi-armed bandit problem with \(K\) arms. Let \(X_{i,k}(t)\in[0,1]\) be the reward of agent \(i\) for arm \(k\in \mathcal{K}=\{1,2,...,K
\}\) at time \(t\in \mathcal{T}\). In this paper, the unconstrained reward model is considered, that is, if two or more agents choose the same arm, they receive the same reward independently [18].
In the proposed distributed Exp3 algorithm, the probability of choosing arm \(k\) is updated by
\[ p_{i,k}(t) = (1-\alpha)\frac{w_{i,k}(t)}{W_{i}(t)} +\frac{\alpha}{K}, \tag{1} \]
where \(\alpha\in(0,1)\) is a trade-off parameter. Equation (1) implies that the probability of choosing arm \(k\) is computed by combining the Hedge algorithm to exploit the learned information and
the uniform search to explore better arms. The weights for exploitation are initialized as \(w_{i,k}(1) = 1/K^\nu\) for all \(i\in\mathcal{V}\) and \(k \in \mathcal K\), and \(W_i(1)=W(1)= K^{1-\nu}
\) for all \(i\in\mathcal{V}\), where \(0<\nu\le 1\).
After updating the probabilities \(p_{i,1}(t)\), \(p_{i,2}(t)\), \(\dots\), \(p_{i,K}(t)\), agent \(i\) chooses arm \(k_{i}(t)\) according to these probabilities. Then, the reward \(X_{i,k_{i}(t)}(t)
\) for arm \(k_{i}(t)\) is feedbacked to the agent. The information of the reward \(\hat{X}_{i,k}(t)\) of the nearby agents is unified by the consensus dynamics as follows:
\[ \hat{X}_{i,k}(t) = \begin{cases} \frac{\sum_{j\in \mathcal{V}} a_{ij}(t) X_{j,k}(t)}{p_{i,k}(t)}, & \mbox{if }k = k_{i}(t), \\ 0, &\mbox{if } k \neq k_{i}(t), \end{cases} \tag{2} \]
where \(a_{ij}(t)\) is the edge weight for the directed edge \((j,i)\in\mathcal{E}\). We note that if arm \(k\) is not chosen, then \(X_{i,k}(t)=0\) for all \(i\in\mathcal{V}\) and \(t\in\mathcal{T}
\). The edge weight is defined as
\[ a_{ij}(t) & \begin{cases} \ge \underline{a}, & j \in \mathcal{N}_{i}(t),\\ = 0, & j \notin \mathcal{N}_{i}(t), \ j\neq i, \end{cases} \tag{3} \\ a_{ii}(t) &= 1- \sum_{j\in \mathcal{N}_{i}(t)} a_{ij}(t) \ge
\underline{a}, \tag{4} \]
where \(\mathcal{N}_i(t)=\{\ell\in\mathcal{V}\mid (\ell,i)\in\mathcal{E}\}\) is the set of the nearby agents and \(\underline{a}\) is a positive constant.
Finally, the weights for exploitation are updated by
\[ w_{i,k}(t+1) &= w_{i,k}(t) e^{\beta \hat{X}_{i,k}(t)}, \tag{5} \\ W_{i}(t+1) &= \sum_{k\in\mathcal{K}} w_{i,k}(t+1), \tag{6} \]
where \(0<\beta\le \alpha/K\) is a learning parameter.
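To make the update order concrete, Eqs. (1)-(6) can be sketched as a simulation loop. Everything outside the numbered equations (the function name, the reward and neighbor callbacks, the NumPy layout) is our illustrative scaffolding, not part of the paper:

```python
import numpy as np

def distributed_exp3(T, N, K, reward_fn, neighbors, alpha=0.1, beta=None,
                     nu=1.0, rng=None):
    """Simulate the distributed Exp3 policy of Eqs. (1)-(6).

    reward_fn(t) -> (N, K) array of rewards in [0, 1] for round t;
    neighbors(t) -> (N, N) row-stochastic weight matrix with entries a_ij(t).
    Returns the (T, N) array of chosen arms.
    """
    rng = np.random.default_rng() if rng is None else rng
    beta = alpha / K if beta is None else beta   # must satisfy 0 < beta <= alpha/K
    w = np.full((N, K), 1.0 / K ** nu)           # w_{i,k}(1) = 1/K^nu
    choices = np.empty((T, N), dtype=int)
    idx = np.arange(N)
    for t in range(T):
        # Eq. (1): mix exploitation weights with uniform exploration.
        p = (1 - alpha) * w / w.sum(axis=1, keepdims=True) + alpha / K
        k_t = np.array([rng.choice(K, p=p[i]) for i in range(N)])
        choices[t] = k_t
        # Bandit feedback: each agent observes only its own chosen arm.
        X = np.zeros((N, K))
        R = reward_fn(t)
        X[idx, k_t] = R[idx, k_t]
        # Eq. (2): consensus over neighbors, importance-weighted by p.
        mixed = neighbors(t) @ X
        X_hat = np.zeros((N, K))
        X_hat[idx, k_t] = mixed[idx, k_t] / p[idx, k_t]
        # Eqs. (5)-(6): exponential weight update.
        w = w * np.exp(beta * X_hat)
    return choices
```

On a complete graph with one arm always paying reward 1, the agents' choices concentrate on that arm as t grows.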
In this paper, we make the following stochasticity assumption on the edge weights.
Assumption 1: \(\sum_{j\in\mathcal{V}}a_{ij}(t)=1\) for all \(i\in\mathcal{V}\) and \(t\in\mathcal{T}\).
3. Regret Analysis
To evaluate the performance of the distributed Exp3 algorithm, we consider the following pseudo-regret for the multi-agent system:
\[ \overline{\mathrm{Regret}} = R^\ast - \sum_{i\in \mathcal{V}} \mathbb{E}\left[\sum_{t\in\mathcal{T}}X_{i,k_{i}(t)}(t)\right], \tag{7} \]
where \(R^\ast = \max_{k\in \mathcal{K}}\sum_{i\in \mathcal{V}} \mathbb{E}\left[\sum_{t\in\mathcal{T}}X_{i,k}(t)\right]\) is the maximum cumulative reward for continuing to choose the same arm.
The pseudo-regret (7) measures how much reward a group of agents loses by not selecting the arm with the highest expected reward. The upper bound on the pseudo-regret is sublinear if the total regret
grows slower than the number of iterations of the algorithm. This is desirable because it means that the agents choose the optimal arm with high probability as the iteration goes on [7], [8].
Therefore, the purpose of the multi-agent system is to search the optimal arm by achieving a sublinear regret bound.
The next result evaluates the upper bound of the pseudo-regret by the distributed Exp3 algorithm.
Theorem 1: The upper bound of the pseudo-regret by the distributed Exp3 algorithm is given by
\[ \overline{\mathrm{Regret}} \le (\alpha + \beta K) R^\ast + \frac{1-\alpha}{\beta}N\ln K. \tag{8} \]
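To see how the two terms of (8) trade off, the bound can be evaluated numerically. The parameter values below are illustrative (not from the paper); the choice of β balances the βKR* term against the (1−α)N ln K / β term, subject to the constraint β ≤ α/K:

```python
import math

def regret_bound(alpha, beta, N, K, R_star):
    """Right-hand side of Eq. (8)."""
    return (alpha + beta * K) * R_star + (1 - alpha) / beta * N * math.log(K)

# Illustrative values: N agents, K arms, horizon T, crude estimate R* <= N*T.
N, K, T, alpha = 10, 5, 10_000, 0.05
R_star = N * T
beta = min(alpha / K, math.sqrt((1 - alpha) * N * math.log(K) / (K * R_star)))
print(regret_bound(alpha, beta, N, K, R_star))   # well below R* = 100000
```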
Proof: From (1) and (6), for all \(t\in\mathcal{T}\), we have
\[\begin{aligned} \sum_{k\in\mathcal{K}} p_{i,k}(t) = (1 - \alpha)\frac{\sum_{k\in\mathcal{K}} w_{i,k}(t)}{W_{i}(t)} + \sum_{k\in\mathcal{K}} \frac{\alpha }{K}=1. \nonumber \end{aligned}\]
Moreover, from (1), the probability of choosing arm \(k\) is lower bounded by
\[ p_{i,k}(t) \ge \frac{\alpha }{K}>0. \tag{9} \]
From (5) and (6), we have
\[ W_{i}(T) = \sum_{k\in\mathcal{K}} w_{i,k}(T)= \sum_{k\in\mathcal{K}} w_{i,k}(T-1) e^{\beta \hat{X}_{i,k}(T-1)}. \tag{10} \]
From (2), we also have
\[ 0&\le {\beta \hat{X}_{i,k}(t)}\nonumber\\ &\le\beta \frac{\sum_{j\in \mathcal{V}} a_{ij}(t)X_{j,k}(t)}{p_{i,k}(t)} \nonumber\\ &\le \beta \frac{\sum_{j\in \mathcal{V}} a_{ij}(t)}{\frac{\alpha}{K}}
\le \beta \frac{K}{\alpha} \le 1, \tag{11} \]
where the third inequality follows from (9) and \(X_{i,k}(t)\le 1\), and the forth inequality follows from Assumption 1.
We note that \(e^x\le 1 + x + x^2\) holds for any \(x\in[0,1]\). Then, from (10) and (11), we have
\[ W_{i}(T) &= \sum_{k\in\mathcal{K}} w_{i,k}(T-1) e^{\beta \hat{X}_{i,k}(T-1)} \nonumber\\ &\le \sum_{k\in\mathcal{K}} w_{i,k}(T-1)(1 + \beta \hat{X}_{i,k}(T-1) \nonumber\\ &\quad + (\beta \hat{X}_
{i,k}(T-1))^2) \nonumber\\ &= W_{i}(T-1) \nonumber\\ &\quad \times\left(1 + \beta \sum_{k\in\mathcal{K}}\frac{w_{i,k}(T-1)}{W_{i}(T-1)}\hat{X}_{i,k}(T-1) \right. \nonumber\\ &\qquad \left. +\beta^2 \
sum_{k\in\mathcal{K}}\frac{w_{i,k}(T-1)}{W_{i}(T-1)}\hat{X}_{i,k}(T-1)^2 \right). \tag{12} \]
From (1), we have
\[\begin{aligned} p_{i,k}(T-1) = (1-\alpha)\frac{w_{i,k}(T-1)}{W_{i}(T-1)} +\frac{\alpha}{K}. \nonumber \end{aligned}\]
Thus, we have
\[ \frac{w_{i,k}(T-1)}{W_{i}(T-1)} &= \frac{1}{1-\alpha}p_{i,k}(T-1) -\frac{\alpha}{(1-\alpha)K}\nonumber\\ &\le \frac{1}{1-\alpha}p_{i,k}(T-1). \tag{13} \]
From (13), we obtain
\[ \nonumber &\beta \sum_{k\in\mathcal{K}}\frac{w_{i,k}(T-1)}{W_{i}(T-1)} \hat{X}_{i,k}(T-1) \nonumber\\ &\le \frac{\beta}{1-\alpha} \sum_{k\in\mathcal{K}} p_{i,k}(T-1)\hat{X}_{i,k}(T-1)\nonumber \\
&= \frac{\beta}{1-\alpha} p_{i,k_{i}(T-1)}(T-1)\hat{X}_{i,k_{i}(T-1)}(T-1) \nonumber\\ &\le \frac{\beta}{1-\alpha}p_{i,k_{i}(T-1)}(T-1)\nonumber\\ &\quad \times\frac{\sum_{j\in \mathcal{V}} a_{ij}
(T-1)X_{j,k_{i}(T-1)}(T-1)}{p_{i,k_{i}(T-1)}(T-1)} \nonumber \\ &= \frac{\beta}{1-\alpha} \sum_{j\in \mathcal{V}} a_{ij}(T-1)X_{j,k_{i}(T-1)}(T-1), \tag{14} \]
where the second inequality follows from (2).
From (13), we also have
\[ &\beta^2 \sum_{k\in\mathcal{K}}\frac{w_{i,k}(T-1)}{W_{i}(T-1)}\hat{X}_{i,k}(T-1)^2 \nonumber \\ &\le \frac{\beta^2}{1-\alpha}\sum_{k\in\mathcal{K}} p_{i,k}(T-1)\hat{X}_{i,k}(T-1)^2 \nonumber \\ &\le \frac{\beta^2}{1-\alpha}\sum_{k\in\mathcal{K}} p_{i,k}(T-1)\hat{X}_{i,k}(T-1)\hat{X}_{i,k}(T-1)\nonumber \\ &\le \frac{\beta^2}{1-\alpha}\sum_{k\in\mathcal{K}} p_{i,k}(T-1)\nonumber\\ &\quad \times\frac{\sum_{j\in \mathcal{V}} a_{ij}(T-1)X_{j,k}(T-1)}{p_{i,k}(T-1)}\hat{X}_{i,k}(T-1)\nonumber\\ &= \frac{\beta^2}{1-\alpha} \sum_{k\in\mathcal{K}} \left(\sum_{j\in \mathcal{V}} a_{ij}(T-1) X_{j,k}(T-1) \right)\nonumber \\ &\quad \times \hat{X}_{i,k}(T-1)\nonumber\\ &\le \frac{\beta^2}{1-\alpha} \sum_{k\in\mathcal{K}} \left(\sum_{j\in \mathcal{V}} a_{ij}(T-1) \right)\hat{X}_{i,k}(T-1) \nonumber \\ &=\frac{\beta^2}{1-\alpha} \sum_{k\in\mathcal{K}} \hat{X}_{i,k}(T-1), \tag{15} \]
where the last equality follows from Assumption 1.
Substituting (14) and (15) for (12) gives
\[ &\nonumber W_{i}(T) \nonumber\\ &\le W_{i}(T-1)\nonumber\\ &\quad \times \left(1+\frac{\beta}{1-\alpha} \sum_{j\in \mathcal{V}}a_{ij}(T-1)X_{j,k_{i}(T-1)}(T-1) \right. \nonumber \\ &\qquad \left.
+\frac{\beta^2}{1-\alpha}\sum_{k\in\mathcal{K}}\hat{X}_{i,k}(T-1) \right). \tag{16} \]
This yields
\[ &W_{i}(T) \nonumber\\ &\le W_{i}(1)\prod_{t=1}^{T-1} \left( 1 + \frac{\beta}{1-\alpha} \sum_{j\in \mathcal{V}}a_{ij}(t)X_{j,k_{i}(t)}(t) \right. \nonumber \\ &\quad \left. + \frac{\beta^2}{1-\alpha}\sum_{k\in\mathcal{K}} \hat{X}_{i,k}(t) \right). \tag{17} \]
From (5) and (6), for any arm \(k\in \mathcal{K}\), we have
\[ W_{i}(T) &=\sum_{\ell=1}^{K} w_{i,\ell}(T) \nonumber\\ &\geq w_{i,k}(T) \nonumber\\ &= w_{i,k}(T-1) e^{\beta\hat{X}_{i,k}(T-1)} \nonumber\\ &= w_{i,k}(1) e^{\beta\sum_{t\in\mathcal{T}} \hat{X}_{i,k}(t)}. \tag{18} \]
From (17) and (18), we have
\[ &w_{i,k}(1) e^{\beta\sum_{t\in\mathcal{T}} \hat{X}_{i,k}(t)}\nonumber \\ &\le W_{i}(1)\prod_{t=1}^{T-1} \left( 1 + \frac{\beta}{1-\alpha} \sum_{j\in \mathcal{V}}a_{ij}(t)X_{j,k_{i}(t)}(t) \right.
\nonumber \\ &\quad \left. + \frac{\beta^2}{1-\alpha}\sum_{k\in\mathcal{K}} \hat{X}_{i,k}(t) \right). \tag{19} \]
By taking the natural logarithm of (19) and using the initialization \(w_{i,k}(1)=1/K\), we have
\[\begin{aligned} &-\ln K + \beta \sum_{t\in\mathcal{T}} \hat{X}_{i,k}(t) \nonumber \\ &\le \sum_{t\in\mathcal{T}} \ln \left( 1 + \frac{\beta}{1-\alpha}\sum_{j\in \mathcal{V}}a_{ij}(t)X_{j,k_{i}(t)}
(t) \right.\nonumber\\ &\quad \left.+ \frac{\beta^2}{1-\alpha}\sum_{k\in\mathcal{K}} \hat{X}_{i,k}(t) \right). \nonumber \end{aligned}\]
We note that \(\ln(1+x)\le x\) holds for any \(x \geq 0\). Then, we have
\[\begin{aligned} &-\ln K+\beta \sum_{t\in\mathcal{T}} \hat{X}_{i,k}(t) \nonumber \\ &\le \frac{\beta}{1-\alpha} \sum_{t\in\mathcal{T}} \sum_{j\in \mathcal{V}} a_{ij}(t)X_{j,k_{i}(t)}(t) \nonumber\\
&\quad + \frac{\beta^2}{1-\alpha}\sum_{k\in\mathcal{K}} \sum_{t\in\mathcal{T}} \hat{X}_{i,k}(t). \nonumber \end{aligned}\]
Since \(\hat{X}_{i,k}(t)\) is the unbiased estimator of \(X_{i,k}(t)\), by taking the expectation with respect to the estimated distribution of the rewards obtained by the distributed Exp3 algorithm,
we have
\[\begin{aligned} &-\ln K+\beta \sum_{t\in\mathcal{T}} X_{i,k}(t) \nonumber \\ &\le \frac{\beta}{1-\alpha} \sum_{t\in\mathcal{T}} \sum_{j\in \mathcal{V}} a_{ij}(t)X_{j,k_{i}(t)}(t) \nonumber\\ &\quad
+ \frac{\beta^2}{1-\alpha}\sum_{k\in\mathcal{K}} \sum_{t\in\mathcal{T}} X_{i,k}(t). \nonumber \end{aligned}\]
Furthermore, by taking the expectation with respect to the true distribution of the rewards, we have
\[\begin{aligned} &-\ln K+\beta \mathbb{E}\left[\sum_{t\in\mathcal{T}} X_{i,k}(t)\right] \nonumber \\ &\le \frac{\beta}{1-\alpha} \mathbb{E}\left[\sum_{t\in\mathcal{T}} \sum_{j\in \mathcal{V}} a_{ij}
(t)X_{j,k_{i}(t)}(t) \right] \nonumber\\ &\quad + \frac{\beta^2}{1-\alpha}\sum_{k\in\mathcal{K}} \mathbb{E}\left[\sum_{t\in\mathcal{T}} X_{i,k}(t) \right]. \nonumber \end{aligned}\]
Then, we have
\[\begin{aligned} &-N\ln K+\beta \sum_{i\in\mathcal{V}}\mathbb{E}\left[\sum_{t\in\mathcal{T}} X_{i,k}(t)\right] \nonumber \\ &\le \frac{\beta}{1-\alpha} \sum_{i\in\mathcal{V}}\mathbb{E}\left[\sum_{t\in\mathcal{T}} \sum_{j\in \mathcal{V}} a_{ij}(t)X_{j,k_{i}(t)}(t) \right] \nonumber\\ &\quad + \frac{\beta^2}{1-\alpha}\sum_{i\in\mathcal{V}}\sum_{k\in\mathcal{K}} \mathbb{E}\left[\sum_{t\in\mathcal{T}} X_{i,k}(t) \right]. \nonumber \end{aligned}\]
Therefore, we have
\[ &-N\ln K+\beta \sum_{i\in\mathcal{V}}\mathbb{E}\left[\sum_{t\in\mathcal{T}} X_{i,k}(t)\right] \nonumber \\ &\le \frac{\beta}{1-\alpha} \sum_{i\in\mathcal{V}}\mathbb{E}\left[\sum_{t\in\mathcal{T}}
\sum_{j\in \mathcal{V}} a_{ij}(t)X_{j,k_{i}(t)}(t) \right] + \frac{\beta^2 K}{1-\alpha} R^\ast, \tag{20} \]
where the last inequality follows from
\[\begin{aligned} R^\ast\ge \frac{1}{K}\sum_{i\in\mathcal{V}}\sum_{k\in\mathcal{K}} \mathbb{E}\left[\sum_{t\in\mathcal{T}} X_{i,k}(t) \right]. \end{aligned}\]
We note that (20) holds for any \(k\in\mathcal{K}\). Thus, we have
\[\begin{aligned} &-N\ln K+\beta R^\ast \nonumber \\ &\le \frac{\beta}{1-\alpha} \sum_{i\in\mathcal{V}}\mathbb{E}\left[\sum_{t\in\mathcal{T}} \sum_{j\in \mathcal{V}} a_{ij}(t)X_{j,k_{i}(t)}(t) \right] + \frac{\beta^2 K}{1-\alpha} R^\ast. \nonumber \end{aligned}\]
It follows that
\[\begin{aligned} &-\frac{1-\alpha}{\beta}N\ln K+(1-\alpha)R^\ast \nonumber \\ &\le \sum_{i\in\mathcal{V}}\mathbb{E}\left[\sum_{t\in\mathcal{T}} \sum_{j\in \mathcal{V}} a_{ij}(t)X_{j,k_{i}(t)}(t) \right] + \beta K R^\ast \nonumber\\ &= \sum_{i\in\mathcal{V}}\mathbb{E}\left[\sum_{t\in\mathcal{T}}X_{i,k_{i}(t)}(t) \right] + \beta K R^\ast, \nonumber \end{aligned}\]
where the last equality follows from the unconstrained reward model, and (3) and (4). This concludes the proof. \(\tag*{◻}\)
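As an illustrative sketch of the update rules appearing in the proof — the exploration-mixed probabilities of (1) and the exponential weight update behind (18) — a minimal single-agent Exp3 step might look as follows. The variable names and the toy reward sequence are ours, not from the paper, and this omits the distributed (consensus) part entirely:

```python
import math
import random

def exp3_step(weights, alpha, beta, reward_fn, rng):
    """One Exp3 step for a single agent: sample an arm from the mixed
    distribution p_k = (1 - alpha) * w_k / W + alpha / K, observe the reward of
    the chosen arm, and apply the exponential weight update with the
    importance-weighted (unbiased) reward estimate x_hat = x_k / p_k."""
    K = len(weights)
    W = sum(weights)
    probs = [(1 - alpha) * w / W + alpha / K for w in weights]
    k = rng.choices(range(K), weights=probs)[0]
    x = reward_fn(k)                  # adversarial reward in [0, 1]
    x_hat = x / probs[k]              # nonzero only for the chosen arm
    weights[k] *= math.exp(beta * x_hat)
    return k

rng = random.Random(0)
K = 3
w = [1.0 / K] * K                     # initialization w_k(1) = 1/K
rewards = [0.2, 0.9, 0.1]             # arm index 1 pays best
for _ in range(2000):
    exp3_step(w, alpha=0.05, beta=0.01, reward_fn=lambda k: rewards[k], rng=rng)
# After enough rounds, the weight of the best arm should dominate.
```

In expectation, the log-weight of arm \(k\) grows at rate \(\beta X_k\) per round, which is why the best arm's weight eventually dominates.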
Theorem 1 holds even for the case when the connectivity of the communication network is not guaranteed. However, to achieve a better regret bound, sharing the estimated information between agents is
crucial; hence, uniform connectedness plays an important role. Investigating the relation between the connectedness of the communication graph and the regret bound is left for future research.
The next proposition shows that a sublinear regret can be obtained if the information on the upper bound of the accumulated reward is obtained in advance.
Proposition 1: Suppose that the trade-off parameter and the learning parameter are given as \(\alpha=\beta K\) and \(\beta=\min \{c/K, \sqrt{N\ln K/(2RK)}\}\), where \(0<c<1\) and \(R^\ast\le R\). If
each agent updates the estimation of the rewards by the distributed Exp3 algorithm, we have
\[ \overline{\mathrm{Regret}} \le \frac{2}{c}\sqrt{2NRK\ln K}. \tag{21} \]
Proof: We consider the case where \(\sqrt{N\ln K/(2RK)}\ge c/K\). Then \(2R\le N (K\ln K)/c^2\) holds. This yields
\[ &\overline{\mathrm{Regret}} =R^\ast - \sum_{i\in \mathcal{V}}\mathbb{E}\left[\sum_{t\in\mathcal{T}}X_{i,k_{i}(t)}(t)\right] \nonumber\\ &\le R^\ast \le 4R =2\sqrt{2R}\sqrt{2R}\le \frac{2}{c}\sqrt{2NRK\ln K}. \tag{22} \]
Next, we consider the case for \(\sqrt{N\ln K/(2RK)}< c/K\). In this case, \(\beta=\sqrt{N\ln K/(2RK)}\) holds. Moreover, we have \(\beta< c/K\). It follows that \(1-\alpha = 1-\beta K> 0\). Thus,
from (8), we have
\[\begin{aligned} \overline{\mathrm{Regret}} &\le (\alpha + \beta K) R^\ast + \frac{N}{\beta}\ln K \nonumber\\ &\le 2\beta K R^\ast + N\frac{1}{\sqrt{N}}\sqrt{\frac{2RK}{\ln K}} \ln K \nonumber\\ &\le 2\sqrt{2NRK\ln K}. \nonumber \end{aligned}\]
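As a quick numerical sanity check, the parameter choices of Proposition 1 and the bound (21) can be evaluated directly. The values of \(N\), \(K\), \(R\), and \(c\) below are arbitrary, for illustration only:

```python
import math

def regret_bound(N, K, R, c):
    """Upper bound (21) on the pseudo-regret: (2/c) * sqrt(2 N R K ln K)."""
    return (2.0 / c) * math.sqrt(2 * N * R * K * math.log(K))

def parameters(N, K, R, c):
    """Proposition 1's parameters: beta = min{c/K, sqrt(N ln K / (2RK))},
    alpha = beta * K."""
    beta = min(c / K, math.sqrt(N * math.log(K) / (2 * R * K)))
    return beta * K, beta

alpha, beta = parameters(N=10, K=10, R=10_000, c=0.5)
bound = regret_bound(10, 10, 10_000, 0.5)
# The bound grows like sqrt(R), i.e. sublinearly in the optimal reward R.
```

For these values the bound comes out well below \(R\), consistent with sublinearity.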
For a single agent system, the regret bound is given by \((e-1) \alpha R^\ast + (1/\alpha)K\ln K\) for the case with \(\alpha=\beta\) [6]. Thus, by the analysis of [6], the regret is upper-bounded by \(N((e-1) \alpha R^\ast + (1/\alpha)K\ln K)\) for the multiagent system with \(N\) agents. Proposition 1 implies that a tighter regret bound can be obtained if the trade-off and learning parameters are properly set.
Compared with the existing cooperative methods [21]-[24], in the proposed method, the condition on the communication topology is relaxed to time-varying directed networks. However, the regret bound of the proposed algorithm is inferior to those of the other methods; this is the cost of extending the applicable class. For example, in the Exp3-Coop algorithm [21], when the communication graph is fixed and undirected, the number of agents \(N\) affects the regret bound on the order of the square root of its reciprocal. Such a regret bound is preferable for multiagent systems with a larger number of agents. Further theoretical analysis of the proposed algorithm for large-scale networks is a future direction of this study.
4. Numerical Experiments
We consider a cooperative adversarial multi-armed bandit problem. Arm \(1\) is the best arm, whose reward is randomly set from the interval \([0.8,1.0]\). The reward of arm \(k\in\{2,3,\dots,K\}\) is randomly set from the interval \([0.0,0.6]\) if the indices \(i\) and \(k\) are both even or both odd, and from \([0.4,0.8]\) otherwise.
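The reward-generation rule above can be sketched as follows. This is our reading of the setup; in particular, the 1-based agent/arm indexing is an assumption:

```python
import random

def make_rewards(K, N, rng):
    """One round of rewards following the experimental setup: arm 1 is drawn
    from [0.8, 1.0]; arm k >= 2 is drawn from [0.0, 0.6] when agent index i and
    arm index k are both even or both odd, and from [0.4, 0.8] otherwise.
    Indices i, k are 1-based, matching the paper's notation."""
    X = [[0.0] * K for _ in range(N)]
    for i in range(1, N + 1):
        for k in range(1, K + 1):
            if k == 1:
                X[i - 1][k - 1] = rng.uniform(0.8, 1.0)
            elif i % 2 == k % 2:      # both even or both odd
                X[i - 1][k - 1] = rng.uniform(0.0, 0.6)
            else:
                X[i - 1][k - 1] = rng.uniform(0.4, 0.8)
    return X

X = make_rewards(K=10, N=10, rng=random.Random(0))
```

By construction, arm 1 stochastically dominates every other arm for every agent, so it is the common best arm.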
We evaluate the effectiveness of the proposed algorithm across different values of the trade-off parameter \(\alpha\). The communication networks at \(t=0\), \(1000\), \(2000\), and \(3000\) are
shown in Fig.1. Figure 2 illustrates the pseudo-regret (7) for agent \(1\). The learning parameter, number of arms, and number of agents are set to \(\beta=0.01\), \(K=10\), and \(N=10\),
respectively. We see that the evolution of the regret varies depending on the value of \(\alpha\). When \(\alpha=0.001\) and \(0.005\), the regret at the initial stage of the iteration remains small
but gradually increases over time. The probability of choosing an arm in (1) implies that a small value of \(\alpha\) hinders the exploration for better arms, whereas a large value restricts the
exploitation of the learned information of the rewards. In this example, a value of \(\alpha=0.01\) achieves a suitable balance between exploitation with the Hedge algorithm and exploration with
uniform search.
Next, we examine the impact of different values of the learning parameter \(\beta\) on the convergence performance. We evaluate the pseudo-regret of agent \(1\) using the proposed algorithm for \(\alpha=0.01\), \(K=10\), and \(N=10\), with varying values of \(\beta\). Figure 3 shows that the choice of \(\beta\) influences the evolution of the regret. In this example, the case with \(\beta=0.01\) yields better performance. This result shows that the selection of an appropriate value for the learning parameter \(\beta\) is also crucial.
Finally, we consider a comparative performance evaluation between the proposed algorithm and the Exp3-Coop Algorithm [21]. The Exp3-Coop is a distributed variant of the Exp3 algorithm for adversarial
multi-armed bandit problems. The number of arms and the number of agents are set as \(K=10\) and \(N=10\). Within the framework of the proposed algorithm, the trade-off parameter is set as \(\alpha=0.01\). It is worth noting that the theoretical analysis of the Exp3-Coop algorithm in [21] is conducted only for fixed undirected graphs, which is a principal difference from the analysis of this paper. However, to investigate the performance comparison in more general situations, we extended its application to a time-varying directed network in this example. Figure 4 illustrates the
pseudo-regret of agent \(1\) with varying values of \(\beta\). We observe that the sublinear regret trajectories are achieved for both algorithms with suitable learning parameters. Moreover, in this
specific example, we see that the proposed consensus-based Exp3 policy outperformed the Exp3-Coop algorithm regarding regret minimization. For future research, it remains to be clarified in what
problem settings, such as the topology of the communication graph and the number of agents, the proposed algorithm performs better.
5. Conclusion
In this paper, we presented a distributed Exp3 algorithm for the adversarial bandit problem on directed and time-varying networks. We demonstrated that the proposed algorithm cooperatively estimates
the reward distribution for each arm with nearby agents. We provided an upper bound of the pseudo-regret, which quantifies the difference between the optimal reward and the expected reward.
Additionally, we derived a sufficient condition for achieving a sublinear regret bound. The numerical results illustrated that the sublinear regret can be achieved by appropriately tuning the
trade-off and learning parameters. As future work, we plan to determine optimal parameter settings and to investigate the impact of communication delays between agents on the adversarial bandit problem.
This work was supported in part by Japan Society for the Promotion of Science KAKENHI Grant Number JP21K04121.
[1] P. Auer, N. Cesa-Bianchi, and P. Fischer, “Finite-time analysis of the multiarmed bandit problem,” Machine Learning, vol.47, no.2, pp.235-256, 2002.
[2] D.J. Russo, B.V. Roy, A. Kazerouni, I. Osband, and Z. Wen, “A tutorial on Thompson sampling,” Foundations and Trends in Machine Learning, vol.11, no.1, pp.1-96, 2018.
[3] D. Bouneffouf, I. Rish, and C. Aggarwal, “Survey on applications of multi-armed and contextual bandits,” Proc. 2020 IEEE Congress on Evolutionary Computation, pp.1-8, 2020.
[4] W. Xia, T.Q.S. Quek, K. Guo, W. Wen, H.H. Yang, and H. Zhu, “Multi-armed bandit-based client scheduling for federated learning,” IEEE Trans. Wireless Commun., vol.19, no.11, pp.7108-7123, 2020.
[5] S.U. Minhaj, A. Mahmood, S.F. Abedin, S.A. Hassan, M.T. Bhatti, S.H. Ali, and M. Gidlund, “Intelligent resource allocation in LoRaWAN using machine learning techniques,” IEEE Access, vol.11,
pp.10092-10106, 2023.
[6] P. Auer, N. Cesa-Bianchi, Y. Freund, and R.E. Schapire, “The nonstochastic multiarmed bandit problem,” SIAM J. Comput., vol.32, no.1, pp.48-77, 2002.
[7] T. Lattimore and C. Szepesvári, Bandit Algorithms, Cambridge University Press, 2019.
[8] S. Bubeck and N. Cesa-Bianchi, “Regret analysis of stochastic and nonstochastic multi-armed bandit problems,” Foundations and Trends in Machine Learning, vol.5, no.1, pp.1-122, 2012.
[9] A. Nedić and J. Liu, “Distributed optimization for control,” Annual Review of Control, Robotics, and Autonomous Systems, vol.1, no.1, pp.77-103, 2018.
[10] K. Ishikawa, N. Hayashi, and S. Takai, “Consensus-based distributed particle swarm optimization with event-triggered communication,” IEICE Trans. Fundamentals, vol.E101-A, no.2, pp.338-344, Feb.
[11] S. Izumi, Y. Shiomoto, and X. Xin, “Mass game simulator: An entertainment application of multiagent control,” IEEE Access, vol.9, pp.4129-4140, 2020.
[12] R. Adachi, Y. Yamashita, and K. Kobayashi, “Distributed optimal estimation with scalable communication cost,” IEICE Trans. Fundamentals, vol.E104-A, no.11, pp.1470-1476, Nov. 2021.
[13] M. Yamashita, N. Hayashi, T. Hatanaka, and S. Takai, “Logarithmic regret for distributed online subgradient method over unbalanced directed networks,” IEICE Trans. Fundamentals, vol.E104-A,
no.8, pp.1019-1026, Aug. 2021.
[14] K. Sakurama and T. Sugie, “Generalized coordination of multi-robot systems,” Foundations and Trends in Systems and Control, vol.9, no.1, pp.1-170, 2021.
[15] R. Adachi and Y. Wakasa, “Distributed filter using ADMM for optimal estimation over wireless sensor network,” IEICE Trans. Fundamentals, vol.E105-A, no.11, pp.1458-1465, Nov. 2022.
[16] K. Toda, N. Kuze, and T. Ushio, “Stability analysis and control of decision-making of miners in blockchain,” IEICE Trans. Fundamentals, vol.E105-A, no.4, pp.682-688, April 2022.
[17] S. Shahrampour, A. Rakhlin, and A. Jadbabaie, “Multi-armed bandits in multi-agent networks,” Proc. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing, pp.2786-2790, 2017.
[18] P. Landgren, V. Srivastava, and N.E. Leonard, “Distributed cooperative decision making in multi-agent multi-armed bandits,” Automatica, vol.125, p.109445, 2021.
[19] A. Moradipari, M. Ghavamzadeh, and M. Alizadeh, “Collaborative multi-agent stochastic linear bandits,” Proc. 2022 American Control Conference, pp.2761-2766, 2022.
[20] J. Zhu and J. Liu, “Distributed multi-armed bandits,” IEEE Trans. Autom. Control, vol.68, no.5, pp.3025-3040, 2023.
[21] N. Cesa-Bianchi, C. Gentile, Y. Mansour, and A. Minora, “Delay and cooperation in nonstochastic bandits,” Proc. 29th Annual Conference on Learning Theory, vol.49, pp.605-622, 2016.
[22] Y. Bar-On and Y. Mansour, “Individual regret in cooperative nonstochastic multi-armed bandits,” Advances in Neural Information Processing Systems, vol.32, 2019.
[23] P. Alatur, K.Y. Levy, and A. Krause, “Multi-player bandits: The adversarial case,” Journal of Machine Learning Research, vol.21, pp.77:1-77:23, 2020.
[24] J. Yi and M. Vojnović, “On regret-optimal cooperative nonstochastic multi-armed bandits,” CoRR, vol.abs/2211.17154, 2022. [Online]. Available: https://doi.org/10.48550/arXiv.2211.17154
[25] R. Olfati-Saber, J.A. Fax, and R.M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proc. IEEE, vol.95, no.1, pp.215-233, 2007.
[26] N. Hayashi and S. Takai, “A GTS scheduling for consensus problems over IEEE 802.15.4 wireless networks,” Proc. 2013 European Control Conference, pp.1764-1769, 2013.
[27] K. Kobayashi, “Predictive pinning control with communication delays for consensus of multi-agent systems,” IEICE Trans. Fundamentals, vol.E102-A, no.2, pp.359-364, Feb. 2019.
[28] A. Nedić and A. Olshevsky, “Distributed optimization over time-varying directed graphs,” IEEE Trans. Autom. Control, vol.60, no.3, pp.601-615, 2015.
Eye-Popping Pancakes
Artist Nathan Shields loves pancakes: his pancakes are pictures. By squirting 2 colors of batter onto the pan, he draws a design that shows up once the pancake is flipped over. He draws animals, flowers, and faces from the Harry Potter movies. Now we can wolf down storks, swans and turtles with our syrup.
Wee ones: If the pancake man uses white batter, wheat batter and cocoa-powder batter, how many colors can he cook into his pancake?
Little kids: If Nathan serves you 10 pancakes and the penguin and duck are the only birds, how many pancakes aren’t birds? Bonus: If 1/2 the remaining pancakes are animals, how many pancakes are not animals?
Big kids: If the photo shows 5 rows of 4 pancakes each, how many pancakes do you see? Try to figure it out without counting one by one! Bonus: If Nathan can make 11 pancakes from 6 cups of batter,
how many can he make from 12 cups?
Wee ones: 3 colors.
Little kids: 8 pancakes. Bonus: 4 pancakes.
Big kids: 20 pancakes. Bonus: 22 pancakes, since he has doubled the batter.
Workshop: Bounds for the first (non-trivial) Neumann eigenvalue and partial results on a nice conjecture - Mittag-Leffler
Antoine Henrot
Let $\mu_1(\Omega)$ be the first non-trivial eigenvalue of the Laplace operator with Neumann boundary conditions
on a smooth domain $\Omega$. It is a classical task to look for estimates of the eigenvalues involving geometric quantities like the area, the perimeter, the diameter…
In this talk, we will recall the classical inequalities known for $\mu_1$. Then we will focus on the following conjecture: prove that $P^2(\Omega) \mu_1(\Omega) \leq 16 \pi^2$ for all plane convex
domains, the equality being achieved by the square AND the equilateral triangle. We will prove this conjecture assuming that $\Omega$ has two axes of symmetry.
This is a joint work with Antoine Lemenant and Ilaria Lucardesi.
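The scale-invariant quantity $P^2(\Omega)\mu_1(\Omega)$ in the conjecture can be checked numerically for the two claimed equality cases, using $\mu_1 = \pi^2$ for the unit square (eigenfunction $\cos(\pi x)$) and the classical value $\mu_1 = 16\pi^2/(9a^2)$ for the equilateral triangle of side $a$ (from Lamé's explicit eigenfunctions):

```python
import math

def scale_invariant(perimeter, mu1):
    """The scale-invariant quantity P(Omega)^2 * mu_1(Omega) of the conjecture."""
    return perimeter ** 2 * mu1

# Unit square: mu_1 = pi^2, perimeter P = 4.
square = scale_invariant(4.0, math.pi ** 2)

# Equilateral triangle of side a: mu_1 = 16*pi^2 / (9*a^2), perimeter P = 3a.
# The side length a cancels, as it must for a scale-invariant quantity.
a = 2.5
triangle = scale_invariant(3 * a, 16 * math.pi ** 2 / (9 * a ** 2))
# Both equal the conjectured sharp constant 16*pi^2.
```

Both shapes attain exactly $16\pi^2$, which is why the conjectured inequality, if true, would be sharp with two distinct maximizers.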
Worksheet on Fraction into Percentage | Fraction to Percent Worksheet
Worksheet on Fraction into Percentage | Fraction to Percent Worksheet with Answers
In this worksheet, we will learn about percentages and how to convert fractions into percentages. Percent means per hundred. A percent can also be written as a decimal or a fraction, i.e., a number between zero and one. A percentage is defined as a number expressed as a fraction of 100. If we want to turn a percentage into a decimal, we just divide by 100. In the sections below, we can see how to convert a fraction into a percentage.
To convert a fraction into a percentage, we will divide the numerator of the fraction by the denominator of the fraction, and then we will multiply the result by 100. Then we will get the result as a
percent. Here, below we can see the solved examples. Refer to Percentage Worksheets to clear further queries on the concept.
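The conversion rule above can be expressed as a small helper. This is just a sketch; the function name and the rounding to two decimal places are our choices:

```python
def fraction_to_percent(numerator, denominator, places=2):
    """Convert a fraction to a percentage: divide the numerator by the
    denominator, multiply by 100, and round to the given number of places."""
    return round(numerator / denominator * 100, places)

# Matches the first and third worked examples below: 7/20 and 9/13.
p1 = fraction_to_percent(7, 20)   # 35.0 %
p2 = fraction_to_percent(9, 13)   # 69.23 %
```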
1. Express the following fractions as a percentage:
(i) 7/20
(ii) 6/25
(iii) 9/13
(i) 7/20
Here, we will divide the numerator of the fraction by the denominator, and then multiply the result by 100. Then we will get the result.
= (7/20 × 100) %
= 35 %.
(ii) 6/25
Here, we will divide the numerator of the fraction by the denominator, and then multiply the result by 100. Then we will get the result.
= (6/25 × 100) %
= 24 %
(iii) 9/13
Here, we will divide the numerator of the fraction by the denominator, and then multiply the result by 100. Then we will get the result.
= (9/13 × 100) %
= 69.23 %
2. Convert 9/25 to percentage.
To convert into a percentage, we will divide the numerator of the fraction by the denominator, and then multiply the result by 100. Then we will get the result.
= (9/25 × 100) %
= 36 %
3. Convert the following statements in the percentage:
(i) 10 out of 25 people are sitting.
(ii) 45 oranges in a box of 250 are bad.
(i) 10 out of 25 people are sitting.
To convert into a percentage, we will divide the numerator of the fraction by the denominator, and then multiply the result by 100. Then we will get the result.
= 10/25 people are sitting
= (10/25 × 100) % people are sitting
= 40 % people are sitting.
(ii) 45 oranges in a box of 250 are bad.
To convert into a percentage, we will divide the numerator of the fraction by the denominator, and then multiply the result by 100. Then we will get the result.
= 45/250 oranges are bad
= (45/250 × 100) % oranges are bad
= 18 % of oranges are bad.
4. Mike ate the 2/7th part of the pizza. How much percent did Mike eat?
Mike ate 2/7th part of the pizza,
To find the percent how much did Mike eat, we will multiply the fraction with 100
2/7 × 100
On solving, we get approximately 28.57%, i.e., about 28%.
5. The total number of students in a class is 45 and in that 27 are boys. What will be the total percentage of boys in the class?
The total number of students is 45
Of that 27 are boys
So the percentage of boys in the class are
27/45 × 100
On solving we will get 60%.
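All the worked answers above can be verified with exact fraction arithmetic. A quick check script (not part of the worksheet itself):

```python
from fractions import Fraction

def percent(part, whole):
    """Exact percentage of `part` out of `whole`, as a Fraction."""
    return Fraction(part, whole) * 100

# Checks against the answers worked out above
assert percent(7, 20) == 35
assert percent(6, 25) == 24
assert percent(9, 25) == 36
assert percent(10, 25) == 40
assert percent(45, 250) == 18
assert percent(27, 45) == 60
assert round(float(percent(9, 13)), 2) == 69.23
```

Using `Fraction` avoids floating-point rounding, so whole-number percentages compare exactly.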
Chemical Kinetics Chemistry Class 12 Notes - ScienceMotive
The –ve sign before Δ[R] indicates the decrease in the concentration of the reactant.
Factors Affecting Rate of Reaction:
i). The concentration of reactant
ii). Surface area
iii). Temperature
iv). Nature of reactant
v). Presence of catalyst.
Dependence of Rate on Concentration:
Rate Law: The rate of reaction is directly proportional to the product of the concentrations of the reactants, with each concentration raised to some power which may or may not be equal to the stoichiometric coefficient of that reactant in the balanced equation.
Rate ∝ [A]^x [B]^y
Rate = k[A]^x [B]^y (where x and y may or may not be equal to a and b)
k is proportionality constant and is called Rate Constant.
Rate Constant (k): The rate constant is the rate of the reaction when the molar concentration of each reactant is unity.
Molecularity of a Reaction: The number of reactant molecules taking part in an elementary reaction is known as its molecularity. It cannot be zero or fractional. It can have the values 1, 2, 3, etc. It is applicable only to elementary reactions.
If the molecularity of a reaction is 1, it is called a Unimolecular Reaction, e.g., decomposition of ammonium nitrite: NH₄NO₂ → N₂ + 2 H₂O.
If the molecularity of a reaction is 2, it is called a Bimolecular Reaction, e.g., decomposition of hydrogen iodide: 2 HI → H₂ + I₂.
Order of Reaction: Order is the sum of the powers of the concentration terms of the reactants in the rate law. It is an experimental quantity. It can have the values 0,1,2,3… or a fraction. It is
applicable to both elementary and complex reactions.
For a general reaction, aA + bB → cC + dD,
Rate = k[A]^x [B]^y, and the order of the reaction is x + y.
If the order of a reaction is zero, it is called a Zero Order Reaction; if it is one, it is called a First Order Reaction; if it is two, it is called a Second Order Reaction, and so on.
S.No. | Order | Molecularity
1. | It is the sum of the powers of the concentration terms in the rate law expression. | It is the total number of reactant species that collide simultaneously in a chemical reaction.
2. | It is an experimental quantity. | It is a theoretical quantity.
3. | It can be zero or fractional. | It cannot be zero or fractional.
4. | It is applicable to both elementary and complex reactions. | It is applicable only to elementary reactions.
Units of Rate Constant
Different ordered reactions have different units for k.
For an n^th order reaction, nA → Products,
rate = k[A]^n
Therefore, k = rate / [A]^n …………………. (1)
From relation (1) we can find the units of k for reactions of various orders:
units of k = (mol L^-1 s^-1) / (mol L^-1)^n = (mol L^-1)^(1-n) s^-1
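The units pattern that follows from k = rate/[A]^n — namely (mol L⁻¹)^(1−n) s⁻¹ — can be generated for any order with a small helper (a sketch; the string format is our choice):

```python
def rate_constant_units(n):
    """Units of k for an n-th order reaction, from k = rate / [A]^n with
    rate in mol L^-1 s^-1 and [A] in mol L^-1: (mol L^-1)^(1-n) s^-1."""
    power = 1 - n
    if power == 0:
        return "s^-1"                       # first order: concentration cancels
    return f"(mol L^-1)^{power} s^-1"

# Zero order: (mol L^-1)^1 s^-1; first order: s^-1; second order: (mol L^-1)^-1 s^-1
units = [rate_constant_units(n) for n in (0, 1, 2)]
```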
Integrated Rate Equations: These are equations relating to the rate of a reaction and concentration of reactants. Different ordered reactions have different integrated rate law equations.
1). For Zero Order Reaction: Zero-order reactions are reactions in which the rate of reaction is proportional to the zeroth power of the concentration of the reactants.
Consider a zero-order reaction R → P
The rate expression for the above reaction is
r = – d[R]/dt ………………. (1)
Rate law for the above reaction is
r = k[R]^0 = k ………………… (2)
From equations (1) & (2), we can write
k = – d[R]/dt
The above equation is known as the differential rate equation for the zero-order reaction.
d[R] = – k dt
On integrating the above equation, we get
[R] = – kt + C …………………. (3)
Where C is the constant of integration. To calculate the value of C, consider the initial conditions, i.e., when t = 0, [R] = [R]₀.
Substitute these values in equation (3):
[R]₀ = – k × 0 + C
C = [R]₀
Substituting C in equation (3), we get
[R] = – kt + [R]₀ …………. (4)
[R]₀ – [R] = kt
k = ([R]₀ – [R]) / t ………………. (5)
This equation is of the form of a straight line, y = mx + c. So if we plot [R] against t, we get a straight line with slope = – k and intercept equal to [R]₀.
Example: The decomposition of gaseous ammonia on a hot platinum surface at high pressure:
2 NH₃ → N₂ + 3 H₂
At high pressure, the metal surface gets saturated with gas molecules, so a further change in reaction conditions does not change the rate of the reaction. Hence, it becomes a zero-order reaction.
Another example is the thermal decomposition of HI on a gold surface.
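The zero-order integrated law [R] = [R]₀ – kt and equation (5) can be sketched numerically. The concentrations and rate constant below are illustrative values only:

```python
def zero_order_conc(R0, k, t):
    """[R] = [R]0 - k*t, the integrated rate law for a zero-order reaction.
    Concentration cannot drop below zero, hence the clamp."""
    return max(R0 - k * t, 0.0)

def zero_order_k(R0, R, t):
    """k = ([R]0 - [R]) / t, equation (5)."""
    return (R0 - R) / t

# e.g. [R]0 = 1.0 mol/L, k = 0.01 mol L^-1 s^-1: half the reactant is
# consumed after 50 s, and (5) recovers k from that observation.
c_50 = zero_order_conc(1.0, 0.01, 50)
k_est = zero_order_k(1.0, c_50, 50)
```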
2. For First Order Reaction: First order reactions are reactions in which the rate of reaction is proportional to the first power of the concentration of reactants.
Consider a first-order reaction R → P
The rate expression for the above reaction is
r = – d[R]/dt ………………. (1)
Rate law for the above reaction is
r = k[R] ………………… (2)
From equations (1) & (2), we can write
– d[R]/dt = k[R]
d[R]/[R] = – k dt
On integrating the above equation, we get
ln [R] = – kt + C …….. (3)
Where C is the constant of integration. To calculate the value of C, consider the initial conditions, i.e., when t = 0, [R] = [R]₀.
Substitute these values in equation (3):
ln [R]₀ = – k × 0 + C
C = ln [R]₀
Substituting C in equation (3), we get
ln [R] = – kt + ln [R]₀ ………………………………… (4)
Rearranging the above equation, we get
kt = ln [R]₀ – ln [R]
k = (1/t) ln([R]₀/[R])
k = (2.303/t) log([R]₀/[R]) ………… (A)
At time t₁, from equation (4):
ln [R]₁ = – kt₁ + ln [R]₀
At time t₂:
ln [R]₂ = – kt₂ + ln [R]₀
where [R]₁ and [R]₂ are the concentrations of the reactants at times t₁ and t₂ respectively.
Subtracting the second equation from the first:
ln [R]₁ – ln [R]₂ = – kt₁ – (– kt₂)
ln([R]₁/[R]₂) = k(t₂ – t₁)
Comparing equation (4) with y = mx + c, if we plot ln [R] against t, we get a straight line with slope = – k and intercept equal to ln [R]₀.
The first-order rate equation (A) can also be written in the form
log([R]₀/[R]) = kt / 2.303
For Example:
Hydrogenation of ethene: C₂H₄(g) + H₂(g) → C₂H₆(g); r = k[C₂H₄]
All natural and artificial radioactive decay.
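The first-order rate equation (A) and the exponential form of equation (4) can be checked numerically. The concentrations and times below are illustrative values only:

```python
import math

def first_order_k(R0, R, t):
    """k = (2.303 / t) * log10([R]0 / [R]) — first-order rate equation (A)."""
    return (2.303 / t) * math.log10(R0 / R)

def first_order_conc(R0, k, t):
    """[R] = [R]0 * e^(-k*t), the exponential form of equation (4)."""
    return R0 * math.exp(-k * t)

# If the concentration halves in 100 s, (A) gives k ≈ 0.00693 s^-1,
# and plugging that k back into the exponential form reproduces [R]0/2.
k = first_order_k(R0=1.0, R=0.5, t=100)
```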
Pseudo First Order Reaction: The reactions which are not truly of first order but become reactions of first-order under certain conditions are called pseudo-first-order reactions.
Let us take a general reaction
A + B → C
The rate of reaction depends upon the concentrations of both A and B but one of the components is present in large excess and thus its concentration hardly changes as the reaction proceeds so the
rate is independent of that particular component.
So, if component B is in large excess and the concentration of B is very high as compared to that of A, the reaction is considered to be a pseudo-first-order reaction with respect to A and if
component A is in large excess and the concentration of A is very high as compared to that of B, the reaction is considered to be pseudo-first order with respect to B.
For example,
i) Hydrolysis of ester in acid (H^+).
CH₃COOC₂H₅ + H₂O → CH₃COOH + C₂H₅OH
Rate = k[CH₃COOC₂H₅][H₂O]
Since water is in excess, its concentration remains practically the same during the reaction, so [H₂O] is constant and the rate is independent of the concentration of H₂O:
Rate = k′[CH₃COOC₂H₅], where k′ = k[H₂O]
ii) Inversion of cane sugar (in acid)
C₁₂H₂₂O₁₁ + H₂O → C₆H₁₂O₆ + C₆H₁₂O₆
Cane sugar            Glucose       Fructose
Since water is in excess and [H₂O] is constant,
Rate = k′[C₁₂H₂₂O₁₁]
(In the above reactions the concentration of H₂O is in excess, so the rate of reaction is independent of the concentration of water.)
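The pseudo-first-order idea — folding the nearly constant [H₂O] into an effective constant k′ = k[H₂O] — can be written as a one-line sketch. All the numbers below are hypothetical, chosen for illustration only:

```python
def pseudo_first_order_k(k, water_conc):
    """Effective first-order rate constant k' = k * [H2O], valid when water
    is in large excess so [H2O] is effectively constant."""
    return k * water_conc

# Hypothetical second-order constant k, with [H2O] ≈ 55.5 mol/L (pure water)
k_prime = pseudo_first_order_k(k=2.0e-6, water_conc=55.5)
```

With k′ in hand, the ester concentration then follows the ordinary first-order law [ester] = [ester]₀ e^(−k′t).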
The half-life of a reaction (t½): It is the time in which the concentration of a reactant is reduced to one-half of its initial concentration. It is represented by t½.
(i) Half-Life of a Zero Order Reaction: For a zero-order reaction, the integrated rate law is:
k = ([R]₀ – [R]) / t
When t = t½, [R] = ½ [R]₀.
On substituting these values in the above equation,
k = ([R]₀ – ½ [R]₀) / t½
t½ = [R]₀ / 2k
From the above relation, we can say the half-life of a zero-order reaction is directly proportional to the initial concentration of the reactants and inversely proportional to the rate constant.
(ii) Half-Life of a First Order Reaction: For a first-order reaction, the integrated rate law is:
k = (2.303/t) log([R]₀/[R])
When t = t½, [R] = [R]₀/2. On substituting these values,
k = (2.303/t½) log 2
t½ = 2.303 × 0.301 / k = 0.693/k
Thus, the half-life of a first-order reaction is independent of the initial concentration of the reactant.
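The two half-life formulas — t½ = [R]₀/2k for a zero-order reaction and t½ = 0.693/k for a first-order reaction — can be checked numerically (illustrative values only):

```python
def half_life_zero_order(R0, k):
    """t_1/2 = [R]0 / (2k) for a zero-order reaction (depends on [R]0)."""
    return R0 / (2 * k)

def half_life_first_order(k):
    """t_1/2 = 0.693 / k for a first-order reaction (independent of [R]0)."""
    return 0.693 / k

# e.g. [R]0 = 1.0 mol/L with k = 0.01 mol L^-1 s^-1 (zero order),
# and k = 0.00693 s^-1 (first order)
t_half_zero = half_life_zero_order(1.0, 0.01)
t_half_first = half_life_first_order(0.00693)
```

Note that doubling [R]₀ doubles the zero-order half-life but leaves the first-order half-life unchanged.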
Rate of Reaction and Temperature: Most chemical reactions are accelerated by an increase in temperature. It has been found that for a chemical reaction, when the temperature is increased by 10°, the rate of the reaction and the rate constant nearly double. The ratio of the rate constants of a reaction at two temperatures differing by 10° is called the temperature coefficient.
i.e., Temperature coefficient = [Rate constant of the reaction at (T + 10) K] / [Rate constant of the reaction at T K]
The temperature dependence of the rate of a chemical reaction can be accurately explained by the Arrhenius equation. The equation is:
k = A e^(–Ea/RT) ……………………………….. (1)
Where A is a constant called the Arrhenius parameter or the frequency factor or the pre-exponential factor. It is constant for a particular reaction.
R is the universal gas constant and Ea is the activation energy, measured in joules per mole (J mol⁻¹).
Threshold Energy: The minimum energy which the colliding molecules must have in order that the collision between them may be effective is called threshold energy.
Activation Energy: The minimum extra amount of energy absorbed by the reactant molecules so that their energy becomes equal to the threshold value is called activation energy.
Threshold energy = Activation energy + Energy possessed by the reactants
The lower the activation energy, the faster the reaction. In order that the reactants may change into products, they have to cross an energy barrier (corresponding to the threshold energy). Reactant molecules absorb energy and form an intermediate called the activated complex, which immediately dissociates to form the products.
For example, this can be shown on an energy-profile diagram of reactants, activated complex and products (diagram omitted in this copy).
Arrhenius equation:
Quantitatively, the temperature dependence of the rate of a chemical reaction can be explained by Arrhenius equation
k = A e^(−Ea/RT)
where A is the Arrhenius factor (also called the frequency factor or the pre-exponential factor), R is the gas constant and Ea is the activation energy measured in joules per mole.
The factor e^(−Ea/RT) corresponds to the fraction of molecules that have kinetic energy greater than Ea.
Thus, it has been found from the Arrhenius equation that increasing the temperature or decreasing the activation energy will result in an increase in the rate of the reaction and an exponential
increase in the rate constant.
Taking the natural logarithm of both sides of the equation:
ln k = −Ea/RT + ln A
At temperature T₁: ln k₁ = −Ea/RT₁ + ln A
At temperature T₂: ln k₂ = −Ea/RT₂ + ln A
(since A is constant for a given reaction)
k₁ and k₂ are the values of the rate constant at temperatures T₁ and T₂ respectively.
Subtracting the first of these equations from the second, we obtain
ln k₂ − ln k₁ = Ea/RT₁ − Ea/RT₂
i.e. ln(k₂/k₁) = (Ea/R)(1/T₁ − 1/T₂)
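The two-temperature relation ln(k₂/k₁) = (Ea/R)(1/T₁ − 1/T₂) can be turned around to estimate Ea from measured rate constants. A minimal numerical sketch; the values A = 10¹⁰ s⁻¹ and Ea = 50 kJ/mol are illustrative assumptions, not from the notes:

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1

def rate_constant(A, Ea, T):
    # Arrhenius equation: k = A * exp(-Ea / (R*T))
    return A * math.exp(-Ea / (R * T))

def activation_energy(k1, T1, k2, T2):
    # Inverting ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

k1 = rate_constant(1e10, 50_000, 300.0)
k2 = rate_constant(1e10, 50_000, 310.0)
print(k2 / k1)                                   # ~1.9: nearly doubles for a 10 K rise
print(activation_energy(k1, 300.0, k2, 310.0))   # recovers ~50000 J/mol
```

For this choice of Ea the rate constant roughly doubles over a 10 K rise near room temperature, matching the "temperature coefficient" rule of thumb stated earlier.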
Effect of Catalyst: A catalyst is a substance that alters the rate of a reaction without itself undergoing any permanent chemical change. The action of the catalyst can be explained by intermediate
complex theory. According to this theory, a catalyst participates in a chemical reaction by forming an intermediate complex. This is unstable and decomposes to yield products and the catalyst.
A catalyst increases the rate of a chemical reaction by providing an alternate pathway or reaction mechanism by reducing the activation energy between reactants and products. The important
characteristics of a catalyst are:
1. A small amount of the catalyst can catalyze a large number of reactants.
2. A catalyst does not alter the Gibbs energy, ΔG, of a reaction. It catalyzes spontaneous reactions but does not catalyze non-spontaneous reactions.
3. A catalyst does not change the equilibrium constant of a reaction, but it helps to attain the equilibrium faster by increasing the rate of both forward as well as backward reactions.
Collision Theory: This theory was developed by Max Trautz and William Lewis. It is based on the kinetic theory of gases. According to this theory, the reactant molecules are assumed to be hard
spheres, and reaction has occurred when molecules collide with each other. The number of collisions per second per unit volume of the reaction mixture is known as collision frequency (Z).
Another factor that affects the rate of chemical reactions is the activation energy.
For a bimolecular elementary reaction
A + B → Products
The rate of reaction can be expressed as
Rate (r) = Z_AB e^(−Ea/RT)
where Z_AB represents the collision frequency of reactants A and B, and e^(−Ea/RT) represents the fraction of molecules with energies equal to or greater than Ea. Comparing with the Arrhenius equation, we can see that A is related to the collision frequency.
A third factor that affects the rate of a chemical reaction is proper orientation. To account for this, a factor P, called the probability or steric factor, is introduced. So the above equation becomes
Rate (r) = P Z_AB e^(−Ea/RT)
Applied Multivariate Statistical Analysis : Summary- Part I
The last few weeks have been good, for most of the trades have gone right. When something works, I generally attribute it to 90% luck + 10% logic. When something doesn't work, I usually have the proportions reversed in my mind, as it helps in creating better logic/algos. It really doesn't matter what one believes when something has already worked, but one's attitude/belief system DOES matter when things don't work. On this "feeling lucky" note, I thought I should write something about Multivariate Stats.
Knowingly or unknowingly, any analyst deals with multivariate stuff if his/her study contains an analysis of more than one variable OR a variable with more than one dimension. Average, quartiles, median and mode are all known to most of us one way or the other. But things become interesting as well as complicated when we move to the multivariate world.
This book is probably the most easily understandable text out there. Classic texts like Anderson's are typically laden with complicated math that would overwhelm a novice to this area. In contrast, this book can be termed a more MBAish book, with less emphasis on theorems/proofs/lemmas and more emphasis on applications. The latest edition is the sixth, which obviously means this book is a hit in some part of the reader community. OK, let me get on with summarizing the chapters of the book.
To start with, this book is organized in such a way that the first 4 chapters give all the math needed to understand multivariate analysis. One thing about working in the multivariate area is that a knowledge of matrix algebra is vital to doing even the most basic analysis in the MV world.
Chapter 1 : Applications of Multivariate Techniques
Data reduction / data sorting / investigation of dependence among variables / prediction / hypothesis testing are some of the useful applications of MV techniques. One of the first things that anyone can do without going through math is a GRAPHIC display. It is often said that a good graphic display in all its variation is half the analysis done. Tools available to any analyst are the usual scatterplot, marginal dot plot, scatterplot plus boxplots on the same graphic, 3D scatter plots, star plots, distance plots and Chernoff faces. I have typically used all these graphics at some point in time or the other, except the last one, Chernoff faces. When I first came across this type of graphic, I thought it was a pretty cool technique, though I am yet to use it in real life. The funda behind it is simple: humans find it easy to recognize faces, and a little change in a facial feature is instantly noticed. This aspect of the human brain is used to render multidimensional data in the form of human faces so that patterns can easily be detected.
As a side note, this chapter uses the Mahalanobis distance to show contours of equal density. This made me think about the concept of distance itself. Probably any high-school kid who learns coordinate geometry knows the distance formula between two points. As he progresses, he learns more and more complicated formulae, theorems, etc. Alas! Uncertainty is never discussed. I don't recollect any teacher till date posing a question such as the following:
If there is uncertainty in the measurement of the x and y coordinates, and you know the uncertainty with which measurements along the x axis and the y axis are collected, can you come up with an alternative to the usual distance formula?
If you think about it, this is what we find in reality. Take any application in the real world: uncertainty is unavoidable. So a question as simple as the above is good enough to motivate a young mind to explore the problem and come up with alternative measures of distance. Well, one measure which has appeared in various applications is from an Indian scientist, Prasanta Chandra Mahalanobis. Anyway, I have digressed from the intent of the post.
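The question about distance under measurement uncertainty is exactly what the Mahalanobis distance answers: it rescales (and de-correlates) each direction by the covariance of the measurement errors. A small numerical sketch with a made-up covariance matrix:

```python
import numpy as np

# Two correlated measurement axes: the first axis is much noisier than the second.
cov = np.array([[4.0, 1.2],
                [1.2, 1.0]])
mean = np.array([0.0, 0.0])

def mahalanobis(x, mean, cov):
    # sqrt((x - mean)^T Sigma^-1 (x - mean)): distance in "units of uncertainty"
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

a = np.array([2.0, 0.0])  # displaced along the noisy axis
b = np.array([0.0, 2.0])  # displaced along the precise axis
print(np.linalg.norm(a), mahalanobis(a, mean, cov))  # 2.0 vs 1.25
print(np.linalg.norm(b), mahalanobis(b, mean, cov))  # 2.0 vs 2.5
```

The Euclidean distances are identical, but point b is "farther" once the uncertainty structure is accounted for, which is why equal-density contours of a multivariate normal are ellipses rather than circles.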
Chapter 2 : Matrix Algebra & Random Vectors
Matrices are your best friends in dealing with multidimensional data. For any number cruncher, a thorough understanding is imperative. In that sense, this book merely scratches the surface. Obviously it gives all the important results needed to get your hands dirty doing MV stats. Personally I found the proof of the extended Cauchy-Schwarz inequality much more intuitive here than in other books. I have always been fascinated by math inequalities. Inequalities become very powerful when used in the right application. Touch any math-fin-stat area and you are bound to see innumerable inequalities applied to real-life problems like valuation / hedging / forecasting etc. I had my crush :) on inequalities after reading the book "The Cauchy-Schwarz Master Class" by Prof. Steele. If you want to know the kinds of applications where inequalities can be used, the book is a fascinating account. Will blog about Prof. Steele's book some other day. Anyway, coming back: one of the applications of the extended Cauchy-Schwarz inequality is the optimization of quadratic forms, where the inequality connects an optimization problem to the eigenvalues of the matrix involved in the optimization. A truly beautiful linkage between optimization and matrix algebra through an inequality.
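That linkage can be demonstrated in a few lines: the maximum of the Rayleigh quotient xᵀAx / xᵀx over nonzero x equals the largest eigenvalue of a symmetric A, attained at the corresponding eigenvector. A quick numerical check on a random matrix (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T  # a symmetric positive semi-definite matrix

# Rayleigh quotient x^T A x / x^T x sampled over many random directions
X = rng.standard_normal((4, 10000))
rayleigh = np.einsum('ij,ik,kj->j', X, A, X) / np.einsum('ij,ij->j', X, X)

w, V = np.linalg.eigh(A)  # eigenvalues in ascending order
v = V[:, -1]              # eigenvector of the largest eigenvalue

# No random direction beats the top eigenvalue; the top eigenvector attains it.
print(rayleigh.max() <= w[-1] + 1e-8)           # True
print(abs(v @ A @ v / (v @ v) - w[-1]) < 1e-8)  # True
```

The same bound applied to the smallest eigenvalue gives the minimum of the quadratic form, which is the statement the extended Cauchy-Schwarz argument delivers.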
Chapter 3 : Sample Geometry and Random Sampling
Basic properties of a p-dimensional data matrix, such as the sample mean and sample covariance, are given a geometric interpretation. Basic stuff like the mean being a projection of each column of the data on a unit vector, the connection between the determinant of the covariance matrix and the generalized sample variance and its significance, etc. are provided. Given a dataset, if you already know how to compute the mean and covariance of the original dataset, or of linear combinations of its columns, you can safely skip this chapter.
Chapter 4 : The Multivariate Normal Distribution
Well, basically this chapter is about data generated from a multivariate normal distribution. The framework of this chapter is again intuitive and nothing fancy. I took a brief pause before going through the chapter and asked myself, "What would I teach somebody about the multivariate distribution, if I were asked to?" The following would be the basic stuff that I would cover in relation to X (a p-dimensional normal random variable):
• Basic density form of X
• Properties which would help in checking whether subsets of X converge to the same distribution
• How to identify independent components of the X ?
• Sample mean and Sample Covariance of the p dimensional Random Variable( RV)
• Relevance of Mahalanobis distance and constant probability contours for a p dim RV
• How to connect between ChiSquare distribution and Ellipsoids arising out of a p dim RV ?
• How do you simulate a pdim RV ? Can you simulate given any customized estimator of mean and covariance ?
• What are the ways of estimating the covariance from the sample ? What are the robust estimators ? Which one to choose and Why ?
• Sampling distribution of Sample mean and Sample covariance matrix. The former is again a p dim Normal RV while the latter is a Wishart Random variable.
• Where is Wishart distribution used ? How do you simulate a RV from Wishart ?
• What are the characteristics of Wishart distribution ?
• Law of Large Numbers and CLT in the context of X and sample mean
• How can you test whether the data actually comes out of p dim normal RV ?
• How can you test whether the data has no tail dependency ?
• How do you transform the data so that you have marginals and joint distribution as normal distribution ?
Out of this laundry list, the book covers most of the aspects. Again, the treatment is MBAish, so you get an intuitive feel for things; crunching data is the only way to really understand the above.
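Several items on that laundry list, simulating a p-dimensional normal RV and checking its sample mean and sample covariance, can be exercised in a few lines. A sketch using the standard Cholesky construction; the mean and covariance here are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.0],
                  [0.6, 1.0, 0.3],
                  [0.0, 0.3, 0.5]])

# Simulate via the Cholesky factor: x = mu + L z, with z ~ N(0, I)
L = np.linalg.cholesky(Sigma)
Z = rng.standard_normal((3, 100_000))
X = mu[:, None] + L @ Z

xbar = X.mean(axis=1)
S = np.cov(X)  # rows are variables, so this is the 3x3 sample covariance
print(np.abs(xbar - mu).max())  # small: sample mean converges to mu
print(np.abs(S - Sigma).max())  # small: sample covariance converges to Sigma
```

The same scaffold extends to the later items on the list: replace the Gaussian draws with a heavy-tailed alternative and the normality tests mentioned above start failing, which is a useful reality check.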
Now, why should the real-life data that we see be a realization of a multivariate normal distribution? In 99% of cases, especially for financial data, it will not be true. So, what's the point in going through the above stuff? All I can say for now is that it will make you skeptical and enthuse you to figure out something in the non-parametric world. Subsequently you can marry stuff from the parametric and non-parametric worlds. Also, it will make you extremely skeptical of the off-the-shelf solutions that sell-side vendors provide in the name of quant models.
Chapter 5 : Inferences about the Mean Vector
The t test is a classic test covered in any stats 101 class for testing sample means. By squaring the t statistic, one can use an equivalent F statistic. This t² statistic in the multivariate case becomes Hotelling's T², in honor of Harold Hotelling, a pioneer in multivariate analysis. Thankfully there is a way to relate Hotelling's T² to the F distribution, and hence it becomes easier to check the null hypothesis and create confidence intervals for the component means. The importance of this chapter lies in the formulation of control charts for multidimensional data. Having univariate control charts with a specific sigma level is not going to be useful; instead a chart based on Hotelling's T² is used.
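The chapter's central computation is small enough to sketch: Hotelling's T² for H₀: μ = μ₀, together with the standard rescaling (n − p)T² / (p(n − 1)) ~ F(p, n − p) under H₀. The data below are illustrative draws generated under the null:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
mu0 = np.zeros(p)
X = rng.standard_normal((n, p))  # a sample whose true mean really is mu0

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)  # p x p sample covariance

# Hotelling's T^2 for H0: mu = mu0
T2 = n * (xbar - mu0) @ np.linalg.inv(S) @ (xbar - mu0)

# Under H0, (n - p) / (p * (n - 1)) * T^2 follows F(p, n - p)
F = (n - p) / (p * (n - 1)) * T2
print(T2, F)
```

Comparing F against an F(3, 47) critical value is then a one-liner with any stats library; the multivariate control chart mentioned above simply plots T² for each new observation against such a limit.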
Chapter 6 : Comparison of Several Multivariate means
This chapter is basically the extension of Chapter 5 to more than one mean. Well, the statistic remains the same, Hotelling's T², except that it is valid under specific assumptions relating to the covariance matrix. This chapter is pretty useful as it covers testing covariance matrices across populations, something that is very useful in finance. Imagine you have n assets and a sample covariance matrix for the period t1 to t2. One of the basic questions to ask is, "Has the covariance matrix changed?" This chapter clearly shows you the way to test the invariance of the covariance structure. There is also a mention of Path Analysis, MANOVA and the stats behind them. I am going to refer to this chapter very often, for it has a lot of stuff relevant to finance.
Chapter 7 : Multivariate Linear Regression Models
Multivariate linear regression models are the most basic models, which any econometrics text covers extensively. Starting from the data matrix and the formulation of a linear regression equation, the entire regression structure is built from the ground up. Thankfully, matrix notation is used extensively, as it makes the transition from a single predictor to multiple predictors easier. MLE estimates, their distributions, inferences about the regression model and likelihood ratio tests for the parameters are all discussed thoroughly. If you are well versed with the regression model, this chapter serves as a quick recap of all the concepts, including outlier detection, residual analysis, etc.
Chapter 8 : Principal Components
PCA is basically used for data reduction and interpretation. Unlike its counterpart, which is extremely popular in finance, PCA works on the covariance matrix; there is no need for the underlying data to be a realization of a multivariate distribution. PCA basically lines up linear combinations of the p-dimensional vectors in such a way that the principal components are ordered by the variation captured. So the resulting principal components are nothing but an appropriate basis for the data matrix such that maximum variation is captured along each of the basis vectors. Spectral decomposition is used to calculate the principal components, and the eigenvalues associated with the components play a very important role in the analysis. Very low eigenvalues typically mean that there is linear dependency in the data and a subset of variables needs to be removed for better interpretation. A few very high eigenvalues could mean that there are a few major modes which give rise to the variation in the data, i.e. most of the variation seen is common-cause variation. Some graphical tools are mentioned in the chapter, like the scree plot, the T² control chart and constant elliptical density charts, that can be used in the context of multivariate data. Which graphic to use obviously depends on the context and the nature of the data used for the analysis.
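The whole recipe (center the data, form the covariance matrix, spectral-decompose, sort by eigenvalue) fits in a few lines. A sketch on made-up 2-d data with one dominant direction of variation:

```python
import numpy as np

rng = np.random.default_rng(7)
# Correlated 2-d data: most variation lies along a single direction.
Z = rng.standard_normal((500, 2)) @ np.array([[3.0, 0.0],
                                              [1.0, 0.5]])

Zc = Z - Z.mean(axis=0)                  # center
cov = np.cov(Zc, rowvar=False)           # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # spectral decomposition, ascending
order = np.argsort(eigvals)[::-1]        # sort descending by variance captured
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Zc @ eigvecs                    # principal component scores
explained = eigvals / eigvals.sum()
print(explained)  # the first component captures most of the variance
```

The proportions in `explained` are exactly what a scree plot displays; a single dominant eigenvalue here is the "few major modes" situation described above.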
Chapter 9 : Factor Analysis
Factor analysis is often used synonymously with PCA, but there is a fundamental difference between the two. PCA works on the covariance matrix and has no model assumptions, while factor analysis by definition hypothesizes a model and works on correlation matrices. So, when one hypothesizes a model, one obviously needs to estimate the parameters and test the assumptions. This is where factor analysis gets tricky. There are a ton of assumptions and at least 3-4 estimation methods. Also, the solutions are not unique, so it has spawned a ton of literature on factor rotations etc. I read somewhere that if your initially hypothesized model does not work properly out of sample, don't bother with rotating factors and all that crap. Just ditch your model and start from scratch.
Will summarize the rest of the three chapters of the book at a later date. After a typing marathon of 2 posts today, my fingers and my mind are crying for a break :)
Re: [tlaplus] [TLAPS] Is there a way to see the steps taken by the automated prover to prove a goal ?
So what you are saying is that when you find out that some assumption that you asserted was actually wrong, you'd like to know where it was used in the proof.
We advise users to thoroughly model check their specifications before embarking on any proof. In our experience, using the model checker is a great way for validating a spec. (It could certainly
detect the fallacy in the example that you give.) Doing so helps you avoid the problem in the first place.
Second, the proof system has an approximation for the functionality that you request: when you modify the assertion of some fact (or change a definition), you can relaunch the prover. Since all
previous results of proof attempts are cached, steps that do not syntactically depend on the definition or fact will not be rechecked, and you quickly see which steps fail. Of course, if your
assumption appears in a global USE, all steps will need to be reproved.
A proof trace would allow us to detect if asserted assumptions which are wrong are being used by the backend provers to prove a goal.
For example, for the proof of Paxos, if we wrongly assert ~({} \in SUBSET Quorums) <which is equivalent to FALSE, since SUBSET gives the powerset> instead of using \A Q \in Quorums : Q # {}, the backend provers may use this FALSE to prove any goal.
Therefore, if we can see the trace to make sure that a wrong assumption like FALSE has not been used by the backend, it will allow us to trust the proofs better. Sometimes, wrong assumptions
which are not very evident may escape our notice while writing the specification.
Currently we do not export detailed proofs from the automatic backends. What would be your use case for such detailed proofs? Most of the backends could export a proof trace, and we imagine
that such traces could be useful for certifying the proof by a trusted backend such as Isabelle/TLA+. (There is an option to do this for proofs found by Zenon.) However, proof obligations
often undergo quite significant transformations between the TLA+ statement and the formulas that are sent to the backend, so I am skeptical that a proof trace would be intelligible at the
TLA+ level.
It would probably not be so difficult to extract the user-provided facts that were actually used in a proof, although the transformations done in pre-processing would again have to be taken into account.
Is there a way to see the steps taken by the automatic backend provers in order to prove a goal ("path to success") and to know exactly which lemmas/assumptions were used?
Thank you
You received this message because you are subscribed to the Google Groups "tlaplus" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
To view this discussion on the web visit
You received this message because you are subscribed to a topic in the Google Groups "tlaplus" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/tlaplus/yTQfEdEdQzQ/unsubscribe.
To unsubscribe from this group and all its topics, send an email to tlaplus+unsubscribe@xxxxxxxxxxxxxxxx.
To view this discussion on the web visit https://groups.google.com/d/msgid/tlaplus/247BE5EB-9628-4E7A-958A-48DEB3004CEE%40gmail.com.
If a straight line touch a circle, and from the point of contact a straight line be drawn cutting the circle; the angles which this line makes with the line touching the circle, shall be equal to the angles which are in the alternate segments of the... Euclid in Paragraphs: The Elements of Euclid: Containing the First Six Books ... - Page 69, by Euclid - 1845 - 199 pages. Full view
About this book
Education Ministry of - 1880 - 238 pages
...circle at right angles to each other. Show that they are sides of a square inscribed in the circle. 3. If a straight line touch a circle, and from the point of contact a straight line be drawn
cutting the circle, the angle which this line makes with the line touching the circle shall be equal...
Isaac Todhunter - Euclid's Elements - 1880 - 426 pages
...therefore FC is perpendicular to DE. Wherefore, if a straight line &c. QED PROPOSITION 19. THEOREM. If a straight line touch a circle, and from the point of contact a straight line be drawn at right angles to the touching line, the centre of the circle shall be in that line. Let...
Sandhurst roy. military coll - 1880 - 68 pages
...has to the fourth ? 8. The sides about the equal angles of equiangular triangles are proportional. If a straight line touch a circle, and from the point of contact two chords be drawn, and if
from the extremity of one of them a straight line be drawn parallel to...
Education, Higher - 1883 - 536 pages
...the square on the line between the points of section, is equal to the square on half the line. 11. If a straight line touch a circle, and from the point...line, the centre of the circle shall
be in that line. 12. If from a point without a given circle any two lines be drawn cutting the circle, determine a point...
Euclides - Euclid's Elements - 1881 - 236 pages
...extremities without producing it, by means of the first part of this proposition. PROP. XXXII. THEOREM. If a straight line touch a circle, and from the point of contact a straight line be drawn cutting the circle; the angles which this straight line makes with the tangent are equal to the angles...
Education, Higher - 1884 - 538 pages
...of a parallelogram is equal to the sum of the squares on the diagonals. 8. If a straight line touch a circle, and from the point of contact a straight line be drawn cutting the circle, the angles which this line makes with the line touching the circle shall be equal...
Education Ministry of - 1882 - 300 pages
...circle at right angles to each other. Show that they are sides of a square inscribed in the circle. 3, If a straight line touch a circle, and from the point of contact a straight line be drawn
cutting the circle, the angle which this line makes with the line touching the circle shall be equal...
College of preceptors - 1882 - 528 pages
...to the circumference, and prove that it is not equal to either of . these three straight lines. 7. If a straight line touch a circle, and from the point of contact a straight line be drawn
cutting the circle, the angles which this straight line makes with the line touching the circle shall...
Isaac Sharpless - Geometry - 1882 - 286 pages
...right angle; wherefore the other, ADC, is greater than a right angle. Proposition 22. Theorem. — If a straight line touch a circle, and from the point of contact a straight line be drawn
cutting the circle, the angles made by this line with the line which touches the circle, will be equal...
Mary W I. Shilleto - 1882 - 418 pages
...intersect in E, the angles subtended by AC and BD at the centre are together double of the angle AEG. 2. If a straight line touch a circle, and from the point of contact a straight line be
drawn cutting the circle, the angles made by this line with the line touching the circle must be equal to...
Local Lorentz boost in coordinate-independent notation
The Lorentz boost between two reference frames can be expressed as a (1,1)-tensor.
Recall the prototypical Lorentz boost on Minkowski spacetime, for motion along the x-axis (units with c = 1):
t′ = γ(t − vx),  x′ = γ(x − vt),  y′ = y,  z′ = z
This is a boost with speed v, where γ := (1 − v²)^(−1/2) is the Lorentz factor.
Consider two 4-velocity vectors u and u′, related by a local Lorentz transformation (the displayed boost expression is omitted in this copy). While the plus sign makes the above appear an inverse boost, this is only because vectors (as whole entities) transform inversely to coordinates. Rearranging:
ν(u′, u) = u′/γ − u,  with relative Lorentz factor γ := −u · u′
This is the relative velocity of the observer explained previously. Hence the relative velocity vectors are boosted into one another, aside from a minus sign (Jantzen+ 1992).
The "flat" symbol just means: lower an index. Equivalently, the boost may be written in terms of the initial observer and the boost velocity alone (expression omitted in this copy), in which case the relative speed may be obtained from
|ν| = √(1 − γ⁻²)
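The coordinate-independent quantities γ = −u·u′, ν = u′/γ − u and |ν| = √(1 − γ⁻²) are easy to check numerically in Minkowski spacetime. The sketch below is my own illustrative example, not taken from the post:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric, signature (-+++)

def four_velocity(v):
    # 4-velocity of an observer with 3-velocity v (units with c = 1)
    v = np.asarray(v, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - v @ v)
    return gamma * np.array([1.0, *v])

u  = four_velocity([0.0, 0.0, 0.0])   # static observer
up = four_velocity([0.6, 0.0, 0.0])   # observer moving at 0.6c along x

gamma_rel = -(u @ eta @ up)           # relative Lorentz factor gamma = -u . u'
nu = up / gamma_rel - u               # relative velocity vector nu(u', u)
speed = np.sqrt(1.0 - 1.0 / gamma_rel**2)
print(gamma_rel, speed)               # ~1.25 and ~0.6 for this example
```

As a consistency check, ν comes out purely spatial in the static observer's frame (its time component vanishes) and orthogonal to u with respect to the metric, exactly as a relative velocity should.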
Formulae are useful machines, allowing you to blithely turn the handle and crank out a result. This contrasts with my usual emphasis on conceptual understanding and drawing a picture (at least mentally). However, Lorentz boosts have many counter-intuitive or seemingly paradoxical effects, and it is easier to make a mistake if you reason from first principles alone. Of course the algebra does originate from careful thinking about foundations, and having multiple approaches is a check of consistency.
Boosts are paramount for comparing physical quantities between frames. Some textbooks present the general Lorentz boost in Minkowski spacetime with Cartesian coordinates. Our abstract vector formulation allows direct application to local boosts in arbitrary spacetimes, such as Kerr or FLRW, in any coordinate system. I don't remember seeing the formulae here in the literature, but someone should have done it somewhere. The Jantzen+ paper was an inspiration, and the same authors define various further quantities (projections, in fact) in Bini+ 1995.
FIRST SEMESTER, 2023
Computational Methods in Interdisciplinary Science
NOTE: This is a restricted book exam. You are allowed a single sheet of A4 paper with notes written
on it.
This exam has 16 questions, and it is worth 120 marks in total.
There are 4 sections.
Section A consists of 4 short answer questions worth 30 marks in total.
Section B consists of 5 short answer questions worth 20 marks in total.
Section C consists of 4 short answer questions worth 32 marks in total.
Section D consists of 3 short answer questions worth 38 marks in total.
Answer all questions
The exam is worth 55% of the final grade
COMPSCI 369
Section A: Computational Biology, Numerical Integration &
Game Theory
Computational Game Theory
1. In lectures we discussed David Chess’s paper ‘Simulating the evolution of behavior: the iterated
prisoners’ dilemma problem’. In this paper, Chess reported on four phases in his model: “The Era
of Exploitation,” “The Nadir,” “The Growth of Trust,” and “Equilibrium.”
(a) Describe each of the four phases and their relation to each other. [4 marks]
(b) Explain two reasons why it was necessary to use computational methods to study this model.
[3 marks]
Modelling Dynamical Systems
2. The following equation specifies a discrete-time dynamical system. In this equation, α is a parameter.
xₜ₊₁ = α min(xₜ, 1 − xₜ)
(a) When α < 1, there is a single fixed point. What is it? [1 mark]
(b) When α = 1, there are an infinite number of fixed points. What are they? [2 marks]
(c) What would be appropriate to use as labels for each axis of a bifurcation diagram of this
system? [2 marks]
(d) Write pseudocode for generating a bifurcation diagram for this system. [10 marks]
3. Briefly describe the Euler and Runge-Kutta methods for numerical integration and explain the
relationship between them. [4 marks]
4. Identify a situation where Euler integration would be perfectly accurate and explain why this is the
case. [4 marks]
COMPSCI 369
Section B: Sequence Alignment
5. The partially completed F matrix for calculating the local alignment of the sequences GCT and
TCCAT is given below. The score matrix is given by s(a, b) = −2 when a ≠ b and s(a, a) = 4.
The linear gap penalty is d = −3.
       T   C   C   A   T
G  0   0   0   0   0   0
C  0   0   4   4   1   u
T  0   4   1   v   w   x
(a) Complete the matrix by finding values for u, v, w and x and showing traceback pointers.
[4 marks]
(b) Give the score for the best local alignment of these two sequences and provide an alignment
that has this score. [3 marks]
6. What is the biological motivation for using an affine rather than a linear gap penalty? [2 marks]
7. Computationally, how can one efficiently perform alignment with an affine gap penalty and what
is the computational cost of doing so when compared to a linear gap? Use asymptotic notation as
part of your answer. [4 marks]
8. Describe the main barrier to finding an exact solution to the multiple alignment problem. Use
asymptotic notation as part of your answer. [2 marks]
9. Describe the main steps of the heuristic algorithm we discussed in lectures for solving the multiple
alignment problem, including the use of neutral characters. (You do not need to give precise
formulae for how the distances are calculated.) [5 marks]
COMPSCI 369
Section C: Simulation and HMMs
10. What does it mean for a sequence of random variables X0, X1, X2, . . . to have the Markov property?
Express your answer in plain English and in mathematical notation. [2 marks]
11. You are given a method choice(x,prob), where the arrays x and prob are of equal length,
and the sum of the elements of prob is 1. choice(x,prob) returns x[i] with probability prob[i].
Write a pseudo-code method simHMM(a,e,L,s) that takes as input a transition matrix a, an
emission matrix e, a length L and a start state s. It should return state and symbol sequences of
length L with the state sequence starting in state s. Use integers corresponding to array indices to
represent states and emissions. [6 marks]
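A possible realisation of simHMM in runnable Python; the list-of-lists form of a and e, and the explicit implementation of the assumed choice helper, are choices made for this sketch:

```python
import random

def choice(x, prob):
    """Return x[i] with probability prob[i] (the helper assumed by the question)."""
    r, cum = random.random(), 0.0
    for xi, pi in zip(x, prob):
        cum += pi
        if r < cum:
            return xi
    return x[-1]  # guard against floating-point round-off

def simHMM(a, e, L, s):
    """Simulate an HMM where a[i][j] = P(next state j | state i) and
    e[i][k] = P(symbol k | state i).  Returns (states, symbols), each of
    length L, with the state sequence starting in state s."""
    n_states, n_symbols = len(a), len(e[0])
    states = [s]
    symbols = [choice(range(n_symbols), e[s])]
    for _ in range(L - 1):
        nxt = choice(range(n_states), a[states[-1]])      # transition step
        states.append(nxt)
        symbols.append(choice(range(n_symbols), e[nxt]))  # emission step
    return states, symbols
```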
12. Given the method choice(x,prob) as defined in Question 11, write a pseudo-code method
randwalk(k) that simulates a random walk of length k starting at 0 where steps of -1 and +1
are equally likely. Assume the argument k is a positive integer. Your method should return an
array of length k where walk[i] is the position of the random walk after i steps. Show how you
can use this method to estimate the probability that the position of a random walker after 50 steps
is more than 10 steps from its starting point. [5 marks]
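A matching sketch for Question 12, again in runnable Python; the indexing convention (walk[i] is the position after i + 1 steps), the trial count, and the choice implementation are illustrative assumptions:

```python
import random

def choice(x, prob):
    """Return x[i] with probability prob[i], as defined in Question 11."""
    r, cum = random.random(), 0.0
    for xi, pi in zip(x, prob):
        cum += pi
        if r < cum:
            return xi
    return x[-1]

def randwalk(k):
    """Random walk of length k starting at 0, with steps of -1 and +1
    equally likely; walk[i] holds the position after i + 1 steps."""
    walk, pos = [], 0
    for _ in range(k):
        pos += choice([-1, 1], [0.5, 0.5])
        walk.append(pos)
    return walk

# Monte Carlo estimate of P(|position after 50 steps| > 10)
trials = 10_000
hits = sum(1 for _ in range(trials) if abs(randwalk(50)[-1]) > 10)
estimate = hits / trials   # roughly 0.12 for a simple +/-1 walk
```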
13. Consider an HMM with states A, B, C each of which emit symbols Q, R, S, T. The transitions are
given by the following table which has omitted the transition probabilities into state C.
The model starts in state A 60% of the time, state C 40% of the time and never in state B.
The emission probabilities for the model are given by the following table.
Q R S T
A 0.4 0.2 0.15 0.15
B 0.2 0.6 0.1 0.1
C 0.05 0.2 0.2 0.55
(a) Write down the values of the missing elements in the transition matrix. [2 marks]
(b) Sketch a diagram of the HMM, showing all states, possible transitions and transition probabilities.
Include the begin state but no end state. Do not include emission probabilities in the
diagram. [3 marks]
(c) Explain why the length of a run of Bs in a state sequence follows a geometric distribution and
give the length of an average run of Bs. [3 marks]
(d) What is the joint probability P(x, π) of the state sequence π = ABB and the symbol sequence
x = QTR? Leave your answer as a product or sum of numbers. [3 marks]
(e) Complete the entries i, j and k in the forward matrix below using the recursion
f_k(i+1) = e_k(x_{i+1}) Σ_l f_l(i) a_{lk}. Remember to show your working.
0 Q T
A 0 0.24 k
B 0 i
C 0 j
[5 marks]
(f) The forward algorithm is used to calculate P(x). When π = ABB and x = QRR, is P(x)
greater than, less than, or equal to P(x, π)? Justify your answer. [3 marks]
Section D: Trees
14. Let the symmetric matrix
specify the pairwise distances, Dij , between the four sequences x1, . . . , x4.
(a) Construct a UPGMA tree from D showing your working. [5 marks]
(b) Will UPGMA or neighbour-joining (or both or neither) reconstruct the correct tree in this
case? Explain your answer. [2 marks]
(c) Describe when you would use neighbour-joining and when you would use UPGMA. [3 marks]
15. Consider the four aligned sequences, W,X,Y, and Z:
W: CCGTT
X: GCAAT
Y: CCATT
Z: GAGAT
(a) Explain what parsimony informative means, and identify the parsimony informative sites in
the alignment. [2 marks]
(b) By calculating the parsimony score for each possible tree topology for these four taxa, find
the maximum parsimony tree. [5 marks]
(c) Demonstrate (for example, on a single branch in one of your trees) how ancestral reconstructions
can be used to estimate branch length on the maximum parsimony tree. [4 marks]
(d) Describe two significant drawbacks of the parsimony method. [3 marks]
16. (a) Why do we rely on heuristic methods to find a maximum likelihood tree? Describe one such
heuristic and explain whether this heuristic will typically find the tree that maximises the
likelihood. [4 marks]
(b) Given mutation rate parameter µ and normalised rate matrix Q, how do you calculate the
probability that a C mutates to a T along a lineage of length t = 3? (Recall we denote, for
example, the (A, A)th entry of a matrix B by BAA.) [3 marks]
(c) Let X and Y be sequences of length L. How can you use the calculation in part (b) to
calculate the probability that X mutates into Y over a lineage of length t = 3? Explain any
assumptions you are making. [2 marks]
(d) In order to efficiently calculate the likelihood of the tree, what assumption do we make about
the mutation process on different lineages? [2 marks]
(e) In parsimony and distance based methods, sites that are constant across all sequences are
not informative about the tree. Explain whether or not the same applies to likelihood based
methods. [3 marks]
In the physical sciences, relaxation usually means the return of a perturbed system into equilibrium. Each relaxation process can be characterized by a relaxation time τ. The simplest theoretical description of relaxation as a function of time t is an exponential law exp(−t/τ) (exponential decay).
In simple linear systems
Mechanics: Damped unforced oscillator
Consider the homogeneous differential equation
$m\frac{d^{2}y}{dt^{2}} + \gamma\frac{dy}{dt} + ky = 0,$
which models damped unforced oscillations of a weight on a spring.
The displacement will then be of the form $y(t) = Ae^{-t/T}\cos(\mu t - \delta)$. The constant $T = 2m/\gamma$ is called the relaxation time of the system and the
constant $\mu$ is the quasi-frequency.
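As a quick numeric illustration of these constants (the mass, damping, and spring values below are invented for the example):

```python
import math

# Illustrative values, not from the text: m = 0.5 kg, gamma = 0.2 kg/s, k = 8 N/m
m, gamma, k = 0.5, 0.2, 8.0

T = 2 * m / gamma                         # relaxation time of the envelope A e^{-t/T}
omega0 = math.sqrt(k / m)                 # undamped angular frequency sqrt(k/m)
mu = math.sqrt(omega0 ** 2 - 1 / T ** 2)  # quasi-frequency of the damped motion

# after one relaxation time the amplitude envelope has decayed by a factor e
envelope_ratio = math.exp(-T / T)         # = 1/e, about 0.368
```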
Electronics: RC circuit
In an RC circuit containing a charged capacitor and a resistor, the voltage decays exponentially:
$V(t) = V_{0}e^{-t/RC}.$
The constant $\tau = RC$ is called the relaxation time or RC time constant of the circuit. A nonlinear oscillator circuit which generates a repeating waveform by the repetitive
discharge of a capacitor through a resistance is called a relaxation oscillator.
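A minimal numeric sketch of the time constant (component values invented for the example):

```python
import math

# Illustrative components: R = 10 kilo-ohm, C = 100 microfarad  ->  tau = RC = 1 s
R, C = 10e3, 100e-6
tau = R * C

V0 = 5.0                                  # initial capacitor voltage, volts

def V(t):
    """V(t) = V0 * exp(-t / RC), the exponential decay above."""
    return V0 * math.exp(-t / tau)

v_at_tau = V(tau)       # ~36.8% of V0 remains after one time constant
v_at_5tau = V(5 * tau)  # well under 1% of V0 remains after five time constants
```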
In condensed matter physics
In condensed matter physics, relaxation is usually studied as a linear response to a small external perturbation. Since the underlying microscopic processes are active even in the absence of external
perturbations, one can also study "relaxation in equilibrium" instead of the usual "relaxation into equilibrium" (see fluctuation-dissipation theorem).
Stress relaxation
In continuum mechanics, stress relaxation is the gradual disappearance of stresses from a viscoelastic medium after it has been deformed.
Dielectric relaxation time
In dielectric materials, the dielectric polarization P depends on the electric field E. If E changes, P(t) reacts: the polarization relaxes towards a new equilibrium, i.e., the surface charges
equalize. It is important in dielectric spectroscopy. Very long relaxation times are responsible for dielectric absorption.
The dielectric relaxation time is closely related to the electrical conductivity. In a semiconductor it is a measure of how long it takes for excess charge to become neutralized by the conduction process. This relaxation time is small in metals and can be large in semiconductors and insulators.
Liquids and amorphous solids
An amorphous solid, such as amorphous indomethacin, displays a temperature dependence of molecular motion, which can be quantified as the average relaxation time for the solid in a metastable supercooled liquid or glass to approach the molecular motion characteristic of a crystal. Differential scanning calorimetry can be used to quantify enthalpy change due to molecular structural relaxation.
The term "structural relaxation" was introduced in the scientific literature in 1947/48 without any explanation, applied to NMR, and meaning the same as "thermal relaxation".^[1]^[2]^[3]
Spin relaxation in NMR
In nuclear magnetic resonance (NMR), various relaxation times are among the principal properties that it measures.
Chemical relaxation methods
In chemical kinetics, relaxation methods are used for the measurement of very fast reaction rates. A system initially at equilibrium is perturbed by a rapid change in a parameter such as the
temperature (most commonly), the pressure, the electric field or the pH of the solvent. The return to equilibrium is then observed, usually by spectroscopic means, and the relaxation time measured.
In combination with the chemical equilibrium constant of the system, this enables the determination of the rate constants for the forward and reverse reactions.^[4]
Monomolecular first-order reversible reaction
A monomolecular, first-order reversible reaction which is close to equilibrium can be visualized by the following symbolic structure:
$\mathrm{A} \underset{k'}{\overset{k}{\rightleftharpoons}} \mathrm{B}$
In other words, reactant A and product B are forming into one another based on reaction rate constants k and k'.
To solve for the concentration of A, recognize that the forward reaction (A → B, with rate constant k) causes the concentration of A to decrease over time, whereas the reverse reaction (B → A, with rate constant k') causes it to increase over time.
Therefore $\frac{d[\mathrm{A}]}{dt} = -k[\mathrm{A}] + k'[\mathrm{B}]$, where brackets around A and B indicate concentrations.
If we say that at $t = 0$, $[\mathrm{A}](t) = [\mathrm{A}]_0$, and apply the law of conservation of mass, then at any time the sum of the concentrations of A and B must equal $[\mathrm{A}]_0$, assuming the volume into which A and B are dissolved does not change:
$[\mathrm{A}] + [\mathrm{B}] = [\mathrm{A}]_0 \Rightarrow [\mathrm{B}] = [\mathrm{A}]_0 - [\mathrm{A}]$
Substituting this value for [B] in terms of $[\mathrm{A}]_0$ and $[\mathrm{A}](t)$ yields
$\frac{d[\mathrm{A}]}{dt} = -k[\mathrm{A}] + k'([\mathrm{A}]_0 - [\mathrm{A}]) = -(k+k')[\mathrm{A}] + k'[\mathrm{A}]_0,$
which becomes the separable differential equation
$\frac{d[\mathrm{A}]}{-(k+k')[\mathrm{A}] + k'[\mathrm{A}]_0} = dt$
This equation can be solved by substitution to yield
$[\mathrm{A}] = \frac{k' + k e^{-(k+k')t}}{k+k'}[\mathrm{A}]_0$
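A numerical cross-check of this kinetics, with invented rate constants: forward-Euler integration of the rate equation should track the closed-form solution, relax with time constant 1/(k + k'), and approach the equilibrium concentration k'[A]_0/(k + k'):

```python
import math

# Illustrative rate constants and initial concentration (assumed for the example)
k, kp, A0 = 2.0, 1.0, 1.0           # kp stands for k'

def A_exact(t):
    """Closed-form solution [A](t) = (k' + k e^{-(k+k')t}) / (k + k') * [A]_0,
    which satisfies [A](0) = [A]_0."""
    return (kp + k * math.exp(-(k + kp) * t)) / (k + kp) * A0

# forward-Euler integration of d[A]/dt = -(k + k')[A] + k'[A]_0 out to t = 2 s
A, dt = A0, 1e-4
for _ in range(int(2.0 / dt)):
    A += dt * (-(k + kp) * A + kp * A0)

relaxation_time = 1.0 / (k + kp)    # observed relaxation time 1/(k + k')
equilibrium = kp * A0 / (k + kp)    # long-time limit k'[A]_0 / (k + k')
```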
In atmospheric sciences
Desaturation of clouds
Consider a supersaturated portion of a cloud. Then shut off the updrafts, entrainment, and any other vapor sources and sinks, and anything else that would induce growth of the particles (ice or water). Then wait for this supersaturation to reduce to mere saturation (relative humidity = 100%), which is the equilibrium state. The time it takes for the supersaturation to dissipate is called the relaxation time. It dissipates as ice crystals or liquid water content grow within the cloud and consume the available moisture. The dynamics of relaxation are very important in cloud physics for accurate mathematical modelling.
In water clouds where the concentrations are larger (hundreds per cm^3) and the temperatures are warmer (thus allowing for much lower supersaturation rates as compared to ice clouds), the relaxation
times will be very low (seconds to minutes).^[5]
In ice clouds the concentrations are lower (just a few per liter) and the temperatures are colder (very high supersaturation rates) and so the relaxation times can be as long as several hours.
Relaxation time is given as
T = (4π DNRK)^−1 seconds, where
• D = diffusion coefficient [m^2/s]
• N = concentration (of ice crystals or water droplets) [m^−3]
• R = mean radius of particles [m]
• K = capacitance [unitless].
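Plugging representative values into T = (4π DNRK)^−1 reproduces the stated orders of magnitude; the diffusivity, concentrations, and radius below are assumed for illustration, not taken from the text:

```python
import math

def cloud_relaxation_time(D, N, R, K=1.0):
    """T = 1 / (4 * pi * D * N * R * K), in seconds."""
    return 1.0 / (4 * math.pi * D * N * R * K)

# Warm water cloud: vapour diffusivity ~2.5e-5 m^2/s, ~100 droplets per cm^3
# (1e8 m^-3), mean droplet radius ~10 micrometres
T_water = cloud_relaxation_time(D=2.5e-5, N=1e8, R=10e-6)   # a few seconds

# Ice cloud: far fewer crystals, say 5 per litre (5e3 m^-3), similar radius
T_ice = cloud_relaxation_time(D=2.5e-5, N=5e3, R=10e-6)     # many hours
```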
In astronomy
In astronomy, relaxation time relates to clusters of gravitationally interacting bodies, for instance, stars in a galaxy. The relaxation time is a measure of the time it takes for one object in the
system (the "test star") to be significantly perturbed by other objects in the system (the "field stars"). It is most commonly defined as the time for the test star's velocity to change by of order itself.
Suppose that the test star has velocity v. As the star moves along its orbit, its motion will be randomly perturbed by the gravitational field of nearby stars. The relaxation time can be shown to be
$T_{r} = \dfrac{0.34\,\sigma^{3}}{G^{2} m \rho \ln\Lambda} \approx 0.95\times 10^{10} \left(\dfrac{\sigma}{200\ \mathrm{km\,s^{-1}}}\right)^{3} \left(\dfrac{\rho}{10^{6}\,M_{\odot}\,\mathrm{pc}^{-3}}\right)^{-1} \left(\dfrac{m}{M_{\odot}}\right)^{-1} \left(\dfrac{\ln\Lambda}{15}\right)^{-1} \mathrm{yr}$
where ρ is the mean density, m is the test-star mass, σ is the 1d velocity dispersion of the field stars, and ln Λ is the Coulomb logarithm.
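The scaling form can be evaluated directly; the fiducial arguments below simply recover the prefactor, and the cluster numbers are an invented example:

```python
def relaxation_time_yr(sigma_kms, rho_msun_pc3, m_msun, lnL=15.0):
    """Two-body relaxation time in years, from the scaling form above."""
    return (0.95e10
            * (sigma_kms / 200.0) ** 3
            * (rho_msun_pc3 / 1e6) ** -1
            * (m_msun / 1.0) ** -1
            * (lnL / 15.0) ** -1)

# fiducial inputs recover the prefactor, about 9.5 Gyr
T_fiducial = relaxation_time_yr(200.0, 1e6, 1.0)

# a small, low-dispersion cluster of solar-mass stars (invented numbers)
T_cluster = relaxation_time_yr(10.0, 1e4, 1.0)
```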
Various events occur on timescales relating to the relaxation time, including core collapse, energy equipartition, and formation of a Bahcall-Wolf cusp around a supermassive black hole.
EchoVector Pivot Points
EchoVector Theory and EchoVector Analysis are a price pattern impact theory and a technical analysis methodology and approach postulated, created, and invented by Kevin John Bradford Wilbur.
EchoVector Analysis is also presented as a behavioral economic application and securities analysis tool in price pattern theory and in price pattern behavior, study, and forecasting.
EchoVector Pivot Points are a further technical analysis tool and application within EchoVector Analysis, derived from EchoVector Theory in practice.
EchoVector Theory and EchoVector Analysis assert that a security's prior price patterns may influence its present and future price patterns. Present and future price patterns may then, in part, be considered as 'echoing' these prior price patterns to some identifiable and measurable degree.
EchoVector Theory and EchoVector Analysis also assert that these influences may be observable, identifiable, and measurable in price pattern behavior and price pattern history, and potentially
observable in future price pattern formation, and potentially efficacious in future price pattern forecasting, to some measure or degree.
EchoVector Analysis is also used to forecast and project potential price pivot points (referred to as PPPs, potential pivot points, or EVPPs, EchoVector Pivot Points) and active, past and future coordinate forecast echovector support and resistance echovectors (SREVs) for a security from a starting reference price at a starting reference time, based on the security's prior price pattern within a given, significant, and definable cyclical time frame.
EchoVector Pivot Points and EchoVector Support and Resistance Vectors are fundamental components of EchoVector Analysis. EchoVector SREVs are constructed from key components of the EchoVector Pivot Point calculation. EchoVector SREVs are defined, calculated, and also referred to as Coordinate Forecast EchoVectors (CFEVs) relative to the initial EchoVector (XEV) calculation and construction, where X designates not only the time length of the EchoVector XEV but also the time length of XEV's CFEVs. The EchoVector Pivot Points are found as the endpoints of XEV's CFEVs.
The EchoVector Pivot Point Calculation is a fundamentally different and more advanced calculation than the traditional pivot point calculation.
The EchoVector Pivot Point Calculation differs from traditional pivot point calculation by reflecting this given and specified cyclical price pattern length and reference, and its significance and
information, within the pivot point calculation. This cyclical price pattern and reference is included in the calculations and constructions of the echovector and its respective coordinate forecast
echovectors, as well as in the calculation of the related echovector pivot points.
While a traditional pivot point calculation may use simple price averages of prior price highs, lows and closes indifferent to their sequence in time to calculate its set of support and resistance
levels, the echovector pivot point calculation begins with any starting time and price point and respective cyclical time frame reference X, and then identifies the corresponding
“Echo-Back-Time-Point” within this cyclical time frame reference coordinate to the starting reference price and time point A. It then calculates the echovector (XEV) generated by the starting
reference time/price point and the echo-back-time-point, and includes the pre-determined and pre-defined accompanying constellation of “Coordinate Forecast EchoVector” origins derived from the prior
price pattern evidenced around the echo-back-time-point within a certain pre-selected and specified range (time and/or price version) that occurred within the particular referenced cyclical
time-frame and period X. Security I's EchoVector Pivot Point constructions then calculate and project the scope relative echovector pivot points that follow A, and the support and resistance levels
determined by the ensuing coordinate forecast echovectors and their selected range definition inclusion (fully differentiating the time-sequence of the origins), the cyclical time-frame X, and XEV's slope.
EchoVector Pivot Point Price Projection (EV-PPPPs) vectors are therefore advanced and fluid calculations of projected coordinate support and resistance levels and pivot points that follow the
starting reference point time and price point A (SRP-TPP) of the focus EchoVector (X-EV of Cycle Time Period Length X), levels and pivot points which are derived from ascending, descending and/or
lateral support and resistance Coordinate Forecast EchoVectors (CFEVs) calculated and emanating from particular 'scale and range defined' nearby EchoBackPeriod time and price points (NPP-TPPs)
reflecting the time and price points at pivoting action that followed the focus EchoVector's EchoBackDate TimeAndPricePoint (EBD-TPP) within, and relative to, the focus EchoVector's starting
reference time and price point, SRP-TPP, and the EchoVector's given and specified cyclically-based time period X, and the EchoVector's related slope momentum measure (X-EV's incline). This enables
EchoVector Pivot Points to be constructed from any relevantly identified time cycle period length, or the aggregation of any set of time cycle period lengths. Intraday, Daily, Weekly, Monthly,
Quarterly, Bi-Quarterly, Annual, Congressional Cycle, Presidential Cycle, Senatorial Cycle, etc., and in accordance with the economic calendar, earnings calendar, political economic calendar, or any
statistically significant 'calendar' cycle periodicity evident and recursive in backdata mining.
The EchoVector Support and Resistance Vectors, referred to as the Coordinate Forecast Echovectors, are used to generate the EchoVector Pivot Points.
The coordinate forecast echovectors originate in a predefined range C of price pivots O's that occurred proximate to the echo-back-time-point B within the given cyclical time frame X of the starting
reference price A.
The coordinate forecast echovectors reflect the price momentum relationship of the starting reference point price and the echo-back-time-point in their calculation and in the calculation of the
echovector pivot points.
In these respects, an echovector pivot point may be considered a price level of particular significance in the technical analysis of a security or financial market, usable by traders as a forecasting indicator of a security's (or futures market's) time- and price-vector-influenced cyclical price movements.
EchoVector pivot points and their related support and resistance levels also have the calculation advantage of being able to be calculated on the basis of short-term, intermediate-term, or
longer-term time frames within varied cyclical price references and 'elected or posited echo-characteristic based' time-spans.
Defining the EchoVector (EV) of Time Length X, for price/time point A (at market trade print price p and at time point t) of security I (investment), with Echo-Back-Time-Point (A, t−X, p−N).
The EchoVector "X-EV" of Security (Investment) "I" Measured from Market
Trade Time-Point/Print-Price Point, Starting Point, "A"
Definition: The EchoVector
"For any base security I at price/time point A, A having real market transaction and exchange-recorded print price p at exchange-of-record print time t, the EchoVector XEV of security I, of time length (cycle length) X, and with ending time/price point A is designated and described as (I, Apt, XEV); EchoVector XEV's end point is (I, Apt) and EchoVector XEV's starting point is (I, Ap−N, t−X), where N is the found exchange-recorded print price difference between A and the Echo-Back-Time-Point of A, (A, p−N, t−X), of Echo-Back-Time-Length X (being Echo-Cycle Length X).
(A, p−N, t−X) shall be called B (or B of I), being the EBDTPP (Echo-Back-Date-Time-And-Price-Point), EBD (Echo-Back-Date), or EBTP (Echo-Back-Time-Point) of A of I.
N = the difference between p at A and p at B (B is the echo-back price-point and time-point of A, found at (A, p−N, t−X)).
And security I (I, Apt, XEV) shall have an echo-back-time-point (EBTP) of At−X (or I-A-EBTP of At−X; or echo-back-date (EBD) I-A-EBD of At−X): t is often displayed on a chart measured and referenced in discrete measurement units d (often OHLC or candlestick bars or blocks), such as minute, 5-minute, 15-minute, 30-minute, hourly, 2-hour, 4-hour, 6-hour, 8-hour, daily, weekly, etc."
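Reduced to code under the definitions above (every name here is illustrative and not part of the published methodology), the construction might be sketched as:

```python
def echovector(prices, t, X):
    """Echovector XEV for starting reference point A = (t, prices[t]): it runs
    from the echo-back-time-point B = (t - X, prices[t - X]) to A.  N is the
    print-price difference p(A) - p(B); N/X is the slope (momentum) measure."""
    p_a, p_b = prices[t], prices[t - X]
    N = p_a - p_b
    return {"A": (t, p_a), "B": (t - X, p_b), "N": N, "slope": N / X}

def coordinate_forecast_pivot(prices, t, X, offset):
    """A coordinate forecast echovector (CFEV) starts at a pivot O near B --
    here taken 'offset' time units after B -- and shares XEV's slope and
    length; its endpoint is a projected echovector pivot point after A."""
    ev = echovector(prices, t, X)
    o_time = (t - X) + offset
    o_price = prices[o_time]
    return (o_time + X, o_price + ev["slope"] * X)   # projected (time, price)

# toy series: a cycle of period 5 superimposed on a steady upward drift
prices = [100 + 0.5 * i + (i % 5) for i in range(20)]
ev = echovector(prices, t=15, X=5)                   # N = 2.5, slope = 0.5
pivot = coordinate_forecast_pivot(prices, t=15, X=5, offset=2)
```

For this perfectly periodic-plus-drift toy series the projected pivot (17, 110.5) lands exactly on the later price, which is the idealized behaviour the methodology posits for real, noisier series.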
Significant "Non-Intraday Single-Exchange" EchoVector Time Period Lengths, or EchoVector Echo-Cycle Period lengths, occur as follows (X is the time or cycle length of the EchoVector Period, the EP):
Asian Market Echo
European Market Echo
24-Hour Echo
2-Day Echo
3-Day Echo
Weekly Echo
Bi-Weekly Echo
Tri-Weekly Cycle Echo
Monthly Echo, 4-Weeks
Monthly Echo, 5-Weeks
Bi-Monthly Echo
Quarterly Echo
Bi-Quarterly Echo
3-Quarter Echo
Annual Echo
5-Quarters Cycle Echo, 15-Month Echo
6-Quarters cycle Echo, 18-Month Echo
7-Quarters cycle Echo, 21-Month Echo
2-Year Congressional Cycle Echo
4-Year Presidential Cycle and Mid-Regime Change Cycle Echo
5-Year US FRB Cycle Echo
6-Year (Tri-Congressional Cycle) Echo
8-Year Regime Change Cycle Echo
10-Year Decennial Cycle Echo
16-Year Maturity Cycle Echo (Bi-Regime Change Cycle Echo)
32-Year bi-maturity cycle echo (Quad-Regime Change Cycle Echo)
Shorter, Intra-day market EchoVectors are readily calculable, as well
are longer-term study EchoVectors.
Calculating the Coordinate Forecast EchoVector and the Coordinated
Potential Pivot Price-point and the Coordinated Potential Pivot Time-point
Definition: The Coordinate Forecast EchoVector: (CFEV)
For any EV of X base length (time frame, or time length, or cycle
period), such as lengths intra-day, daily, 3-day, weekly, bi-weekly,
monthly, bi-monthly, quarterly, bi-quarterly, annual, bi-annual,
presidential cycle, or regime change cycle, (and with lengths measured
in incremental -often OHLC- units such as minutes, hours, days, weeks,
or, shorter or longer unit) constructed for security I from starting EV
reference point A with print price point p at time point t (Apt) and
EBTP or EBD A-Length X at print price p (t-X), Coordinate Forecast
EchoVectors (CFEVs) can be constructed having the same slope and lengths
(phase adjusted) as XEV of A of I.
Coordinate Forecast EchoVectors and their starting points shall have the
following relationships to A of I and B of I, or EchoVector XEV-BA of I.
Base Construction Versions For the CFEV:
A. Time-Frame Distance Versions
B. Price Distance Versions
C. Model PreSet Value Distance Versions
Versions: 1,2,3,4,5,6
A. CFEV Starting Point Construction: Time-Frame Distance Based Versions
1. Absolute Time Distance Based (Within Select Range) From B, Selected
2. Percent X Time Distance Based (Within Select Range) From B, Selected
B. CFEV Starting Point Construction: Price Distant Based Versions
3. Absolute Price Distance Based (Within Select Focus Interest Range,
Extension) From B, Selected
4. Percent Price Distance Based (Within Select Range Focus Interest)
From B, Selected
C. PreSet Value Distance Based CFEV Starting Point Construction
5. PreSet Absolute Time Distance Based (Range Specified)
6. Preset Percent X Time Distance Based (Range Specified)
*The EchoVectoVEST MDPP Precision Pivots Forecast Model And Alert
Paradigm primarily utilizes specifically defined values of 5. and 6. above
in its CFEV Constellation Illustrations and Highlights for specified
focus interest opportunity securities and their projected forward focus
interest opportunity time-periods and price points. Example CFEV
constructions that follow will utilize these constructions unless noted otherwise.
The CFEVs found relevant and proximate to BA on the basis of one of the
above 6 relationships shall be called EchoVector XEV BA's CFEV
Constellation, Base 1, 2, 3, 4, 5, or 6.
The CFEV PPPs correlate to A, B, and O, and are referred to as Ps or PPs (pivot points or potential pivots), PPPs (projected pivot points, potential pivot points, or price pivot points), or flex points.
They are forecasted 'focus interest' points coordinate to A, B and O support, resistance, or flex points, derived from the CFEVs found originating within Range C from the 'scope relevant' pivots occurring there. These 'EchoVector and Coordinate Forecast EchoVector-based Pivot Points' constitute trade position opportunities, referred to as Focus Interest Opportunity Points (or Pivots), FIOPs, or simply as OPs (opportunities or options), POPs (potential opportunities or options), or PPPOPs (potential pivot point opportunities or options).
Projecting The SRV PPs Of A of I from B to A to EVBA to O to CFEV-OPP, (Given V {Version}, Range C, and EV-TimeLength X, XEV)
EV-TLX (EchoVector Time Length X, XEV)
V = Version N (Selected)
RC = Range C (Selected)
Forward PP (Derived from O occurring in Construction Version N Range C After B)
Back PP (Derived from O occurring in Construction Version N Range C Before B)
Up PP (Derived from O occurring in Construction Version N Range C Above B)
Down PP (Derived from O occurring in Construction Version N Range C Below B)
The FIOP CFEV price origins O will occur above or below the EBTP B.
As any CFEV origin, O, occurring within Range C, occurs above or below EBTP B, so therefore will the correlate PP of CFEV O-PP relating to XEV occur above or below A.
As all FIOP CFEV origin prices occur above and/or below B, all PPPs of CFEV POPP will correspondingly occur above and/or below A.
PPPs will be designated S1, S2, S3 and R1, R2, R3 as they occur in extension O, depending on the applied V and Range C, the resulting time distance and/or price distance given by V, and the directional pivots of record and relative scale reference occurring subsequent to O (up-wave, down-wave, sideways, flex-point).
S = Support
R = Resistance
Defining, Calculating, Constructing and Generating the Coordinate
Forecast EchoVectors
EBD is the Echo-BackDate
EBD-T is the Echo-BackDateTime
EBTP is the Echo-Back-Time-Point with print price found to be p-N.
EBD-TI is the Echo-BackDateTimeIncrement, or EBDT-ITMU incremental time
measurement unit, or imu or mu, or d for designated unit (EBDT-MU, or
EBDT-U, or EBDT-d)
(EX.: minute, 5min, 15min, hourly, 2-hour, 4-hour, daily, weekly, monthly, etc.)
EBTP is also often referred to as the XEV EBPP, or the Echo-Back-Price-Point
of XEV
L is length of the Echo-Period (EP); or X, being the time-length and
cycle length of the EchoVector EV. XEV or LEV
Primary Model Focus Interest EP or Echo-Period Lengths, or EchoVector
Time Frame or Cycle Time Period Lengths, X
AMEV Asian Market Echo (t, open, lunchtime, close, etc.)
EMEV European Market Echo
24HEV 1day, 24hour Echo
48HEV 2day Echo
72HEV 3day Echo
WEV weekly Echo
2WEV biweekly Echo
MEV monthly Echo
2MEV bimonthly Echo
QEV quarterly Echo
2QEV biquarterly Echo
3QEV 3quarter Echo
AEV annual Echo
15MEV OR 5QEV, ETC 15month Echo
6QEV ETC 18month Echo
2AEC OR CCEV 2year Congressional Cycle Echo
4AEV OR PCEV 4year Presidential Cycle and mid-Regime Cycle Echo
5AEV OR FRBEV 5year FRB Cycle Echo
6AEV OR 3CCEV 6year (Tri-Congressional Cycle) Echo
RCCEV OR 2PCEV OR 4CCEV ETC 8year Regime Change Cycle Echo
10year Decennial Cycle Echo
16year Bi-Regime Change Cycle Echo (Nascent Echo)
32year Quad-Regime Change Cycle Echo (Maturity Echo)
The Starting Date, time and price point of the EV: the Stp, or Apt, or A, or SDTPP-A, or EV origin point O.
A is the Sdtp, the summation and reference origin date, time and price point (starting time and price point of reference for XEV BA).
The Sdtp is an actual 'market price trade point print,' being an exchange-actual buy/sell trade occurrence 'print' posted as taking place at the specific referenced time and price.
XEV BA is the "Echo" vector from ebtp (EchoBackTimePoint B) to sdtp
(Summation-Date-Time PricePoint).
A is considered as having an 'echo characteristic relationship' to B, the EBTP, in some definable or designatable form, manner, or degree, a 'parameter relationship contiguity' on some dimension.
The notion of 'similar' or 'contiguous' on some measure, parameter,
index, dimension, aspect or reference is implied and/or is axiomatic.
Likeness in quality, time, space, context, origin, or relation... to
some degree or measure is axiomatic to the methodology.
FIO EV: focus interest opportunity echovector
The FIOP (focus or forecast interest opportunity point (or period, or pivot). It is derived from the S-dtp, that is, from A.
From the constellation of 'scope relative' prior and following pivot point highs and lows occurring within Range C (elect Version 1 to 6), the applied CFEV Constellation Range, the CFEV or PPV EBTP
origins are derived.
In some citations Delta EV BA is "delta'd" to EchoVector EV AB and is referenced to have A as the echovector time and price point 'antecedent,' (the echo-back-date and/or echo-back-time-point and/or
echo-back-date-time-point) and B as XEV's Construction beginning or starting price and time point reference, or starting dtpS, B; that is, as the EchoVector's starting price and starting time point
of reference, being the EV's origin.
Also definition-ally derived from XEV AB are the CFEVs, the Coordinate Forecast EchoVectors, also having EchoVector AB length X and EchoVector AB slope m (for momentum, or slope w for wave), and
having Echo-Back-Date-Time-And-Price-Point A as the "CFEV Constellation Base Point," or "Origin," from which the Key Immediate Coordinate Forecast EchoVectors, the Focus Interest Opportunity PPVs
(Potential Pivot Vectors), or SRV Constellation (Support/Resistance, or Flex-point Vector Constellation Projections) are, by specified relation (Versions 1 to 6), derived.
In EchoVector AB, A is the Echo-Back-Date-Time-And-Price-Point, and B is the starting reference, and the echovector 'originating' time and price point, or its 'origin.'
From the constellation of 'scope relative' prior and following pivot point highs and lows occurring within Range C (elect Version 1 to 6), the applied CFEV Constellation Range, the CFEV or PPV EBTP
origins are derived. These origins are referred to as O... OSP1, OSP2, OSP3, ORP1, ORP2, ORP3, etc.
These CFEV origins O are identified as the scope-relative wave point pivot price point highs and lows occurring before and after THE ECHOBACKDATETIMEANDPRICEPOINT B in XEV BA occurring within Range C
(Range C is, again, specified by specific time length distance backward and forward from B by percentage of X from B, or by (within) absolute time-length distance from B, or by (within) percent of B
price distance from B, or by (within) absolute price length distance (N) from B; or by percent N from B, or by stochastically-related 'Modeling process determined' (and then preset in distance
--absolute or percent-- from B designated; B being the EBDTPP of A.
These Range C subsumed wave point pivot highs and lows constitute the CFEV Focus Interest Origin Points (FIOPs O), from which the relative CFEV are derived, and from which the Focus Interest
Opportunity SRV PP Constellation Set is derived (FIOP PPPs). These CFEV origin points, O's, relate to the EBTP of A, being B, and to (occuring within) the Base Construction Version and Range C of the
CFEV, and the Range and Version's mathematical definition.
From these CFEV origin points (correlated to B by V), and from EV-AB attributes, being of the samelength and slope, the CFEVs are derived, as are the CFEV PPPs, the FIOP PPs.
The CFEV PPPs correlate to A, B, and O, and are referred to as Ps or PP's
--pivot points or potential pivots (or PPPs --projected pivot points or
potential pivot points), or flex points.
They are 'forecast-ed' and 'focus interest' coordinate to A, B and O
support, resistance, or flex points, derived from the CFEV's found
originating within Range C from the 'scope relevant' pivots occuring
there. These EchoVector and their coordinate forecast EchoVector based
Pivot Points constitute trade position opportunities, referred to as
Focus Interest Opportunity Points (or Pivots), FIOPs, or simply as OPs
(opportunities or options) or POPs (potential opportunities or options).
Projecting The SRV PPs Of A of I from A to B to EVAB to O to CFEV-OPP,
(Given V, Range C, and EVTLX, XEV)
EVTL (EchoVector Time Length X)
Forward PP (Derived from O occurring in Range C After B).
Back PP (Derived from O occurring in Range C Before B)
Up PP (Derived from O occurring in Range C Above B)
Down PP (Derived from O occurring in Range C Below B)
The FIOP CFEV price origins O will occur above or below the EBTP B.
As any CFEV origin, O, occurring within Range C, occurs above or below
EBTP B, so therefore will the correlate PP of CFEV OPP occur above or
below A.
As all FIOP CFEV origin prices occur above and/or below B, all PPPs of
CFEV POPP will correspondingly occur above and/or below A.
PPPs will be designated S1 S2 and S3 and R1 R2 and R3, as they occur in
proximity to O, B, and A, depending on applied V and Range C specifications, and the
resulting time distance and/or price distance given by V (Version), and the
directional pivots of O (up-wave, down-wave, sideways flex-point)
COPYRIGHT 2013 ECHOVECTORVEST
TRADEMARK 2013 ECHOVECTORVEST
Posted by EchoVectorVEST MDPP, PRECISION PIVOTS at 1:01 PM
Posted by EchoVectorVEST MDPP PRECISION PIVOTS at 1:23 PM No comments:
Introduction to EchoVector Analysis and EchoVector Pivot Points
EchoVector Theory and EchoVector Analysis are, respectively, a price pattern impact theory and a technical analysis methodology and approach postulated, created, and invented by Kevin John Bradford Wilbur.
EchoVector Analysis is also presented as a behavioral economic application and securities analysis tool in price pattern theory and in price pattern behavior, study, and forecasting.
EchoVector Pivot Points are a further technical analysis tool and application within EchoVector Analysis, derived from EchoVector Theory in practice.
EchoVector Theory and EchoVector Analysis assert that a security's prior price patterns may influence its present and future price patterns. Present and future price patterns may then, in part, be
considered as 'echoing' these prior price patterns to some identifiable and measurable degree.
EchoVector Theory and EchoVector Analysis also assert that these influences may be observable, identifiable, and measurable in price pattern behavior and price pattern history, and potentially
observable in future price pattern formation, and potentially efficacious in future price pattern forecasting, to some measure or degree.
EchoVector Analysis is also used to forecast and project potential price Pivot Points (referred to as PPP's --potential pivot points, or EVPP's --EchoVector Pivot Points) and future support and
resistance vectors (SRV's) for a security from a starting reference price at a starting reference time, based on the security's prior price pattern within a given and significant and definable
cyclical time frame.
EchoVector Pivot Points and EchoVector Support and Resistance Vectors are fundamental components of EchoVector Analysis. EchoVector SRV's are constructed from key components in the EchoVector Pivot
Point Calculation. EchoVector SRV's are calculated Coordinate Forecast EchoVectors (CFEV's) to the initial EchoVector (XEV) calculation and construction.
The EchoVector Pivot Point Calculation is a fundamentally different and more advanced calculation than the traditional pivot point calculation.
The EchoVector Pivot Point Calculation differs from traditional pivot point calculation by reflecting this given and specified cyclical price pattern reference and its significance and information in
the echovector pivot point calculation. This cyclical price pattern and reference is included in the calculations and constructions of the echovector and its respective coordinate forecast
echovectors, as well as in the calculation of the related echovector pivot points.
While a traditional pivot point calculation may use simple price averages of prior price highs, lows and closes to calculate its set of support and resistance levels, the echovector pivot point
calculation begins with any starting time and price point and respective cyclical time frame reference X, and then identifies the corresponding “Echo-Back-Time-Point” within this cyclical time frame
reference coordinate to the starting reference price and time point A. It then calculates the echovector (XEV) generated by the starting reference time/price point and the echo-back-time-point, and
includes the pre-determined and pre-defined accompanying constellation of “Coordinate Forecast EchoVector” origins derived from the prior price pattern evidenced around the echo-back-time-point
within a certain pre-selected and specified range (time and/or price version) that occurred within the particular referenced cyclical time-frame and period X. Security I's EchoVector Pivot Point
constructions then calculate and project the scope relative echovector pivot points that follow A, and the support and resistance levels determined by the ensuing coordinate forecast echovectors and
their selected range definition inclusion, the cyclical time-frame X, and to XEV's slope.
EchoVector Pivot Point Time And Price Projections (EV-PPPPs) are therefore advanced and fluid calculations of projected and coordinate support and resistance levels following the starting reference
price and time point A (SRP-TPP) of the subject focus echovector (X-EV of Cycle Period time length X), levels which are derived from ascending, descending and/or lateral coordinate support and
resistance forecast echovectors (CFEVs) calculated and emanating from particular range defined echobackperiod times and price points (NPP-TPPs) related to the price points and time points of
proximate scale and scope and the relative pivoting action that had followed the focus echovector's echo-back-time-point B (EBD-TPP) within, and relative to, the focus echovector's starting
time-point and price-point, SRP, and the echovector's given and specified cyclically-based focus interest time-span X, and the echovector's related slope momentum measure (X-EV's incline). This
enables EchoVector Pivot Points to be constructed from any relevantly identified time cycle period length, or the aggregation of any set of time cycle period lengths: intraday, daily, weekly,
monthly, quarterly, bi-quarterly, annual, Congressional Cycle, Presidential Cycle, Senatorial Cycle, etc., and in accordance with the economic calendar, earnings calendar, political economic calendar,
or any statistically significant 'calendar' cycle periodicity evident and recursive in back-data mining.
The Support and Resistance Vectors, referred to as the Coordinate Forecast Echovectors, are used to generate the EchoVector Pivot Points.
The coordinate forecast echovectors originate in a predefined range C of price pivots O's that occurred proximate to the echo-back-time-point B within the given cyclical time frame X of the starting
reference price A. The coordinate forecast echovectors reflect the price momentum relationship of the starting reference point price and the echo-back-time-point in their calculation and in the
calculation of the echovector pivot points.
In these respects, an echovector pivot point may be considered a price level of particular significance in the technical analysis of a security or financial market that may be usable by traders as a
forecasting indicator of a security's (or futures market's) time and price vector influenced cyclical price movements.
EchoVector pivot points and their related support and resistance levels also have the calculation advantage of being able to be calculated on the basis of short-term, intermediate-term, or
longer-term time frames within varied cyclical price references and 'elected or posited echo-characteristic based' time-spans.
Defining the EchoVector (EV) of Time Length X for price/time point A
(at market trade print price p and at time point t) of security I
(investment), with Echo-Back-Time-Point (A, t-X, p-N).
S Start Point
EVSF of Length (time-frame) X
The EchoVector "X-EV" of Security (Investment) "I" Measured from Market
Trade Time-Point/Print-Price Point, Starting Point, "A"
Definition: The EchoVector
"For any base security I at price/time point A, A having real market transaction and exchange recorded print price p at exchange of record print time t, then EchoVector XEV of security I and of time
length (cycle length) X with ending time/price point A would be designated and described as (I, Apt, XEV); EchoVector XEV's end point is (I, Apt) and EchoVector XEV's starting point is (I, Ap-N,
t-X), where N is the found exchange recorded print price difference between A and the Echo-Back-Time-Point of A (A, p-N, t-X at t-X) of Echo-Back-Time-Length X (being Echo-Cycle Length X).
A, p-N, t-X shall be called B (or B of I), being the EBDTPP (Echo-Back-Date-Time-And-Price-Point)*, or EBD (Echo-Back-Date)*, or EBTP (Echo-Back-Time-Point) of A of I.
N = the difference of p at A and p at B (B is the 'echo-back price-point and time-point' of A, found at (A, p-N, t-X)).
And security I (I, Apt, XEV) shall have an echo-back-time-point (EBTP) of At-X (or I-A-EBTP of At-X; or echo-back-date (EBD) I-A-EBD of At-X): t often displayed on a chart measured and referenced in
discrete d measurement length units (often OHLC or candlestick width-and-length units [often bars or blocks]), such as minute, 5-minute, 15-minute, 30-minute, hourly, 2-hour, 4-hour, 6-hour,
8-hour, daily, weekly, etc."
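The definition just given can be restated as a small computation: given security I's price series, endpoint A = (t, p), and cycle length X, locate B at t - X and take N as the print-price difference. A minimal sketch, with naming of our own (this is not source code from EchoVectorVEST):

```python
from dataclasses import dataclass

@dataclass
class EchoVector:
    """XEV of a security: from EBTP B = (t - X, p - N) to endpoint A = (t, p)."""
    t: int      # time point of A, in chart units (bars)
    p: float    # exchange print price at A
    x: int      # echo-cycle length X, in the same units
    n: float    # price difference N between A and its echo-back-time-point B

    @property
    def slope(self):
        """m = N / X, the echovector's slope (momentum) measure."""
        return self.n / self.x

def echovector(prices, t, x):
    """Construct XEV for endpoint A at bar t with echo-cycle length X;
    B is the echo-back-time-point at bar t - X, and N = p(A) - p(B)."""
    if t - x < 0:
        raise ValueError("series is shorter than cycle length X")
    return EchoVector(t=t, p=prices[t], x=x, n=prices[t] - prices[t - x])

# A weekly echo (X = 5 bars) on a toy daily series, ending at the last bar.
prices = [100.0, 101.5, 103.0, 102.0, 104.0,
          105.0, 104.5, 106.0, 107.5, 108.0]
ev = echovector(prices, t=9, x=5)
print(ev.n, ev.slope)  # N = 108.0 - 104.0 = 4.0, m = 4.0 / 5 = 0.8
```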
Significant "Non-Intraday Single-Exchange" EchoVector Time Period
Lengths, or EchoVector Echo-Cycle Period lengths, occur as follows.
(X is the time or cycle length of the EchoVector Period, the EP, the Echo-Period.)
Asian Market Echo
European Market Echo
24-Hour Echo
2-Day Echo
3-Day Echo
Weekly Echo
Bi-Weekly Echo
Monthly Echo
Bi-Monthly Echo
Quarterly Echo
Bi-Quarterly Echo
3-Quarter Echo
Annual Echo
15-Month Echo
18-Month Echo
2-Year Congressional Cycle Echo
4-Year Presidential Cycle and mid-Regime Cycle Echo
5-Year FRB Cycle Echo
6-Year (Tri-Congressional Cycle) Echo
8-Year Regime Change Cycle Echo
10-Year Decennial Cycle Echo
16-Year Bi-Regime Change Cycle Echo (Nascent Echo)
32-Year Quad-Regime Change Cycle Echo (Maturity Echo)
Shorter, intra-day market EchoVectors are readily calculable, as are
longer-term study echovectors.
Calculating the Coordinate Forecast EchoVector and The Coordinated
Potential Pivot Price Point and Coordinated Potential Pivot Time-point
Definition: The Coordinate Forecast EchoVector: (CFEV)
For any EV of X base length (time frame, or time length, or cycle
period), such as lengths intra-day, daily, 3-day, weekly, bi-weekly,
monthly, bi-monthly, quarterly, bi-quarterly, annual, bi-annual,
presidential cycle, or regime change cycle, (and with lengths measured
in incremental -often OHLC- units such as minutes, hours, days, weeks,
or shorter or longer units) constructed for security I from starting EV
reference point A with print price point p at time point t (Apt) and
EBTP or EBD A-Length X at print price p (t-X), Coordinate Forecast
EchoVectors (CFEVs) can be constructed having the same slope and lengths
(phase adjusted) as XEV of A of I.
Coordinate Forecast EchoVectors and their starting points shall have the
following relationships to A of I and B of I, or EchoVector XEV-BA of I.
Base Construction Versions For the CFEV:
A. Time-Frame Distance Versions
B. Price Distance Versions
C. Model PreSet Value Distance Versions
Versions: 1,2,3,4,5,6
A. CFEV Starting Point Construction: Time-Frame Distance Based Versions
1. Absolute Time Distance Based (Within Select Range) From B, Selected
2. Percent X Time Distance Based (Within Select Range) From B, Selected
B. CFEV Starting Point Construction: Price Distance Based Versions
3. Absolute Price Distance Based (Within Select Focus Interest Range,
Extension) From B, Selected
4. Percent Price Distance Based (Within Select Range Focus Interest)
From B, Selected
C. PreSet Value Distance Based CFEV Starting Point Construction
5. PreSet Absolute Time Distance Based (Range Specified)
6. Preset Percent X Time Distance Based (Range Specified)
*The EchoVectorVEST MDPP Precision Pivots Forecast Model And Alert
Paradigm primarily utilizes specifically defined values of 5. and 6. above
in its CFEV Constellation Illustrations and Highlights for specified
focus interest opportunity securities and their projected forward focus
interest opportunity time-periods and price points. Example CFEV
constructions that follow will utilize these constructions unless noted otherwise.
The CFEVs found relevant and proximate to BA on the basis of one of the
above 6 relationships shall be called EchoVector XEV BA's CFEV
Constellation, Base 1, 2, 3, 4, 5, or 6.
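The six base construction versions listed above can be sketched as a single Range C membership test around B. This is our own minimal reading: each version reduces to a time-window or price-band check, the thresholds are illustrative, and versions 5 and 6 differ from 1 and 2 only in that the model presets the distance rather than selecting it per run.

```python
def in_range_c(o_time, o_price, b_time, b_price, x, version, value):
    """Is candidate CFEV origin O within Range C of echo-back point B?

    Versions 1/5: absolute time distance from B (value in bars).
    Versions 2/6: time distance as a percentage of cycle length X.
    Version  3:   absolute price distance from B.
    Version  4:   price distance as a percentage of B's price.
    """
    if version in (1, 5):
        return abs(o_time - b_time) <= value
    if version in (2, 6):
        return abs(o_time - b_time) <= x * value / 100.0
    if version == 3:
        return abs(o_price - b_price) <= value
    if version == 4:
        return abs(o_price - b_price) <= b_price * value / 100.0
    raise ValueError("version must be 1..6")

# With X = 20 bars, a candidate 3 bars from B passes Version 2 at 20% of X.
print(in_range_c(103, 50.0, 100, 48.0, x=20, version=2, value=20))  # True
```

Collecting every pivot high and low that passes this test yields the CFEV Constellation for the chosen base version.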
The CFEV PPPs correlate to A, B, and O, and are referred to as Ps or PP's --pivot points or potential pivots (or PPPs --projected pivot points or potential pivot points or price pivot points), or
flex points.
They are 'forecast-ed' and 'focus interest' coordinate to A, B and O support, resistance, or flex points, derived from the CFEV's found originating within Range C from the 'scope relevant' pivots
occurring there. These 'EchoVector and Coordinate Forecast EchoVector-based Pivot Points' constitute trade position opportunities, referred to as Focus Interest Opportunity Points (or Pivots),
FIOPs, or simply as OPs (opportunities or options) or POPs (potential opportunities or options) or PPPOPs (potential pivot point opportunities or options).
Projecting The SRV PPs Of A of I from B to A to EVBA to O to CFEV-OPP, (Given V {Version}, Range C, and EV-TimeLength X, XEV)
EV-TLX (EchoVector Time Length X, XEV)
V = Version N (Selected)
RC = Range C (Selected)
Forward PP (Derived from O occurring in Construction Version N Range C After B)
Back PP (Derived from O occurring in Construction Version N Range C Before B)
Up PP (Derived from O occurring in Construction Version N Range C Above B)
Down PP (Derived from O occurring in Construction Version N Range C Below B)
The FIOP CFEV price origins O will occur above or below the EBTP B.
As any CFEV origin, O, occurring within Range C, occurs above or below EBTP B, so therefore will the correlate PP of CFEV O-PP relating to XEV occur above or below A.
As all FIOP CFEV origin prices occur above and/or below B, all PPPs of CFEV POPP will correspondingly occur above and/or below A.
PPPs will be designated S1 S2 and S3 and R1 R2 and R3, as they occur in extension of O, and depending on applied V and Range C and the resulting time distance and/or price distance given by V, and the
directional pivots of record and relative scale reference occurring subsequent to O (up-wave, down-wave, sideways, flex-point).
S = Support
R = Resistance
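The relationship above (origins above B map to resistance pivots above A, origins below B to support pivots below A) can be illustrated with a short sketch. The translation rule, carrying each origin forward by the echovector's length X along its slope m, and the nearest-first S1/R1 labeling are our own reading of the CFEV construction, not a rule stated verbatim in the source.

```python
def project_pivots(origins, b_price, x, m):
    """Project CFEV origin points O into pivot points coordinate to A.

    Each origin (time, price) is carried forward X time units along the
    echovector slope m. Origins above B become resistance (R) pivots above A;
    origins at or below B become support (S) pivots below A.
    """
    resistance, support = [], []
    for o_time, o_price in origins:
        pp = (o_time + x, o_price + m * x)
        (resistance if o_price > b_price else support).append(pp)
    # Label nearest-first: R1..Rn ascending above, S1..Sn descending below.
    resistance.sort(key=lambda p: p[1])
    support.sort(key=lambda p: p[1], reverse=True)
    labels = {f"R{i + 1}": p for i, p in enumerate(resistance)}
    labels.update({f"S{i + 1}": p for i, p in enumerate(support)})
    return labels

# B at price 100; X = 10 bars; slope m = 0.5, so A sits near price 105.
pps = project_pivots([(42, 103.0), (45, 96.0), (40, 107.0)], 100.0, 10, 0.5)
print(pps)  # two R pivots above A, one S pivot below
```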
Defining, Calculating, Constructing and Generating the Coordinate
Forecast EchoVectors
EBD is the Echo-BackDate
EBD-T is the Echo-BackDateTime
EBTP is the Echo-Back-Time-Point with print price found to be p-N.
EBD-TI is the Echo-BackDateTimeIncrement, or EBDT-ITMU incremental time
measurement unit, or imu or mu, or d for designated unit (EBDT-MU, or
EBDT-U, or EBDT-d)
(EX.: minute, 5min, 15min, hourly, 2-hour, 4-hour, daily, weekly,
monthly, etc.)
EBTP is also often referred to as the XEV EBPP, or the Echo-Back-Price-Point
of XEV
L is length of the Echo-Period (EP); or X, being the time-length and
cycle length of the EchoVector EV. XEV or LEV
Primary Model Focus Interest EP or Echo-Period Lengths, or EchoVector
Time Frame or Cycle Time Period Lengths, X
AMEV Asian Market Echo (t, open, lunchtime, close, etc.)
EMEV European Market Echo
24HEV 1day, 24hour Echo
48HEV 2day Echo
72HEV 3day Echo
WEV weekly Echo
2WEV biweekly Echo
MEV monthly Echo
2MEV bimonthly Echo
QEV quarterly Echo
2QEV biquarterly Echo
3QEV 3quarter Echo
AEV annual Echo
15MEV OR 5QEV, ETC 15month Echo
6QEV ETC 18month Echo
2AEV OR CCEV 2year Congressional Cycle Echo
4AEV OR PCEV 4year Presidential Cycle and mid-Regime Cycle Echo
5AEV OR FRBEV 5year FRB Cycle Echo
6AEV OR 3CCEV 6year (Tri-Congressional Cycle) Echo
RCCEV OR 2PCEV OR 4CCEV ETC 8year Regime Change Cycle Echo
10year Decennial Cycle Echo
16year Bi-Regime Change Cycle Echo (Nascent Echo)
32year Quad-Regime Change Cycle Echo (Maturity Echo)
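The abbreviation glossary above can be collected into a lookup table for use when sizing echo-back windows. The day counts here are our own calendar approximations; the source names the cycles but does not fix exact day counts, and the intraday session echoes (AMEV, EMEV) are omitted because they depend on the exchange clock.

```python
# Approximate calendar length, in days, for each named echo-cycle period.
# Assumption: these counts are illustrative, not specified by the source.
ECHO_PERIOD_DAYS = {
    "24HEV": 1,   "48HEV": 2,   "72HEV": 3,
    "WEV": 7,     "2WEV": 14,
    "MEV": 30,    "2MEV": 60,
    "QEV": 91,    "2QEV": 182,  "3QEV": 273,
    "AEV": 365,   "15MEV": 455, "6QEV": 546,
    "CCEV": 2 * 365,    # 2-year Congressional Cycle
    "PCEV": 4 * 365,    # 4-year Presidential Cycle
    "FRBEV": 5 * 365,   # 5-year FRB Cycle
    "3CCEV": 6 * 365,   # 6-year Tri-Congressional Cycle
    "RCCEV": 8 * 365,   # 8-year Regime Change Cycle
}

# Longer cycles compose from shorter ones: the bi-weekly echo is two weekly
# echoes, and the Regime Change Cycle is four Congressional Cycles.
print(ECHO_PERIOD_DAYS["RCCEV"] // ECHO_PERIOD_DAYS["CCEV"])  # 4
```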
The Starting Date, time and price point of the EV: The Stp, or Apt, or
A, or SDTPP-A, or EV origin point O.
A is The Sdtp, the summation and reference origin date, time and price
point (Starting time and price point of reference for XEV BA).
The Sdtp is an actual 'market price trade point print,' being an exchange
actual buy/sell trade occurrence 'print' posted as taking place at the
specific referenced time and price.
XEV BA is the "Echo" vector from ebtp (EchoBackTimePoint B) to sdtp
(Summation-Date-Time PricePoint).
A is considered as having 'echo characteristic relationship' in some
definable or designate-able form, manner, or degree a 'parameter
relationship contiguity,' to B, the EBTP, on some dimension.
The notion of 'similar' or 'contiguous' on some measure, parameter,
index, dimension, aspect or reference is implied and/or is axiomatic.
Likeness in quality, time, space, context, origin, or relation... to
some degree or measure is axiomatic to the methodology.
FIO EV: focus interest opportunity echovector
The FIOP is the focus or forecast interest opportunity point (or period, or pivot). It is derived from the S-dtp, that is, from A.
In some citations Delta EV BA is "delta'd" to EchoVector EV AB and is referenced to have A as the echovector time and price point 'antecedent,' (the echo-back-date and/or echo-back-time-point and/or
echo-back-date-time-point) and B as XEV's Construction beginning or starting price and time point reference, or starting dtpS, B; that is, as the EchoVector's starting price and starting time point
of reference, being the EV's origin.
Also definitionally derived from XEV AB are the CFEVs, the Coordinate Forecast EchoVectors, also having EchoVector AB length X and EchoVector AB slope m (for momentum, or slope w for wave), and
having Echo-Back-Date-Time-And-Price-Point A as the "CFEV Constellation Base Point," or "Origin," from which the Key Immediate Coordinate Forecast EchoVectors, the Focus Interest Opportunity PPVs
(Potential Pivot Vectors), or SRV Constellation (Support/Resistance, or Flex-point Vector Constellation Projections) are, by specified relation (Versions 1 to 6), derived.
In EchoVector AB, A is the Echo-Back-Date-Time-And-Price-Point, and B is the starting reference, and the echovector 'originating' time and price point, or its 'origin.'
From the constellation of 'scope relative' prior and following pivot point highs and lows occurring within Range C (elect Version 1 to 6), the applied CFEV Constellation Range, the CFEV or PPV EBTP
origins are derived. These origins are referred to as O... OSP1, OSP2, OSP3, ORP1, ORP2, ORP3, etc.
P FOR PIVOT OR FLEX POINT
These CFEV origins O are identified as the scope-relative wave point pivot price point highs and lows occurring before and after the Echo-Back-Date-Time-And-Price-Point B in XEV BA, occurring within Range C.
(Range C is, again, specified by time-length distance backward and forward from B as a percentage of X; or by (within) absolute time-length distance from B; or by (within) percent-of-B-price distance
from B; or by (within) absolute price-length distance (N) from B, or by percent N from B; or by a stochastically-related, 'modeling-process-determined' distance, preset as an absolute or percent
distance from B; B being the EBDTPP of A.)
These Range C subsumed wave point pivot highs and lows constitute the CFEV Focus Interest Origin Points (FIOPs O), from which the relative CFEV are derived, and from which the Focus Interest
Opportunity SRV PP Constellation Set is derived (FIOP PPPs).
These CFEV origin points, O's, relate to the EBTP of A, being B, and to (occurring within) the Base Construction Version and Range C of the CFEV, and the Range and Version's mathematical definition.
From these CFEV origin points (correlated to B by V), and from EV-AB attributes, being of the same length and slope, the CFEVs are derived, as are the CFEV PPPs, the FIOP PPs.
The CFEV PPPs correlate to A, B, and O, and are referred to as Ps or PP's
--pivot points or potential pivots (or PPPs --projected pivot points or
potential pivot points), or flex points.
They are 'forecast-ed' and 'focus interest' coordinate to A, B and O
support, resistance, or flex points, derived from the CFEV's found
originating within Range C from the 'scope relevant' pivots occurring
there. These EchoVector-based and Coordinate Forecast EchoVector-based
Pivot Points constitute trade position opportunities, referred to as
Focus Interest Opportunity Points (or Pivots), FIOPs, or simply as OPs
(opportunities or options) or POPs (potential opportunities or options).
Projecting The SRV PPs Of A of I from A to B to EVAB to O to CFEV-OPP,
(Given V, Range C, and EVTLX, XEV)
EVTL (EchoVector Time Length X)
Forward PP (Derived from O occurring in Range C After B)
Back PP (Derived from O occurring in Range C Before B)
Up PP (Derived from O occurring in Range C Above B)
Down PP (Derived from O occurring in Range C Below B)
The FIOP CFEV price origins O will occur above or below the EBTP B.
As any CFEV origin, O, occurring within Range C, occurs above or below
EBTP B, so therefore will the correlate PP of CFEV OPP occur above or
below A.
As all FIOP CFEV origin prices occur above and/or below B, all PPPs of
CFEV POPP will correspondingly occur above and/or below A.
PPPs will be designated S1 S2 and S3 and R1 R2 and R3, as they occur in
proximity to O, B, and A, depending on applied V and Range C specifications, and the
resulting time distance and/or price distance given by V (Version), and the
directional pivots of O (up-wave, down-wave, sideways, flex-point).
COPYRIGHT 2013 ECHOVECTORVEST
TRADEMARK 2013 ECHOVECTORVEST
Also See "EchoVector Analysis: Topics In EchoVector Analysis" COPYRIGHT 2013 ECHOVECTORVEST
Posted by EchoVectorVEST MDPP, PRECISION PIVOTS at 1:01 PM
Posted by EchoVectorVEST MDPP PRECISION PIVOTS at 1:07 PM No comments:
Introduction to EchoVector Analysis and EchoVector Pivot Points
EchoVector Theory and EchoVector Analysis are a price pattern impact theory and a technical analysis methodology and approach postulated, created, and invented by Kevin John Bradford Wilbur.
EchoVector Analysis is also presented as a behavioral economic application and securities analysis tool in price pattern theory and in price pattern behavior, study, and forecasting, and in price
EchoVector Pivot Points are a further technical analysis tool and application within EchoVector Analysis, derived from EchoVector Theory in practice.
EchoVector Theory and EchoVector Analysis assert that a securities prior price patterns may influence its present and future price patterns. Present and future price patterns may then, in part, be
considered as 'echoing' these prior price patterns to some identifiable and measurable degree.
EchoVector Theory and EchoVector Analysis also assert that these influences may be observable, identifiable, and measurable in price pattern behavior and price pattern history, and potentially
observable in future price pattern formation, and potentially efficacious in future price pattern forecasting, to some measure or degree.
EchoVector Analysis is also used to forecast and project potential price Pivot Points (referred to as PPP's --potential pivot points, or EVPP's --EchoVector Pivot Points) and future support and
resistance vectors (SRV's) for a security from a starting reference price at a starting reference time, based on the securities prior price pattern within a given and significant and definable
cyclical time frame.
EchoVector Pivot Points and EchoVector Support and Resistance Vectors are fundamental components of EchoVector Analysis. EchoVector SRV's are constructed from key components in the EchoVector Pivot
Point Calculation. EchoVector SRV's are calculated Coordinate Forecast EchoVectors (CFEV's) to the initial EchoVector (XEV) calculation and construction.
The EchoVector Pivot Point Calculation is a fundamentally different and more advanced calculation than the traditional pivot point calculation.
The EchoVector Pivot Point Calculation differs from traditional pivot point calculation by reflecting this given and specified cyclical price pattern reference and its significance and information in
the echovector pivot point calculation. This cyclical price pattern and reference is included in the calculations and constructions of the echovector and its respective coordinate forecast
echovectors, as well as in the calculation of the related echovector pivot points.
While a traditional pivot point calculation may use simple price averages of prior price highs, lows and closes to calculate its set of support and resistance levels, the echovector pivot point
calculation begins with any starting time and price point and respective cyclical time frame reference X, and then identifies the corresponding “Echo-Back-Time-Point” within this cyclical time frame
reference coordinate to the starting reference price and time point A. It then calculates the echovector (XEV) generated by the starting reference time/price point and the echo-back-time-point, and
includes the pre-determined and pre-defined accompanying constellation of “Coordinate Forecast EchoVector” origins derived from the prior price pattern evidenced around the echo-back-time-point
within a certain pre-selected and specified range (time and/or price version) that occurred within the particular referenced cyclical time-frame and period X. Security I's EchoVector Pivot Point
constructions then calculate and project the scope relative echovector pivot points that follow A, and the support and resistance levels determined by the ensuing coordinate forecast echovectors and
their selected range definition inclusion, the cyclical time-frame X, and to XEV's slope.
EchoVector Pivot Points are therefore advanced and fluid calculations of projected and coordinate support and resistance levels following the starting reference price and time point A (endpoint) of
the subject focus echovector, levels which are derived from ascending, descending and/or lateral coordinate support and resistance forecast echovectors calculated from particular range defined
starting times and price points, related to the price points and time points of proximate scale and scope and the relative pivoting action that had followed the focus echovector's
echo-back-time-point B within, and relative to, the focus echovector's starting time-point and price-point, and the echovector's given and specified cyclically-based focus interest time-span X, and
the echovector's slope relative momentum measures.
The Support and Resistance Vectors, referred to as the Coordinate Forecast Echovectors, are used to generate the EchoVector Pivot Points.
The coordinate forecast echovectors originate in a predefined range C of price pivots O's that occurred proximate to the echo-back-time-point B within the given cyclical time frame X of the starting
reference price A. The coordinate forecast echovectors reflect the price momentum relationship of the starting reference point price and the echo-back-time-point in their calculation and in the
calculation of the echovector pivot points.
In these respects, an echovector pivot point may be considered a price level of particular significance in the technical analysis of a security or financial market that may be usable by traders as a
forecasting indicator of a security's (or futures market's) time-and-price-vector-influenced cyclical price movements.
EchoVector pivot points and their related support and resistance levels also have the calculation advantage of being able to be calculated on the basis of short-term, intermediate-term, or
longer-term time frames within varied cyclical price references and 'elected or posited echo-characteristic based' time-spans.
Defining the EchoVector (EV) of Time Length X for price/time point A
(at market trade print price p and at time point t) of security I
(investment) with Echo-Back-Time-Point (A, t-X, p-N).
[Chart labels: S = Start Point; EVSF of Length (time-frame) X]
The EchoVector "X-EV" of Security (Investment) "I" Measured from Market
Trade Time-Point/Print-Price Point, Starting Point, "A"
Definition: The EchoVector
"For any base security I at price/time point A, A having real market transaction and exchange recorded print price p at exchange of record print time t, then EchoVector XEV of security I and of time
length (cycle length) X with ending time/price point A would be designated and described as (I, Apt, XEV); EchoVector XEV's end point is (I, Apt) and EchoVector XEV's starting point is (I, Ap-N,
t-X), where N is the found exchange recorded print price difference between A and the Echo-Back-Time-Point of A (A, p-N, t-X at t-X) of Echo-Back-Time-Length X (being Echo-Cycle Length X).
A, p-N, t-X shall be called B (or B of I), being the EBDTPP (Echo-Back-Date-Time-And-Price-Point)*, or EBD (Echo-Back-Date)*, or EBTP (Echo-Back-Time-Point) of A of I.
N = the difference of p at A and p at B (B is the echo-back price-point and time-point of A, found at (A, p-N, t-X)).
And security I (I, Apt, XEV) shall have an echo-back-time-point (EBTP) of At-X (or I-A-EBTP of At-X; or echo-back-date (EBD) I-A-EBD of At-X): t is often displayed on a chart measured and referenced in
discrete d measurement-length units (often OHLC or candlestick width-and-length units [often bars or blocks]), such as minute, 5-minute, 15-minute, 30-minute, hourly, 2-hour, 4-hour, 6-hour,
8-hour, daily, weekly, etc."
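The quoted definition is, at bottom, a small calculation: locate the print X back in time from A, take the price difference N, and the vector from B to A has slope N/X. A minimal sketch of that construction follows; the function and field names are illustrative, not part of the source's notation.

```python
def echovector(prices, t, X):
    """Sketch of the XEV construction.

    prices maps time points to exchange print prices; t is the
    reference time of point A; X is the cycle (echo) length in the
    same time units."""
    p_a = prices[t]          # print price at A
    t_b = t - X              # echo-back-time-point B sits X back from A
    p_b = prices[t_b]        # print price at B (p - N in the text)
    n = p_a - p_b            # N: price difference between A and B
    slope = n / X            # echovector slope (momentum over the cycle)
    return {"A": (t, p_a), "B": (t_b, p_b), "N": n, "slope": slope}

# Example: daily prints over a weekly (5 trading day) echo cycle.
series = {0: 100.0, 1: 101.5, 2: 99.8, 3: 102.2, 4: 103.0, 5: 104.5}
xev = echovector(series, t=5, X=5)
# xev["N"] == 4.5 and xev["slope"] == 0.9
```

The same sketch works for any of the cycle lengths listed below; only X changes.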
Significant "Non-Intraday Single-Exchange" EchoVector Time Period
Lengths, or EchoVector Echo-Cycle Period lengths, occur as follows.
(X is the time or cycle length of the EchoVector Period, the EP, the Echo-Period.)
Asian Market Echo
European Market Echo
24-Hour Echo
2-Day Echo
3-Day Echo
Weekly Echo
Bi-Weekly Echo
Monthly Echo
Bi-Monthly Echo
Quarterly Echo
Bi-Quarterly Echo
3-Quarter Echo
Annual Echo
15-Month Echo
18-Month Echo
2-Year Congressional Cycle Echo
4-Year Presidential Cycle and mid-Regime Cycle Echo
5-Year FRB Cycle Echo
6-Year (Tri-Congressional Cycle) Echo
8-Year Regime Change Cycle Echo
10-Year Decennial Cycle Echo
16-Year Bi-Regime Change Cycle Echo (Nativity echo)
32-Year Quad-Regime Change Cycle Echo (Maturity Echo)
Shorter, intra-day market EchoVectors are readily calculable, as
are longer-term study echovectors.
Calculating the Coordinate Forecast EchoVector and The Coordinated
Potential Pivot Price Point and Coordinated Potential Pivot Time-point
Definition: The Coordinate Forecast EchoVector: (CFEV)
For any EV of X base length (time frame, or time length, or cycle
period), such as lengths intra-day, daily, 3-day, weekly, bi-weekly,
monthly, bi-monthly, quarterly, bi-quarterly, annual, bi-annual,
presidential cycle, or regime change cycle, (and with lengths measured
in incremental -often OHLC- units such as minutes, hours, days, weeks,
or, shorter or longer unit) constructed for security I from starting EV
reference point A with print price point p at time point t (Apt) and
EBTP or EBD A-Length X at print price p (t-X), Coordinate Forecast
EchoVectors (CFEVs) can be constructed having the same slope and lengths
(phase adjusted) as XEV of A of I.
Coordinate Forecast EchoVectors and their starting points shall have the
following relationships to A of I and B of I, or EchoVector XEV-BA of I.
Base Construction Versions For the CFEV:
A. Time-Frame Distance Versions
B. Price Distance Versions
C. Model PreSet Value Distance Versions
Versions: 1,2,3,4,5,6
A. CFEV Starting Point Construction: Time-Frame Distance Based Versions
1. Absolute Time Distance Based (Within Select Range) From B, Selected
2. Percent X Time Distance Based (Within Select Range) From B, Selected
B. CFEV Starting Point Construction: Price Distance Based Versions
3. Absolute Price Distance Based (Within Select Focus Interest Range,
Extension) From B, Selected
4. Percent Price Distance Based (Within Select Range Focus Interest)
From B, Selected
C. PreSet Value Distance Based CFEV Starting Point Construction
5. PreSet Absolute Time Distance Based (Range Specified)
6. Preset Percent X Time Distance Based (Range Specified)
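As a rough illustration of how the six version types select candidate CFEV origins, the sketch below filters a list of pivots around B by either a time window or a price band. The function name, the grouping of preset versions 5 and 6 with the selected versions 1 and 2, and the example thresholds are simplifying assumptions, not model-specified values.

```python
def select_origins(pivots, b_time, b_price, version, limit, X=None):
    """Filter candidate CFEV origins O near the echo-back point B.

    pivots: list of (time, price) wave pivots proximate to B.
    version 1/5: absolute time distance from B within `limit`.
    version 2/6: time distance within `limit` percent of cycle length X.
    version 3:   absolute price distance from B within `limit`.
    version 4:   price distance within `limit` percent of B's price."""
    selected = []
    for t, p in pivots:
        if version in (1, 5):
            keep = abs(t - b_time) <= limit
        elif version in (2, 6):
            keep = abs(t - b_time) <= (limit / 100.0) * X
        elif version == 3:
            keep = abs(p - b_price) <= limit
        else:  # version 4
            keep = abs(p - b_price) <= (limit / 100.0) * b_price
        if keep:
            selected.append((t, p))
    return selected

# Pivots around B (b_time = 0, b_price = 100.0), times relative to B.
pivots = [(-3, 98.0), (-1, 101.0), (2, 97.5), (4, 103.0)]
# Version 1: keep origins within 2 time units of B.
near_b = select_origins(pivots, b_time=0, b_price=100.0, version=1, limit=2)
# near_b == [(-1, 101.0), (2, 97.5)]
```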
*The EchoVectorVEST MDPP Precision Pivots Forecast Model And Alert
Paradigm primarily utilizes specifically defined values of 5. and 6. above
in its CFEV Constellation Illustrations and Highlights for specified
focus interest opportunity securities and their projected forward focus
interest opportunity time-periods and price points. Example CFEV
constructions that follow will utilize these constructions unless noted otherwise.
The CFEVs found relevant and proximate to BA on the basis of one of the
above 6 relationships shall be called EchoVector XEV BA's CFEV
Constellation, Base 1, 2, 3, 4, 5, or 6.
The CFEV PPPs correlate to A, B, and O, and are referred to as Ps or PP's --pivot points or potential pivots (or PPPs --projected pivot points or potential pivot points or price pivot points), or
flex points.
They are 'forecast-ed' and 'focus interest' coordinate to A, B and O support, resistance, or flex points, derived from the CFEV's found originating within Range C from the 'scope relevant' pivots
occurring there. These 'EchoVector and Coordinate Forecast EchoVectors-based Pivot Points' constitute trade position opportunities, referred to as Focus Interest Opportunity Points (or Pivots),
FIOPs, or simply as OPs (opportunities or options) or POPs (potential opportunities or options) or PPPOPs (potential pivot point opportunities or options).
Projecting The SRV PPs Of A of I from B to A to EVBA to O to CFEV-OPP, (Given V {Version}, Range C, and EV-TimeLength X, XEV)
EV-TLX (EchoVector Time Length X, XEV)
V = Version N (Selected)
RC = Range C (Selected)
Forward PP (Derived from O occurring in Construction Version N Range C After B)
Back PP (Derived from O occurring in Construction Version N Range C Before B)
Up PP (Derived from O occurring in Construction Version N Range C Above B)
Down PP (Derived from O occurring in Construction Version N Range C Below B)
The FIOP CFEV price origins O will occur above or below the EBTP B.
As any CFEV origin, O, occurring within Range C, occurs above or below EBTP B, so therefore will the correlate PP of CFEV O-PP relating to XEV occur above or below A.
As all FIOP CFEV origin prices occur above and/or below B, all PPPs of CFEV POPP will correspondingly occur above and/or below A.
PPPs will be designated S1 S2 and S3 and R1 R2 and R3, as they occur in extension O, and depending on applied V and Range C and the resulting time distance and/or price distance given by V, and the
directional pivots of record and relative scale reference occurring subsequent to O (up-wave, down-wave, sideways, flex-point).
S = Support
R = Resistance
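Putting the preceding definitions together: each coordinate forecast echovector copies the focus echovector's slope and length X from an origin O, and its endpoint projects a potential pivot coordinate to A, classified as support or resistance by whether it lands below or above A. A minimal sketch under those assumptions; the names and the above/below classification rule are illustrative.

```python
def project_pivot(origin, slope, X, a_price):
    """origin: (time, price) of a CFEV origin O found near B.

    The CFEV is assumed to share the focus echovector's slope and
    length X, so its endpoint lies X forward in time and slope * X
    away in price; that endpoint is the projected pivot point,
    tagged R (resistance) above A's price and S (support) below."""
    t_o, p_o = origin
    pp_time = t_o + X
    pp_price = p_o + slope * X
    kind = "R" if pp_price > a_price else "S"
    return pp_time, pp_price, kind

# Focus echovector with slope 0.9 over X = 5; A printed at 104.5.
pp = project_pivot(origin=(-1, 101.0), slope=0.9, X=5, a_price=104.5)
# An origin 1 unit before B projects a resistance-side pivot near A.
```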
Definition: The Coordinate Forecast EchoVector: (CFEV)
For any EV of X base length (time frame, or time length, or cycle
period), such as lengths intra-day, daily, 3-day, weekly, bi-weekly,
monthly, bi-monthly, quarterly, bi-quarterly, annual, bi-annual,
presidential cycle, or regime change cycle, (and with lengths measured
in incremental -often OHLC- units such as minutes, hours, days, weeks,
or, shorter or longer unit) constructed for security I from starting EV
reference point A with print price point p at time point t (Apt) and
EBTP or EBD A-Length X at print price p (t-X), Coordinate Forecast
EchoVectors (CFEVs) can be constructed having the same slope and lengths
(phase adjusted) as XEV of A of I.
Coordinate Forecast EchoVectors and their starting points shall have the
following relationships to A of I and B of I, or EchoVector XEV-BA of I.
Base Construction Versions For the CFEV:
A. Time-Frame Distance Versions
B. Price Distance Versions
C. Model Pre-set Value Distance Versions
Versions: 1,2,3,4,5,6
A. CFEV Starting Point Construction: Time-Frame Distance Based Versions
1. Absolute Time Distance Based (Within Select Range) From B, Select
2. Percent X Time Distance Based (Within Select Range) From B, Select
B. CFEV Starting Point Construction: Price Distance Based Versions
3. Absolute Price Distance Based (Within Select Focus Interest Range,
Extension) From B, Select
4. Percent Price Distance Based (Within Select Range Focus Interest)
From B, Select
C. Pre-set Value Distance Based CFEV Starting Point Construction
5. Pre-set Absolute Time Distance Based (Range Specified)
6. Pre-set Percent X Time Distance Based (Range Specified)
*The EchoVectorVEST MDPP Precision Pivots Forecast Model And Alert
Paradigm primarily utilizes specifically refined values of 5. and 6. above
in its CFEV Constellation Illustrations and Highlights for specified
focus interest opportunity securities and their projected forward focus
interest opportunity time-periods and price points. Example CFEV
constructions that follow will utilize these constructions unless noted otherwise.
Defining, Calculating, Constructing and Generating the Coordinate
Forecast EchoVectors
EBD is the Echo-BackDate
EBD-T is the Echo-BackDateTime
EBTP is the Echo-Back-Time-Point with print price found to be p-N.
EBD-TI is the Echo-BackDateTimeIncrement, or EBDT-ITMU incremental time
measurement unit, or imu or mu, or d for designated unit (EBDT-MU, or
EBDT-U, or EBDT-d)
(EX.: minute, 5min, 15min, hourly, 2-hour, 4-hour, daily, weekly,
monthly, etc.)
EBTP is also often referred to as the XEV EBPP, or the Echo-Back-Price-Point
of XEV
L is length of the Echo-Period (EP); or X, being the time-length and
cycle length of the EchoVector EV. XEV or LEV
Primary Model Focus Interest EP or Echo-Period Lengths, or EchoVector
Time Frame or Cycle Time Period Lengths, X
AMEV Asian Market Echo (t, open, lunchtime, close, etc.)
EMEV European Market Echo
24HEV 1day, 24hour Echo
48HEV 2day Echo
72HEV 3day Echo
WEV weekly Echo
2WEV biweekly Echo
MEV monthly Echo
2MEV bimonthly Echo
QEV quarterly Echo
2QEV biquarterly Echo
3QEV 3quarter Echo
AEV annual Echo
15MEV OR 5QEV, ETC 15month Echo
6QEV ETC 18month Echo
2AEC OR CCEV 2year Congressional Cycle Echo
4AEV OR PCEV 4year Presidential Cycle and mid-Regime Cycle Echo
5AEV OR FRBEV 5year FRB Cycle Echo
6AEV OR 3CCEV 6year (Tri-Congressional Cycle) Echo
RCCEV OR 2PCEV OR 4CCEV ETC 8year Regime Change Cycle Echo
10year Decennial Cycle Echo
16year Bi-Regime Change Cycle Echo (Nascent Echo)
32year Quad-Regime Change Cycle Echo (Maturity Echo)
The Starting Date, time and price point of the EV: The Stp, or Apt, or
A, or SDTPP-A, or EV origin point O.
A is The Sdtp, the summation and reference origin date, time and price
point (Starting time and price point of reference for XEV BA).
The Sdtp is an actual 'market price trade point print,' being an exchange
actual buy/sell trade occurrence 'print' posted as taking place at the
specific referenced time and price.
XEV BA is the "Echo" vector from ebtp (EchoBackTimePoint B) to sdtp
(Summation-Date-Time PricePoint).
A is considered as having 'echo characteristic relationship' in some
definable or designate-able form, manner, or degree a 'parameter
relationship contiguity,' to B, the EBTP, on some dimension.
The notion of 'similar' or 'contiguous' on some measure, parameter,
index, dimension, aspect or reference is implied and/or is axiomatic.
Likeness in quality, time, space, context, origin, or relation... to
some degree or measure is axiomatic to the methodology.
FIO EV: focus interest opportunity echovector
The FIOP: the focus or forecast interest opportunity point (or period, or pivot). It is derived from the S-dtp, that is, from A.
From the constellation of 'scope relative' prior and following pivot point highs and lows occurring within Range C (elect Version 1 to 6), the applied CFEV Constellation Range, the CFEV or PPV EBTP
origins are derived.
In some citations Delta EV BA is "delta'd" to EchoVector EV AB and is referenced to have A as the echovector time and price point 'antecedent,' (the echo-back-date and/or echo-back-time-point and/or
echo-back-date-time-point) and B as XEV's Construction beginning or starting price and time point reference, or starting dtpS, B; that is, as the EchoVector's starting price and starting time point
of reference, being the EV's origin.
Also definitionally derived from XEV AB are the CFEVs, the Coordinate Forecast EchoVectors, also having EchoVector AB length X and EchoVector AB slope m (for momentum, or slope w for wave), and
having Echo-Back-Date-Time-And-Price-Point A as the "CFEV Constellation Base Point," or "Origin," from which the Key Immediate Coordinate Forecast EchoVectors, the Focus Interest Opportunity PPVs
(Potential Pivot Vectors), or SRV Constellation (Support/Resistance, or Flex-point Vector Constellation Projections) are, by specified relation (Versions 1 to 6), derived.
In EchoVector AB, A is the Echo-Back-Date-Time-And-Price-Point, and B is the starting reference, and the echovector 'originating' time and price point, or its 'origin.'
From the constellation of 'scope relative' prior and following pivot point highs and lows occurring within Range C (elect Version 1 to 6), the applied CFEV Constellation Range, the CFEV or PPV EBTP
origins are derived. These origins are referred to as O... OSP1, OSP2, OSP3, ORP1, ORP2, ORP3, etc.
P FOR PIVOT OR FLEX POINT
These CFEV origins O are identified as the scope-relative wave point pivot price point highs and lows occurring before and after THE ECHOBACKDATETIMEANDPRICEPOINT B in XEV BA occurring within Range C
(Range C is, again, specified by specific time length distance backward and forward from B by percentage of X from B, or by (within)
absolute time-length distance from B, or by (within) percent of B price distance from B, or by (within) absolute price length distance (N) from B; or by percent N from B, or by stochastically-related
'Modeling process determined' values (and then preset in distance --absolute or percent-- from B); B being the EBDTPP of A.)
These Range C subsumed wave point pivot highs and lows constitute the CFEV Focus Interest Origin Points (FIOPs O), from which the relative CFEV are derived, and from which the Focus Interest
Opportunity SRV PP Constellation Set is derived (FIOP PPPs).
These CFEV origin points, O's, relate to the EBTP of A, being B, and to (occurring within) the Base Construction Version and Range C of the CFEV, and the Range and Version's mathematical definition.
From these CFEV origin points (correlated to B by V), and from EV-AB attributes, being of the same length and slope, the CFEVs are derived, as are the CFEV PPPs, the FIOP PPs.
The CFEV PPPs correlate to A, B, and O, and are referred to as Ps or PP's
--pivot points or potential pivots (or PPPs --projected pivot points or
potential pivot points), or flex points.
They are 'forecast-ed' and 'focus interest' coordinate to A, B and O
support, resistance, or flex points, derived from the CFEV's found
originating within Range C from the 'scope relevant' pivots occurring
there. These EchoVector and their coordinate forecast EchoVector based
Pivot Points constitute trade position opportunities, referred to as
Focus Interest Opportunity Points (or Pivots), FIOPs, or simply as OPs
(opportunities or options) or POPs (potential opportunities or options).
Projecting The SRV PPs Of A of I from A to B to EVAB to O to CFEV-OPP,
(Given V, Range C, and EVTLX, XEV)
EVTL (EchoVector Time Length X)
Forward PP (Derived from O occurring in Range C After B).
Back PP (Derived from O occurring in Range C Before B)
Up PP (Derived from O occurring in Range C Above B)
Down PP (Derived from O occurring in Range C Below B)
The FIOP CFEV price origins O will occur above or below the EBTP B.
As any CFEV origin, O, occurring within Range C, occurs above or below
EBTP B, so therefore will the correlate PP of CFEV OPP occur above or
below A.
As all FIOP CFEV origin prices occur above and/or below B, all PPPs of
CFEV POPP will correspondingly occur above and/or below A.
PPPs will be designated S1 S2 and S3 and R1 R2 and R3, as they occur in
proximity to O, B, and A, depending on applied V and Range C specifications, and the
resulting time distance and/or price distance given by V (Version), and the
directional pivots of O (up-wave, down-wave, sideways, flex-point).
COPYRIGHT 2011-2022. ECHOVECTORVEST MDPP PRECISION PIVOTS. All Rights Reserved.
Also See "EchoVector Analysis: Topics In EchoVector Analysis" COPYRIGHT 2013-2019 ECHOVECTORVEST
Steve_Massey - LessWrong
What TGGP said. Also, would an AI really be better at determining the falsifiability of a theory? It seems to me that, given a particular theory, an algorithm for determining the set of testable
predictions thereof isn't going to be easy to optimize. How does the AI prove that one algorithm is better than another? Test it against a set of random theories?
Example 5 : Find the roots of the quadratic equation 3x2−26x+2... | Filo
Question asked by Filo student
Example 5 : Find the roots of the quadratic equation . Solution
Question Text: Example 5 : Find the roots of the quadratic equation . Solution
Updated On: Nov 4, 2022
Topic: All topics
Subject: Mathematics
Class: Class 11
Answer Type: Video solution: 2
Upvotes: 248
Avg. Video Duration: 4 min
Category:Stata - Rosetta Code
This page is a stub. It needs more information! You can help Rosetta Code by filling it in!
A programming language may be used to instruct a computer to perform a task.
Listed below are all of the tasks on Rosetta Code which have been solved using Stata.
Stata is a statistical software package created in 1985 and developed by StataCorp, located in College Station, Texas. Stata includes a command-based macro language (informally called "Ado") and a
matrix language called Mata. A large part of Stata is itself written in Ado and Mata, with source code available.
Stata also has APIs to call C and Java plugins and, since Stata version 16, can embed Python code within Ado programs.
This category has only the following subcategory.
Pages in category "Stata"
The following 200 pages are in this category, out of 219 total.
Definition of MEDIANS
: a value in an ordered set of values below and above which there is an equal number of values or which is the arithmetic mean of the two middle values if there is no one middle number
: a value of a random variable for which all greater values make the cumulative distribution function greater than one half and all lesser values make it less than one half
: a line from a vertex (see vertex sense 2) of a triangle to the midpoint of the opposite side
: a line joining the midpoints of the nonparallel sides of a trapezoid (see trapezoid sense 1a)
: lying in the plane dividing a bilateral animal into right and left halves
: relating to or constituting a statistical median
: produced without occlusion along the lengthwise middle line of the tongue
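The first sense above, applied to a concrete list of numbers, amounts to sorting the values and taking the middle one, or the arithmetic mean of the two middle ones when the count is even. Python's standard library implements exactly this:

```python
import statistics

values = [7, 1, 5, 3, 9]          # odd count: the single middle value
print(statistics.median(values))   # -> 5

even = [1, 3, 5, 7]                # even count: mean of the two middle values
print(statistics.median(even))     # -> 4.0
```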
Examples of median in a Sentence
Adjective What is the median price of homes in this area? the median price of a home in the area
Recent Examples on the Web
In 2023, the median was $150 per month and the average was $234. —Cheryl Winokur Munk, CNBC, 4 Nov. 2024 Repeat buyers were able to enter the housing market with much larger down payments (median
23%) than first-time homebuyers (median 9%). —Samantha Delouya, CNN, 4 Nov. 2024
Real median household income under President Joe Biden's administration is down 0.7% from its high under Trump. —Gary Langer, ABC News, 5 Nov. 2024 Nevertheless, the income is still more than double
the median household income which currently stands at $82,207, according to MotioResearch. —Anne Marie Lee, CBS News, 5 Nov. 2024
Multiplication Charts 1 To 72 2024 - Multiplication Chart Printable
Multiplication Charts 1 To 72
Multiplication Charts 1 To 72 – If you are looking for a fun way to teach your child the multiplication facts, you can get a blank Multiplication Chart and let your youngster fill in the details on their own. You will find blank multiplication charts for various product ranges, such as 1-9, 10-12, and 15 products. If you want to make your chart more exciting, you can add a game to it. Here are some ways to get your little one started: Multiplication Charts 1 To 72.
Multiplication Charts
You may use multiplication charts in your child's student binder to help them memorize math facts. Although some children can memorize their arithmetic facts naturally, it takes many others much longer to do so. Multiplication charts are an ideal way to reinforce their learning and boost their confidence. As well as being educational, these charts can be laminated for extra durability. The following are some useful ways to use multiplication charts. You can also have a look at these websites for helpful multiplication fact resources.
This lesson covers the fundamentals of the multiplication table. In addition to learning the rules for multiplying, pupils will grasp the concepts of factors and patterning. By understanding how the factors work, students will be able to recall basic facts like five times four. They will also be able to use the properties of one and zero to solve more complicated products. By the end of the lesson, students should be able to recognize patterns in multiplication chart 1.
Besides the common multiplication chart, pupils might need to build a chart with more factors or fewer factors. To produce a multiplication chart with more factors, students must create 12 tables, each with 12 rows and three columns. All 12 tables should fit on one sheet of paper. Lines should be drawn with a ruler. Graph paper is best for this project. If graph paper is not an option, students can use spreadsheet programs to make their own tables.
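If neither graph paper nor a spreadsheet is handy, a 12-by-12 table like the ones described above is also a few lines of code. This is a generic sketch, not tied to any particular worksheet layout:

```python
def times_table(size=12):
    """Rows of a size-by-size multiplication chart."""
    return [[row * col for col in range(1, size + 1)]
            for row in range(1, size + 1)]

# Print the chart with aligned 4-character columns.
for row in times_table():
    print("".join(f"{value:4}" for value in row))
```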
Game suggestions
Whether you are teaching a beginner multiplication lesson or working toward mastery of the multiplication table, you can develop fun and engaging game ideas for Multiplication Chart 1. A couple of fun suggestions are listed below. One game requires the students to sit in pairs and work on the same problem. Then they all hold up their cards and discuss the solution for a moment. They win if they get it right!
When you're teaching youngsters about multiplication, one of the best tools you can give them is a printable multiplication chart. These printable sheets come in a number of designs and can be printed on one page or many. Children can learn their multiplication facts by copying them from the chart and memorizing them. A multiplication chart can help for many reasons, from helping children learn their arithmetic facts to teaching them how to use a calculator.
Gallery of Multiplication Charts 1 To 72
Multiplication Chart Math School Poster Etsy In 2022 Multiplication
Many Printable Multiplication Charts PDF Free Memozor
Multiplication Tables Wolfram Programming Lab Gallery
Delaware's Best High School Essayists Contest
Delaware Today judged nearly 200 papers submitted by students at A.I. duPont High School and the Charter School of Wilmington. Here’s the winning essay by Zack Li, who will be a senior at the Charter
School of Wilmington in the fall.
Math and Beauty
“When will I ever use this?” It is a phrase uttered almost religiously in the halls of lower learning, as students struggle with reading, writing, and more commonly, mathematics. At the first stages,
this complaint is quickly silenced. Anyone can see that there is value in having a student learn fractions, or basic geometry. Good teachers would be quick with a rebuttal, describing some
hypothetical situation that the student might have to face. As the student progresses through school, however, this question becomes more common, more pressing, more emotional, but sadly, less
adequately answered. Why does a student need to know how to factor an equation? With calculators and computers so readily available, does math even need to be learned? Will they ever use this
knowledge that they are being forced to absorb? The answer is that there is much more to mathematics than mere calculation (Wigner). Mathematics bestows upon humanity something that it has searched
for since the mind first became conscious, something that wise men spend lifetimes searching for—understanding. Math is beautiful, both from the understanding that it bestows, and its inherent
From an objective standpoint, mathematics fulfills all of the qualities that mankind defines as “beautiful”. For instance, many consider nature to be a prime source of beauty—gorgeous, colorful
flowers, intricate and complex creatures, and a wonderful harmony with itself. Yet at the heart of nature is mathematics. Every event, every nook and cranny of the universe can be explained with
mankind’s system of logic and symbols. The gentle curves of a nautilus shell, explained by the Golden Ratio (Freitag); the fractal and recursive growth of trees; the turbulence of the tides, modeled
by the Navier-Stokes equations (Clay Mathematics Institute); mathematics gives us an understanding of all of these things, and rather than detracting from our appreciation of their beauty, in fact
adds to it. As physicist Richard Feynman said, “If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in.” (Feynman) Bertrand Russell,
Nobel winning writer, mathematician, philosopher, logician, and activist, claimed that “Mathematics, rightly viewed, possesses not only truth, but supreme beauty — a beauty cold and austere, like
that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the
greatest art can show.” (Russell, “Mysticism”) The subject has every quality that mankind associates with beauty. It possesses an amazing simplicity at times, yet can become extraordinarily complex
at others. For example, in a mathematical proof, where one takes basic assumptions called axioms, and then logically builds from these to show that something else is true, something can be
extraordinarily difficult to prove, but incredibly simple to state (Benjamin). An example of this is the Four Color Theorem, which asks if one can color in a map full of countries with only four
colors, if neighboring countries cannot have the same color. It took almost a century, and 1,936 cases tested by computer, to show that four colors is indeed the maximum one needs (Rehmeyer). An
interesting view on this is Paul Erdos’; when he saw a particularly short and direct, or “elegant”, proof, he would claim it was in “The Book”, referring to a book that God keeps in heaven of all the
best mathematical proofs (Babai). If one compares mathematics to art, something which certainly has beauty, one can see math’s aesthetic qualities as well. If “art matters because looking at a
beautiful painting or sculpture gives us an experience that nothing else can (Boone)”, then doing mathematics can result in the same level of experience, if not greater. The creators of both
areas—artists and mathematicians—are often viewed by society as isolated and eccentric, and both are deeply intellectual subjects. There is a timeless quality to mathematics, just as there is to art:
the work of Euclid is as true now as it was thousands of years ago, just as paintings and sculptures hold the same emotion that the masons and painters put into them long
ago. Both mathematics and art are intrinsic parts of human existence, with examples of each from ancient times, and both spring from intangible concepts born in the mind (The
Neolithic Era). As one Colorado State professor argues, “Both disciplines are creative endeavors with analytical components”, and their deeper purpose is to reveal truth about our world
and ourselves (Farsi).
However, there are many differences too, and mathematics, rather than being an art, actually transcends artistry. Art enables a society to look inwards and reflect, while mathematics allows society
to look outwards to the universe around us. If art is about teaching how to feel, mathematics is about teaching how to see. And what we often see is beauty.
Mathematics also has a wonderful harmony, and unrivaled depth and order. Different sub-fields of mathematics often find that there is much in common, and the discovery of powerful links between
seemingly completely disparate ideas happens often. These characteristics led the 19th-century scientist and mathematician Carl Friedrich Gauss to call mathematics the “queen of sciences” (“Queen of
Mathematics”). A thousand experiments will never best a single proof, for the thousand and first experiment may show different results, but a proof, if done correctly, is true for all time. Science
can never equal mathematics regarding logical solidity—the entire field seems to be built on the firm bedrock of logic and reasoning.
Or is it? In the early 20th century, the German mathematician David Hilbert proposed a series of 23 problems that he believed were the most pressing questions in mathematics at the time (Joyce).
Problem two asked for a formal proof that arithmetic was itself consistent, and without internal contradictions. Hilbert believed firmly in formalism, the mathematical philosophy that treated all of
math like a game, basically scribbles on paper. Different marks on paper meant different things, and mathematics was just a product of the human mind, with different operations on the marks on paper
as simply rules of the game. This view depends on the rules of the game being consistent; it makes no sense for humanity to invent a game that doesn’t follow its own rules, so Hilbert expected
mathematics to be proved consistent. However, soon after, Kurt Gödel proved his two incompleteness theorems—essentially that mathematics can never be proven to be consistent using mathematics
(Kennedy). The rules of the game were changed—this meant that there are statements in mathematics that are simply undecidable. It adds even more to the allure of math—there is a wonderful sense of
mystery that there are certain things about the universe that humanity will never be able to know.
However, what we do know is incredibly useful. Like a powerful sports car, mathematics has both form and function. A fancy car, painted flashy colors and decorated with fins and spoilers, would feel
cheap or deceptive if it did not perform well. Mathematics does not have this problem. Follow a commuter—let us call him Dave—through a typical morning, and this becomes immediately apparent. The time is
6 AM, and the alarm clock rings. Its ringing and timing are a product of electrical engineering, which makes heavy use of mathematics and matrix operations. The intersections and traffic lights that
Dave passes through have all been tuned by a civil engineer using mathematical models to predict traffic flow. The road that Dave is driving on has been calculated to endure thermal expansion and
contraction. The GPS unit that Dave uses to get to work (he’s very bad at navigating) relies on special relativity and distance calculations to establish his time and location. When Dave finally
arrives at work, he realizes he forgot to eat breakfast, and heats up a hot-pocket in the microwave. The hot-pocket’s assembly line process has been streamlined using mathematics to reduce cost, and
the delivery of it to the grocery store where Dave bought it was planned using graph theory to reduce fuel use. The microwave he heats his food with was developed with ideas from quantum
electrodynamics, which relies on group theory. According to Ian Stewart, if one were to put a red sticker on every man-made item that used mathematics, it would cover the entire globe (“Letters”).
Modern lifestyles are entirely supported by applied mathematics, and it is awe-inspiring to consider that human civilization would be nowhere near where it is without it. It most certainly has both style and substance.
And what style it has! Like a classical symphony, one area of study flows smoothly into the next, and ideas from different fields can be used to great effect in others. There is a powerful unity to
mathematics (Dorrie), and one of the common tools used to solve difficult problems is to transplant the problem to a different field. Mathematics is a system based on logic, all stemming from basic
axioms, and there is a wonderful sense of near-perfection to the whole endeavor. The visceral reaction when one inserts a puzzle piece, and it fits snugly with its neighbors—that is mathematics. It
is a world of amounts and figures, of magnitudes and directions, of the simple, and the incomprehensible—and yet all of it is built on reason.
However, beauty is still subjective at its core. A common thing to say is that “beauty is in the eye of the beholder”, and in the case of mathematics, the eye is the mind. Just as a sharper eye can
see more details, a sharper mind can discern more intricacies of mathematics. However, looking closer at many things normally considered beautiful will reveal ugliness, such as in Gulliver’s Travels
when he visits an island full of giants (Swift). In contrast, looking closer at mathematics reveals even greater beauty, as more subtleties are shown, and even more questions are presented. As the
mind squints to get an even closer view, the blurry outlines of other parts of mathematics hover in the distance, a reminder of math’s inherent self-interconnectedness.
Mathematics has great elegance, both from an objective and a subjective standpoint. Humanity can describe nature, build monuments, explain physics, and develop an appreciation of the beauty of the
world around us with math, and in doing so, one can see the beauty that mathematics itself has. It is a system with beauty, rigidity and mystery, and as much as mankind currently knows, there exists
so much more to discover.
Discounted Cash Flow [Concepts Series]
Want to get an explainer like this in your inbox every Wednesday morning? Sign up to Capital Gains, my weekly newsletter breaking down topics in finance, economics, and corporate strategy.
Economics nerds always complain that you should never compare stocks to flows. It’s meaningless to say that one company’s cash on hand is bigger than a country’s GDP, for example, because GDP is
quoted in dollars per year and cash on hand is a cumulative quantity. It’s like saying a plane is faster than the distance from New York to Boston. Does not compute.
There’s a giant exception, though: stocks, bonds, loans, and other financial products explicitly exist to convert flows to ‘stocks’ in the economic sense. The mechanics of this are worth
understanding, because they underpin the value of so many financial assets.
The basic idea of discounted cash flow is to convert estimated future cash flows into present values, by “discounting” them at some interest rate. Suppose you can earn 1% annual interest on a
certificate of deposit. The present value of $100 in a year is equal to the amount you’d have to put in a CD today in order to have $100 a year from now, or $99.01. The present value of $100/year for
the next three years, at a 1% discount rate, is $100/1.01 + $100/(1.01 * 1.01) + $100/(1.01 * 1.01 * 1.01), or about $294.10.
This can get pretty tedious as you extend it (here I’m using a higher discount rate):
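The three-term sum above generalizes to any stream of cash flows. A minimal sketch in Python (the function name and interface are my own, not from the post):

```python
def present_value(cash_flows, rate):
    """Discount annual cash flows (received at the end of years 1, 2, ...) to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# $100/year for three years at a 1% discount rate, as in the text:
print(round(present_value([100, 100, 100], 0.01), 2))  # → 294.1
```

Raising the discount rate shrinks the present value, which is why distant cash flows matter less at higher rates.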
Iranian Navy QPSK Modem
From Signal Identification Wiki
Frequencies 8.475 MHz, 8.046 MHz, 10.724 MHz, 17.3822 MHz
Frequency Range 8.046 MHz - 17.3822 MHz
Mode USB
Modulation PSK
ACF —
Emission —
Bandwidth 700 Hz, 300 Hz, 620 Hz, 1.25 kHz, 2.52 kHz, 1.42 kHz, 2.85 kHz
Location Iran
Short Description Iranian Navy QPSK Modem is a QPSK mode used by the Iranian Navy. It has gone through several versions. The current version (2015) is V2 and supports speeds of 468 Bd, 936 Bd, and
1872 Bd.
I/Q Raw Recording —
Audio Sample
Iranian Navy QPSK Modem is a QPSK (Quadrature Phase-Shift Keying, 2 bits per symbol) mode used by the Iranian Navy. It has gone through several versions. The current version (2015) is V2 and supports speeds of 468 Bd, 936 Bd, and 1872 Bd (baud: symbols per second).
Original QPSK 207 Bd Modem
The original Iranian Navy QPSK Modem only supported one baudrate, 207 Bd. The data is transmitted in packets, and each transmission starts with an unmodulated pre-carrier of 2000 Hz. 100 ms before data transmission, the carrier has a phase jump of 180 degrees. At the end of each transmission two tones are sent at 1000 Hz, the first 100 ms long and the second 110 ms long right after the first. The total length of this two-tone end-of-message transmission is ~250 ms.
V1 Adaptive Modem
V1 Adaptive Modem is a further development of the original QPSK modem. V1 supports four different speeds: 207 Bd, 414 Bd, 828 Bd, and 1656 Bd, all QPSK modulated. Like the original QPSK modem, each packet is sent with a pre-carrier burst.
• 207 Bd (414 bps), 300 Hz bandwidth
• 414 Bd (828 bps), ~620 Hz bandwidth (approx.)
• 828 Bd (1656 bps), ~1250 Hz bandwidth (approx.)
• 1656 Bd (3312 bps), ~2520 Hz bandwidth (approx.)
V2 Adaptive Modem
V2 is an upgrade of the V1 Adaptive Modem, replacing V1 and using different speeds of 468 Bd, 936 Bd, and 1872 Bd. All use QPSK modulation and, like the original QPSK modem, each packet is sent with a pre-carrier burst.
• 468 Bd (936 bps), 700 Hz bandwidth: 1500 ms long packets are sent with a 430 ms long pre-carrier burst.
Additional Speeds:
• 936 Bd (1872 bps), ~1420 Hz bandwidth (approx.)
• 1872 Bd (3744 bps), ~2850 Hz bandwidth (approx.)
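Across both V1 and V2, the quoted bandwidths sit consistently near 1.5× the symbol rate — a ratio inferred here from the tables above, not stated by the wiki, but consistent with a pulse-shaped QPSK signal with modest excess bandwidth. A quick check:

```python
# (symbol rate in Bd, approximate bandwidth in Hz), from the V1 and V2 tables above
modes = [(207, 300), (414, 620), (828, 1250), (1656, 2520),   # V1
         (468, 700), (936, 1420), (1872, 2850)]               # V2

for baud, bw in modes:
    # Every mode's bandwidth/baud ratio falls between roughly 1.45 and 1.52
    print(f"{baud:4d} Bd -> {bw:4d} Hz  (ratio {bw / baud:.2f})")
```

Note also that the bit rate is exactly twice the symbol rate in every entry, as expected for QPSK's 2 bits per symbol.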
Download the raw recording WAV files here
• Original QPSK: 207 Bd
• V2 QPSK: 468 Bd
Additional Samples
Captured on WebSDR Univ. of Twente on August 14, 2016
Additional Links
Additional Images
Waterfall image of the PSK 468 Bd signal from WebSDR
The bursts of this PSK 468 Bd signal seen
Basic BMI Calculator : Functions, Conditional Statement. - Prateek Katyal
In today's lecture I covered the basics of functions, conditional statements, comments and printing output.
Here is the code I wrote to calculate the BMI of a person.
import UIKit
func calculateBmi(weight: Double, height: Double) -> String {
    // BMI = weight (kg) divided by height (m) squared
    let finalBmi = weight / (height * height)
    // Restrict the displayed value to 2 decimal places
    let shortBmi = String(format: "%.2f", finalBmi)
    if finalBmi > 25 {
        return "Your BMI is \(shortBmi), you are overweight."
    } else if finalBmi > 18.5 && finalBmi <= 25 {
        return "Your BMI is \(shortBmi), you have a normal weight."
    } else {
        return "Your BMI is \(shortBmi), you are underweight."
    }
}

print(calculateBmi(weight: 85, height: 1.8))
The formula used for calculating BMI is: weight (kg) / (height (m) × height (m))
What the code basically does:
• Creates a function which takes weight and height as input (used Double instead of Int to allow decimal values)
• finalBmi holds the result of the BMI calculation
• shortBmi basically restricts the output to 2 decimal places
• If BMI is greater than 25 then return a certain statement to the user, if not then return another statement.
• Finally used print to log the output to the console.
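As a quick sanity check on the sample call (done here in Python rather than Swift, using just the example's inputs):

```python
# BMI for the example inputs: 85 kg at 1.8 m
bmi = 85 / (1.8 * 1.8)
print(round(bmi, 2))  # 26.23, so the "> 25" (overweight) branch is returned
```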
For the differential equation dy/dx = (y − 3)(y/2 + 2), find the equilibrium solutions. The positive equilibrium solution is [Select: ±3, ±4], which is [Select: stable/unstable]. The negative equilibrium solution is [Select: ±3, ±4], which is [Select: stable/unstable]. (Hint: to find whether an equilibrium solution is stable or unstable, you may want to draw the slope field using online graphing tools.)
Chapter 2: Second-Order Linear ODEs
Section: Chapter Questions
Please explain the answer and the process of calculating the equilibrium. Thank you!
Transcribed Image Text: For the differential equation dy/dx = (y − 3)(y/2 + 2) there are equilibrium solutions. The positive equilibrium solution is [Select], which is [Select]. The negative equilibrium solution is [Select: ±3, ±4], which is [Select: stable/unstable]. (Hint: to find if the equilibrium solution is stable or unstable, you may want to draw the slope field using online graphing tools.)
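Assuming the right-hand side really is f(y) = (y − 3)(y/2 + 2) — a reconstruction of the garbled transcription — the equilibria are the roots y = 3 and y = −4, and stability follows from the sign of f on either side of each root. A small numerical check:

```python
def f(y):
    # Right-hand side of dy/dx = (y - 3)(y/2 + 2)
    return (y - 3) * (y / 2 + 2)

for eq in (3.0, -4.0):
    assert abs(f(eq)) < 1e-12          # confirm it is an equilibrium
    below, above = f(eq - 0.1), f(eq + 0.1)
    # Stable if solutions move toward the equilibrium from both sides:
    # f > 0 below (pushes y up) and f < 0 above (pushes y down).
    stable = below > 0 and above < 0
    print(f"y = {eq}: {'stable' if stable else 'unstable'}")
# Prints: y = 3.0 is unstable, y = -4.0 is stable.
```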
UNDER 2009 IBC® AND ASCE/SEI 7-05
DAVID A. FANELLA, PH.D., S.E., P.E., F.ASCE
Structural Load Determination
under 2009 IBC and ASCE/SEI 7-05
Second Edition
ISBN: 978-1-58001-924-8
Cover Art Director: Dianna Hallmark
Cover Design: Dianna Hallmark
Project Editor/Typesetting: Jodi Tahsler
Project Head: John Henry
Publications Manager: Mary Lou Luif
COPYRIGHT © 2009
ALL RIGHTS RESERVED. This publication is a copyrighted work owned by the International Code
Council, Inc. Without advance written permission from the copyright owner, no part of this book may be
reproduced, distributed or transmitted in any form or by any means, including, without limitation,
electronic, optical or mechanical means (by way of example, and not limitation, photocopying or recording
by or in an information storage retrieval system). For information on permission to copy material exceeding
fair use, please contact: Publications, 4051 West Flossmoor Road, Country Club Hills, IL 60478. Phone
1-888-ICC-SAFE (422-7233).
The information contained in this document is believed to be accurate; however, it is being provided for
informational purposes only and is intended for use only as a guide. Publication of this document by the
ICC should not be construed as the ICC engaging in or rendering engineering, legal or other professional
services. Use of the information contained in this book should not be considered by the user to be a
substitute for the advice of a registered professional engineer, attorney or other professional. If such advice
is required, it should be sought through the services of a registered professional engineer, licensed attorney
or other professional.
Trademarks: “International Code Council” and the “International Code Council” logo and the
“International Building Code” are trademarks of International Code Council, Inc.
Errata on various ICC publications may be available at www.iccsafe.org/errata.
First Printing: November 2009
The purpose of Structural Load Determination under 2009 IBC and ASCE/SEI 7-05 is to
provide a detailed guide to the proper determination of structural loads in accordance
with the 2009 International Building Code® (IBC®) and Minimum Design Loads for
Buildings and Other Structures (ASCE/SEI 7-05) with Supplement No. 2. The 2009 IBC
references the 2005 edition of the ASCE/SEI 7 standard for many code-prescribed loads,
most notably environmental loads such as flood, snow, wind and seismic load effects. In
general, the IBC contains only the structural design criteria for environmental loads,
while the technical design provisions for these loads are contained in the ASCE/SEI 7
This book is an essential resource for civil and structural engineers, architects, plan check
engineers and students who need an efficient and practical approach to load
determination under the 2009 IBC and ASCE/SEI 7-05 standard. The book is especially
valuable to code users who are familiar with the structural load provisions of the previous
legacy codes such as the Uniform Building Code (UBC). It has been reported that one of
the most significant changes for code users transitioning from the UBC to the IBC is the
way snow loads, wind pressures and earthquake ground motion load effects are
determined under the IBC and ASCE/SEI 7-05 compared to previous legacy codes.
Structural Load Determination under 2009 IBC and ASCE/SEI 7-05 is a practical
resource that will help code users make the transition quickly.
The book illustrates the application of code provisions and methodology for determining
structural loads through the use of numerous flowcharts and practical design examples.
Included are load combinations for allowable stress design, load and resistance factor
(strength) design, seismic load combinations with vertical load effect and special seismic
load combinations; dead loads, live loads and rain loads; snow loads, flood loads, wind
loads and earthquake load effects. For wind load determination, flowcharts and design
examples are presented for the simplified procedure (Method 1), the analytical procedure
(Method 2) and the new alternate all-heights method in the 2009 IBC. Seismic design
criteria, determination of seismic design category, the simplified method, equivalent
lateral force procedure and nonbuilding structures are some of the topics illustrated
through flowcharts and design examples.
This publication is an update to the previous publication, Structural Load Determination
under 2006 IBC and ASCE/SEI 7-05. A new section has been added to Chapter 5 that
covers the alternate all-heights wind design method in 2009 IBC Section 1609.6. This
method is a simplified procedure based on Method 2 of ASCE/SEI 7-05 that applies to
regularly-shaped buildings and structures that meet the five conditions given in IBC
1609.6.1. Net wind pressures pnet are calculated using design pressure coefficients Cnet,
which are given in IBC Table 1609.6.2(2) for main wind-force-resisting systems and
components and cladding. A flowchart on how to determine pnet is provided along with
examples on how to apply the alternate all-heights method.
A new Chapter 7 was added covering the determination of flood loads in accordance with
IBC Section 1612, ASCE/SEI 7-05 and ASCE/SEI 24-05. Section 1612 of the IBC
requires all structures sited in designated flood hazard areas to be designed and
constructed to resist the effects of flood hazards and flood loads. Flood hazards may
include erosion and scour whereas flood loads include flotation, lateral hydrostatic
pressures, hydrodynamic pressures (due to moving water), wave impact and debris
impact. Chapter 7 covers (1) identification of the various types of flood hazard areas and
zones, (2) design and construction requirements and (3) determination of flood loads in
accordance with IBC 1612, Chapter 5 of ASCE/SEI 7-05 and ASCE/SEI 24-05.
Examples that clearly illustrate the provisions are included for residential structures in a
Non-Coastal A Zone, a Coastal A Zone and a V Zone.
Load Determination under 2009 IBC and ASCE/SEI 7-05 is a multipurpose resource for
civil and structural engineers, architects and plan check engineers because it can be used
as a self-learning guide as well as a reference manual.
About the International Code Council
The International Code Council® (ICC®) is a nonprofit membership association dedicated
to protecting the health, safety, and welfare of people by creating better buildings and
safer communities. The mission of ICC is to provide the highest quality codes, standards,
products and services for all concerned with the safety and performance of the built
environment. ICC is the publisher of the family of the International Codes® (I-Codes®), a
single set of comprehensive and coordinated model codes. This unified approach to
building codes enhances safety, efficiency and affordability in the construction of
buildings. The Code Council is also dedicated to innovation, sustainability and energy
efficiency. Code Council subsidiary, ICC Evaluation Service, issues Evaluation Reports
for innovative products and reports of Sustainable Attributes Verification and Evaluation.
Headquarters: 500 New Jersey Avenue, NW, 6th Floor, Washington, DC 20001-2070
District Offices: Birmingham, AL; Chicago, IL; Los Angeles, CA
David A. Fanella, Ph.D., S.E., P.E., F.ASCE, is Associate Principal and Director of New
Structures at Klein and Hoffman Inc., Chicago, Illinois. Dr. Fanella holds a Ph.D. in
structural engineering from the University of Illinois at Chicago and is a licensed
Structural Engineer in the State of Illinois and a licensed Professional Engineer in
numerous states. He was formerly with the Portland Cement Association in Skokie,
Illinois, where he was responsible for the buildings and special structures market. Dr.
Fanella is an active member of a number of American Concrete Institute (ACI)
Committees and is an Associate Member of the ASCE 7 Committee. He is currently
President-Elect of the Structural Engineers Association of Illinois. Dr. Fanella has
authored or coauthored many structural publications, including a series of articles on
time-saving methods for reinforced concrete design.
CHAPTER 1 – INTRODUCTION
    OVERVIEW
    REFERENCES
CHAPTER 2 – LOAD COMBINATIONS
    INTRODUCTION
    LOAD EFFECTS
    LOAD COMBINATIONS USING STRENGTH DESIGN OR LOAD AND RESISTANCE FACTOR DESIGN
    LOAD COMBINATIONS USING ALLOWABLE STRESS DESIGN
    LOAD COMBINATIONS WITH OVERSTRENGTH FACTOR
    LOAD COMBINATIONS FOR EXTRAORDINARY EVENTS
    EXAMPLES
        Example 2.1 – Column in Office Building, Strength Design Load Combinations for Axial Loads
        Example 2.2 – Column in Office Building, Strength Design Load Combinations for Axial Loads and Bending Moments
        Example 2.3 – Beam in University Building, Strength Design Load Combinations for Shear Forces and Bending Moments
        Example 2.4 – Beam in University Building, Basic Allowable Stress Design Load Combinations for Shear Forces and Bending Moments
        Example 2.5 – Beam in University Building, Alternative Basic Allowable Stress Design Load Combinations for Shear Forces and Bending Moments
        Example 2.6 – Collector Beam in Residential Building, Load Combinations using Strength Design and Basic Load Combinations for Strength Design with Overstrength Factor for Axial Forces, Shear Forces, and Bending Moments
        Example 2.7 – Collector Beam in Residential Building, Load Combinations using Allowable Stress Design (Basic Load Combinations) and Basic Combinations for Allowable Stress Design with Overstrength Factor for Axial Forces, Shear Forces, and Bending Moments
        Example 2.8 – Collector Beam in Residential Building, Load Combinations using Allowable Stress Design (Alternative Basic Load Combinations) and Basic Combinations for Allowable Stress Design with Overstrength Factor for Axial Forces, Shear Forces, and Bending Moments
        Example 2.9 – Timber Pile in Residential Building, Basic Allowable Stress Design Load Combinations for Axial Forces
CHAPTER 3 – DEAD, LIVE, AND RAIN LOADS
    DEAD LOADS
    LIVE LOADS
        General
        Reduction in Live Loads
        Distribution of Floor Loads
        Roof Loads
        Crane Loads
        Interior Walls and Partitions
    RAIN LOADS
    EXAMPLES
        Example 3.1 – Live Load Reduction, General Method of IBC 1607.9.1
        Example 3.2 – Live Load Reduction, Alternate Method of IBC 1607.9.2
        Example 3.3 – Live Load Reduction on a Girder
        Example 3.4 – Rain Load, IBC 1611
CHAPTER 4 – SNOW LOADS
    INTRODUCTION
    FLOWCHARTS
    EXAMPLES
        Example 4.1 – Warehouse Building, Roof Slope of 1/2 on 12
        Example 4.2 – Warehouse Building, Roof Slope of 1/4 on 12
        Example 4.3 – Warehouse Building (Roof Slope of 1/2 on 12) and Adjoining Office Building (Roof Slope of 1/2 on 12)
        Example 4.4 – Six-Story Hotel with Parapet Walls
        Example 4.5 – Six-Story Hotel with Rooftop Unit
        Example 4.6 – Agricultural Building
        Example 4.7 – University Facility with Sawtooth Roof
        Example 4.8 – Public Utility Facility with Curved Roof
CHAPTER 5 – WIND LOADS
    INTRODUCTION
    FLOWCHARTS
        Allowed Procedures
Method 1 – Simplified Procedure..............................................................................
Method 2 – Analytical Procedure..............................................................................
Alternate All-heights Method....................................................................................
EXAMPLES ............................................................................................................................
Example 5.1 – Warehouse Building using Method 1, Simplified Procedure ............
Example 5.2 – Warehouse Building using Low-rise Building Provisions of
Method 2, Analytical Method....................................................................................
Example 5.3 – Warehouse Building using Provisions of
Method 2, Analytical Procedure................................................................................
Example 5.4 – Warehouse Building using Alternate All-heights Method.................
Example 5.5 – Residential Building using Method 2, Analytical Procedure.............
Example 5.6 – Six-Story Hotel using Method 2, Analytical Procedure ....................
Example 5.7 – Six-Story Hotel Located on an Escarpment
using Method 2, Analytical Procedure ......................................................................
Example 5.8 – Six-Story Hotel using Alternate All-heights Method ........................ 5-105
Example 5.9 – Fifteen-Story Office Building using Method 2,
Analytical Procedure ................................................................................................. 5-110
Example 5.10 – Agricultural Building using Method 2, Analytical Procedure ......... 5-125
Example 5.11 – Freestanding Masonry Wall using Method 2,
Analytical Procedure ................................................................................................. 5-131
CHAPTER 6 – EARTHQUAKE LOADS ............................................................. 6-1
INTRODUCTION ...................................................................................................................
SEISMIC DESIGN CRITERIA...............................................................................................
Seismic Ground Motion Values ................................................................................
Occupancy Category and Importance Factor ............................................................
Seismic Design Category ..........................................................................................
Design Requirements for SDC A ..............................................................................
SEISMIC DESIGN REQUIREMENTS FOR BUILDING STRUCTURES ...........................
Basic Requirements ...................................................................................................
Seismic Force-Resisting Systems ..............................................................................
Diaphragm Flexibility, Configuration Irregularities, and Redundancy .....................
Seismic Load Effects and Combinations...................................................................
Direction of Loading .................................................................................................
Analysis Procedure Selection ....................................................................................
Modeling Criteria ......................................................................................................
Equivalent Lateral Force Procedure ..........................................................................
Modal Response Spectral Analysis ...........................................................................
Diaphragms, Chords, and Collectors .........................................................................
Structural Walls and Their Anchorage ......................................................................
Drift and Deformation ...............................................................................................
Foundation Design ....................................................................................................
Simplified Alternative Structural Design Criteria for Simple
Bearing Wall or Building Frame Systems .................................................................
SEISMIC DESIGN REQUIREMENTS FOR NONSTRUCTURAL COMPONENTS ..........
General ......................................................................................................................
Seismic Demands on Nonstructural Components .....................................................
Nonstructural Component Anchorage .......................................................................
Architectural Components.........................................................................................
Mechanical and Electrical Components ....................................................................
SEISMIC DESIGN REQUIREMENTS FOR NONBUILDING STRUCTURES ..................
General ......................................................................................................................
Reference Documents................................................................................................
Nonbuilding Structures Supported by Other Structures ............................................
Structural Design Requirements................................................................................
Nonbuilding Structures Similar to Buildings ............................................................
Nonbuilding Structures Not Similar to Buildings......................................................
Tanks and Vessels .....................................................................................................
FLOWCHARTS ......................................................................................................................
Seismic Design Criteria.............................................................................................
Seismic Design Requirements for Building Structures .............................................
Seismic Design Requirements for Nonstructural Components .................................
Seismic Design Requirements for Nonbuilding Structures .......................................
EXAMPLES ............................................................................................................................
Example 6.1 – Residential Building, Seismic Design Category................................
Example 6.2 – Residential Building, Permitted Analytical Procedure ......................
Example 6.3 – Office Building, Seismic Design Category .......................................
Example 6.4 – Office Building, Permitted Analytical Procedure..............................
Example 6.5 – Office Building, Allowable Story Drift .............................................
Example 6.6 – Office Building, P-delta Effects ........................................................
Example 6.7 – Health Care Facility, Diaphragm Design Forces ...............................
Example 6.8 – Health Care Facility, Nonstructural Component ...............................
Example 6.9 – Residential Building, Vertical Combination
of Structural Systems.................................................................................................
Example 6.10 – Warehouse Building, Design of Roof
Diaphragm, Collectors, and Wall Panels................................................................... 6-102
Example 6.11 – Retail Building, Simplified Design Method .................................... 6-112
Example 6.12 – Nonbuilding Structure ..................................................................... 6-121
CHAPTER 7 – FLOOD LOADS ............................................................................ 7-1
INTRODUCTION ...................................................................................................................
FLOOD HAZARD AREAS ....................................................................................................
DESIGN AND CONSTRUCTION .........................................................................................
General ......................................................................................................................
Flood Loads...............................................................................................................
EXAMPLES ............................................................................................................................
Example 7.1 – Residential Building Located in a Non-Coastal A Zone ...................
Example 7.2 – Residential Building Located in a Coastal A Zone............................
Example 7.3 – Residential Building Located in a V Zone ........................................
The writer is deeply grateful to John R. Henry, P.E., Principal Staff Engineer,
International Code Council, Inc., for his thorough review of the second edition of this
publication. His insightful comments and suggestions for improvement have added
significant value to this edition.
Thanks are also due to Adugna Fanuel, S.E., LEED AP, Christina Harber, S.E. and
Majlinda Agojci, all of Klein and Hoffman, Inc., for their contributions. Their help in
modeling and analyzing some of the example buildings and their review of the text and
example problems were invaluable.
CHAPTER 1
The purpose of this publication is to assist in the proper determination of structural loads
in accordance with the 2009 edition of the International Building Code® (IBC®) [1.1] and
the 2005 edition of ASCE/SEI 7 Minimum Design Loads for Buildings and Other
Structures [1.2].1 Chapter 16 of the 2009 IBC, Structural Design, prescribes minimum
structural loading requirements that are to be used in the design of all buildings and
structures. The intent is to subject buildings and structures to loads that are likely to be
encountered during their life span, thereby minimizing hazard to life and improving
performance during and after a design event.
The snow load provisions in Section 1608, the wind load provisions in Section 1609, the
flood load provisions in Section 1612 and the earthquake load provisions in Section 1613
are based on the provisions of Chapter 7, Chapter 6, Chapter 5, and Chapters 11 through
23 (with some exceptions) of ASCE/SEI 7, respectively. These ASCE/SEI 7 chapters are
referenced in the aforementioned sections of the IBC.2
The seismic requirements of the 2009 IBC and ASCE/SEI 7 are based primarily on those
in the 2003 edition of NEHRP Recommended Provisions for Seismic Regulations for New
Buildings and Other Structures, Part 1: Provisions [1.3]. The NEHRP document, which
has been updated every three years since the first edition in 1985, contains state-of-the-art
criteria for the design and construction of buildings anywhere in the U.S. and its
territories that are subject to the effects of earthquake ground motion. Life safety is the
primary goal of the provisions. The requirements are also intended to enhance the
performance of high-occupancy buildings and to improve the capability of essential
facilities to function during and after a design-basis earthquake.
In addition to minimum design load requirements, Chapter 16 contains other important
criteria that have a direct impact on the design of buildings and structures, including
permitted design methodologies and design load combinations. For example, new
Section 1614 contains provisions for structural integrity, which are applicable to high-rise
buildings that are assigned to Occupancy Category III or IV and that are bearing wall
structures or frame structures.3
Numbers in brackets refer to references listed in Section 1.3 of this publication.
ASCE/SEI 7-05 is one of a number of codes and standards that are referenced by the IBC. These
documents, which can be found in Chapter 35 of the 2009 IBC, are considered part of the requirements of
the IBC to the prescribed extent of each reference (see Section 101.4 of the 2009 IBC).
High-rise buildings are defined in Section 202 as a building with an occupied floor located more than 75 ft
above the lowest level to fire department vehicle access. Occupancy categories are defined in IBC
Table 1604.5. Definitions of bearing wall structures and frame structures are given in Section 1614.2.
The content of this publication is geared primarily to practicing structural engineers. The
load requirements of the 2009 IBC and ASCE/SEI 7-05 are presented in a straightforward
manner, with emphasis placed on the proper application of the provisions in everyday
practice.
Code provisions have been organized into comprehensive flowcharts, which provide a road
map that guides the reader through the requirements. Included in the flowcharts are the
applicable section numbers and equation numbers from the 2009 IBC and ASCE/SEI 7-05
that pertain to the specific requirements. A basic description of the flowchart symbols
used in this publication is provided in Table 1.1.
Table 1.1 Summary of Flowchart Symbols
The terminator symbol represents the starting or ending
point of a flowchart.
The process symbol indicates a particular step or action that
is taken within a flowchart.
The decision symbol represents a decision point, which
requires a “yes” or “no” response.
The off-page connector symbol is used to indicate
continuation of the flowchart on another page.
The logical “Or” symbol is used when a process diverges in
two or more branches. Any one of the branches attached to
this symbol can be followed.
The connector symbol indicates the sequence and direction
of a process.
Numerous completely worked-out design examples are included in the chapters that
illustrate the proper application of the code requirements. These examples follow the
steps provided in the referenced flowcharts.
Readers who are interested in the history and design philosophy of the requirements can
find detailed discussions in the commentary of ASCE/SEI 7 [1.2] and in NEHRP
Recommended Provisions for Seismic Regulations for New Buildings and Other
Structures, Part 2: Commentary [1.4].
In addition to practicing structural engineers, engineers studying for licensing exams,
structural plan checkers and others involved in structural engineering, such as advanced
undergraduate students and graduate students, will find the flowcharts and the worked-out
design examples to be very useful.
Throughout this publication, section numbers from the 2009 IBC are referenced as
illustrated by the following: Section 1613 of the 2009 IBC is denoted as IBC 1613.
Similarly, Section 11.4 from the 2005 ASCE/SEI 7 is referenced as ASCE/SEI 11.4.
Chapter 2 outlines the required load combinations that must be considered when
designing a building or its members for a variety of load effects. Load combinations
using strength design or load and resistance factor design and load combinations using
allowable stress design are both covered. Examples are provided that illustrate the
strength design and allowable stress design load combinations for different types of
members subject to different types of load effects.
Dead, live and rain loads are discussed in Chapter 3. The general method and an alternate
method of live load reduction are covered, and flowcharts and examples illustrate both
methods. The rain load provisions of IBC 1611 are also described, and an example
demonstrates the calculation of a design rain load for a roof with scuppers.
Design provisions for snow loads are given in Chapter 4. A series of flowcharts highlight
the requirements, and examples show the determination of flat roof snow loads, sloped
roof snow loads, unbalanced roof snow loads and snow drift loads on a variety of flat and
sloped roofs, including gable roofs, monoslope roofs, sawtooth roofs and curved roofs.
Examples are also given that illustrate design snow loads for parapets and rooftop units.
Chapter 5 presents the design requirements for wind loads. Flowcharts are provided for
the procedures that are allowed to be used when analyzing main wind-force-resisting
systems and components and cladding. Other flowcharts give step-by-step procedures on
how to determine design wind pressures on main wind-force-resisting systems and
components and cladding of enclosed, partially enclosed and open buildings using the
simplified procedure and the analytical procedure. A number of worked-out examples
illustrate the design requirements for a variety of buildings and structures.
Earthquake loads are presented in Chapter 6. Information on how to determine design
ground accelerations, site class and the seismic design category (SDC) of a building or
structure is included, as are the various methods of analysis and their applicability for
regular and irregular buildings and structures. Flowcharts and examples are provided that
cover seismic design criteria, seismic design requirements for building structures, seismic
design requirements for nonstructural components and seismic design requirements for
nonbuilding structures.
Chapter 7 contains the requirements for flood loads. Included is information on flood
hazard areas and flood hazard zones. Equations are provided for the following types of
flood loads: hydrostatic loads, hydrodynamic loads, wave loads (breaking wave loads on
vertical pilings and columns, breaking wave loads on vertical and nonvertical walls, and
breaking wave loads from obliquely incident waves), and impact loads. Examples
illustrate load calculations for a residential building in a Non-Coastal A Zone, a Coastal
A Zone, and a V Zone.
1.1 International Code Council, International Building Code, Washington, DC, 2009.
1.2 Structural Engineering Institute of the American Society of Civil Engineers,
Minimum Design Loads for Buildings and Other Structures, ASCE/SEI 7-05,
Reston, VA, 2006.
1.3 Building Seismic Safety Council, NEHRP Recommended Provisions for Seismic
Regulations for New Buildings and Other Structures, Part 1: Provisions, FEMA
450-1/2003 edition, Washington, DC, 2004.
1.4 Building Seismic Safety Council, NEHRP Recommended Provisions for Seismic
Regulations for New Buildings and Other Structures, Part 2: Commentary, FEMA
450-2/2003 edition, Washington, DC, 2004.
CHAPTER 2
In accordance with IBC 1605.1, structural members of buildings and other structures
must be designed to resist the load combinations of IBC 1605.2, 1605.3.1 or 1605.3.2.
Load combinations that are specified in Chapters 18 through 23 of the IBC, which
contain provisions for soils and foundations, concrete, aluminum, masonry, steel and
wood, must also be considered. The structural elements identified in ASCE/SEI 12.2.5.2,
12.3.3.3 and 12.10.2.1 must be designed for the load combinations with overstrength
factor of ASCE/SEI 12.4.3.2. These load combinations and their applicability are
examined in Section 2.5 of this publication.
IBC 1605.2 contains the load combinations that are to be used when strength design or
load and resistance factor design is utilized. Load combinations using allowable stress
design are given in IBC 1605.3. Both sets of combinations are examined in Sections 2.3
and 2.4 of this publication, respectively. In addition to design for strength, the
combinations of IBC 1605.2 or 1605.3 can be used to check overall structural stability,
including stability against overturning, sliding and buoyancy (IBC 1605.1.1).
According to IBC 1605.1, load combinations must be investigated with one or more of
the variable loads set equal to zero.1 It is possible that the most critical load effects on a
member occur when variable loads are not present.
ASCE/SEI 2.3 and 2.4 contain load combinations using strength design and allowable
stress design, respectively. The load combinations are essentially the same as those in
IBC 1605.2 and 1605.3 with some exceptions. Differences in the IBC and ASCE/SEI 7
load combinations are covered in the following sections.
Prior to examining the various load combinations, a brief introduction on load effects is
given in Section 2.2.
The load effects that are included in the IBC and ASCE/SEI 7 load combinations are
summarized in Table 2.1. More details on these load effects can be found in the IBC and
ASCE/SEI 7, as well as in subsequent chapters of this publication, as noted in the table.
By definition, a “variable load” is a load that is not considered to be a permanent load (see IBC 1602).
Permanent loads are those loads that do not change or that change very slightly over time, such as dead
loads. Live loads, roof live loads, snow loads, rain loads, wind loads and earthquake loads are all
examples of variable loads.
Table 2.1 Summary of Load Effects

Notation   Load Effect
D          Dead load (see IBC 1606 and Chapter 3 of this publication)
Di         Weight of ice (see Chapter 10 of ASCE/SEI 7)
E          Combined effect of horizontal and vertical earthquake-induced forces as
           defined in ASCE/SEI 12.4.2 (see IBC 1613, ASCE/SEI 12.4.2 and Chapter 6
           of this publication)
Em         Maximum seismic load effect of horizontal and vertical forces as set forth
           in ASCE/SEI 12.4.3 (see IBC 1613, ASCE/SEI 12.4.3 and Chapter 6 of this
           publication)
F          Load due to fluids with well-defined pressures and maximum heights
Fa         Flood load (see IBC 1612 and Chapter 7 of this publication)
H          Load due to lateral earth pressures, ground water pressure or pressure of
           bulk materials (see IBC 1610 for soil lateral loads)
L          Live load, except roof live load, including any permitted live load
           reduction (see IBC 1607 and Chapter 3 of this publication)
Lr         Roof live load including any permitted live load reduction (see IBC 1607
           and Chapter 3 of this publication)
R          Rain load (see IBC 1611 and Chapter 3 of this publication)
S          Snow load (see IBC 1608 and Chapter 4 of this publication)
T          Self-straining force arising from contraction or expansion resulting from
           temperature change, shrinkage, moisture change, creep in component
           materials, movement due to differential settlement or combinations thereof
W          Load due to wind pressure (see IBC 1609 and Chapter 5 of this publication)
Wi         Wind-on-ice load (see Chapter 10 of ASCE/SEI 7)
The basic load combinations where strength design or load and resistance factor design is
used are given in IBC 1605.2 and are summarized in Table 2.2. These combinations of
factored loads establish the required strength that needs to be provided in the structural
members of a building or structure.
Factored loads are determined by multiplying nominal loads (i.e., loads specified in
Chapter 16 of the IBC) by a load factor, which is typically greater than or less than 1.0.
Earthquake load effects are an exception to this: a load factor of 1.0 is used to determine
the maximum effect, since an earthquake load is considered a strength-level load.
Table 2.2 Summary of Load Combinations Using Strength Design or Load and
Resistance Factor Design (IBC 1605.2.1)

Equation No.   Load Combination
16-1           1.4(D + F)
16-2           1.2(D + F + T) + 1.6(L + H) + 0.5(Lr or S or R)
16-3           1.2D + 1.6(Lr or S or R) + (f1L or 0.8W)
16-4           1.2D + 1.6W + f1L + 0.5(Lr or S or R)
16-5           1.2D + 1.0E + f1L + f2S
16-6           0.9D + 1.6W + 1.6H
16-7           0.9D + 1.0E + 1.6H

f1 = 1 for floors in places of public assembly, for live loads in excess
     of 100 psf, and for parking garage live load
   = 0.5 for other live loads
f2 = 0.7 for roof configurations (such as sawtooth) that do not shed
     snow off the structure
   = 0.2 for other roof configurations
Load combinations are constructed by adding to the dead load one or more of the variable
loads at its maximum value, which is typically indicated by a load factor of 1.6. Also
included are other variable loads with load factors less than 1.0; these are companion
loads that represent arbitrary point-in-time values for those loads. Certain types of
variable loads, such as wind and earthquake loads, act in more than one direction on a
building or structure, and the appropriate sign of the variable load must be considered in
the load combinations.
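The construction of these combinations can be sketched in a few lines of code. The sketch below is illustrative only and is not part of the IBC or ASCE/SEI 7: the function name and the nominal load values are hypothetical, and treating (Lr or S or R) as the maximum of the three values is a simplification that holds only when all loads produce effects of the same sign.

```python
# Illustrative sketch: evaluates the strength design load combinations of
# IBC 1605.2.1 for a single scalar load effect, e.g. the axial force in a
# column. Function name and inputs are hypothetical.

def strength_combinations(D, F=0, T=0, L=0, H=0, Lr=0, S=0, R=0, W=0, E=0,
                          f1=0.5, f2=0.2):
    """Return the factored load effect for IBC Eqs. 16-1 through 16-7.

    f1 and f2 default to 0.5 and 0.2 per the notes to IBC 1605.2.1.
    Taking the maximum of (Lr, S, R) assumes all effects have the same sign.
    """
    roof = max(Lr, S, R)
    return {
        "16-1": 1.4 * (D + F),
        "16-2": 1.2 * (D + F + T) + 1.6 * (L + H) + 0.5 * roof,
        "16-3": 1.2 * D + 1.6 * roof + max(f1 * L, 0.8 * W),
        "16-4": 1.2 * D + 1.6 * W + f1 * L + 0.5 * roof,
        "16-5": 1.2 * D + 1.0 * E + f1 * L + f2 * S,
        "16-6": 0.9 * D + 1.6 * W + 1.6 * H,  # reduced dead load combinations,
        "16-7": 0.9 * D + 1.0 * E + 1.6 * H,  # used for uplift/overturning
    }

# Hypothetical nominal axial forces on a column, in kips:
combos = strength_combinations(D=100, L=60, Lr=20, S=25, W=40, E=50)
governing = max(combos, key=combos.get)
print(governing, round(combos[governing], 1))   # 16-2 governs here
```

In practice, each combination would also be checked with wind and seismic effects acting in both directions, and with variable loads set equal to zero as required by IBC 1605.1.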
According to the exception in this section, factored load combinations that are specified
in other provisions of the IBC take precedence over those listed in IBC 1605.2.
The load combinations given in IBC 1605.2.1 are the same as those in ASCE/SEI 2.3.2
with the following exceptions:
The variable f1 that is present in IBC Eqs. 16-3, 16-4 and 16-5 is not found in
ASCE/SEI combinations 3, 4 and 5. Instead, the load factor on the live load L in
the ASCE/SEI combinations is equal to 1.0 with the exception that the load factor
on L is permitted to equal 0.5 for all occupancies where the live load is less than
or equal to 100 psf, except for parking garages or areas occupied as places of
public assembly (see exception 1 in ASCE/SEI 2.3.2). This exception makes these
load combinations the same in ASCE/SEI 7 and the IBC.
The variable f2 that is present in IBC Eq. 16-5 is not found in ASCE/SEI
combination 5. Instead, a load factor of 0.2 is applied to S in the ASCE/SEI
combination. The third exception in ASCE/SEI 2.3.2 states that in combinations
2, 4 and 5, S shall be taken as either the flat roof snow load pf or the sloped roof
snow load ps. This essentially means that the balanced snow load defined in
ASCE/SEI 7.3 for flat roofs and 7.4 for sloped roofs can be used in combinations
2, 4 and 5. Drift loads and unbalanced snow loads are covered by combination 3.
More information on snow loads can be found in Chapter 4 of this publication.
According to IBC 1605.2.2, the load combinations of ASCE/SEI 2.3.3 are to be used
where flood loads Fa must be considered in the design.2 In particular, 1.6W in IBC Eqs.
16-4 and 16-6 shall be replaced by 1.6W + 2.0Fa in V Zones or Coastal A Zones.3 In
Non-Coastal A Zones, 1.6W in IBC Eqs. 16-4 and 16-6 shall be replaced by 0.8W + 1.0Fa.
ASCE/SEI 2.3.4 provides load combinations that include atmospheric ice loads, which
are not found in the IBC. The following load combinations must be considered when a
structure is subjected to atmospheric ice and wind-on-ice loads:4
0.5( Lr or S or R) in ASCE/SEI combination 2 (IBC Eq. 16-2) shall be replaced
by 0.2 Di + 0.5S
1.6W + 0.5( Lr or S or R) in ASCE/SEI combination 4 (IBC Eq. 16-4) shall be
replaced by Di + Wi + 0.5S
1.6W in ASCE/SEI combination 6 (IBC Eq. 16-6) shall be replaced by Di + Wi
The basic load combinations where allowable stress design (working stress design) is
used are given in IBC 1605.3. A set of basic load combinations is given in IBC 1605.3.1,
and a set of alternative basic load combinations is given in IBC 1605.3.2. Both sets are
examined below.
The basic load combinations of IBC 1605.3.1 are summarized in Table 2.3. A factor of
0.75 is applied where these combinations include more than one variable load, since the
probability is low that two or more of the variable loads will reach their maximum values
at the same time.
A factor of 0.6 is applied to the dead load D in IBC Eqs. 16-14 and 16-15. This factor
limits the dead load that resists horizontal loads to approximately two-thirds of its actual
value.5 These load combinations apply to the design of all members in a structure and
also provide for overall stability of a structure.
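A minimal numerical sketch of such a stability check, using made-up wall geometry and loads (not taken from this publication), might look like the following:

```python
# Hypothetical overturning check for a freestanding wall using the reduced
# dead load of IBC Eq. 16-14 (0.6D + W + H, with H = 0 here).
# All numbers are illustration values only.

dead_weight = 12.0   # kips, wall self-weight
base_width = 4.0     # ft, footing width
wind_force = 2.0     # kips, resultant wind force on the wall
wind_arm = 6.0       # ft, height of the wind resultant above the base

# The restoring moment uses only 0.6 of the dead load, per IBC Eq. 16-14:
restoring = 0.6 * dead_weight * (base_width / 2.0)   # ft-kips
overturning = wind_force * wind_arm                  # ft-kips

print("stable" if restoring >= overturning else "overturns")
```

Here the full dead load would give a restoring moment of 24 ft-kips, but only 14.4 ft-kips may be counted against the 12 ft-kip overturning moment.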
Flood loads are determined by Chapter 5 of ASCE/SEI 7-05 and are covered in Chapter 7 of this publication.
Definitions of Coastal High Hazard Areas (V Zones) and Coastal A Zones are given in ASCE/SEI 5.2.
Atmospheric and wind-on-ice loads are determined by Chapter 10 of ASCE/SEI 7.
Previous editions of the legacy building codes specified that the overturning moment and sliding due to
wind load could not exceed two-thirds of the dead load stabilizing moment. This provision was not
typically applied to all members in the building.
Table 2.3 Summary of Basic Load Combinations using Allowable Stress Design
(IBC 1605.3.1)

Equation No.   Load Combination
16-8           D + F
16-9           D + H + F + L + T
16-10          D + H + F + (Lr or S or R)
16-11          D + H + F + 0.75(L + T) + 0.75(Lr or S or R)
16-12          D + H + F + (W or 0.7E)
16-13          D + H + F + 0.75(W or 0.7E) + 0.75L + 0.75(Lr or S or R)
16-14          0.6D + W + H
16-15          0.6D + 0.7E + H
As noted in Section 2.3 of this document, the combined effect of horizontal and vertical
earthquake-induced forces E is a strength-level load. A factor of 0.7 (which is
approximately equal to 1/1.4) is applied to E in IBC Eqs. 16-12, 16-13 and 16-15 to
convert the strength-level load to a service-level load.
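The basic allowable stress design combinations can be sketched in the same way. The function below is illustrative only; the equation labels follow IBC 1605.3.1, and taking (Lr or S or R) and (W or 0.7E) as maxima assumes all load effects act in the same sense.

```python
# Illustrative sketch of the basic ASD load combinations of IBC 1605.3.1
# for a single scalar load effect. Function name and inputs are hypothetical.

def asd_combinations(D, H=0, F=0, L=0, T=0, Lr=0, S=0, R=0, W=0, E=0):
    roof = max(Lr, S, R)
    wind_or_eq = max(W, 0.7 * E)   # 0.7 converts E to a service-level load
    return {
        "16-8": D + F,
        "16-9": D + H + F + L + T,
        "16-10": D + H + F + roof,
        "16-11": D + H + F + 0.75 * (L + T) + 0.75 * roof,
        "16-12": D + H + F + wind_or_eq,
        "16-13": D + H + F + 0.75 * wind_or_eq + 0.75 * L + 0.75 * roof,
        "16-14": 0.6 * D + W + H,   # reduced dead load, stability checks
        "16-15": 0.6 * D + 0.7 * E + H,
    }

# Hypothetical nominal axial forces on a column, in kips:
combos = asd_combinations(D=100, L=60, S=25, W=40, E=50)
print(max(combos, key=combos.get), max(combos.values()))
```

For these values the 0.75 factor on multiple variable loads makes Eq. 16-13 govern, even though no single variable load enters at its full value.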
Two exceptions are given in IBC 1605.3.1. The first exception states that crane hook
loads need not be combined with roof live load or with more than three-fourths of the
snow load or one-half of the wind load. It is important to note this exception does not
eliminate the need to combine live loads other than crane live loads with wind and snow
loads in the prescribed manner. In other words, the load combinations in IBC Eqs. 16-11
and 16-13 must be investigated without the crane live load and with the crane live load
using the criteria in the exception. In particular, the following load combinations must be
investigated where crane live loads Lc are present:6
IBC Eq. 16-11:
D + H + F + 0.75L + 0.75(Lr or S or R)
D + H + F + 0.75(L + Lc) + 0.75(0.75S or R)
IBC Eq. 16-13 with E:
D + H + F + 0.75(0.7E) + 0.75L + 0.75(Lr or S or R)
D + H + F + 0.75(0.7E) + 0.75(L + Lc) + 0.75(0.75S or R)
IBC Eq. 16-13 with W:
D + H + F + 0.75W + 0.75L + 0.75(Lr or S or R)
D + H + F + 0.75(0.5W) + 0.75(L + Lc) + 0.75(0.75S or R)
Load effects T are not considered here for simplicity.
The second exception in IBC 1605.3.1 states that flat roof snow loads pf that are less than
or equal to 30 psf and roof live loads that are less than or equal to 30 psf need not be
combined with seismic loads. Also, where pf is greater than 30 psf, 20 percent of the
snow load must be combined with seismic loads.
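The snow-load portion of this second exception reduces to a simple rule, sketched below; the function name is hypothetical:

```python
# Second exception in IBC 1605.3.1: fraction of the flat roof snow load pf
# (psf) that must be combined with seismic loads.

def snow_combined_with_seismic(pf):
    """Return the snow load (psf) to combine with E per the exception."""
    return 0.0 if pf <= 30.0 else 0.2 * pf

print(snow_combined_with_seismic(25.0))   # pf <= 30 psf: need not be combined
print(snow_combined_with_seismic(40.0))   # pf > 30 psf: 20 percent of pf
```

The same exception also exempts roof live loads of 30 psf or less from combination with seismic loads.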
Increases in allowable stresses that are given in the materials chapters of the IBC or in
referenced standards are not permitted when the load combinations of IBC 1605.3.1 are
used (IBC 1605.3.1.1). However, it is permitted to use the duration of load factor when
designing wood structures in accordance with Chapter 23 of the IBC, which references
the 2005 edition of the National Design Specification for Wood Construction (NDS-05).
According to IBC 1605.3.1.2, the load combinations of ASCE/SEI 2.4.2 are to be used
where flood loads Fa must be considered in design.7 In particular, 1.5Fa must be added to
the other loads in IBC Eqs. 16-12, 16-13 and 16-14, and E is set equal to zero in IBC Eqs.
16-12 and 16-13 in V Zones or Coastal A Zones.8 In noncoastal A Zones, 0.75Fa must be
added to the other loads in IBC Eqs. 16-12, 16-13 and 16-14, and E is set equal to zero in
IBC Eqs. 16-12 and 16-13.
The load combinations of IBC 1605.3.1 and ASCE/SEI 2.4.1 are essentially the same, with two differences. First, there is no specific exception for crane loads in ASCE/SEI 2.4.1. Second, the exception in ASCE/SEI 2.4.1 states that in combinations 4 and 6, S shall be taken as either the flat roof snow load pf or the sloped roof snow load ps. The balanced snow load defined in ASCE/SEI 7.3 for flat roofs and 7.4 for sloped roofs can be used in combinations 4 and 6, and drift loads and unbalanced snow loads are covered by combination 3. More information on snow loads can be found in Chapter 4 of this publication.
Flood loads are determined by Chapter 5 of ASCE/SEI 7-05 and are covered in Chapter 7 of this publication. Definitions of Coastal High Hazard Areas (V Zones) and Coastal A Zones are given in ASCE/SEI 5.2.
ASCE/SEI 2.4.3 provides load combinations including atmospheric ice loads, which are
not found in the IBC. The following load combinations must be considered when a
structure is subjected to atmospheric ice and wind-on-ice loads:9
0.7 Di shall be added to combination 2 (IBC Eq. 16-9)
( Lr or S or R ) in combination 3 (IBC Eq. 16-10) shall be replaced by
0.7 Di + 0.7Wi + S
W in combination 7 (IBC Eq. 16-14) shall be replaced by 0.7 Di + 0.7Wi
As noted previously, a second set of load combinations are provided in the IBC for
allowable stress design. The alternative basic load combinations can be found in
IBC 1605.3.2 and are summarized in Table 2.4.
Table 2.4 Summary of Alternative Basic Load Combinations using Allowable Stress
Design (IBC 1605.3.2)
Equation No.	Load Combination
16-16	D + L + ( Lr or S or R )
16-17	D + L + ωW
16-18	D + L + ωW + S/2
16-19	D + L + S + ωW/2
16-20	D + L + S + E/1.4
16-21	0.9 D + E/1.4
These load combinations are based on the allowable stress load combinations that
appeared in the Uniform Building Code for many years.
It should be noted that the alternative allowable stress design load combinations do not
include a load combination comparable to IBC Eq. 16-14 for dead load counteracting
wind load effects. Instead of a specific load combination, IBC 1605.3.2 states that for
load combinations that include counteracting effects of dead and wind loads, only two-thirds of the minimum dead load that is likely to be in place during a design wind event is to be used in the load combination. This is equivalent to a load combination of 0.67D + ωW.
As was discussed previously in this section, the combined effect of horizontal and
vertical earthquake-induced forces E is a strength-level load. This strength-level load is
divided by 1.4 in IBC Eqs. 16-20 and 16-21 to convert it to a service-level load.
ASCE/SEI 12.13.4 permits a reduction of foundation overturning due to earthquake
forces, provided that the criteria of that section are satisfied. Such a reduction is not permitted when the alternative basic load combinations are used to evaluate sliding, overturning and soil bearing at the soil-structure interface. Also, the vertical seismic load effect Ev in ASCE/SEI Eq. 12.4-4 may be taken as zero when proportioning foundations using these load combinations.
(Atmospheric ice and wind-on-ice loads are determined by Chapter 10 of ASCE/SEI 7.)
The coefficient ω in IBC Eqs. 16-17, 16-18 and 16-19 is equal to 1.3 where wind loads
are calculated in accordance with ASCE/SEI Chapter 6.10
Unlike the basic load combinations of IBC 1605.3.1, allowable stresses are permitted to
be increased or load combinations are permitted to be reduced where permitted by the
material chapters of the IBC (Chapters 18 through 23) or by referenced standards when
the alternative basic load combinations of IBC 1605.3.2 are used. This applies to those
load combinations that include wind or earthquake loads.
The two exceptions in IBC 1605.3.2 for crane hook loads and for combinations of snow
loads, roof live loads, and earthquake loads are the same as those in IBC 1605.3.1, which
were discussed previously.
IBC 1605.3.2.1 requires that where F, H or T must be considered in design, each
applicable load is to be added to the load combinations in IBC Eqs. 16-16 through 16-21.
ASCE/SEI 7-05 does not contain provisions for the alternative basic load combinations of
IBC 1605.3.2.
The following load combinations, which are given in ASCE/SEI 12.4.3.2, must be used
where required by ASCE 12.2.5.2, 12.3.3.3 or 12.10.2.1 instead of the corresponding load
combinations in IBC 1605.2 and 1605.3 (IBC 1605.1, item 3):
Basic Combinations for Strength Design with Overstrength Factor11
IBC Eq. 16-5: (1.2 + 0.2 S DS ) D + Ω o QE + L + 0.2 S
IBC Eq. 16-7: (0.9 − 0.2 S DS ) D + Ω o QE + 1.6 H
Basic Combinations for Allowable Stress Design with Overstrength Factor
IBC Eq. 16-12: (1.0 + 0.14S DS ) D + H + F + 0.7Ω oQE
It is shown in Chapter 5 of this publication that the wind directionality factor, which is equal to 0.85 for
building structures, is explicitly included in the velocity pressure equation for wind. In earlier editions of
ASCE/SEI 7 and in the legacy codes, the directionality factor was part of the load factor, which was
equal to 1.3 for wind. Thus, for allowable stress design, ω × 0.85 = 1.3 × 0.85 ≈ 1.0, and for strength design, 1.6 × 0.85 ≈ 1.3.
See Notes 1 and 2 in ASCE/SEI 12.4.3.2 that pertain to these load combinations.
IBC Eq. 16-13:
(1.0 + 0.105S DS ) D + H + F + 0.525Ω oQE + 0.75 L + 0.75( Lr or S or R)
IBC Eq. 16-15: (0.6 − 0.14 S DS ) D + 0.7Ω oQE + H
Alternative Basic Combinations for Allowable Stress Design with Overstrength Factor
IBC Eq. 16-20: (1.0 + 0.2SDS/1.4) D + ΩoQE/1.4 + L + S
IBC Eq. 16-21: (0.9 − 0.2SDS/1.4) D + ΩoQE/1.4
where:
Em = Emh + Ev = ΩoQE + 0.2SDS D for use in IBC Eqs. 16-5, 16-12, 16-13 and 16-20
Em = Emh − Ev = ΩoQE − 0.2SDS D for use in IBC Eqs. 16-7, 16-15 and 16-21
Ω o = system overstrength factor obtained from ASCE/SEI Table 12.2-1 for a
particular seismic-force-resisting system
QE = effects of horizontal seismic forces on a building or structure
S DS = design spectral response acceleration parameter at short periods
determined by IBC 1613.5 or ASCE/SEI 11.4
ASCE/SEI 12.4.3.3 permits allowable stresses to be increased by a factor of 1.2 where
allowable stress design is used with seismic load effect including overstrength factor.
This increase is not to be combined with increases in allowable stresses or reductions in
load combinations that are otherwise permitted in ASCE/SEI 7 or in other referenced
materials standards. However, the duration of load factor is permitted to be used when
designing wood members in accordance with the referenced standard.
Provisions for cantilever column systems are given in ASCE/SEI 12.2.5.2. In addition to
the design requirements of that section, the members in such systems must be designed to
resist the strength or allowable stress load combinations of IBC 1605.2 or 1605.3 and the
applicable load combinations with overstrength factor specified in ASCE/SEI 12.4.3.2.
The provisions of ASCE/SEI 12.3.3.3 apply to structural members that support
discontinuous frames or shear wall systems where the discontinuity is severe enough to
be deemed a structural irregularity. In particular, columns, beams, trusses or slabs that
support discontinuous walls or frames having horizontal irregularity Type 4 of ASCE/SEI
Table 12.3-1 or vertical irregularity Type 4 of ASCE/SEI Table 12.3-2 must be designed
to resist the load combinations with overstrength factor specified in ASCE/SEI 12.4.3.2
in addition to the strength design or allowable stress design load combinations described
in previous sections of this publication.12
An example of columns supporting a shear wall that has been discontinued at the first
floor of a multistory building is illustrated in Figure 2.1. The columns in this case must be
designed to resist the load combinations with overstrength factor.
Discontinuous shear wall
Column (typ.)
Figure 2.1 Example of Columns Supporting Discontinuous Shear Wall
ASCE/SEI 12.10.2.1 applies to collector elements in structures assigned to Seismic
Design Category (SDC) C and higher.13 Collectors, which are also commonly referred to
as drag struts, are elements in a structure that are used to transfer the loads from a
diaphragm to the elements of the lateral-force-resisting system (LFRS) where the lengths
of the vertical elements in the LFRS are less than the length of the diaphragm at that
location. An example of collector beams and a shear wall is shown in Figure 2.2. The
collector beams collect the force from the diaphragm and distribute it to the shear wall.
In general, collector elements, splices and connections to resisting elements must all be
designed to resist the load combinations with overstrength factor in addition to the
strength design or allowable stress design load combinations presented previously. The
exception in ASCE/SEI 12.10.2.1 permits collectors, splices and connections in structures
braced entirely by light frame shear walls to be only designed to resist the forces
prescribed in ASCE/SEI 12.10.1.1. Light frame construction is defined in ASCE/SEI 11.2
and includes systems composed of repetitive wood and cold-formed steel framing.
Additional information on structural irregularities can be found in ASCE/SEI 12.3 and Chapter 6 of this publication.
More information on how to determine the Seismic Design Category of a building or structure is given in
IBC 1613.5.6, ASCE/SEI 11.6 and Chapter 6 of this publication.
Collector beam
Shear wall
Shear wall (collector
beams not required)
Collector beam
Horizontal seismic force
Figure 2.2 Example of Collector Beams and Shear Walls
ASCE/SEI 2.5 requires that the strength and stability of a structure be checked to ensure
that it can withstand the effects from extraordinary events (i.e., low-probability events)
such as fires, explosions and vehicular impact. More information on these types of events
and recommended load combinations that include the effects of such events can be found
in ASCE/SEI C2.5. In that section, reference is made to ASCE/SEI 1.4 and C1.4, which
address general structural integrity requirements. Included is a discussion on resistance to
progressive collapse.
New provisions for structural integrity are contained in IBC 1614. These are applicable to
buildings classified as high-rise buildings in accordance with IBC 403 and assigned to
Occupancy Category III or IV with bearing wall structures or frame structures.14 Specific
load combinations are not included in these prescriptive requirements; they are meant to
improve the redundancy and ductility of these types of framing systems in the event of
damage due to an abnormal loading event.
The following examples illustrate the load combinations that were discussed in the
previous sections of this publication.
A high-rise building is defined in IBC 202 as a building with an occupied floor located more than 75 ft
above the lowest level of fire department vehicle access. Occupancy Categories III and IV are defined in
IBC Table 1604.5.
Example 2.1 – Column in Office Building, Strength Design Load
Combinations for Axial Loads
Determine the strength design load combinations for a column in a multistory office
building using the nominal axial loads in the design data. The live load on the floors is
less than 100 psf and the roof is a gable roof.
Axial Load (kips)
Dead load, D: 78
Live load, L: 38
Roof live load, Lr: 13
Balanced snow load, S: 19
The load combinations using strength design of IBC 1605.2 are summarized in Table 2.5
for this column. The load combinations in the table include only the applicable load
effects from the design data.
Table 2.5 Summary of Load Combinations using Strength Design for Column in
Example 2.1
Equation No.	Load Combination
16-1	1.4 D = 1.4 × 78 = 109 kips
16-2	1.2 D + 1.6 L + 0.5( Lr or S ) = (1.2 × 78) + (1.6 × 38) + (0.5 × 13) = 161 kips
	= (1.2 × 78) + (1.6 × 38) + (0.5 × 19) = 164 kips
16-3	1.2 D + 1.6( Lr or S ) + f1 L = (1.2 × 78) + (1.6 × 13) + (0.5 × 38) = 133 kips
	= (1.2 × 78) + (1.6 × 19) + (0.5 × 38) = 143 kips
16-4	1.2 D + f1 L + 0.5( Lr or S ) = (1.2 × 78) + (0.5 × 38) + (0.5 × 13) = 119 kips
	= (1.2 × 78) + (0.5 × 38) + (0.5 × 19) = 122 kips
16-5	1.2 D + f1 L + f2 S = (1.2 × 78) + (0.5 × 38) + (0.2 × 19) = 116 kips
16-6, 16-7	0.9 D = 0.9 × 78 = 70 kips
The constant f1 is taken as 0.5, since the live load is less than 100 psf. The constant f2 is
taken as 0.2, since the gable roof can shed snow, unlike a sawtooth roof.
The largest axial load on the column is due to IBC Eq. 16-2 when the snow load is included: 164 kips.
In this example, taking one or more of the variable loads (live, roof live or snow loads) in
IBC Eqs. 16-1 through 16-5 equal to zero results in factored loads less than those shown
in Table 2.5.
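The combinations in Table 2.5 can be checked with a short script. This is an illustrative sketch, not part of the publication; the variable names are ours.

```python
# Strength design combinations (IBC Eqs. 16-1 through 16-7) for the
# Example 2.1 column. Wind and seismic effects are zero in this example.
D, L, Lr, S = 78.0, 38.0, 13.0, 19.0   # axial loads from the design data, kips
f1, f2 = 0.5, 0.2                      # live load < 100 psf; roof is not sawtooth

combos = {
    "16-1": 1.4 * D,
    "16-2 (Lr)": 1.2 * D + 1.6 * L + 0.5 * Lr,
    "16-2 (S)": 1.2 * D + 1.6 * L + 0.5 * S,
    "16-3 (Lr)": 1.2 * D + 1.6 * Lr + f1 * L,
    "16-3 (S)": 1.2 * D + 1.6 * S + f1 * L,
    "16-4 (Lr)": 1.2 * D + f1 * L + 0.5 * Lr,   # W = 0
    "16-4 (S)": 1.2 * D + f1 * L + 0.5 * S,
    "16-5": 1.2 * D + f1 * L + f2 * S,          # E = 0
    "16-6/16-7": 0.9 * D,
}
for eq, value in combos.items():
    print(f"IBC Eq. {eq}: {value:.0f} kips")

governing = max(combos, key=combos.get)   # largest factored axial load
```

Running the script reproduces the Table 2.5 values, with IBC Eq. 16-2 (snow case) governing at 164 kips.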
Example 2.2 – Column in Office Building, Strength Design Load
Combinations for Axial Loads and Bending Moments
Determine the strength design load combinations for a column in a multistory office
building using the nominal axial loads and maximum bending moments in the design
data. The live load on the floors is less than 100 psf and the roof is essentially flat.
Axial Load (kips)
Bending Moment (ft-kips)
Dead load, D
Live load, L
Roof live load, Lr
Balanced snow load, S
Wind load, W: ± 20 (axial), ± 47 (bending moment)
The load combinations using strength design of IBC 1605.2 are summarized in Table 2.6
for this column. The load combinations in the table include only the applicable load
effects from the design data. The constant f1 is taken as 0.5, since the live load is less than
100 psf. The constant f2 is taken as 0.2, since the flat roof can shed snow, unlike a
sawtooth roof.
Since the wind loads cause the structure to sway to the right and to the left, load
combinations must be investigated for both cases. This is accomplished by taking both
“plus” and “minus” load effects of the wind.
In general, all of the load combinations in Table 2.6 must be investigated when designing
the column. It is usually possible to anticipate which of the load combinations is the most
critical. Taking one or more of the variable loads (live, roof live, snow or wind loads) in
IBC Eqs. 16-1 through 16-5 equal to zero was not considered, since these load
combinations typically do not govern the design of such members.
Table 2.6 Summary of Load Combinations using Strength Design for Column in
Example 2.2
Load Combination	Axial Load	Bending Moment
1.4 D
1.2 D + 1.6 L + 0.5 Lr
1.2 D + 1.6 L + 0.5 S
1.2 D + 1.6 Lr + f1 L
1.2 D + 1.6 Lr + 0.8W
1.2 D + 1.6 Lr − 0.8W
1.2 D + 1.6 S + f1 L
1.2 D + 1.6 S + 0.8W
1.2 D + 1.6 S − 0.8W
1.2 D + 1.6W + f1 L + 0.5 Lr
1.2 D + 1.6W + f1 L + 0.5 S
1.2 D − 1.6W + f1 L + 0.5 Lr
1.2 D − 1.6W + f1 L + 0.5 S
1.2 D + f1 L + f 2 S
0.9 D + 1.6W
0.9 D − 1.6W
0.9 D
Example 2.3 – Beam in University Building, Strength Design Load
Combinations for Shear Forces and Bending Moments
Determine the strength design load combinations for a beam in a university building
using the nominal shear forces and bending moments in the design data. The occupancy
of the building is classified as a place of public assembly.
Shear Force
Bending Moment (ft-kips)
Dead load, D
Live load, L
Wind load, W
± 10
± 100
Seismic, QE
± 50
The seismic design data are as follows:15
ρ = redundancy factor = 1.0
S DS = design spectral response acceleration parameter at short periods = 0.5g
The load combinations using strength design of IBC 1605.2 are summarized in Table 2.7
for this beam. The load combinations in the table include only the applicable load effects
from the design data.
The quantity QE is the effect of code-prescribed horizontal seismic forces on the beam
determined from a structural analysis.
In accordance with ASCE/SEI 12.4.2, the seismic load effect E is defined as follows:
For use in load combination 5 of ASCE/SEI 2.3.2, or, equivalently, in IBC
Eq. 16-5:
E = Eh + Ev = ρQE + 0.2S DS D = QE + 0.1D
For use in load combination 7 of ASCE/SEI 2.3.2, or, equivalently, in IBC
Eq. 16-7:
E = Eh − Ev = ρQE − 0.2S DS D = QE − 0.1D
Substituting for E, IBC Eq. 16-5 becomes: 1.2 D + 1.0 E + f1L = 1.3D + L + QE
Similarly, IBC Eq. 16-7 becomes: 0.9 D + 1.0 E = 0.8 D + QE
More information on seismic design can be found in Chapter 6 of this publication.
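The substitutions above can be verified numerically. The following sketch uses the Example 2.3 seismic data (ρ = 1.0, SDS = 0.5g); variable names are ours.

```python
# Dead load coefficients after substituting the seismic load effect
# E = rho*QE +/- 0.2*SDS*D into IBC Eqs. 16-5 and 16-7.
rho, SDS = 1.0, 0.5

# IBC Eq. 16-5: 1.2D + 1.0E + f1*L, with E = rho*QE + 0.2*SDS*D
coeff_16_5 = 1.2 + 0.2 * SDS   # combination becomes 1.3D + L + QE
# IBC Eq. 16-7: 0.9D + 1.0E, with E = rho*QE - 0.2*SDS*D
coeff_16_7 = 0.9 - 0.2 * SDS   # combination becomes 0.8D - QE

print(coeff_16_5, coeff_16_7)
```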
Table 2.7 Summary of Load Combinations using Strength Design for the Beam in Example 2.3
Load Combination	Bending Moment	Shear Force
1.4 D
1.2 D + 1.6 L
1.2 D + L
1.2 D + 0.8W
1.2 D + L + 1.6W
1.3D + L + QE
0.9 D − 1.6W
0.8 D − QE
Like wind loads, sidesway to the right and to the left must be investigated for seismic
loads. In IBC Eq. 16-5, the maximum effect occurs when QE is added to the effects of the
gravity loads. In IBC Eq. 16-7, QE is subtracted from the effect of the dead load, since
maximum effects occur, in general, when minimum dead load and the effects from lateral
loads counteract. The same reasoning is applied in IBC Eqs. 16-3, 16-4 and 16-6 for the
wind effects W.
The constant f1 is taken as 1.0, since the occupancy of the building is classified as a place
of public assembly.
Example 2.4 – Beam in University Building, Basic Allowable Stress
Design Load Combinations for Shear Forces and Bending Moments
Determine the basic allowable stress design load combinations of IBC 1605.3.1 for the
beam described in Example 2.3.
The basic load combinations using allowable stress design of IBC 1605.3.1 are
summarized in Table 2.8 for this beam. The load combinations in the table include only
the applicable load effects from the design data.
Table 2.8 Summary of Basic Load Combinations using Allowable Stress Design for Beam in Example 2.4
Load Combination	Bending Moment	Shear Force
D + 0.75 L
D + W
1.07 D + 0.7QE
D + 0.75 L + 0.75W
1.05 D + 0.75 L + 0.525QE
0.6 D − W
0.53D − 0.7QE
In accordance with ASCE/SEI 12.4.2, the seismic load effect E is defined as follows:
For use in load combinations 5 and 6 of ASCE/SEI 2.4.1 or, equivalently, in
IBC Eqs. 16-12 and 16-13:
E = Eh + Ev = ρQE + 0.2S DS D = QE + 0.1D
For use in load combination 8 of ASCE/SEI 2.4.1 or, equivalently, in IBC Eq. 16-15:
E = Eh − Ev = ρQE − 0.2S DS D = QE − 0.1D
Thus, substituting for E, IBC Eqs. 16-12 and 16-13 become, respectively,
D + 0.7 E = 1.07 D + 0.7QE
D + 0.75 L + 0.525 E = 1.05 D + 0.75 L + 0.525QE
Similarly, IBC Eq. 16-15 becomes: 0.6 D + 0.7 E = 0.53D + 0.7QE
Like wind loads, sidesway to the right and to the left must be investigated for seismic
loads. In IBC Eqs. 16-12 and 16-13, the maximum effect occurs when QE is added to the
effects of the gravity loads. In IBC Eq. 16-15, QE is subtracted from the effect of the dead
load, since maximum effects occur, in general, when minimum dead load and the effects
from lateral loads counteract. The same reasoning is applied in IBC Eqs. 16-12, 16-13
and 16-14 for the wind effects W.
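The dead load coefficients obtained from these substitutions can be sketched as follows (SDS = 0.5g from the Example 2.3 data; variable names are ours). Note that IBC Eq. 16-15 yields a dead load coefficient of 0.6 − 0.7 × 0.2SDS = 0.53 for SDS = 0.5g.

```python
# Dead load coefficients after substituting E = QE +/- 0.1D into the
# basic ASD combinations (IBC Eqs. 16-12, 16-13 and 16-15).
SDS = 0.5
coeff_16_12 = 1.0 + 0.7 * 0.2 * SDS          # 0.7E term  -> 1.07D + 0.7QE
coeff_16_13 = 1.0 + 0.75 * 0.7 * 0.2 * SDS   # 0.525E term -> 1.05D + 0.75L + 0.525QE
coeff_16_15 = 0.6 - 0.7 * 0.2 * SDS          # counteracting case -> 0.53D + 0.7QE
print(coeff_16_12, coeff_16_13, coeff_16_15)
```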
Example 2.5 – Beam in University Building, Alternative Basic Allowable Stress Design Load Combinations for Shear Forces and Bending Moments
Determine the alternative basic allowable stress design load combinations of
IBC 1605.3.2 for the beam described in Example 2.3. Assume that the wind forces have
been determined using the provisions of Chapter 6 of ASCE/SEI 7.
The alternative basic load combinations using allowable stress design of IBC 1605.3.2
are summarized in Table 2.9 for this beam. The load combinations in the table include
only the applicable load effects from the design data.
Table 2.9 Summary of Alternative Basic Load Combinations using Allowable Stress Design for Beam in Example 2.5
Load Combination	Bending Moment	Shear Force
D + L + 1.3W
0.67 D + L − 1.3W
D + L + 0.65W
1.07 D + L + (QE / 1.4)
0.83D − (QE / 1.4)
The factor ω is taken as 1.3, since the wind forces have been determined by Chapter 6 of ASCE/SEI 7.
In accordance with ASCE/SEI 12.4.2, the seismic load effect E is defined as follows:
For use in IBC Eq. 16-20:
E = Eh + Ev = ρQE + 0.2S DS D = QE + 0.1D
For use in IBC Eq. 16-21:
E = Eh − Ev = ρQE − 0.2S DS D = QE − 0.1D
Thus, substituting for E, IBC Eq. 16-20 becomes: D + L + E / 1.4 = 1.07 D + L + QE / 1.4
Similarly, IBC Eq. 16-21 becomes: 0.9 D + E / 1.4 = 0.83D + QE / 1.4
Like wind loads, sidesway to the right and to the left must be investigated for seismic
loads. In IBC Eq. 16-20, the maximum effect occurs when QE is added to the effects of
the gravity loads. In IBC Eq. 16-21, QE is subtracted from the effect of the dead load,
since maximum effects occur, in general, when minimum dead load and the effects from
lateral loads counteract.
In accordance with IBC 1605.3.2, two-thirds of the dead load is used in IBC Eqs. 16-17
and 16-18 to counter the maximum effects from the wind pressure.
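The dead load coefficients for the alternative basic combinations can be checked the same way (SDS = 0.5g; a sketch with variable names of our choosing):

```python
# Dead load coefficients for the alternative basic ASD combinations
# after substituting E = QE +/- 0.1D (Example 2.5, SDS = 0.5g).
SDS = 0.5
coeff_16_20 = 1.0 + 0.2 * SDS / 1.4   # E/1.4 term -> 1.07D + L + QE/1.4
coeff_16_21 = 0.9 - 0.2 * SDS / 1.4   # counteracting case -> 0.83D - QE/1.4
print(round(coeff_16_20, 2), round(coeff_16_21, 2))
```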
Example 2.6 – Collector Beam in Residential Building, Load Combinations using Strength Design and Basic Load Combinations for Strength Design with Overstrength Factor for Axial Forces, Shear Forces, and Bending Moments
Determine the strength design load combinations and the basic combinations for strength
design with overstrength factor for a simply supported collector beam in a residential
building using the nominal axial loads, shear forces and bending moments in the design
data. The live load on the floors is less than 100 psf.
	Axial Force (kips)	Shear Force (kips)	Bending Moment (ft-kips)
Dead load, D	—	56	703
Live load, L	—	19	235
Seismic, QE	± 50	—	—
The seismic design data are as follows:16
More information on seismic design can be found in Chapter 6 of this publication.
Ω o = system overstrength factor = 2.5
S DS = design spectral response acceleration parameter at short periods = 1.0g
Seismic design category: D
The axial seismic force QE corresponds to the portion of the diaphragm design force that
is resisted by the collector beam. This force can be tensile or compressive.
The governing load combination in IBC 1605.2.1 is as follows:
IBC Eq. 16-2:
Bending moment: 1.2 D + 1.6 L = (1.2 × 703) + (1.6 × 235) = 1,220 ft-kips
Shear force: 1.2 D + 1.6 L = (1.2 × 56) + (1.6 × 19) = 98 kips
Since the beam is a collector beam in a building assigned to SDC D, the beam must be
designed for the following basic combinations for strength design with overstrength
factor (see IBC 1605.1 and 1605.2.1; ASCE/SEI 12.4.3.2):
IBC Eq. 16-5: (1.2 + 0.2 S DS ) D + Ω o QE + 0.5 L
Axial force: Ω oQE = 2.5 × 50 = 125 kips tension or compression
Bending moment: (1.2 + 0.2S DS ) D + 0.5L = (1.4 × 703) + (0.5 × 235) = 1,102 ft-kips
Shear force: (1.2 + 0.2S DS ) D + 0.5L = (1.4 × 56) + (0.5 × 19) = 88 kips
IBC Eq. 16-7: (0.9 − 0.2S DS ) D + Ω oQE
Axial force: Ω oQE = 2.5 × 50 = 125 kips tension or compression
Bending moment: (0.9 − 0.2 S DS ) D = 0.7 × 703 = 492 ft-kips
Shear force: (0.9 − 0.2 S DS ) D = 0.7 × 56 = 39 kips
The load factor on L is permitted to equal 0.5 in accordance with Note 1 in ASCE 12.4.3.2.
The collector beam and its connections must be designed to resist the combined effects of
(1) flexure and axial tension, (2) flexure and axial compression and (3) shear as set forth
by the above load combinations.
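The Example 2.6 results can be reproduced with a short script (an illustrative sketch; variable names are ours):

```python
# Example 2.6: gravity and overstrength combinations for the collector beam.
Omega0, SDS = 2.5, 1.0           # overstrength factor and design acceleration
QE = 50.0                        # axial seismic force, kips
D_M, L_M = 703.0, 235.0          # bending moments, ft-kips
D_V, L_V = 56.0, 19.0            # shear forces, kips

M_16_2 = 1.2 * D_M + 1.6 * L_M             # governing gravity moment (IBC Eq. 16-2)
V_16_2 = 1.2 * D_V + 1.6 * L_V             # governing gravity shear
P_omega = Omega0 * QE                      # amplified axial force (Eqs. 16-5, 16-7)
M_16_5 = (1.2 + 0.2 * SDS) * D_M + 0.5 * L_M
V_16_5 = (1.2 + 0.2 * SDS) * D_V + 0.5 * L_V
M_16_7 = (0.9 - 0.2 * SDS) * D_M
V_16_7 = (0.9 - 0.2 * SDS) * D_V
print(round(M_16_2), round(V_16_2), P_omega,
      round(M_16_5), round(V_16_5), round(M_16_7), round(V_16_7))
```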
Example 2.7 – Collector Beam in Residential Building, Load Combinations
using Allowable Stress Design (Basic Load Combinations) and Basic
Combinations for Allowable Stress Design with Overstrength Factor for
Axial Forces, Shear Forces, and Bending Moments
Determine the load combinations using allowable stress design (basic load combinations)
and the basic combinations for allowable stress design with overstrength factor for the
beam in Example 2.6.
The governing load combination in IBC 1605.3.1 is as follows:
IBC Eq. 16-9:
Bending moment: D + L = 703 + 235 = 938 ft-kips
Shear force: D + L = 56 + 19 = 75 kips
Since the beam is a collector beam in a building assigned to SDC D, the beam must be
designed for the following basic combinations for allowable stress design with
overstrength factor (see IBC 1605.1 and 1605.3.1; ASCE/SEI 12.4.3.2):
IBC Eq. 16-12: (1.0 + 0.14 S DS ) D + 0.7Ω oQE
Axial force: 0.7Ω oQE = 0.7 × 2.5 × 50 = 88 kips tension or compression
Bending moment: (1.0 + 0.14 S DS ) D = 1.14 × 703 = 801 ft-kips
Shear force: (1.0 + 0.14S DS ) D = 1.14 × 56 = 64 kips
IBC Eq. 16-13: (1.0 + 0.105S DS ) D + 0.525Ω oQE + 0.75 L
Axial force: 0.525Ω oQE = 0.525 × 2.5 × 50 = 66 kips tension or compression
Bending moment: 1.105 D + 0.75L = (1.105 × 703) + (0.75 × 235) = 953 ft-kips
Shear force: 1.105 D + 0.75 L = (1.105 × 56) + (0.75 × 19) = 76 kips
IBC Eq. 16-15: (0.6 − 0.14 S DS ) D + 0.7Ω oQE
Axial force: 0.7Ω oQE = 0.7 × 2.5 × 50 = 88 kips tension or compression
Bending moment: (0.6 − 0.14 S DS ) D = 0.46 × 703 = 323 ft-kips
Shear force: (0.6 − 0.14 S DS ) D = 0.46 × 56 = 26 kips
The collector beam and its connections must be designed to resist the combined effects of
(1) flexure and axial tension, (2) flexure and axial compression and (3) shear as set forth
by the above load combinations.18
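The Example 2.7 values can likewise be checked numerically (a sketch; variable names are ours):

```python
# Example 2.7: basic ASD combinations with overstrength for the collector beam.
Omega0, SDS = 2.5, 1.0
QE = 50.0                  # axial seismic force, kips
D_M, L_M = 703.0, 235.0    # bending moments, ft-kips
D_V, L_V = 56.0, 19.0      # shear forces, kips

P_16_12 = 0.7 * Omega0 * QE                           # Eqs. 16-12 and 16-15 axial
M_16_12 = (1.0 + 0.14 * SDS) * D_M
V_16_12 = (1.0 + 0.14 * SDS) * D_V
P_16_13 = 0.525 * Omega0 * QE
M_16_13 = (1.0 + 0.105 * SDS) * D_M + 0.75 * L_M
V_16_13 = (1.0 + 0.105 * SDS) * D_V + 0.75 * L_V
M_16_15 = (0.6 - 0.14 * SDS) * D_M
V_16_15 = (0.6 - 0.14 * SDS) * D_V
print(round(P_16_12), round(M_16_12), round(V_16_12),
      round(P_16_13), round(M_16_13), round(V_16_13),
      round(M_16_15), round(V_16_15))
```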
Example 2.8 – Collector Beam in Residential Building, Load Combinations
using Allowable Stress Design (Alternative Basic Load Combinations)
and Basic Combinations for Allowable Stress Design with Overstrength
Factor for Axial Forces, Shear Forces, and Bending Moments
Determine the load combinations using allowable stress design (alternative basic load
combinations) and the basic combinations for allowable stress design with overstrength
factor for the beam in Example 2.6.
The governing load combination in IBC 1605.3.2 is as follows:
IBC Eq. 16-16:
Bending moment: D + L = 703 + 235 = 938 ft-kips
Shear force: D + L = 56 + 19 = 75 kips
Since the beam is a collector beam in a building assigned to SDC D, the beam must be
designed for the following basic combinations for allowable stress design with
overstrength factor (see IBC 1605.1 and 1605.3.2; ASCE/SEI 12.4.3.2):
IBC Eq. 16-20: (1.0 + 0.2SDS/1.4) D + ΩoQE/1.4 + L
Axial force: ΩoQE/1.4 = (2.5 × 50)/1.4 = 89 kips tension or compression
Bending moment: [(1.0 + 0.2/1.4) × 703] + 235 = 1,038 ft-kips
Shear force: [(1.0 + 0.2/1.4) × 56] + 19 = 83 kips
IBC Eq. 16-21: (0.9 − 0.2SDS/1.4) D + ΩoQE/1.4
Axial force: ΩoQE/1.4 = (2.5 × 50)/1.4 = 89 kips tension or compression
Bending moment: (0.9 − 0.2/1.4) × 703 = 532 ft-kips
Shear force: (0.9 − 0.2/1.4) × 56 = 42 kips
See ASCE/SEI 12.4.3.3 for allowable stress increase for load combinations with overstrength.
The collector beam and its connections must be designed to resist the combined effects of
(1) flexure and axial tension, (2) flexure and axial compression and (3) shear as set forth
by the above load combinations.19
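The Example 2.8 values can be verified with a short script (a sketch; variable names are ours):

```python
# Example 2.8: alternative basic ASD combinations with overstrength
# for the collector beam (SDS = 1.0g).
Omega0, SDS = 2.5, 1.0
QE = 50.0                  # axial seismic force, kips
D_M, L_M = 703.0, 235.0    # bending moments, ft-kips
D_V, L_V = 56.0, 19.0      # shear forces, kips

P_omega = Omega0 * QE / 1.4                          # amplified axial force
M_16_20 = (1.0 + 0.2 * SDS / 1.4) * D_M + L_M        # IBC Eq. 16-20
V_16_20 = (1.0 + 0.2 * SDS / 1.4) * D_V + L_V
M_16_21 = (0.9 - 0.2 * SDS / 1.4) * D_M              # IBC Eq. 16-21
V_16_21 = (0.9 - 0.2 * SDS / 1.4) * D_V
print(round(P_omega), round(M_16_20), round(V_16_20),
      round(M_16_21), round(V_16_21))
```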
Example 2.9 – Timber Pile in Residential Building, Basic Allowable Stress
Design Load Combinations for Axial Forces
Determine the basic allowable stress design load combinations for a timber pile
supporting a residential building using the nominal axial loads in the design data. The
residential building is located in a Coastal A Zone.
Axial Force
Dead load, D
Live load, L
Roof live load, Lr
Wind, W
± 16
Flood, Fa
The basic load combinations using allowable stress design of IBC 1605.3.1 are
summarized in Table 2.10 for this pile. The load combinations in the table include only
the applicable load effects from the design data. Also, since flood loads Fa must be
considered, the load combinations of ASCE/SEI 2.4.2 are used (IBC 1605.3.1.2). In particular, 1.5 Fa is added to the other applicable loads in IBC Eqs. 16-12, 16-13 and 16-14.
Table 2.10 Summary of Basic Load Combinations using Allowable Stress Design
(IBC 1605.3.1) for Timber Pile
Load Combination
Axial Load
D + Lr
D + 0.75 L + 0.75 Lr
D + W + 1.5 Fa
D − W − 1.5 Fa
D + 0.75W + 0.75 L + 0.75 Lr + 1.5 Fa
D − 0.75W + 0.75 L + 0.75 Lr − 1.5 Fa
0.6 D + W
0.6 D − W
The pile must be designed for the axial compression and tension forces in Table 2.10 in
combination with bending moments caused by the wind and flood loads. Shear forces and
deflection at the tip of the pile must also be checked. Finally, the embedment length of
the pile must be sufficient to resist the maximum net tension force.20
More information on flood loads can be found in Chapter 7 of this publication.
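The Table 2.10 combinations can be expressed as a small evaluator. Only the wind value (± 16 kips) is given in the design data, so the other axial loads used below are hypothetical placeholders for illustration; the function name and structure are ours.

```python
# Basic ASD combinations of IBC 1605.3.1 with 1.5*Fa added per
# ASCE/SEI 2.4.2 for a Coastal A Zone (axial loads, kips).
def asd_flood_combos(D, L, Lr, W, Fa):
    """Return the Table 2.10 factored axial loads for the timber pile."""
    return {
        "D + Lr": D + Lr,
        "D + 0.75L + 0.75Lr": D + 0.75 * L + 0.75 * Lr,
        "D + W + 1.5Fa": D + W + 1.5 * Fa,
        "D - W - 1.5Fa": D - W - 1.5 * Fa,
        "D + 0.75W + 0.75L + 0.75Lr + 1.5Fa":
            D + 0.75 * W + 0.75 * L + 0.75 * Lr + 1.5 * Fa,
        "D - 0.75W + 0.75L + 0.75Lr - 1.5Fa":
            D - 0.75 * W + 0.75 * L + 0.75 * Lr - 1.5 * Fa,
        "0.6D + W": 0.6 * D + W,
        "0.6D - W": 0.6 * D - W,
    }

# Hypothetical axial loads (kips): D = 20, L = 10, Lr = 4, Fa = 8;
# W = 16 kips is taken from the design data.
results = asd_flood_combos(20.0, 10.0, 4.0, 16.0, 8.0)
for name, value in results.items():
    print(f"{name}: {value:.1f} kips")
```

A negative result (here the 0.6D − W case) indicates a net tension force that the pile embedment must resist.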
CHAPTER 3
DEAD, LIVE, AND RAIN LOADS
Nominal dead loads D are determined in accordance with IBC 1606. In general, design
dead loads are the actual weights of construction materials and fixed service equipment
that are attached to or supported by the building or structure. Various types of such loads
are listed in IBC 1602.
Dead loads are considered to be permanent loads, i.e., loads in which variations over time
are rare or of small magnitude. Variable loads, such as live loads and wind loads, are not
permanent. It is important to know the distinction between permanent and variable loads
when applying the provisions for load combinations.2
It is not uncommon that the weights of materials and service equipment (such as
plumbing stacks and risers, HVAC equipment, elevators and elevator machinery, fire
protection systems and similar fixed equipment) are not known during the design phase.
Estimated material and/or equipment loads are often used in design. Typically, estimated
dead loads are assumed to be greater than the actual dead loads so that the design is
conservative. While such practice is acceptable when considering load combinations
where the effects of gravity loads and lateral loads are additive, it is not acceptable when
considering load combinations where gravity loads and lateral loads counteract. For
example, it would be unconservative to design for uplift on a structure using an
overestimated value of dead load.
ASCE/SEI Table C3-1 provides minimum design dead loads for various types of
common construction components, including ceilings, roof and wall coverings, floor fill,
floors and floor finishes, frame partitions and frame walls. Minimum densities for
common construction materials are given in ASCE/SEI Table C3-2.
The weights in ASCE/SEI Tables C3-1 and C3-2 can be used as a guide when estimating
dead loads. Actual weights of construction materials and equipment can be greater than
tabulated values, so it is always prudent to verify weights with manufacturers or other
similar resources prior to design. In cases where information on dead load is unavailable,
values of dead loads used in design must be approved by the building official
(IBC 1606.2).
Nominal loads are those loads that are specified in Chapter 16 of the IBC.
See IBC 1605, ASCE/SEI Chapter 2 and Chapter 2 of this publication for information on load combinations.
Live Loads
Nominal live loads are determined in accordance with IBC 1607. Live loads are those
loads produced by the use and occupancy of a building or structure and do not include
construction loads, environmental loads (such as wind loads, snow loads, rain loads,
earthquake loads and flood loads) or dead loads (IBC 1602).
In general, live loads are transient in nature and vary in magnitude over the life of a
structure. Studies have shown that building live loads consist of both a sustained portion
and a variable portion. The sustained portion is based on general day-to-day use of the
facilities, and will generally vary during the life of the structure due to tenant modifications
and changes in occupancy, for example. The variable portion of the live load is typically
created by events such as remodeling, temporary storage and similar unusual events.
Nominal design values of uniformly distributed and concentrated live loads are given in
IBC Table 1607.1 as a function of occupancy or use. The occupancy category listed in
the table is not necessarily group specific.3 As an example, an office building with a
Business Group B classification may also have storage areas that may warrant live loads
of 125 psf or 250 psf depending on the type of storage.
The design values in IBC Table 1607.1 are minimum values; actual design values can be
determined to be larger than these values, but in no case shall the structure be designed
for live loads that are less than the tabulated values. For occupancies that are not listed in
the table, values of live load used in design must be approved by the building official. It
is also important to note that the provisions do not require concurrent application of
uniform and concentrated loads. Structural members are designed based on the maximum
effects due to the application of either a uniform load or a concentrated load, and need
not be designed for the effects of both loads applied at the same time.
Provisions for the weight of partitions are given in IBC 1607.5. Buildings where
partitions can be relocated must include a live load of 15 psf if the nominal uniform floor
live load is less than or equal to 80 psf. The weight of any built-in partitions that cannot
be moved is considered a dead load in accordance with IBC 1602.
Minimum live loads for truck and bus garages are given in IBC 1607.6. IBC 1607.7
prescribes loads on handrails, guards, grab bars and vehicle barriers, and IBC 1607.8
contains provisions for impact loads that involve unusual vibration and impact forces,
such as those from elevators and machinery.
ASCE/SEI Table 4-1 also contains minimum uniform and concentrated live loads. There
are some differences between this table and IBC Table 1607.1. ASCE Tables C4-1 and
C4-2 can also be used as a guide in establishing live loads for commonly encountered
occupancies. (Occupancy groups are defined in IBC Chapter 3.)
Reduction in Live Loads
According to IBC 1607.9, the minimum nominal uniformly distributed live loads Lo of
IBC Table 1607.1 and uniform live loads of special purpose roofs are permitted to be
reduced by either the provisions of IBC 1607.9.1 or 1607.9.2. Both methods are
discussed below. Roof live loads, other than special-purpose roofs, are not permitted to
be reduced by these provisions; reduction of such roof live loads is covered in IBC
1607.11 and Section 3.2.4 of this publication.
General Method of Live Load Reduction
The general method in IBC 1607.9.1 of reducing uniform live loads other than uniform
live loads at roofs is based on the provisions contained in ASCE/SEI 4.8. IBC Eq. 16-22
can be used to obtain a reduced live load L for members where KLL AT ≥ 400 sq ft, subject
to the limitations of IBC 1607.9.1.1 through 1607.9.1.5:
L = Lo(0.25 + 15/√(KLL AT))
≥ 0.50Lo for members supporting one floor
≥ 0.40Lo for members supporting two or more floors
where KLL = live load element factor given in IBC Table 1607.9.1 and AT = tributary area in
square feet.
The live load element factor KLL converts the tributary area of a structural member AT to
an influence area, which is considered to be the adjacent floor area from which the
member derives any of its load. For example, the influence area for an interior column is
equal to the area of the four bays adjacent to the column, which is equal to four times the
tributary area of the column. Thus, KLL = 4 for an interior column. ASCE/SEI Figure C4
illustrates typical tributary and influence areas for a variety of elements. Figure 3.1
illustrates how the reduction multiplier 0.25 + 15/√(KLL AT) varies with respect to the
influence area KLLAT. Included in the figure are the minimum influence area of 400 sq ft
and the limits of 0.5 and 0.4, which are the maximum permitted reductions for members
supporting one floor and two or more floors, respectively.
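As an illustrative sketch (not part of the code text itself), the general method of IBC Eq. 16-22 can be written in a few lines of Python; the function name and arguments below are this example's choices:

```python
import math

def reduced_live_load(Lo, K_LL, A_T, floors_supported=1):
    """Reduced live load L (psf) per IBC Eq. 16-22 (general method).

    Assumes the member is eligible for reduction, i.e., none of the
    limitations of IBC 1607.9.1.1 through 1607.9.1.5 applies.
    Lo = nominal live load (psf); K_LL = live load element factor
    (IBC Table 1607.9.1); A_T = tributary area (sq ft).
    """
    influence_area = K_LL * A_T
    if influence_area < 400.0:
        return Lo  # no reduction where K_LL * A_T < 400 sq ft
    L = Lo * (0.25 + 15.0 / math.sqrt(influence_area))
    # L may not be less than 0.50*Lo (one floor) or 0.40*Lo (two or more)
    lower_limit = 0.50 * Lo if floors_supported == 1 else 0.40 * Lo
    return max(L, lower_limit)
```

For example, an exterior column with KLL = 4 and AT = 315 sq ft supporting one floor of a 50 psf occupancy gives L of approximately 33.6 psf.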
One-way Slabs. Live load reduction on one-way slabs is now permitted in the 2009 IBC
provided the tributary area AT does not exceed an area equal to the slab span times a
width normal to the span of 1.5 times the slab span (i.e., an area with an aspect ratio of
1.5). The live load will be somewhat higher for a one-way slab with an aspect ratio of 1.5
than for a two-way slab with the same aspect ratio. This recognizes the benefits of higher
redundancy that results from two-way action.
ASCE 4.8.5 has the same requirements for live load reduction on one-way slabs as those
in IBC 1607.9.1.1.
Figure 3.1 Reduction Multiplier for Live Load in accordance with IBC 1607.9.1 [plot of
0.25 + 15/√(KLL AT) versus influence area KLL AT, showing the 400 sq ft minimum and
the 0.5 and 0.4 limits]
Heavy Live Loads. According to IBC 1607.9.1.2, live loads that are greater than 100 psf
shall not be reduced except for the following: (1) Live loads for members supporting two
or more floors are permitted to be reduced by a maximum of 20 percent, but L shall not
be less than that calculated by IBC 1607.9.1, or (2) In occupancies other than storage,
additional live load reduction is permitted if it can be shown by a registered design
professional that such a reduction is warranted.
Passenger Vehicle Garages. The live load in passenger vehicle garages is not permitted
to be reduced, except for members supporting two or more floors; in such cases, the
maximum reduction is 20 percent, but L shall not be less than that calculated by
IBC 1607.9.1 (IBC 1607.9.1.3).
Group A (Assembly) Occupancies. Due to the nature of assembly occupancies, there is
a high probability that the entire floor is subjected to full uniform live load. Thus,
IBC 1607.9.1.4 requires that live loads of 100 psf and live loads at areas where fixed
seats are located shall not be reduced in such occupancies.
Roof Members. Live loads of 100 psf or less are not permitted to be reduced on roof
members except as specified in IBC 1607.11.2 for flat, pitched and curved roofs and
special-purpose roofs (IBC 1607.9.1.5).
A summary of the general method of live load reduction for floors in accordance with
IBC 1607.9.1 is given in Figure 3.2.
GENERAL METHOD OF LIVE LOAD REDUCTION (IBC 1607.9.1)
[flowchart, summarized as a decision sequence:
1. Is Lo > 100 psf? If yes, live load reduction is not permitted.*
2. Is the member a one-way slab? If yes, reduction is permitted only where AT ≤ 1.5(slab span)².
3. Is the structure a passenger vehicle garage? If yes, live load reduction is not permitted.**
4. Is the structure classified as a Group A occupancy with live loads of 100 psf and areas where fixed seats are located? If yes, live load reduction is not permitted.
5. Is the member a roof member? If yes, live load reduction is not permitted except as specified in IBC 1607.11.2.
6. Is KLL AT < 400 sq ft? If yes, live load reduction is not permitted.
7. Otherwise, L = Lo(0.25 + 15/√(KLL AT)) ≥ 0.50Lo for members supporting one floor and ≥ 0.40Lo for members supporting two or more floors.]
* See IBC 1607.9.1.2 for two exceptions to this requirement.
** Live loads for members supporting two or more floors are permitted
to be reduced by a maximum of 20 percent (IBC 1607.9.1.3).
Figure 3.2 General Method of Live Load Reduction in accordance with IBC 1607.9.1
Alternate Method of Floor Live Load Reduction
An alternate method of floor live load reduction, which is based on provisions in the 1997
Uniform Building Code, is given in IBC 1607.9.2. IBC Eq. 16-23 can be used to obtain a
reduction factor R for members that support an area greater than or equal to 150 sq ft:
R = 0.08(A − 150)
≤ the smallest of: 40 percent for horizontal members
                   60 percent for vertical members
                   23.1(1 + D/L)
where A = area of floor supported by a member in square feet and D = dead load per
square foot of area supported.
The reduced live load L is then determined by
L = Lo(1 − R/100)
Similar to the general method of live load reduction, live loads are not permitted to be
reduced in the following situations:
• In Group A (assembly) occupancies.
• Where the live load exceeds 100 psf, except (a) for members supporting two or
more floors, in which case the live load may be reduced by a maximum of 20
percent, or (b) in occupancies other than storage where it can be shown by a
registered design professional that such a reduction is warranted.
• In passenger vehicle garages, except for members supporting two or more floors,
in which case the live load may be reduced by a maximum of 20 percent.
Reduction of live load on one-way slab systems is permitted by this method provided the
area A is not taken greater than that prescribed in IBC 1607.9.2(5).
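A minimal Python sketch of the alternate method follows; the function names and arguments are illustrative choices, not from the code text:

```python
def alt_reduction_percent(A, D_over_Lo, orientation="vertical"):
    """Reduction factor R (percent) per IBC Eq. 16-23 (alternate method).

    A = floor area supported (sq ft); D_over_Lo = dead-to-live load ratio.
    Assumes the member is eligible for reduction (not Group A, live load
    not over 100 psf, not a passenger vehicle garage).
    """
    if A < 150.0:
        return 0.0  # no reduction for supported areas below 150 sq ft
    R = 0.08 * (A - 150.0)
    cap = 40.0 if orientation == "horizontal" else 60.0
    return min(R, cap, 23.1 * (1.0 + D_over_Lo))

def alt_reduced_live_load(Lo, A, D_over_Lo, orientation="vertical"):
    """Reduced live load L = Lo * (1 - R/100)."""
    R = alt_reduction_percent(A, D_over_Lo, orientation)
    return Lo * (1.0 - R / 100.0)
```

For a horizontal member with A = 700 sq ft, D/Lo = 2 and Lo = 50 psf, this gives R = min(44, 40, 69.3) = 40 percent and L = 30 psf, matching the two-way slab calculation in Example 3.2.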
A summary of the alternate method of floor live load reduction in accordance with IBC
1607.9.2 is given in Figure 3.3.
ALTERNATE METHOD OF LIVE LOAD REDUCTION (IBC 1607.9.2)
[flowchart, summarized as a decision sequence:
1. Is Lo > 100 psf? If yes, live load reduction is not permitted.*
2. Is the member a one-way slab? If yes, reduction is permitted only where A ≤ 0.5(slab span)².
3. Is the structure a passenger vehicle garage? If yes, live load reduction is not permitted.**
4. Is the structure classified as a Group A occupancy? If yes, live load reduction is not permitted.
5. Is A < 150 sq ft? If yes, live load reduction is not permitted.
6. Otherwise, L = Lo[1 − (R/100)], where R = 0.08(A − 150) ≤ smallest of 40 percent for horizontal members, 60 percent for vertical members, and 23.1(1 + D/L).]
* Live loads for members supporting two or more floors are permitted to be reduced
by a maximum of 20 percent [IBC 1607.9.2(2); also see exception in that section].
** Live loads for members supporting two or more floors are permitted to be reduced by a
maximum of 20 percent [IBC 1607.9.2(3)].
Figure 3.3 Alternate Method of Live Load Reduction in accordance with IBC 1607.9.2
Distribution of Floor Loads
IBC 1607.10 requires that the effects of partial uniform live loading (or alternate span
loading) be investigated when analyzing continuous floor members. Such loading
produces the greatest effects at different locations along the spans. Reduced floor live loads
may be used when performing this analysis.
Figure 3.4 illustrates four loading patterns that need to be investigated for a three-span
continuous system subject to dead and live loads.
Figure 3.4 Distribution of Floor Loads for a Three-span Continuous System in
accordance with IBC 1607.10
[diagram, summarized for spans AB, BC and CD:
1. Dead + live load on spans AB and CD, dead load only on span BC: maximum negative moment at support A or D and maximum positive moment in span AB or CD.
2. Dead + live load on spans AB and BC, dead load only on span CD: maximum negative moment at support B.
3. Dead + live load on spans BC and CD, dead load only on span AB: maximum negative moment at support C.
4. Dead + live load on span BC, dead load only on spans AB and CD: maximum positive moment in span BC.]
Roof Loads
In general, roofs are to be designed to resist dead, live, wind, and where applicable, rain,
snow and earthquake loads. A minimum roof live load of 20 psf is prescribed in IBC
Table 1607.1 for typical roof structures, while larger live loads are required for roofs used
as gardens or places of assembly.
IBC 1607.11.2 permits nominal roof live loads of 20 psf on flat, pitched and curved roofs
to be reduced in accordance with Eq. 16-25:
Lr = Lo R1 R2, where 12 ≤ Lr ≤ 20
Lr = reduced live load per square foot of horizontal roof projection
R1 = 1 for At ≤ 200 sq ft
R1 = 1.2 − 0.001At for 200 sq ft < At < 600 sq ft
R1 = 0.6 for At ≥ 600 sq ft
R2 = 1 for F ≤ 4
R2 = 1.2 − 0.05F for 4 < F < 12
R2 = 0.6 for F ≥ 12
At = tributary area (span length multiplied by effective width) in square feet
supported by a structural member
F = for a sloped roof, the number of inches of rise per foot; for an arch or dome,
the rise-to-span ratio multiplied by 32
As seen from Eq. 16-25, roof live load reduction is based on the tributary area of the
member being considered and the slope of the roof. No reduction occurs for members
supporting 200 sq ft or less on roofs with slopes of 4:12 or less, since R1 = R2 = 1 in
that case. In no case is the reduced roof live load to be taken less than 12 psf.
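The roof reduction of Eq. 16-25 is straightforward to script; this Python sketch (the function name is this example's choice) encodes R1, R2 and the 12 to 20 psf bounds:

```python
def reduced_roof_live_load(At, F, Lo=20.0):
    """Reduced roof live load Lr (psf) per IBC Eq. 16-25.

    At = tributary area (sq ft); F = rise in inches per foot for a sloped
    roof, or 32 times the rise-to-span ratio for an arch or dome.
    Applies to ordinary flat, pitched and curved roofs with Lo = 20 psf.
    """
    if At <= 200.0:
        R1 = 1.0
    elif At < 600.0:
        R1 = 1.2 - 0.001 * At
    else:
        R1 = 0.6
    if F <= 4.0:
        R2 = 1.0
    elif F < 12.0:
        R2 = 1.2 - 0.05 * F
    else:
        R2 = 0.6
    return max(12.0, min(20.0, Lo * R1 * R2))  # 12 <= Lr <= 20
```

With At = 490 sq ft and F = 1/2, this returns Lr = 20 × 0.71 × 1 = 14.2 psf, the same value computed for columns C4 and B6 in Example 3.1.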
Live load reduction of special purpose roofs (roofs used as promenade decks, roof
gardens, roofs used as places of assembly, etc.) is permitted in accordance with the
provisions of IBC 1607.9 for floors (IBC 1607.11.2.2). Live loads that are equal to or
greater than 100 psf at areas of roofs that are classified as Group A (assembly)
occupancies are not permitted to be reduced.
Landscaped roofs are to be designed for a minimum roof live load of 20 psf
(IBC 1607.11.3). The weight of the landscaping material is considered as dead load and
is to be computed on the basis of saturated soil.
A minimum roof live load of 5 psf is required for awnings and canopies in accordance
with IBC Table 1607.1 (IBC 1607.11.4). Such elements must also be designed for the
combined effects of snow and wind loads in accordance with IBC 1605.
Crane Loads
A general description of the crane loads that must be considered is given in IBC 1607.12.
In general, the support structure of the crane must be designed for the maximum wheel
load, vertical impact and horizontal impact as a simultaneous load combination.
Provisions on how to determine each of these component loads are given in IBC 1607.12.
Interior Walls and Partitions
Interior walls and partitions (including their finishing materials) greater than 6 ft in height
are required to be designed for a horizontal load of 5 psf (IBC 1607.13). This requirement
is intended to provide sufficient strength and durability of the wall framing and its
finished construction when subjected to nominal impact loads, such as those from
moving furniture or equipment, and from HVAC pressurization.
Requirements for fabric partitions are given in IBC 1607.13.1.
Rain Loads
IBC 1611 contains requirements for design rain loads. IBC Eq. 16-35 is used to
determine the rain load R on an undeflected roof:
R = 5.2(ds + dh)
where ds = depth of water on the undeflected roof up to the inlet of the secondary
drainage system when the primary drainage system is blocked and dh = additional depth
of water on the undeflected roof above the inlet of the secondary drainage system at its
design flow.
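In code form, Eq. 16-35 is a one-liner; the following Python sketch is purely illustrative:

```python
def rain_load(ds, dh):
    """Nominal rain load R (psf) per IBC Eq. 16-35.

    ds = depth of water (in.) up to the secondary drain inlet with the
    primary drainage system blocked; dh = additional depth (hydraulic
    head, in.) above that inlet at the secondary system's design flow.
    The constant 5.2 psf/in. is the unit weight of water, 62.4/12.
    """
    return 5.2 * (ds + dh)
```

For example, ds = 2 in. and dh = 3 in. give R = 5.2 × 5 = 26 psf.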
The nominal rain load R represents the weight of accumulated rainwater on the roof,
assuming that the primary roof drainage system is blocked. The primary roof drainage
system is designed for the 100-year hourly rainfall rate indicated in IBC Figure 1611.1
and the area of roof that it drains; it can include, for example, roof drains, leaders,
conductors and horizontal storm drains within the structure.
When primary roof drainage is blocked, water will rise above the primary roof drain until
it reaches the elevation of the roof edge, scuppers or other secondary drains. The depth of
water above the primary drain at the design rainfall intensity is based on the flow rate of
the secondary drainage system, which depends on the type of drainage system.
Figure 3.5 illustrates the water depths ds and dh that are to be used in Eq. 16-35 for the
case of a scupper acting as a secondary drain. Similarly, Figure 3.6 illustrates these water
depths for a typical interior secondary drain.
The constant in Eq. 16-35 is equal to the unit load per inch depth of water = 62.4/12 = 5.2 psf/in. An
undeflected roof refers to the case where deflections from loads (including dead loads) are not considered
when determining the amount of rainwater on the roof.
IBC Figure 1611.1 provides the rainfall rates for a storm of 1-hour duration that has a 100-year return
period. These rates are calculated by a statistical analysis of weather records. See Chapter 11 of the
International Plumbing Code® (IPC®) for requirements on the design of roof drainage systems. Rainfall
rates are given for various cities in the U.S. in Appendix B of the IPC. The rates are based on the maps in
IPC Figure 1106.1, which have the same origin as the maps in the IBC.
ASCE/SEI Table C8-1 gives flow rates in gallons per minute and corresponding hydraulic heads for
various types of drainage systems. For example, a 6-inch wide by 4-inch high closed scupper with 3
inches of hydraulic head will discharge 90 gallons per minute.
Figure 3.5 Example of Water Depths ds and dh in accordance with IBC 1611 for
Typical Perimeter Scuppers [roof section showing the top of the rainwater, a perimeter
scupper as the secondary drainage system, and the roof drain of the primary drainage
system (assumed blocked)]
Figure 3.6 Example of Water Depths ds and dh in accordance with IBC 1611 for Typical
Interior Drains [roof section showing the top of the rainwater, an interior secondary
drainage system, and the roof drain of the primary drainage system (assumed blocked)]
Where buildings are configured such that rainwater will not collect on the roof, no rain
load is required in the design of the roof, and a secondary drainage system is not needed.
What is important to note is that the provisions of IBC 1611 must be considered wherever
the potential exists that water may accumulate on the roof.
The following examples illustrate the IBC requirements for live load reduction and rain
loads.
Example 3.1 – Live Load Reduction, General Method of IBC 1607.9.1
The typical floor plan of a 10-story reinforced concrete office building is illustrated in
Figure 3.7.
Determine reduced live loads for
(1) column A3
(2) column B3
(3) column C1
(4) column C4
(5) column B6
(6) two-way slab AB23
Figure 3.7 Typical Floor Plan of 10-story Office Building
The ninth floor is designated as a storage floor (125 psf) and all other floors are typical
office floors with moveable partitions.
The roof is an ordinary flat roof (slope of 1/2 on 12), which is not used as a place of
public assembly or for any special purposes. Assume that rainwater does not collect on
the roof, and neglect snow loads.
Neglect lobby/corridor loads on the typical floors for this example.
Nominal Loads
Roof: 20 psf in accordance with IBC Table 1607.1, since the roof is an ordinary
flat roof that is not used for public assembly or any other special purposes.
Ninth floor: storage load is given in the design criteria as 125 psf.
Typical floor: 50 psf for office space in accordance with IBC Table 1607.1 and 15
psf for moveable partitions in accordance with IBC 1607.5, since the live load
does not exceed 80 psf. The partition load is not reducible; only the minimum
loads in IBC Table 1607.1 are permitted to be reduced (IBC 1607.9).
Part 1: Determine reduced live load for column A3
A summary of the reduced live loads is given in Table 3.1. Detailed calculations for
various floor levels follow the table.
Table 3.1 Summary of Reduced Live Loads for Column A3
[tabulated values of KLL AT (sq ft) and live load (psf) for each floor level]
N = nonreducible live load, R = reducible live load
* Roof live load reduced in accordance with IBC 1607.11.2
** Live load > 100 psf is not permitted to be reduced (IBC 1607.9.1.2)
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of column A3 = (28 / 2) × 22.5 = 315 sq ft
Since 200 sq ft < At < 600 sq ft, R1 is determined by Eq. 16-27:
R1 = 1.2 − 0.001At = 1.2 − (0.001 × 315) = 0.89
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 0.89 × 1 = 17.8 psf
Axial load = 17.8 × 315 / 1,000 = 5.6 kips
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced (IBC 1607.9.1.2).
Axial load = 125 × 315 / 1,000 = 39.4 kips
Typical floors
Reducible nominal live load = 50 psf
Since column A3 is an exterior column without a cantilever slab, the live load
element factor KLL = 4 (IBC Table 1607.9.1).
Reduced live load L is determined by Eq. 16-22:
L = Lo(0.25 + 15/√(KLL AT))
≥ 0.50Lo for members supporting one floor
≥ 0.40Lo for members supporting two or more floors
The reduction multiplier is equal to 0.40 where KLL AT ≥ 10,000 sq ft (see Figure 3.1).
Axial load = (L + 15)AT = 315(L + 15)
KLL = influence area/tributary area = 28(25 + 20)/315 = 4.
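The floor-by-floor entries summarized in Table 3.1 can be generated programmatically. The following Python sketch is a reconstruction for illustration only (the function name and structure are hypothetical); it computes the reduced live load and the live load axial contribution for column A3 when it supports n typical floors:

```python
import math

def a3_typical_floors(n_floors, Lo=50.0, K_LL=4, AT_per_floor=315.0):
    """Reduced live load (psf) and axial live load (kips) for column A3.

    The tributary area accumulates as n_floors * 315 sq ft; the 15 psf
    partition load is not reducible and is added back before computing
    the axial load, as in Axial load = (L + 15) * AT.
    """
    AT = n_floors * AT_per_floor
    multiplier = 0.25 + 15.0 / math.sqrt(K_LL * AT)
    lower_limit = 0.50 if n_floors == 1 else 0.40
    L = Lo * max(multiplier, lower_limit)
    axial_kips = (L + 15.0) * AT / 1000.0
    return L, axial_kips
```

For one supported floor this gives L of about 33.6 psf and an axial contribution of about 15.3 kips; once KLL AT reaches 10,000 sq ft the 0.40Lo limit governs.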
Part 2: Determine reduced live load for column B3
A summary of the reduced live loads is given in Table 3.2. Detailed calculations for
various floor levels follow the table.
Table 3.2 Summary of Reduced Live Loads for Column B3
[tabulated values of KLL AT (sq ft) and live load (psf) for each floor level]
N = nonreducible live load, R = reducible live load
* Roof live load reduced in accordance with IBC 1607.11.2
** Live load > 100 psf is not permitted to be reduced (IBC 1607.9.1.2)
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of column B3 = 28 × 22.5 = 630 sq ft
Since At > 600 sq ft, R1 is determined by Eq. 16-28: R1 = 0.6
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 0.6 × 1 = 12.0 psf
Axial load = 12.0 × 630 / 1,000 = 7.6 kips
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced (IBC 1607.9.1.2).
Axial load = 125 × 630 / 1,000 = 78.8 kips
Typical floors
Reducible nominal live load = 50 psf
Since column B3 is an interior column, the live load element factor KLL = 4 (IBC
Table 1607.9.1).
Reduced live load L is determined by Eq. 16-22:
L = Lo(0.25 + 15/√(KLL AT))
≥ 0.50Lo for members supporting one floor
≥ 0.40Lo for members supporting two or more floors
The reduction multiplier is equal to 0.40 where KLL AT ≥ 10,000 sq ft (see Figure 3.1).
Axial load = (L + 15)AT = 630(L + 15)
Part 3: Determine reduced live load for column C1
A summary of the reduced live loads is given in Table 3.3. Detailed calculations for
various floor levels follow the table.
Table 3.3 Summary of Reduced Live Loads for Column C1
[tabulated values of KLL AT (sq ft) and live load (psf) for each floor level]
N = nonreducible live load, R = reducible live load
* Roof live load reduced in accordance with IBC 1607.11.2
** Live load > 100 psf is not permitted to be reduced (IBC 1607.9.1.2)
KLL = influence area/tributary area = 2[(28 × 25) + (28 × 20)]/630 = 4.
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of column C1 = 28 × 25 / 4 = 175 sq ft
Since At < 200 sq ft, R1 is determined by Eq. 16-26: R1 = 1
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 1 × 1 = 20.0 psf
Axial load = 20.0 × 175 / 1,000 = 3.5 kips
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced (IBC 1607.9.1.2).
Axial load = 125 × 175 / 1,000 = 21.9 kips
Typical floors
Reducible nominal live load = 50 psf
Since column C1 is an exterior column without a cantilever slab, the live load
element factor KLL = 4 (IBC Table 1607.9.1).
Reduced live load L is determined by Eq. 16-22:
L = Lo(0.25 + 15/√(KLL AT))
≥ 0.50Lo for members supporting one floor
≥ 0.40Lo for members supporting two or more floors
Axial load = (L + 15)AT = 175(L + 15)
KLL = influence area/tributary area = (28 × 25)/175 = 4.
Part 4: Determine reduced live load for column C4
A summary of the reduced live loads is given in Table 3.4. Detailed calculations for
various floor levels follow the table.
Table 3.4 Summary of Reduced Live Loads for Column C4
[tabulated values of KLL AT (sq ft) and live load (psf) for each floor level]
N = nonreducible live load, R = reducible live load
* Roof live load reduced in accordance with IBC 1607.11.2
** Live load > 100 psf is not permitted to be reduced (IBC 1607.9.1.2)
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of column C4 = (28 × 20) / 4 + (28 × 25) / 2 = 490 sq ft
Since 200 sq ft < At < 600 sq ft, R1 is determined by Eq. 16-27:
R1 = 1.2 − 0.001At = 1.2 − (0.001 × 490) = 0.71
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20.0 × 0.71 × 1 = 14.2 psf
Axial load = 14.2 × 490 / 1,000 = 7.0 kips
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced (IBC 1607.9.1.2).
Axial load = 125 × 490 / 1,000 = 61.3 kips
Typical floors
Reducible nominal live load = 50 psf
Since column C4 is an exterior column without a cantilever slab, the live load
element factor KLL = 4 (IBC Table 1607.9.1).
Reduced live load L is determined by Eq. 16-22:
L = Lo(0.25 + 15/√(KLL AT))
≥ 0.50Lo for members supporting one floor
≥ 0.40Lo for members supporting two or more floors
The reduction multiplier is equal to 0.40 where KLL AT ≥ 10,000 sq ft.
Axial load = (L + 15)AT = 490(L + 15)
Part 5: Determine reduced live load for column B6
A summary of the reduced live loads is given in Table 3.5. Detailed calculations for
various floor levels follow.
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of column B6 = 28 × (25 / 2 + 5) = 490 sq ft
Since 200 sq ft < At < 600 sq ft, R1 is determined by Eq. 16-27:
R1 = 1.2 − 0.001At = 1.2 − (0.001 × 490) = 0.71
KLL = influence area/tributary area = 28[20 + (2 × 25)]/490 = 4.
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 0.71 × 1 = 14.2 psf
Axial load = 14.2 × 490 / 1,000 = 7.0 kips
Table 3.5 Summary of Reduced Live Loads for Column B6
[tabulated values of KLL AT (sq ft) and live load (psf) for each floor level]
N = nonreducible live load, R = reducible live load
* Roof live load reduced in accordance with IBC 1607.11.2
** Live load > 100 psf is not permitted to be reduced (IBC 1607.9.1.2)
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced (IBC 1607.9.1.2).
Axial load = 125 × 490 / 1,000 = 61.3 kips
Typical floors
Reducible nominal live load = 50 psf
Column B6 is an exterior column with a cantilever slab; thus, the live load element
factor KLL = 3 (IBC Table 1607.9.1).
Reduced live load L is determined by Eq. 16-22:
L = Lo(0.25 + 15/√(KLL AT))
≥ 0.50Lo for members supporting one floor
≥ 0.40Lo for members supporting two or more floors
The reduction multiplier is equal to 0.40 where KLL AT ≥ 10,000 sq ft (see Figure 3.1).
Axial load = (L + 15)AT = 490(L + 15)
(Actual influence area/tributary area = 2[(28 × 25) + (5 × 28)]/490 = 3.4. IBC Table
1607.9.1 requires KLL = 3, which is slightly conservative.)
Part 6: Determine reduced live load for two-way slab AB23
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of this slab = 25 × 28 = 700 sq ft
Since At > 600 sq ft, R1 is determined by Eq. 16-28: R1 = 0.6
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 0.6 × 1 = 12.0 psf
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced (IBC 1607.9.1.2).
Live load = 125 psf
Typical floors
Reducible nominal live load = 50 psf
According to IBC Table 1607.9.1, KLL = 1 for a two-way slab.
Reduced live load L is determined by Eq. 16-22:
L = Lo(0.25 + 15/√(KLL AT)) = Lo(0.25 + 15/√700) = 0.82Lo
= 41.0 psf
> 0.50Lo for members supporting one floor
Total live load = 41 + 15 = 56 psf
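As a quick numerical check of the slab result above (plain Python arithmetic, no special libraries):

```python
import math

# Two-way slab AB23, typical floor: K_LL = 1, AT = 25 x 28 = 700 sq ft
Lo = 50.0
multiplier = 0.25 + 15.0 / math.sqrt(1 * 700)  # about 0.82
L = Lo * multiplier                            # reduced live load, psf
total = round(L) + 15                          # add nonreducible 15 psf partitions
```

This reproduces the 0.82 multiplier, L of about 41 psf and a 56 psf total.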
Example 3.2 – Live Load Reduction, Alternate Method of IBC 1607.9.2
Determine the reduced live loads for the elements in Example 3.1 using the alternate
floor live load reduction of IBC 1607.9.2. Assume a nominal dead-to-live load ratio of 2.
Part 1: Determine reduced live load for column A3
A summary of the reduced live loads is given in Table 3.6. Detailed calculations for
various floor levels follow the table.
Table 3.6 Summary of Reduced Live Loads for Column A3
[tabulated values of A (sq ft), reduction factor R (percent) and live load (psf) for each floor level]
N = nonreducible live load, R = reducible live load
* Roof live load reduced in accordance with IBC 1607.11.2
** Live load > 100 psf is not permitted to be reduced [IBC 1607.9.2(2)]
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of column A3 = (28 / 2) × 22.5 = 315 sq ft
Since 200 sq ft < At < 600 sq ft, R1 is determined by Eq. 16-27:
R1 = 1.2 − 0.001At = 1.2 − (0.001 × 315) = 0.89
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 0.89 × 1 = 17.8 psf
Axial load = 17.8 × 315 / 1,000 = 5.6 kips
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced [IBC 1607.9.2(2)].
Axial load = 125 × 315 / 1,000 = 39.4 kips
Typical floors
Reducible nominal live load = 50 psf
Reduction factor R is given by Eq. 16-23:
R = 0.08(A − 150)
≤ the smallest of: 60 percent for vertical members (governs)
                   23.1(1 + D/Lo) = 23.1(1 + 2) = 69 percent
Axial load = [Lo(1 − 0.01R) + 15]A = 315[50(1 − 0.01R) + 15]
Part 2: Determine reduced live load for column B3
A summary of the reduced live loads is given in Table 3.7. Detailed calculations for
various floor levels follow.
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of column B3 = 28 × 22.5 = 630 sq ft
Since At > 600 sq ft, R1 is determined by Eq. 16-28: R1 = 0.6
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 0.6 × 1 = 12.0 psf
Axial load = 12.0 × 630 / 1,000 = 7.6 kips
Table 3.7 Summary of Reduced Live Loads for Column B3
[tabulated values of A (sq ft), reduction factor R (percent) and live load (psf) for each floor level]
N = nonreducible live load, R = reducible live load
* Roof live load reduced in accordance with IBC 1607.11.2
** Live load > 100 psf is not permitted to be reduced [IBC 1607.9.2(2)]
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced [IBC 1607.9.2(2)].
Axial load = 125 × 630 / 1,000 = 78.8 kips
Typical floors
Reducible nominal live load = 50 psf
Reduction factor R is given by Eq. 16-23:
R = 0.08(A − 150)
≤ the smallest of: 60 percent for vertical members (governs)
                   23.1(1 + D/Lo) = 23.1(1 + 2) = 69 percent
Axial load = [Lo(1 − 0.01R) + 15]A = 630[50(1 − 0.01R) + 15]
Part 3: Determine reduced live load for column C1
A summary of the reduced live loads is given in Table 3.8. Detailed calculations for
various floor levels follow the table.
Table 3.8 Summary of Reduced Live Loads for Column C1
[tabulated values of A (sq ft), reduction factor R (percent) and live load (psf) for each floor level]
N = nonreducible live load, R = reducible live load
* Roof live load reduced in accordance with IBC 1607.11.2
** Live load > 100 psf is not permitted to be reduced [IBC 1607.9.2(2)]
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of column C1 = 28 × 25 / 4 = 175 sq ft
Since At < 200 sq ft, R1 is determined by Eq. 16-26: R1 = 1
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 1 × 1 = 20.0 psf
Axial load = 20.0 × 175 / 1,000 = 3.5 kips
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced [IBC 1607.9.2(2)].
Axial load = 125 × 175 / 1,000 = 21.9 kips
Typical floors
Reducible nominal live load = 50 psf
Reduction factor R is given by Eq. 16-23:
R = 0.08(A − 150)
≤ the smallest of: 60 percent for vertical members (governs)
                   23.1(1 + D/Lo) = 23.1(1 + 2) = 69 percent
Axial load = [Lo(1 − 0.01R) + 15]A = 175[50(1 − 0.01R) + 15]
Part 4: Determine reduced live load for column C4
A summary of the reduced live loads is given in Table 3.9. Detailed calculations for
various floor levels follow the table.
Table 3.9 Summary of Reduced Live Loads for Column C4
[tabulated values of A (sq ft), reduction factor R (percent) and live load (psf) for each floor level]
N = nonreducible live load, R = reducible live load
* Roof live load reduced in accordance with IBC 1607.11.2
** Live load > 100 psf is not permitted to be reduced [IBC 1607.9.2(2)]
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of column C4 = (28 × 20) / 4 + (28 × 25) / 2 = 490 sq ft
Since 200 sq ft < At < 600 sq ft, R1 is determined by Eq. 16-27:
R1 = 1.2 − 0.001At = 1.2 − (0.001 × 490) = 0.71
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20.0 × 0.71 × 1 = 14.2 psf
Axial load = 14.2 × 490 / 1,000 = 7.0 kips
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced [IBC 1607.9.2(2)].
Axial load = 125 × 490 / 1,000 = 61.3 kips
Typical floors
Reducible nominal live load = 50 psf
Reduction factor R is given by Eq. 16-23:
R = 0.08(A − 150)
≤ the smallest of: 60 percent for vertical members (governs)
                   23.1(1 + D/Lo) = 23.1(1 + 2) = 69 percent
Axial load = [Lo(1 − 0.01R) + 15]A = 490[50(1 − 0.01R) + 15]
Part 5: Determine reduced live load for column B6
A summary of the reduced live loads is given in Table 3.10. Detailed calculations for
various floor levels follow the table.
CHAPTER 3 DEAD, LIVE, AND RAIN LOADS
Table 3.10 Summary of Reduced Live Loads for Column B6
(Column headings: Tributary Area (sq ft); Reduction Factor, R; Reduced Live Load)
N = nonreducible live load, R = reducible live load
* Roof live load reduced in accordance with IBC 1607.11.2
** Live load > 100 psf is not permitted to be reduced [IBC 1607.9.2(2)]
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of column B6 = 28 × (25 / 2 + 5) = 490 sq ft
Since 200 sq ft < At < 600 sq ft, R1 is determined by Eq. 16-27:
R1 = 1.2 − 0.001At = 1.2 − (0.001 × 490) = 0.71
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 0.71 × 1 = 14.2 psf
Axial load = 14.2 × 490 / 1,000 = 7.0 kips
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced [IBC 1607.9.2(2)].
Axial load = 125 × 490 / 1,000 = 61.3 kips
Typical floors
Reducible nominal live load = 50 psf
Reduction factor R is given by Eq. 16-23:
R = 0.08(A − 150)
≤ the smaller of: 60 percent for vertical members (governs)
and 23.1(1 + D/Lo) = 23.1(1 + 2) = 69 percent
Axial load = [Lo(1 − 0.01R) + 15]A = 490[50(1 − 0.01R) + 15]
Part 6: Determine reduced live load for two-way slab AB23
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of this slab = 25 × 28 = 700 sq ft
Since At > 600 sq ft, R1 is determined by Eq. 16-28: R1 = 0.6
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 0.6 × 1 = 12.0 psf
Ninth floor
Since the ninth floor is storage with a live load of 125 psf, which exceeds 100 psf, the
live load is not permitted to be reduced [IBC 1607.9.2(2)].
Live load = 125 psf
Typical floors
Reducible nominal live load = 50 psf
Reduction factor R is given by Eq. 16-23:
R = 0.08(A − 150) = 0.08(700 − 150) = 44 percent
> the smaller of: 40 percent for horizontal members (governs)
and 23.1(1 + D/Lo) = 23.1(1 + 2) = 69 percent; thus, use R = 40 percent
Reduced live load L = Lo(1 − 0.01R) = 50(1 − 0.40) = 30 psf
Total live load = 30 + 15 = 45 psf
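The Eq. 16-23 reduction with its member-type caps can be expressed as a small helper. The function and argument names are mine; D/Lo = 2 and the 15 psf nonreducible portion are taken from the calculations above.

```python
def live_load_reduction_pct(A_sqft, D_over_Lo, vertical):
    """Reduction R (percent) per IBC Eq. 16-23 with its upper limits (sketch)."""
    R = 0.08 * (A_sqft - 150)
    member_cap = 60.0 if vertical else 40.0        # vertical vs. horizontal members
    cap = min(member_cap, 23.1 * (1 + D_over_Lo))  # second limit on R
    return max(0.0, min(R, cap))

# Two-way slab AB23: A = 700 sq ft, D/Lo = 2, horizontal member
R = live_load_reduction_pct(700, 2.0, vertical=False)  # capped at 40 percent
L = 50 * (1 - 0.01 * R)                                # reduced reducible load, psf
total = L + 15                                         # plus 15 psf nonreducible
```

For the slab, R = 44 percent is capped at 40 percent, giving L = 30 psf and a total of 45 psf, matching the calculation above.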
Note: the reduced live load on the shear walls in this example, as well as in Example 3.1,
can be determined using the same procedure as for columns. For example, the shear walls
located at E5 can be collectively considered to be an edge column without a cantilever.
Example 3.3 – Live Load Reduction on a Girder
Determine the reduced live load on a typical interior girder of the warehouse shown in
Figure 3.8. The roof is an ordinary flat roof.
7″ precast concrete wall (typ.)
4 @ 37′-0″ = 148′-0″
HSS column (typ.)
Open-web joist girders (typ.)
Metal roof deck
Open-web joist purlins @ 8′-0″
8 @ 32′-0″ = 256′-0″
Roof slope = 1/2 in./ft
Figure 3.8 Plan and Elevation of Warehouse Building
Nominal roof live load: 20 psf in accordance with IBC Table 1607.1, since the roof is an
ordinary flat roof that is not used for public assembly or any other special purposes.
The reduced roof live load Lr is determined by Eq. 16-25:
Lr = Lo R1R2 = 20 R1R2
The tributary area At of this girder = 32 × 37 = 1,184 sq ft
Since At > 600 sq ft, R1 is determined by Eq. 16-28: R1 = 0.6
Since F = 1/2 < 4, R2 = 1 (Eq. 16-29)
Thus, Lr = 20 × 0.6 × 1 = 12.0 psf
The code-prescribed snow and wind loads on the roof of this warehouse are given in
Chapters 4 and 5 of this publication, respectively.
Example 3.4 – Rain Load, IBC 1611
Determine the rain load R on a roof located in Madison, Wisconsin, similar to the one
depicted in Figure 3.5 given the following design data:
Tributary area of primary roof drain = 6,200 sq ft
Closed scupper size: 6 in. wide (b) by 4 in. high (h)
Vertical distance from primary roof drain to inlet of scupper (static head distance
d s ) = 6 in.
Rainfall rate = 3.0 in./hr (IBC Figure 1611.1)
To determine the rain load R, the hydraulic head dh must be determined, based on the
required flow rate.
Required flow rate Q = tributary area of roof drain × rainfall rate = 6,200 × 3/12 =
1,550 cu ft/hr = 25.83 cu ft/min = 193.2 gpm
1 gallon = 0.1337 cu ft
The hydraulic head dh is determined by the following equation, which is applicable for
closed scuppers where the free surface of the water is above the top of the scupper:
Q = 2.9b(dh^1.5 − h1^1.5) = 2.9b[(h + h1)^1.5 − h1^1.5]
where b = width of the scupper, h = depth of the scupper, and h1 = dh − h = distance
from the free surface of the water to the top of the scupper.
For a flow rate of 193.2 gpm, h1 = 1.5 in. and dh = 4 + 1.5 = 5.5 in.
The rain load R is determined by Eq. 16-35:
R = 5.2(d s + d h ) = 5.2(6 + 5.5) = 59.8 psf
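Because the scupper equation cannot be inverted in closed form, the hydraulic head can be found numerically. The sketch below uses bisection on h1; names and the bisection bracket are my own, and the equation is used with Q in gpm and dimensions in inches, as in the example.

```python
def scupper_flow_gpm(b_in, h_in, h1_in):
    # Q = 2.9 b [(h + h1)^1.5 - h1^1.5]; Q in gpm, dimensions in inches
    return 2.9 * b_in * ((h_in + h1_in) ** 1.5 - h1_in ** 1.5)

# Required flow: 6,200 sq ft tributary area at 3.0 in./hr rainfall
Q_cfh = 6200 * 3 / 12            # cu ft/hr
Q_gpm = Q_cfh / 60 / 0.1337      # 1 gallon = 0.1337 cu ft

# Bisection for h1 such that the 6 in. x 4 in. scupper passes Q_gpm
lo, hi = 0.0, 24.0               # assumed bracket, in.
for _ in range(60):
    mid = (lo + hi) / 2
    if scupper_flow_gpm(6.0, 4.0, mid) < Q_gpm:
        lo = mid
    else:
        hi = mid
h1 = (lo + hi) / 2
dh = 4.0 + h1                    # hydraulic head, in.
R = 5.2 * (6.0 + dh)             # rain load, Eq. 16-35, psf (ds = 6 in.)
```

This reproduces dh of roughly 5.5 in. and R of roughly 60 psf, in line with the hand calculation.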
Equations for flow rate for various types of roof drains can be found in various references, including
“Roof Loads for New Construction,” FM Global Property Loss Prevention Data Sheet 1-54, Factory
Mutual Insurance Company, 2006. See also Appendix Table C8-1 of ASCE/SEI 7 for flow rates and
hydraulic heads of various drainage systems.
By interpolating the values in Table C8-1 for a flow rate of 193.2 gpm, the hydraulic head dh is equal to
5.6 inches.
CHAPTER 4 SNOW LOADS
IBC 1608.1 requires that design snow loads on buildings and structures be determined by
the provisions of Chapter 7 of ASCE/SEI 7-05. These provisions are based on over 40
years of ground snow load data.
The ground snow load pg is obtained from ASCE/SEI Figure 7-1 or IBC Figure 1608.2
for the conterminous U.S. and from ASCE/SEI Table 7-1 or IBC Table 1608.2 for
locations in Alaska. The snow loads on the maps have a 2-percent annual probability of
being exceeded (i.e., a 50-year mean recurrence interval). Table C7-1 in the commentary
of ASCE/SEI 7 contains ground snow loads at 204 national weather service locations
where load measurements are made.
In some areas of the U.S., the ground snow load is too variable to allow mapping. Such
regions are noted on the maps as “CS,” which indicates that a site-specific case study is
required. More information on site-specific case studies can be found in C7.2. The maps
also provide ground snow loads in mountainous areas based on elevation. Numbers in
parentheses represent the upper elevation limits in feet for the ground snow load values
that are given below the elevation. Where a building is located at an elevation greater
than that shown on the maps, a site-specific case study must be conducted to establish the
ground snow load.
Once a ground snow load has been established, a flat roof snow load pf is determined by
Eq. 7-1 (7.3). This snow load is used for flat roofs (roof slope less than or equal to 5
degrees) and is a function of roof exposure, roof thermal condition and occupancy of the
structure. Minimum values of pf are given for low-slope roofs in 7.3.
Design snow loads for all structures are based on the sloped roof snow load ps, which is
determined by modifying the flat roof snow load pf by a roof slope factor Cs (Eq. 7-2).
The factor Cs depends on the slope and temperature of the roof, the presence or absence
of obstructions and the degree of slipperiness of the roof surface. A list of roof materials
that are considered to be slippery and those that are not is given in 7.4. Figure 7-2
contains graphs of Cs for various conditions, and equations for Cs are given in C7.4.
According to Note a in Table C7-1, it is not appropriate to use only the site-specific information in this
table to determine design snow loads. See C7.2 for more information.
Low-slope roofs are defined in 7.3.4. The minimum roof snow load is a separate load case, and it is not to
be combined with drifting, sliding or other types of snow loading.
According to 7.4.3, Cs = 0 for portions of curved roofs that have a slope exceeding 70
degrees, i.e., ps = 0. Balanced snow loads for curved roofs are determined from the
loading diagrams in Figure 7-3 with Cs determined from the appropriate curve in Figure
7-2. Multiple folded plate, sawtooth and barrel vault roofs are to be designed using
Cs = 1, i.e., ps = pf (7.4.4). These types of roofs collect additional snow in their valleys
by wind drifting and snow sliding, so no reduction in snow load based on roof slope is permitted.
The partial loading provisions of 7.5 must be satisfied for continuous roof framing
systems and all other roof systems where removal of snow load on one span (by wind or
thermal effects, for example) causes an increase in stress or deflection in an adjacent
span. For simplicity, only the three load cases given in Figure 7-4 need to be investigated;
comprehensive alternate span (or checkerboard) loading analyses are not required.
Partial loading provisions need not be considered for structural members that span
perpendicular to the ridgeline of gable roofs with slopes greater than the larger of
2.38 degrees and (70/W) + 0.5 where W is the horizontal distance from the eave to the
ridge in feet. Also, the minimum roof load requirements of 7.3.4 are not applicable in the
partial load provisions.
Unbalanced load occurs on sloped roofs from wind and sunlight. Wind tends to reduce
the snow load on the windward portion and increase the snow load on the leeward
portion. This is unlike partial loading where snow is removed on one portion of the roof
and is not added to another portion. Provisions for unbalanced snow loads are given in
7.6.1 for hip and gable roofs, in 7.6.2 for curved roofs, in 7.6.3 for multiple folded plate,
sawtooth and barrel vault roofs, and in 7.6.4 for dome roofs. Figures 7-3, 7-5 and 7-6
illustrate balanced and unbalanced snow loads for curved roofs, hip and gable roofs, and
sawtooth roofs, respectively.
Section 7.7 contains provisions for snow drifts that can occur on lower roofs of a building
due to (1) wind depositing snow from higher portions of the same building or an adjacent
building or terrain feature (such as a hill) to a lower roof and (2) wind depositing snow
from the windward portion of a lower roof to the portion of a lower roof adjacent to a
taller part of the building. These two types of drifts, which are called leeward and
windward drifts, respectively, are illustrated in Figure 7-7. Loads from drifting snow are
superimposed on the balanced snow load, as shown in Figure 7-8. Drift loads on sides of
roof projections (including rooftop equipment) and at parapet walls are determined by the
provisions of 7.8, which are based on the drift requirements of 7.7.1.
The load caused by snow sliding off a sloped roof onto a lower roof is determined by the
provisions of 7.9. Such loads are superimposed on the balanced snow load of the lower
A balanced snow load is defined in the snow load provisions as the sloped roof snow load ps determined
by Eq. 7-2. This load is assumed to act on the horizontal projection of the entire roof surface.
A rain-on-snow surcharge load of 5 psf is to be added on all roofs that meet the
conditions of 7.10. This surcharge load applies only to the balanced load case, and need
not be used in combination with drift, sliding, unbalanced or partial loads.
Provisions for ponding instability and progressive deflection of roofs with a slope less
than ¼ inch per foot are given in 7.11 and 8.4. Requirements for increased snow loads on
existing roofs due to additions and alterations are covered in 7.12.
The following general procedure, which is based on that given in C7.0, can be used to
determine design snow loads in accordance with Chapter 7 of ASCE/SEI 7-05:
1. Determine ground snow load pg (7.2).
2. Determine flat roof snow load pf by Eq. 7-1 (7.3).
3. Determine sloped roof snow load ps by Eq. 7-2 (7.4).
4. Consider partial loading (7.5).
5. Consider unbalanced snow loads (7.6).
6. Consider snow drifts on lower roofs (7.7) and roof projections (7.8).
7. Consider sliding snow (7.9).
8. Consider rain-on-snow loads (7.10).
9. Consider ponding instability (7.11).
10. Consider existing roofs (7.12).
It is possible that snow loads in excess of the design values computed by Chapter 7 may
occur on a building or structure. The snow load to dead load ratio of a roof structure is an
important consideration when evaluating the implications of excess loads. Section C7.0
provides additional information on this topic.
Section C7.13 gives information on wind tunnel tests and other experimental and
computational methods that have been employed to establish design snow loads for roof
geometries and complicated sites not addressed in the provisions.
Section 4.2 of this document contains flowcharts for determining design snow loads and
load cases, based on the design procedure outlined above.
Section 4.3 contains completely worked-out examples that illustrate the design
requirements for snow.
A summary of the flowcharts provided in this chapter is given in Table 4.1.
Table 4.1 Summary of Flowcharts Provided in Chapter 4
Flowchart 4.1
Flat Roof Snow Load, p f
Flowchart 4.2
Roof Slope Factor, C s
Flowchart 4.3
Sloped Roof Snow Load, p s
Flowchart 4.4
Unbalanced Roof Snow Loads – Hip
and Gable Roofs
Flowchart 4.5
Unbalanced Roof Snow Loads –
Curved and Dome Roofs
Flowchart 4.6
Unbalanced Roof Snow Loads –
Multiple Folded Plate, Sawtooth, and
Barrel Vault Roofs
Flowchart 4.7
Drifts on Lower Roof of a Structure
FLOWCHART 4.1
Flat Roof Snow Load, pf *
Determine ground snow load p g
from Fig. 7-1 or Fig. 1608.2 for
conterminous U.S. and Table 7-1 or
Table 1608.2 for Alaska**
Determine exposure factor Ce from
Table 7-2 (7.3.1)
Determine thermal factor Ct from
Table 7-3 (7.3.2)
Determine importance factor I from
IBC Table 1604.5 and Table 7-4
Determine flat roof snow load
p f = 0.7CeCt Ip g by Eq. 7-1†
* A flat roof is defined as a roof with a slope that is less than or equal to 5 degrees.
** “CS” in the maps signifies areas where a site-specific study must be conducted to
determine pg. Numbers in parentheses represent the upper elevation limit in feet for the
ground snow load values given below. Site-specific studies are required at elevations
not covered in the maps.
Minimum values of pf are specified in 7.3 for low-slope roofs, which are defined in 7.3.4.
FLOWCHART 4.2
Roof Slope Factor, Cs *
Is the roof a multiple
folded plate, sawtooth or
barrel vault roof?
Determine thermal factor Ct
from Table 7-3 (7.3.2)
C s = 1.0 (7.4.4)
Is Ct ≤ 1.0 ?
Roof is defined as a
cold roof (7.4.2)
Roof is defined as a
warm roof (7.4.1)
* Portions of curved roofs having a slope exceeding 70 degrees shall be considered free of
snow load, i.e., Cs = 0 (7.4.3).
FLOWCHART 4.2
Roof Slope Factor, Cs
Is Ct = 1.1 ?
Is the roof an unobstructed
slippery surface that will allow
snow to slide off the eaves?**
Determine roof slope factor C s
using the solid line in Fig. 7-2b
Determine roof slope factor C s
using the dashed line in Fig. 7-2b
Is the roof an unobstructed
slippery surface that will allow
snow to slide off the eaves?**
Determine roof slope factor C s
using the solid line in Fig. 7-2c
Determine roof slope factor C s
using the dashed line in Fig. 7-2c
** See 7.4 for definitions of unobstructed and slippery surfaces.
FLOWCHART 4.2
Roof Slope Factor, Cs
Is the roof (1) an unobstructed slippery
surface that will allow snow to slide off
the eaves and (2) nonventilated with an
R-value ≥ 30 ft2h°F/Btu or ventilated
with an R-value ≥ 20 ft2h°F/Btu?**
Determine roof slope factor C s
using the solid line in Fig. 7-2a†
Determine roof slope factor C s
using the dashed line in Fig. 7-2a†
** See 7.4 for definitions of unobstructed and slippery surfaces. An R-value for a roof is
defined as its thermal resistance.
See 7.4.5 for an additional uniformly distributed load that is to be applied on overhanging
portions of warm roofs due to formation of ice dams and icicles along eaves.
FLOWCHART 4.3
Sloped Roof Snow Load, ps
Determine flat snow load p f from
Flowchart 4.1
Determine roof slope factor C s from
Flowchart 4.2
Determine sloped roof snow load
p s = C s p f by Eq. 7-2
FLOWCHART 4.4
Unbalanced Roof Snow Loads – Hip and Gable Roofs
Is the roof slope > 70°? If so, unbalanced snow loads are not required to be applied.
Is the roof slope < the larger of (70/W) + 0.5 and 2.38°? If so, unbalanced snow
loads are not required to be applied.
Is W ≤ 20 ft and do simply supported prismatic members span from ridge to eave?
If so, apply an unbalanced uniform snow load of Ipg on the leeward side and no load
on the windward side (see Fig. 7-5).
Otherwise, the unbalanced load shall consist of:
0.3ps on the windward side
ps on the leeward side plus a rectangular surcharge of hd γ/√S, which extends
from the ridge a distance of 8√S hd/3* (see Fig. 7-5)
* hd is the drift height from Fig. 7-9 with W substituted for ℓu, γ = snow density
determined by Eq. 7-3, and S = roof slope run for a rise of one
FLOWCHART 4.5
Unbalanced Roof Snow Loads – Curved and Dome Roofs*,**
(7.6.2, 7.6.4)
Is the slope of the straight line from the
eaves (or the 70° point, if present) to the
crown less than 10° or greater than 60°?
Unbalanced snow loads are not
required to be applied
Unbalanced load shall consist of:
No load on the windward side
The applicable load distribution depicted
in Cases 1, 2 or 3 shown in Fig. 7-3†
* Portions of curved roofs having a slope > 70° shall be considered free of snow.
** See 7.6.4 for provisions related to dome roofs.
See 7.6.2 where ground or another roof abuts a Case 2 or Case 3 curved roof at or
within 3 ft of its eaves.
FLOWCHART 4.6
Unbalanced Roof Snow Loads – Multiple Folded Plate,
Sawtooth, and Barrel Vault Roofs
Does the roof slope exceed 3/8 in./ft?
Unbalanced snow loads are not
required to be applied
Unbalanced load shall consist of:*
0.5 p f at the ridge or crown
2 p f / Ce at the valley**
* Figure 7-6 illustrates balanced and unbalanced snow loads for a sawtooth roof.
** Snow surface above the valley shall not be at an elevation higher than the snow
above the ridge. Snow depths shall be determined by dividing the snow load by the
snow density given by Eq. 7-3.
FLOWCHART 4.7
Drifts on Lower Roof of a Structure
Determine ground snow load p g
from Fig. 7-1 or Fig. 1608.2 for
conterminous U.S. and Table 7-1 or
Table 1608.2 for Alaska*
Determine snow density
γ = 0.13 p g + 14 ≤ 30 pcf by Eq. 7-3
Determine sloped roof snow load p s
from Flowchart 4.3
Determine hc as clear height from
top of balanced snow to
• closest point on adjacent upper roof
• top of parapet
• top of projection on the roof
(see Fig. 7-8)**
Is hc / hb < 0.2 ?
Determine (1) drift height hd from Fig. 7-9 for
leeward drifts and from 7.7.1 for windward
drifts and (2) drift width w from 7.7.1†
Drift loads are not
required to be applied
* “CS” in the maps signifies areas where a site-specific study must be conducted to determine pg. Numbers in parentheses
represent the upper elevation limit in feet for the ground snow load values given below. Site-specific studies are required
at elevations not covered in the maps.
** Height of balanced snow hb = ps/γ or pf/γ (7.7.1)
See 7.7.2 for drift loads caused by adjacent structures and terrain features. See 7.8 for drift loads on roof projections and
parapet walls.
The following sections contain examples that illustrate the snow load design provisions
of Chapter 7 in ASCE/SEI 7-05.
Example 4.1 – Warehouse Building, Roof Slope of 1/2 on 12
Determine the design snow loads for the one-story warehouse illustrated in Figure 4.1.
7″ precast concrete wall (typ.)
4 @ 37′-0″ = 148′-0″
HSS column (typ.)
Open-web joist girders (typ.)
Metal roof deck
Open-web joist purlins @ 8′-0″
8 @ 32′-0″ = 256′-0″
Roof slope = 1/2 in./ft
Figure 4.1 Plan and Elevation of Warehouse Building
Location: St. Louis, MO
Terrain category: C (open terrain with scattered obstructions less than 30 ft in height)
Occupancy: Warehouse use. Less than 300 people congregate in one area
and the building is not used to store hazardous or toxic materials
Thermal condition: Structure is kept just above freezing
Roof exposure condition: Partially exposed
Roof surface: Rubber membrane
Roof framing: All members are simply supported
1. Determine ground snow load p g .
From Figure 7-1 or Figure 1608.2, the ground snow load is equal to 20 psf for
St. Louis, MO.
2. Determine flat roof snow load p f by Eq. 7-1.
Use Flowchart 4.1 to determine p f .
a. Determine exposure factor Ce from Table 7-2.
From the design data, the terrain category is C and the roof exposure is partially
exposed. Therefore, Ce = 1.0 from Table 7-2.
b. Determine thermal factor Ct from Table 7-3.
From the design data, the structure is kept just above freezing during the winter,
so Ct = 1.1 from Table 7-3.
c. Determine the importance factor I from Table 7-4.
From IBC Table 1604.5, the Occupancy Category is II, based on the occupancy
given in the design data. Thus, I = 1.0 from Table 7-4.
p f = 0.7CeCt Ip g
= 0.7 × 1.0 × 1.1 × 1.0 × 20 = 15.4 psf
Check if the minimum snow load requirements are applicable:
Minimum values of pf in accordance with 7.3 apply to hip and gable roofs with slopes
less than the larger of 2.38 degrees (1/2 on 12) (governs) and (70/W) + 0.5 = (70/128)
+ 0.5 = 1.05 degrees. Since the roof slope in this example is equal to 2.38 degrees,
minimum roof snow loads do not apply.
3. Determine sloped roof snow load ps by Eq. 7-2.
Use Flowchart 4.2 to determine roof slope factor C s .
a. Determine thermal factor Ct from Table 7-3.
From item 2 above, thermal factor Ct = 1.1 .
b. Determine if the roof is warm or cold.
Since Ct = 1.1 , the roof is defined as a cold roof in accordance with 7.4.2.
c. Determine if the roof is unobstructed or not and if the roof is slippery or not.
There are no obstructions on the roof that inhibit the snow from sliding off the
eaves.4 Also, the roof surface is a rubber membrane. According to 7.4, rubber
membranes are considered to be slippery surfaces.
Since this roof is unobstructed and slippery, use the dashed line in Figure 7-2b to
determine Cs :
For a roof slope of 2.38 degrees, C s = 1.0.
Therefore, ps = Cs pf = 1.0 × 15.4 = 15.4 psf. This is the balanced snow load for this roof.
In general, large vent pipes, snow guards, parapet walls and large rooftop equipment are a few common
examples of obstructions that could prevent snow from sliding off the roof. Ice dams and icicles along
eaves can also possibly inhibit snow from sliding off of two types of warm roofs, which are described in 7.4.5.
4. Consider partial loading.
Since all of the members are simply supported, partial loading is not considered (7.5).
5. Consider unbalanced snow loads.
Flowchart 4.4 is used to determine if unbalanced loads on this gable roof need to be
considered or not.
Unbalanced snow loads must be considered for this roof, since the slope is greater than
or equal to the larger of (70/W) + 0.5 = 1.05 degrees and 2.38 degrees (governs).
Since W = 128 ft > 20 ft, the unbalanced load consists of the following (see Figure 7-5):
Windward side: 0.3ps = 0.3 × 15.4 = 4.6 psf
Leeward side: ps = 15.4 psf along the entire leeward length plus a uniform
pressure of hd γ/√S = (3.6 × 16.6)/√24 = 12.2 psf, which extends from the
ridge a distance of 8hd √S/3 = (8 × 3.6 × √24)/3 = 47.0 ft where
hd = drift height from Figure 7-9 with W = 128 ft substituted for ℓu
= 0.43(W)^(1/3)(pg + 10)^(1/4) − 1.5 = 3.6 ft
γ = snow density (Eq. 7-3) = 0.13pg + 14 = 16.6 pcf < 30 pcf
S = roof slope run for a rise of one = 24
6. Consider snow drifts on lower roofs and roof projections.
Not applicable.
7. Consider sliding snow.
Not applicable.
8. Consider rain-on-snow loads.
In accordance with 7.10, a rain-on-snow surcharge of 5 psf is required for locations
where the ground snow load pg is 20 psf or less (but not zero) with roof slopes less
than W/50.
In this example, pg = 20 psf and W/50 = 128/50 = 2.56 degrees, which is greater than
the roof slope of 2.38 degrees. Thus, an additional 5 psf must be added to the
balanced load of 15.4 psf.5
9. Consider ponding instability.
Since the roof slope in this example is greater than 1/4 in./ft, progressive roof
deflection and ponding instability from rain-on-snow or from snow meltwater need
not be investigated (7.11 and 8.4).
10. Consider existing roofs.
Not applicable.
The balanced and unbalanced snow loads are depicted in Figure 4.2.
15.4 + 12.2 = 27.6 psf
4.6 psf
15.4 psf
20.4 psf
Figure 4.2 Balanced and Unbalanced Snow Loads for Warehouse Building
The rain-on-snow load applies only to the balanced load case and need not be used in combination with
drift, sliding, unbalanced or partial loads (7.10).
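The Example 4.1 numbers can be reproduced with a short script. Variable names are mine, and hd is rounded to one decimal before forming the surcharge so that the results match the hand calculation.

```python
pg = 20.0                        # ground snow load, psf (St. Louis, MO)
Ce, Ct, I = 1.0, 1.1, 1.0        # exposure, thermal, and importance factors
pf = 0.7 * Ce * Ct * I * pg      # flat roof snow load, Eq. 7-1
Cs = 1.0                         # Fig. 7-2b, cold slippery roof at 2.38 degrees
ps = Cs * pf                     # sloped (balanced) snow load, Eq. 7-2

gamma = min(0.13 * pg + 14, 30)  # snow density, Eq. 7-3, pcf
W, S = 128.0, 24.0               # eave-to-ridge distance, ft; run for a rise of one
hd = 0.43 * W ** (1 / 3) * (pg + 10) ** 0.25 - 1.5   # drift height, Fig. 7-9
hd_r = round(hd, 1)                                  # 3.6 ft, as in the text
windward = 0.3 * ps                                  # unbalanced windward load
surcharge = hd_r * gamma / S ** 0.5                  # rectangular surcharge, psf
extent = 8 * hd_r * S ** 0.5 / 3                     # distance from ridge, ft
```

This yields pf = ps = 15.4 psf, a 4.6 psf windward load, and a 12.2 psf surcharge extending 47.0 ft from the ridge on the leeward side.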
Example 4.2 – Warehouse Building, Roof Slope of 1/4 on 12
For the one-story warehouse depicted in Figure 4.1, determine the design snow loads for
a roof slope of 1/4 on 12. Use the same design data given in Example 4.1.
1. Determine ground snow load p g .
From Figure 7-1 or Figure 1608.2, the ground snow load is equal to 20 psf for
St. Louis, MO.
2. Determine flat roof snow load p f by Eq. 7-1.
It was determined in item 2 of Example 4.1 that p f = 15.4 psf.
Check if the minimum snow load requirements are applicable:
Minimum values of pf in accordance with 7.3 apply to hip and gable roofs with
slopes less than the larger of 2.38 degrees (1/2 on 12) (governs) and (70/W) + 0.5 =
(70/128) + 0.5 = 1.05 degrees. Since the roof slope in this example is equal to 1.19
degrees, minimum flat roof snow loads apply.
In accordance with 7.3, minimum flat roof snow load = Ipg = 1.0 × 20 = 20 psf, since
pg is equal to 20 psf or less.
3. Determine sloped roof snow load ps by Eq. 7-2.
It was determined in item 3 of Example 4.1 that Cs = 1.0 for a roof slope of
2.38 degrees. Using the dashed line in Figure 7-2b, Cs = 1.0 for a roof slope of
1.19 degrees as well.
Therefore, p s = 1.0 × 15.4 = 15.4 psf.
4. Consider partial loading.
Since all of the members are simply supported, partial loading is not considered (7.5).
5. Consider unbalanced snow loads.
Flowchart 4.4 is used to determine if unbalanced loads on this gable roof need to be
considered or not.
Unbalanced snow loads need not be considered for this roof, since the slope is less
than the larger of (70/W) + 0.5 = 1.05 degrees and 2.38 degrees (governs).
6. Consider snow drifts on lower roofs and roof projections.
Not applicable.
7. Consider sliding snow.
Not applicable.
8. Consider rain-on-snow loads.
In accordance with 7.10, a rain-on-snow surcharge of 5 psf is required for locations
where the ground snow load pg is 20 psf or less (but not zero) with roof slopes less
than W/50.
In this example, pg = 20 psf and W/50 = 128/50 = 2.56 degrees, which is greater than
the roof slope of 1.19 degrees. Thus, an additional 5 psf must be added to the sloped
roof snow load of 15.4 psf.6
9. Consider ponding instability.
Since the roof slope in this example is not less than 1/4 in./ft, progressive roof
deflection and ponding instability from rain-on-snow or from snow meltwater need
not be investigated (7.11 and 8.4).
10. Consider existing roofs.
Not applicable.
In this example, the uniform load of 15.4 + 5 = 20.4 psf (balanced plus rain-on-snow)
governs, since it is greater than the minimum roof snow load of 20 psf. The 20.4 psf snow
load is uniformly distributed over the entire length of the roof, as depicted in Figure 4.2.
This is the only load that needs to be considered in this example.
Example 4.3 – Warehouse Building (Roof Slope of 1/2 on 12) and
Adjoining Office Building (Roof Slope of 1/2 on 12)
A new one-story office building is to be constructed adjacent to the existing one-story
warehouse in Example 4.1 (see Figure 4.3). Determine the design snow loads on the roof
of the office building.7 Both structures have a roof slope of 1/2 on 12.
The rain-on-snow load applies only to the balanced load case and need not be used in combination with
drift, sliding, unbalanced or partial loads (7.10).
A summary of the design snow loads for the warehouse is given in Figure 4.2 in Example 4.1.
Direction of wood trusses
Figure 4.3 Elevation of Warehouse and Office Buildings
Location: St. Louis, MO
Terrain category: C (open terrain with scattered obstructions less than 30 ft in height)
Occupancy: Business (less than 300 people congregate in one area)
Thermal condition: Heated with unventilated roof (R-value less than 30 ft2h°F/Btu)
Roof exposure condition: Partially exposed (due in part to the presence of the adjacent
taller warehouse building)
Roof surface: Asphalt shingles
Roof framing: Wood trusses spaced 25 ft on center that overhang a masonry
wall and wood purlins spaced 5 ft on center that frame
between the trusses (see Figure 4.3)
1. Determine ground snow load p g .
From Figure 7-1 or Figure 1608.2, the ground snow load is equal to 20 psf for
St. Louis, MO.
2. Determine flat roof snow load p f by Eq. 7-1.
Use Flowchart 4.1 to determine p f .
a. Determine exposure factor Ce from Table 7-2.
From the design data, the terrain category is C and the roof exposure is partially
exposed. Therefore, Ce = 1.0 from Table 7-2.
b. Determine thermal factor Ct from Table 7-3.
From the design data, the structure is heated with an unventilated roof, so
Ct = 1.0 from Table 7-3.
c. Determine the importance factor I from Table 7-4.
From IBC Table 1604.5, the Occupancy Category is II, based on the occupancy
given in the design data. Thus, I = 1.0 from Table 7-4.
p f = 0.7CeCt Ip g
= 0.7 × 1.0 × 1.0 × 1.0 × 20 = 14.0 psf
Check if the minimum snow load requirements are applicable:
Minimum values of p f in accordance with 7.3 apply to monoslope roofs with slopes
less than 15 degrees. Since the roof slope in this example is equal to 2.38 degrees,
minimum roof snow loads apply.
In accordance with 7.3, minimum roof snow load = Ipg = 1.0 × 20 = 20 psf, since pg is
equal to 20 psf or less.
3. Determine sloped roof snow load ps by Eq. 7-2.
Use Flowchart 4.2 to determine roof slope factor C s .
a. Determine thermal factor Ct from Table 7-3.
From item 2 above, thermal factor Ct = 1.0 .
b. Determine if the roof is warm or cold.
Since Ct = 1.0 , the roof is defined as a warm roof in accordance with 7.4.1.
c. Determine if the roof is unobstructed or not and if the roof is slippery or not.
In accordance with the design data, the roof surface is asphalt shingles. According
to 7.4, asphalt shingles are not considered to be slippery.
Also, since the roof is unventilated with an R-value less than 30 ft2h°F/Btu, it is
possible for an ice dam to form at the eave, which can prevent the snow from
sliding off of the roof (7.4.5). This is considered to be an obstruction.
Thus, use the solid line in Figure 7-2a to determine C s :
For a roof slope of 2.38 degrees, C s = 1.0.
Therefore, p s = C s p f = 1.0 × 14.0 = 14.0 psf.
In accordance with 7.4.5, a uniformly distributed load of 2pf = 2 × 14.0 = 28.0 psf
must be applied on the 5-ft overhanging portion of the roof to account for ice dams.
Only the dead load is to be present when this uniformly distributed load is applied.
4. Consider partial loading.
It is assumed that the roof purlins are connected to the wood trusses by metal hangers,
which are essentially simple supports, so partial loads do not have to be considered
for the roof purlins. Therefore, with a spacing of 5 ft, the uniform snow load on a
purlin is equal to 20.0 × 5.0 = 100 plf (minimum snow load governs).
The roof trusses are continuous over the masonry wall; thus, partial loading must be
considered (7.5). The balanced snow load to be used in partial loading cases is that
determined by Eq. 7-2, which is equal to 14.0 psf. With a spacing of 25 ft, the
balanced and partial loads on a typical roof truss are
Balanced load = 14.0 × 25.0 = 350 plf
Partial load = one-half of balanced load = 175 plf
Shown in Figure 4.4 are the balanced and partial load cases that must be considered
for the roof trusses in this example, including the ice dam load on the overhang,
which was determined in item 3 above. Note that the minimum snow load of 20 psf is
not applicable in the partial load cases and in the ice dam load case.
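The member loads in items 3 and 4 are simple tributary-width conversions from psf to plf, sketched below. Variable names are mine; the loads themselves come directly from the calculations above.

```python
ps, pf_min, ice_dam = 14.0, 20.0, 28.0      # psf: balanced, minimum, ice-dam loads
purlin_spacing, truss_spacing = 5.0, 25.0   # tributary widths, ft

purlin_load  = pf_min * purlin_spacing      # minimum load governs on purlins, plf
balanced     = ps * truss_spacing           # balanced load on a truss, plf
partial      = balanced / 2                 # half of balanced load on one span, plf
ice_dam_load = ice_dam * truss_spacing      # load on the 5-ft overhang, plf
```

This gives 100 plf on a purlin and 350, 175, and 700 plf for the balanced, partial, and ice-dam truss load cases shown in Figure 4.4.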
5. Consider unbalanced snow loads.
Not applicable.
6. Consider snow drifts on lower roofs and roof projections.
Use Flowchart 4.7 to determine the leeward and windward drifts that form on the
lower (office) roof.
20 × 25 = 500 plf
175 plf
350 plf
350 plf
175 plf
28 × 25 = 700 plf
Ice Dam
Figure 4.4 Balanced and Partial Load Cases for Roof Trusses
a. Determine ground snow load pg.
From item 1 above, the ground snow load is equal to 20 psf for St. Louis, MO.
b. Determine snow density γ by Eq. 7-3.
γ = 0.13 p g + 14 = (0.13 × 20) + 14 = 16.6 pcf < 30 pcf
c. Determine sloped roof snow load ps from Flowchart 4.3.
From item 3 above, ps = 14.0 psf.
d. Determine clear height hc .
In this example, the clear height hc is from the top of the balanced snow to the top
of the warehouse eave (see Figure 7-8). The height of the balanced snow
hb = ps/γ = 14.0/16.6 = 0.8 ft.
Thus, hc = (10 − 25 tan 2.38°) − 0.8 = 8.2 ft
e. Determine if drift loads are required or not.
Drift loads are not required where hc / hb < 0.2 (7.7.1). In this example,
hc / hb = 8.2/0.8 > 0.2, so drift loads must be considered.
f. Determine drift load.
Both leeward and windward drift heights hd must be determined by the provisions
of 7.7.1. The larger of these two heights is used to determine the drift load.
Leeward drift
A leeward drift occurs when snow from the warehouse roof is deposited by
wind to the office roof (wind from left to right in Figure 4.3).
For leeward drifts, the drift height hd is determined from Figure 7-9 using the
length of the upper roof ℓu. In this example, ℓu = 256 ft and the ground snow
load pg = 20 psf. Using the equation in Figure 7-9:
hd = 0.43(ℓu)^(1/3)(pg + 10)^(1/4) − 1.5
= 0.43(256)^(1/3)(20 + 10)^(1/4) − 1.5 = 4.9 ft
Windward drift
A windward drift occurs when snow from the office roof is deposited adjacent
to the wall of the warehouse building (wind from right to left in Figure 4.3).
For windward drifts, the drift height hd is 75 percent of that determined from
Figure 7-9 using the length of the lower roof for ℓu:
hd = 0.75[0.43(ℓu)^(1/3)(pg + 10)^(1/4) − 1.5]
= 0.75[0.43(30)^(1/3)(20 + 10)^(1/4) − 1.5] = 1.2 ft
Thus, the leeward drift controls and hd = 4.9 ft.
Since hd = 4.9 ft < hc = 8.2 ft, the drift width w = 4hd = 4 × 4.9 = 19.6 ft.
The maximum surcharge drift load pd = hd γ = 4.9 × 16.6 = 81.3 psf.
The total load at the step is the balanced load on the office roof plus the drift
surcharge = 14.0 + 81.3 = 95.3 psf, which is illustrated in Figure 4.5.
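The drift calculation in steps a through f above can be sketched in Python. The snippet below is a minimal illustration using this example's numbers; the function name and structure are ours, not a general ASCE/SEI 7 implementation:

```python
# Drift-height equation from Figure 7-9: hd = 0.43*(lu)**(1/3)*(pg + 10)**(1/4) - 1.5,
# with windward drifts taken as 75 percent of that value (7.7.1).

def drift_height(lu_ft, pg_psf, windward=False):
    """Drift height hd (ft) for an upwind fetch lu_ft and ground snow load pg_psf."""
    hd = 0.43 * lu_ft ** (1 / 3) * (pg_psf + 10) ** 0.25 - 1.5
    return 0.75 * hd if windward else hd

pg = 20.0
gamma = 0.13 * pg + 14                         # snow density (pcf), Eq. 7-3
hd_lee = drift_height(256, pg)                 # leeward: warehouse roof fetch
hd_wind = drift_height(30, pg, windward=True)  # windward: office roof fetch
hd = round(max(hd_lee, hd_wind), 1)            # larger drift controls -> 4.9 ft

w = 4 * hd           # drift width, since hd < hc in this example
pd = hd * gamma      # maximum surcharge drift load (psf)
total = 14.0 + pd    # balanced load plus drift surcharge at the step (psf)
```

Rounding hd to one decimal before computing the surcharge matches the hand calculation above.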
Figure 4.5 Balanced and Drift Loads on Office Roof (balanced load 14.0 psf; total load 95.3 psf at the step)
The snow loads on the purlins and trusses are obtained by multiplying the loads
depicted in Figure 4.5 by the respective tributary widths. As expected, the purlins
closest to the warehouse have the largest loads.
If the office and warehouse were separated, the drift load on the office roof would be
reduced by the factor (20 − s ) / 20 where s is the separation distance in feet (7.7.2).
For example, if the buildings were separated by 5 ft, the modified drift height is equal
to [(20 − 5) / 20] × 4.9 = 3.7 ft and the maximum surcharge load at the step is equal to
3.7 × 16.6 = 61.4 psf. Also, the drift width is equal to 4 × 3.7 = 14.8 ft. Drift loads on
lower roofs are not considered for structures separated by a distance of 20 ft or more.
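The separation reduction described above can be expressed directly; the sketch below applies the (20 − s)/20 factor with this example's numbers (the helper name is ours):

```python
def separated_drift_height(hd_ft, s_ft):
    """Reduce the drift height for a separation s_ft between structures (7.7.2).
    No drift load is considered at separations of 20 ft or more."""
    if s_ft >= 20:
        return 0.0
    return (20 - s_ft) / 20 * hd_ft

hd_mod = round(separated_drift_height(4.9, 5), 1)  # 3.7 ft for a 5-ft gap
pd_mod = hd_mod * 16.6                             # modified surcharge (psf)
w_mod = 4 * hd_mod                                 # modified drift width (ft)
```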
7. Consider sliding snow.
The provisions of 7.9 are used to determine if a load due to snow sliding off of the
warehouse roof on to the office roof must be considered.
Load caused by snow sliding must be considered, since the warehouse roof is slippery
with a slope greater than 1/4 on 12. This load is in addition to the balanced load
acting on the lower roof.
The total sliding load per unit length of eave is equal to 0.4 pfW where W is the
horizontal distance from the eave to the ridge of the warehouse roof. In this example,
W = 128 ft and the sliding load = 0.4 × 14.0 × 128 = 717 plf. This load is to be
uniformly distributed over a distance of 15 ft from the warehouse eave.8 Thus, the
sliding load is equal to 717/15 = 47.8 psf.
The total load over the 15-ft width is equal to the balanced load plus the sliding load
= 14.0 + 47.8 = 61.8 psf. The total depth of snow for the total load is equal to
61.8/16.6 = 3.7 ft, which is less than the distance from the warehouse eave to the top
of the office roof at the interface. Thus, sliding snow is not blocked and the full load
can be developed over the 15-ft length.9
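The sliding snow calculation above, including the proportional reduction of footnote 8, can be sketched as follows. The 30-ft lower roof width is taken from the windward-fetch value used earlier in this example; the function is an illustration, not a general implementation:

```python
def sliding_snow_load(pf_psf, W_ft, lower_width_ft):
    """Sliding load per 7.9: 0.4*pf*W plf at the eave, distributed over 15 ft
    (reduced proportionally when the lower roof is narrower than 15 ft)."""
    load_plf = 0.4 * pf_psf * W_ft
    width = min(lower_width_ft, 15.0)
    load_plf *= width / 15.0
    return load_plf, load_plf / width   # (plf, psf over the distribution width)

plf, psf = sliding_snow_load(14.0, 128, lower_width_ft=30)
total = 14.0 + round(psf, 1)            # balanced plus sliding load (psf)
depth = total / 16.6                    # total snow depth at the step (ft)
```

The depth check (3.7 ft here) confirms that sliding snow is not blocked, as noted above.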
Depicted in Figure 4.6 is the load case including sliding snow. The total balanced and
sliding snow load is less than the total balanced and drift snow load (see Figure 4.5).
8. Consider rain-on-snow loads.
In accordance with 7.10, a rain-on-snow surcharge of 5 psf is required for locations
where the ground snow load pg is 20 psf or less (but not zero) with roof slopes less
than W/50.
In this example, pg = 20 psf and W/50 = 25/50 = 0.5 degrees, which is less than the
roof slope of 2.38 degrees. Thus, rain-on-snow loads are not considered.
If the width of the lower roof is less than 15 ft, the sliding load is to be reduced proportionally (7.9). For
example, if the width of the office building in this example was 12 ft, the reduced sliding load = (12/15) ×
717 = 574 plf. This load would be applied uniformly over the 12-ft width.
If the calculated total snow depth on the lower roof exceeds the distance from the upper roof eave to the
top of the lower roof, sliding snow is blocked and a fraction of the sliding snow is forced to remain on the
upper roof. In such cases, the total load on the lower roof near the upper roof eave is equal to the density
of the snow multiplied by the distance from the upper roof eave to the top of the lower roof. This load is
uniformly distributed over a distance of 15 ft or the width of the lower roof, whichever is less.
Figure 4.6 Balanced and Sliding Snow Loads on Office Roof
9. Consider ponding instability.
Since the roof slope in this example is greater than 1/4 in./ft, progressive roof
deflection and ponding instability from rain-on-snow or from snow meltwater need
not be investigated (7.11 and 8.4).
10. Consider existing roofs.
Not applicable.
The loads depicted in Figures 4.4, 4.5 and 4.6 must be considered when designing the
purlins and trusses.
Example 4.4 – Six-Story Hotel with Parapet Walls
Determine the design snow loads for the six-story hotel depicted in Figure 4.7. Parapet
walls are on all four sides of the building and the roof is nominally flat except for
localized areas around roof drains that are sloped to facilitate drainage.
Ground snow load, pg: 40 psf
Terrain category: B (urban area with numerous closely spaced obstructions having the size of single-family dwellings or larger)
Occupancy: Residential (less than 300 people congregate in one area)
Thermal condition: Cold, ventilated roof (R-value between the ventilated space and the heated space exceeds 25 ft2h°F/Btu)
Roof exposure condition: Fully exposed
Roof surface: Concrete slab with waterproofing
Figure 4.7 Plan and Elevation of Six-story Hotel with Parapet Walls (roof drains provided over the entire roof area; reinforced concrete parapets, typ.)
1. Determine ground snow load p g .
From the design data, the ground snow load p g is equal to 40 psf.
2. Determine flat roof snow load p f by Eq. 7-1.
Use Flowchart 4.1 to determine p f .
a. Determine exposure factor Ce from Table 7-2.
From the design data, the terrain category is B and the roof exposure is fully
exposed. Therefore, Ce = 0.9 from Table 7-2.
b. Determine thermal factor Ct from Table 7-3.
From the design data, the roof is cold and ventilated with an R-value between the
ventilated space and the heated space that exceeds 25 ft2h°F/Btu, so Ct = 1.1 from
Table 7-3.
c. Determine the importance factor I from Table 7-4.
From IBC Table 1604.5, the Occupancy Category is II, based on the occupancy
given in the design data. Thus, I = 1.0 from Table 7-4.
pf = 0.7 Ce Ct I pg
= 0.7 × 0.9 × 1.1 × 1.0 × 40 = 27.7 psf
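Equation 7-1 is simple enough to capture in one line; the sketch below (our helper, using this example's factors) evaluates it:

```python
def flat_roof_snow_load(pg, Ce, Ct, I):
    """Flat roof snow load per Eq. 7-1: pf = 0.7*Ce*Ct*I*pg."""
    return 0.7 * Ce * Ct * I * pg

pf = flat_roof_snow_load(pg=40, Ce=0.9, Ct=1.1, I=1.0)   # this example's factors
```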
3. Consider drift loading at parapet walls.
According to 7.8, drift loads at parapet walls and other roof projections are
determined using the provisions of 7.7.1.
Windward drifts occur at parapet walls, and Flowchart 4.7 is used to determine the
windward drift load.
a. Determine snow density γ by Eq. 7-3.
γ = 0.13 p g + 14 = (0.13 × 40) + 14 = 19.2 pcf < 30 pcf
b. Determine clear height hc .
The clear height hc is from the top of the balanced snow to the top of the parapet
wall. For a flat roof, the height of the balanced snow is determined as follows:
hb = pf/γ = 27.7/19.2 = 1.4 ft.
Thus, hc = 4.5 − 1.4 = 3.1 ft
c. Determine if drift loads are required or not.
Drift loads are not required where hc / hb < 0.2 (7.7.1). In this example,
hc / hb = 3.1/1.4 > 0.2, so drift loads must be considered.
d. Determine drift load.
Windward drift height hd must be determined by the provisions of 7.7.1 using
three-quarters of the drift height hd from Figure 7-9 with ℓu equal to the length of
the roof upwind of the parapet wall (7.8). Wind in both the north-south and east-west directions must be examined.
Wind in north-south direction
The equation in Figure 7-9 yields the following for the drift height hd based
on a ground snow load pg = 40 psf and an upwind fetch ℓu = 75.33 ft:
hd = 0.75[0.43(ℓu)^(1/3)(pg + 10)^(1/4) − 1.5]
= 0.75[0.43(75.33)^(1/3)(40 + 10)^(1/4) − 1.5] = 2.5 ft
Since hd = 2.5 ft < hc = 3.1 ft, the drift width w = 4hd = 4 × 2.5 = 10.0 ft.
The maximum surcharge drift load pd = hd γ = 2.5 × 19.2 = 48.0 psf.
The total load at the face of the parapet wall is the balanced load plus the drift
surcharge = 27.7 + 48.0 = 75.7 psf.
Wind in east-west direction
The equation in Figure 7-9 yields the following for the drift height hd based on
an upwind fetch ℓu = 328.75 ft:
hd = 0.75[0.43(ℓu)^(1/3)(pg + 10)^(1/4) − 1.5]
= 0.75[0.43(328.75)^(1/3)(40 + 10)^(1/4) − 1.5] = 4.8 ft
Since hd = 4.8 ft > hc = 3.1 ft, the drift height is limited to 3.1 ft and the drift
width w = 4hd²/hc = (4 × 4.8²)/3.1 = 29.7 ft > 8hc = 8 × 3.1 = 24.8 ft.
Therefore, use w = 24.8 ft.
The total load at the face of the parapet wall is 4.5 × 19.2 = 86.4 psf.
Balanced and drift loads at the parapet walls in both directions are shown in
Figure 4.8.
Figure 4.8 Balanced and Drift Snow Loads at Parapet Walls
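The parapet drift calculation for both wind directions can be sketched as follows. The function is our illustration of the 7.8 procedure with this example's numbers, including the height-limited case where the drift width is capped at 8hc:

```python
def parapet_drift(pg, lu, hc):
    """Windward drift at a parapet (7.8): 3/4 of the Figure 7-9 drift height,
    limited to the clear height hc, with the drift width per 7.7.1."""
    hd = 0.75 * (0.43 * lu ** (1 / 3) * (pg + 10) ** 0.25 - 1.5)
    if hd <= hc:
        return hd, 4 * hd                     # full drift forms
    return hc, min(4 * hd ** 2 / hc, 8 * hc)  # height-limited drift

gamma, hc = 19.2, 3.1

# North-south wind, fetch 75.33 ft: drift fits below the parapet top
hd_ns, w_ns = parapet_drift(40, 75.33, hc)
total_ns = 27.7 + round(hd_ns, 1) * gamma     # balanced load plus surcharge

# East-west wind, fetch 328.75 ft: drift is limited by the 4.5-ft parapet,
# so the total load at the wall is the full snow depth times the density
hd_ew, w_ew = parapet_drift(40, 328.75, hc)
total_ew = 4.5 * gamma
```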
Since the ground snow load pg exceeds 20 psf, the minimum snow load is
20I = 20 × 1.0 = 20 psf (7.3), which is less than the flat roof snow load
pf = 27.7 psf, and, in accordance with 7.10, a rain-on-snow surcharge load is not
required.
The only other load cases that need to be considered are the partial load cases of 7.5.
For illustration purposes, assume that in the N-S direction the framing consists of a
3-span moment frame (cast-in-place concrete columns and beams) with 25 ft-2 in.
exterior spans and a 25 ft-0 in. interior span. Balanced and partial loading diagrams
for the concrete beams are illustrated in Figure 4.9. Partial loads are determined in
accordance with 7.5 and Figure 7-4.
Figure 4.9 Balanced and Partial Loading Diagrams for the Concrete Beams Spanning in the N-S Direction (balanced load 27.7 psf; partial load 13.9 psf)
Example 4.5 – Six-Story Hotel with Rooftop Unit
For the six-story hotel in Example 4.4, determine the drift loads at the rooftop unit
depicted in Figure 4.10. Use the same design data as in Example 4.4 and assume that the
roof has no parapets.
The following were determined in Example 4.4 and are used in this example:
Sloped roof snow load ps = 27.7 psf
Snow density γ = 19.2 pcf
Height of the balanced snow hb = ps / γ = 27.7 / 19.2 = 1.4 ft
Figure 4.10 Plan and Elevation of Six-story Hotel with Rooftop Unit
The clear height to the top of the rooftop unit hc = 6.5 − 1.4 = 5.1 ft
Drift loads are not required where hc / hb < 0.2 (7.7.1). In this example,
hc / hb = 5.1 / 1.4 > 0.2 , so drift loads must be considered.
Since the plan dimension of the rooftop unit in the N-S direction is less than 15 ft, a drift
load is not required to be applied to those sides for wind in the E-W direction (7.8). Drift
loads must be considered for the other sides of the rooftop unit, since those sides are
greater than 15 ft.
For a N-S wind, the larger of the upwind fetches is 75.33 − 21.0 − 3.5 = 50.83 ft. For
simplicity, this fetch is used for drift on both sides of the rooftop unit.
The equation in Figure 7-9 yields the following for the drift height hd based on a ground
snow load pg = 40 psf and an upwind fetch ℓu = 50.83 ft:
hd = 0.75[0.43(ℓu)^(1/3)(pg + 10)^(1/4) − 1.5]
= 0.75[0.43(50.83)^(1/3)(40 + 10)^(1/4) − 1.5] = 2.1 ft
Since hd = 2.1 ft < hc = 5.1 ft, the drift width w = 4hd = 4 × 2.1 = 8.4 ft.
The maximum surcharge drift load pd = hd γ = 2.1 × 19.2 = 40.3 psf.
The total load at the face of the rooftop unit is the balanced load plus the drift surcharge
= 27.7 + 40.3 = 68.0 psf.
Drift loads at the rooftop unit are illustrated in Figure 4.11.
Figure 4.11 Balanced and Drift Loads at Rooftop Unit (balanced load 27.7 psf; total load 68.0 psf at each drifted face of the unit)
Example 4.6 – Agricultural Building
Determine the design snow loads for the agricultural building depicted in Figure 4.12.
Figure 4.12 Agricultural Building (wood frames with no walls; wood trusses spaced 3′ on center; 5 @ 15′ = 75′)
Ground snow load, pg: 30 psf
Terrain category: C (open terrain with scattered obstructions having heights less than 30 ft)
Occupancy: Utility and miscellaneous occupancy
Thermal condition: Unheated structure
Roof exposure condition: Sheltered
Roof surface: Wood shingles
1. Determine ground snow load p g .
From the design data, the ground snow load p g is equal to 30 psf.
2. Determine flat roof snow load p f by Eq. 7-1.
Use Flowchart 4.1 to determine p f .
a. Determine exposure factor Ce from Table 7-2.
From the design data, the terrain category is C and the roof exposure is sheltered.
Therefore, Ce = 1.2 from Table 7-2.
b. Determine thermal factor Ct from Table 7-3.
From the design data, the structure is unheated, so Ct = 1.2 from Table 7-3.
c. Determine the importance factor I from Table 7-4.
From IBC Table 1604.5, the Occupancy Category is I for an agricultural facility.
Thus, I = 0.8 from Table 7-4.
pf = 0.7 Ce Ct I pg
= 0.7 × 1.2 × 1.2 × 0.8 × 30 = 24.2 psf
Check if the minimum snow load requirements are applicable:
Minimum values of pf in accordance with 7.3 apply to hip and gable roofs with
slopes less than the larger of 2.38 degrees (1/2 on 12) and (70/W) + 0.5 = (70/30) +
0.5 = 2.83 degrees (governs). Since the roof slope in this example is equal to 18.4
degrees, minimum roof snow loads do not apply.
3. Determine sloped roof snow load ps by Eq. 7-2.
Use Flowchart 4.2 to determine roof slope factor C s .
a. Determine thermal factor Ct from Table 7-3.
From item 2 above, thermal factor Ct = 1.2 .
b. Determine if the roof is warm or cold.
Since Ct = 1.2 , the roof is defined as a cold roof in accordance with 7.4.2.
c. Determine if the roof is unobstructed or not and if the roof is slippery or not.
There are no obstructions on the roof that inhibit the snow from sliding off the
eaves.10 Also, the roof surface has wood shingles. According to 7.4, wood
shingles are not considered to be slippery.
Since this roof is unobstructed and not slippery, use the solid line in Figure 7-2c to
determine Cs:
For a roof slope of 18.4 degrees, C s = 1.0.
Therefore, ps = Cspf = 1.0 × 24.2 = 24.2 psf. This is the balanced snow load for this roof.
4. Consider partial loading.
Partial loads need not be applied to structural members that span perpendicular to the
ridgeline in gable roofs with slopes greater than the larger of 2.38 degrees (1/2 on 12)
and (70/W) + 0.5 = (70/30) + 0.5 = 2.83 degrees (governs).
In general, large vent pipes, snow guards, parapet walls and large rooftop equipment are a few common
examples of obstructions that could prevent snow from sliding off the roof. Ice dams and icicles along
eaves can also possibly inhibit snow from sliding off of two types of warm roofs, which are described in
Since the roof slope is greater than 2.83 degrees, partial loading is not considered.11
5. Consider unbalanced snow loads.
Flowchart 4.4 is used to determine if unbalanced loads on this gable roof need to be
considered or not.
Unbalanced snow loads must be considered for this roof, since the slope is greater than
or equal to the larger of (70/W) + 0.5 = 2.83 degrees (governs) and 2.38 degrees.
Since W = 30 ft > 20 ft, the unbalanced load consists of the following (see Figure 7-5):
Windward side: 0.3 ps = 0.3 × 24.2 = 7.3 psf
Leeward side: ps = 24.2 psf along the entire leeward length plus a uniform
pressure of hdγ/√S = (1.9 × 17.9)/√3 = 19.6 psf, which extends from the
ridge a distance of 8hd√S/3 = (8 × 1.9 × √3)/3 = 8.8 ft, where
hd = drift height from Figure 7-9 with W = 30 ft substituted for ℓu
= 0.43(W)^(1/3)(pg + 10)^(1/4) − 1.5 = 1.9 ft
γ = snow density (Eq. 7-3)
= 0.13 p g + 14 = 17.9 pcf < 30 pcf
S = roof slope run for a rise of one = 3
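The unbalanced gable-roof quantities above can be sketched as follows; this is an illustration with this example's numbers, not a general 7.6.1 implementation:

```python
import math

pg, W, S = 30.0, 30.0, 3.0    # ground load (psf), eave-to-ridge distance (ft), run per unit rise
ps = 24.2                     # balanced sloped roof snow load (psf)
gamma = 0.13 * pg + 14        # snow density (pcf), Eq. 7-3 -> 17.9

# Figure 7-9 drift height with W substituted for the fetch, rounded as in the text
hd = round(0.43 * W ** (1 / 3) * (pg + 10) ** 0.25 - 1.5, 1)

windward = 0.3 * ps                     # uniform load on the windward side
surcharge = hd * gamma / math.sqrt(S)   # leeward surcharge above ps
extent = 8 * hd * math.sqrt(S) / 3      # surcharge extent measured from the ridge (ft)
```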
6. Consider snow drifts on lower roofs and roof projections.
Not applicable.
7. Consider sliding snow.
Not applicable.
8. Consider rain-on-snow loads.
In accordance with 7.10, a rain-on-snow surcharge of 5 psf is required for locations
where the ground snow load pg is 20 psf or less (but not zero) with roof slopes less
than W/50.
Partial loads on individual members of roof trusses such as those illustrated in Figure 4.12 are generally
not considered.
In this example, pg = 30 psf so an additional 5 psf need not be added to the balanced
load of 24.2 psf.12
9. Consider ponding instability.
Since the roof slope in this example is greater than 1/4 in./ft, progressive roof
deflections and ponding instability from rain-on-snow or from snow meltwater need
not be investigated (7.11 and 8.4).
10. Consider existing roofs.
Not applicable.
The balanced and unbalanced snow loads are depicted in Figure 4.13.
Figure 4.13 Balanced and Unbalanced Snow Loads for Agricultural Building (balanced load 24.2 × 3 = 72.6 plf; unbalanced windward load 0.3 × 72.6 = 21.8 plf; unbalanced leeward load 43.8 × 3 = 131.4 plf near the ridge and 72.6 plf beyond)
Example 4.7 – University Facility with Sawtooth Roof
Determine the design snow loads for the university facility depicted in Figure 4.14.
The rain-on-snow load applies only to the balanced load case and need not be used in combination with
drift, sliding, unbalanced or partial loads (7.10).
Figure 4.14 Elevation of University Facility
Ground snow load, pg: 25 psf
Terrain category: C (open terrain with scattered obstructions having heights less than 30 ft)
Occupancy: Educational with an occupant load greater than 500
Thermal condition: Cold, ventilated roof (R-value between the ventilated space and the heated space exceeds 25 ft2h°F/Btu)
Roof exposure condition: Partially exposed
Roof surface:
1. Determine ground snow load p g .
From the design data, the ground snow load p g is equal to 25 psf.
2. Determine flat roof snow load p f by Eq. 7-1.
Use Flowchart 4.1 to determine p f .
a. Determine exposure factor Ce from Table 7-2.
From the design data, the terrain category is C and the roof exposure is partially
exposed. Therefore, Ce = 1.0 from Table 7-2.
b. Determine thermal factor Ct from Table 7-3.
From the design data, the roof is a cold, ventilated roof with an R-value between
the ventilated space and the heated space that exceeds 25 ft2h°F/Btu. Thus,
Ct = 1.1 from Table 7-3.
c. Determine the importance factor I from Table 7-4.
From IBC Table 1604.5, the Occupancy Category is III for this college
educational facility that has an occupant load greater than 500 people. Thus, I =
1.1 from Table 7-4.
pf = 0.7 Ce Ct I pg
= 0.7 × 1.0 × 1.1 × 1.1 × 25 = 21.2 psf
3. Determine sloped roof snow load ps from Eq. 7-2.
Use Flowchart 4.2 to determine roof slope factor C s .
In accordance with 7.4.4, C s = 1.0 for sawtooth roofs.
Thus, p s = p f = 21.2 psf.
4. Consider unbalanced snow loads.
Flowchart 4.6 is used to determine if unbalanced loads on this sawtooth roof need to
be considered or not.
Unbalanced snow loads must be considered, since the slope is greater than
1.79 degrees (7.6.3).
In accordance with 7.6.3, the load at the ridge or crown is equal to 0.5 pf =
0.5 × 21.2 = 10.6 psf. At the valley, the load is 2 pf / Ce = 2 × 21.2 / 1.0 = 42.4 psf.
The load at the valley is limited by the space that is available for snow accumulation.
The unit weight of the snow is determined by Eq. 7-3:
γ = 0.13 p g + 14 = (0.13 × 25) + 14 = 17.3 pcf < 30 pcf
The maximum permissible load is equal to the load at the ridge plus the load
corresponding to 10 ft of snow: 10.6 + (10 × 17.3) = 183.6 psf.
Since the unbalanced load of 42.4 psf at the valley is less than 183.6 psf, the load at
the valley is not reduced.
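The sawtooth check above amounts to capping the valley load at the ridge load plus the snow that can physically fit; a minimal sketch with this example's numbers:

```python
pf, Ce = 21.2, 1.0
gamma = 17.3              # snow density per Eq. 7-3, rounded as in the text (pcf)

ridge = 0.5 * pf          # load at the ridge or crown (7.6.3)
valley = 2 * pf / Ce      # unbalanced load at the valley
cap = ridge + 10 * gamma  # limit: ridge load plus 10 ft of snow accumulation
valley = min(valley, cap) # valley load may not exceed the available depth
```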
Balanced and unbalanced snow loads are illustrated in Figure 4.15.
Figure 4.15 Balanced and Unbalanced Snow Loads for University Facility (balanced load 21.2 psf; unbalanced load varying from 10.6 psf at the ridge to 42.4 psf at the valley)
Example 4.8 – Public Utility Facility with Curved Roof
Determine the design snow loads for the public utility facility depicted in Figure 4.16.
The facility is required to remain operational during an emergency.
Ground snow load, pg: 60 psf
Terrain category: D (flat unobstructed area near water)
Occupancy: Essential facility
Thermal condition: Unheated structure
Roof exposure condition: Fully exposed
Roof surface: Rubber membrane
1. Determine ground snow load pg.
From the design data, the ground snow load pg is equal to 60 psf.
Figure 4.16 Elevation of Public Utility Facility
2. Determine flat roof snow load p f by Eq. 7-1.
Use Flowchart 4.1 to determine p f .
a. Determine exposure factor Ce from Table 7-2.
From the design data, the terrain category is D and the roof exposure is fully
exposed. Therefore, Ce = 0.8 from Table 7-2.
b. Determine thermal factor Ct from Table 7-3.
From the design data, the structure is unheated. Thus, Ct = 1.2 from Table 7-3.
c. Determine the importance factor I from Table 7-4.
From IBC Table 1604.5, the Occupancy Category is IV for this essential facility.
Thus, I = 1.2 from Table 7-4.
pf = 0.7 Ce Ct I pg
= 0.7 × 0.8 × 1.2 × 1.2 × 60 = 48.4 psf
Check if the minimum snow load requirements are applicable:
Minimum values of pf in accordance with 7.3 apply to curved roofs where the
vertical angle from the eaves to the crown is less than 10 degrees. Since that slope in
this example is equal to 12 degrees, minimum roof snow loads do not apply.
3. Determine sloped roof snow load ps from Eq. 7-2.
Use Flowchart 4.2 to determine roof slope factor C s .
a. Determine thermal factor Ct from Table 7-3.
From item 2 above, thermal factor Ct = 1.2 .
b. Determine if the roof is warm or cold.
Since Ct = 1.2, the roof is defined as a cold roof in accordance with 7.4.2.
c. Determine if the roof is unobstructed or not and if the roof is slippery or not.
There are no obstructions on the roof that inhibit the snow from sliding off the
eaves.13 Also, the roof surface is a rubber membrane. According to 7.4, rubber
membranes are considered to be slippery.
Since this roof is unobstructed and slippery, use the dashed line in Figure 7-2c to
determine C s .
For the tangent slope of 25 degrees at the eave, the roof slope factor Cs is determined
by the equation in C7.4 for cold roofs with Ct = 1.2:
Cs = 1.0 − (slope − 15°)/55°
= 1.0 − (25° − 15°)/55° = 0.82
Therefore, p s = C s p f = 0.82 × 48.4 = 39.7 psf, which is the balanced snow load at
the eaves.
Away from the eaves, the roof slope factor Cs is equal to 1.0 where the tangent roof
slope is less than or equal to 15 degrees (see dashed line in Figure 7-2c). This occurs
at distances of approximately 20.7 ft from the eaves at both ends of the roof.
Therefore, in the center portion of the roof, ps = Cspf = 1.0 × 48.4 = 48.4 psf.
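The slope-factor calculation above can be sketched as follows; the function is our rendering of the C7.4 relation for unobstructed slippery cold roofs with Ct = 1.2, evaluated with this example's numbers:

```python
def cold_roof_Cs(slope_deg):
    """Cs for an unobstructed slippery cold roof with Ct = 1.2
    (dashed line in Figure 7-2c; equation from C7.4)."""
    if slope_deg <= 15:
        return 1.0
    if slope_deg >= 70:
        return 0.0
    return 1.0 - (slope_deg - 15) / 55

pf = 48.4
Cs_eave = round(cold_roof_Cs(25), 2)   # 0.82, rounded as in the hand calculation
ps_eave = Cs_eave * pf                 # balanced load at the eaves (psf)
ps_center = cold_roof_Cs(15) * pf      # tangent slope <= 15 deg -> Cs = 1.0

# Unbalanced loads, Case 1 of Figure 7-3 (Ce = 0.8 in this example)
crown = 0.5 * pf
leeward_eave = 2 * pf * Cs_eave / 0.8
```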
The balanced snow load is depicted in Figure 4.17, which is based on Case 1 in
Figure 7-3 for slope at eaves less than 30 degrees.
In general, large vent pipes, snow guards, parapet walls and large rooftop equipment are a few common
examples of obstructions that could prevent snow from sliding off the roof. Ice dams and icicles along
eaves can also possibly inhibit snow from sliding off of two types of warm roofs, which are described in
4. Consider unbalanced snow loads.
Flowchart 4.5 is used to determine if unbalanced loads on this curved roof need to be
considered or not.
Since the slope of the straight line from the eaves to the crown is greater than
10 degrees and is less than 60 degrees, unbalanced snow loads must be considered.
Unbalanced loads for this roof are given in Case 1 of Figure 7-3. No snow loads are
applied on the windward side. On the leeward side, the snow load is equal to
0.5 pf = 0.5 × 48.4 = 24.2 psf at the crown and 2 pfCs / Ce = 2 × 48.4 × 0.82 / 0.8 =
99.2 psf at the eaves where Cs is based on the slope at the eaves.
The unbalanced snow loads are shown in Figure 4.17.
Figure 4.17 Balanced and Unbalanced Snow Loads for Public Utility Facility (balanced load 39.7 psf at the eaves and 48.4 psf in the center portion; unbalanced load varying from 24.2 psf at the crown to 99.2 psf at the leeward eave)
CHAPTER 5
In accordance with IBC 1609.1.1, wind loads on buildings and structures are to be
determined by the provisions of Chapter 6 of ASCE/SEI 7-05 or by the alternate all-heights method of IBC 1609.6. Exceptions are also given in this section, which permit
wind forces to be determined on certain types of structures using industry standards other
than ASCE/SEI 7. Wind tunnel tests that conform to the provisions of ASCE/SEI 6.6 are
also permitted, provided the limitations of IBC 1609.1.1.2 are satisfied.
The basic wind speed, the exposure category and the type of opening protection required
may be determined in accordance with the provisions of IBC 1609 or ASCE/SEI 7-05.
Figure 1609 in the IBC and Figure 6-1 in ASCE/SEI 7-05 are identical and provide basic
wind speeds based on 3-second gusts at 33 ft above ground for Exposure C. The design
wind speeds on these maps do not include effects of tornadoes. Since some referenced
standards contain criteria or applications based on fastest-mile wind speed, which was the
wind speed utilized in earlier editions of ASCE/SEI 7 and the legacy codes, IBC 1609.3.1
provides an equation and a table that can be used to convert from one wind speed to the other.
Chapter 6 of ASCE/SEI 7-05 contains three methods to determine design wind pressures
or loads:
Method 1 – Simplified Procedure
Method 2 – Analytical Procedure
Method 3 – Wind Tunnel Procedure
The design requirements of Method 1 can be found in 6.4. This method is based on the
low-rise buildings procedure in Method 2, and can be used to determine wind pressures
on the main wind-force-resisting system (MWFRS) of a building, provided the conditions
of 6.4.1.1 are met. Although there are eight conditions that need to be satisfied, a large
number of typical low-rise buildings meet these criteria. Definitions for enclosed, low-rise, regularly shaped, and simple diaphragm buildings are given in 6.2, along with other
important definitions (these types of buildings are listed under the conditions in 6.4.1.1).
Method 1 can also be used to determine wind pressures on components and cladding
(C&C) of a building, provided the five conditions in 6.4.1.2 are satisfied. If a building
satisfies only those criteria in 6.4.1.2 for C&C, the design wind pressures on the MWFRS
must be determined by Method 2 or 3.
Wind pressures are tabulated in Figures 6-2 and 6-3 as a function of the basic wind speed
for a specific set of conditions: Occupancy Category II buildings with a mean roof height
of 30 ft that are located on primarily flat ground in Exposure B. Adjustments are made to
these tabulated pressures based on actual building height, exposure, occupancy and
topography at the site. The adjusted pressures are applied normal to projected areas of the
building in accordance with Figures 6-2 and 6-3.
Method 2 provides wind pressures and forces for the design of MWFRSs and C&C of
enclosed and partially enclosed rigid and flexible buildings, open buildings and other
structures, including freestanding walls, signs, rooftop equipment and other structures.
In general, this procedure entails the determination of velocity pressures (which are a
function of exposure, height, topographic effects, wind directionality, wind velocity and
building occupancy), gust effect factors, external and internal pressure coefficients and
force coefficients. The Analytical Procedure can be used to determine pressures and
forces on a wide range of buildings and structures, provided
1. The building or structure is regularly-shaped, i.e., the building has no unusual
geometrical irregularity in spatial form (both vertical and horizontal).
2. The building or structure responds to wind primarily in the same direction as that
of the wind, i.e., it does not have response characteristics that make it subject to
across-wind loading, vortex shedding or any other dynamic load effects, which
are common in tall, slender buildings and structures and cylindrical buildings and structures.
3. The building or structure is located on a site where there are no channeling effects
or buffeting in the wake of upwind obstructions.
The Wind Tunnel Procedure (Method 3) in 6.6 can be utilized for any building or
structure in lieu of Methods 1 or 2, and must be used where the conditions of Methods 1
or 2 are not satisfied. Requirements for proper wind tunnel testing are given in 6.6.2. As
noted previously, the limitations of IBC 1609.1.1.2 must be satisfied where this
procedure is utilized. The provisions in IBC 1609.1.1.2.1 and 1609.1.1.2.2 prescribe
lower limits on the magnitude of the base overturning moments on the main wind-force-resisting system and the pressures for components and cladding, respectively.
The alternate all-heights method in IBC 1609.6, which is based on Method 2 of
ASCE/SEI 7-05, can be used to determine wind pressures on regularly shaped buildings
and structures that meet the five conditions listed in IBC 1609.6.1. In this method, terms
in the design pressure equation of Method 2 are combined to produce pressure
coefficients Cnet, which are provided in IBC Table 1609.6.2(2) for surfaces in main wind-force-resisting systems and components and cladding. Net wind pressures calculated by
IBC Eq. 16-34 are applied simultaneously on, and in a direction normal to, all wall and
roof surfaces.
Provisions for minimum design wind loading are given in 6.1.4. Figure C6-1 in the
commentary of ASCE/SEI 7 illustrates the minimum wind pressure that must be applied
horizontally on the entire vertical projection of a building or structure for the design of
the MWFRS. This 10-psf pressure is to be applied as a separate load case in addition to
the other load cases specified in Chapter 6. For C&C, a minimum net pressure of 10 psf
acting in either direction normal to the surface is required.
The same minimum pressures given in 6.1.4 of ASCE/SEI 7 are in IBC 1609.6.3 for the
alternate all-heights method.
Section 5.2 of this document contains various flowcharts for determining wind loads.
Included are two flowcharts that present information on when the various methods can be
utilized and eight flowcharts that provide step-by-step procedures on how to determine
design wind pressures and forces on buildings and other structures.
Section 5.3 contains completely worked-out design examples that illustrate the design
requirements for wind.
A summary of the flowcharts provided in this chapter is given in Table 5.1. Included is a
description of the content of each flowchart.
Table 5.1 Summary of Flowcharts Provided in Chapter 5
ASCE/SEI 5.2.1 Allowed Procedures
Flowchart 5.1
Allowed Procedures – MWFRS
Summarizes procedures that are allowed in
determining design wind pressures on MWFRSs.
Flowchart 5.2
Allowed Procedures – C&C
Summarizes procedures that are allowed in
determining design wind pressures on C&C.
ASCE/SEI 5.2.2 Method 1 – Simplified Procedure
Flowchart 5.3
MWFRS – Net Design Wind Pressure, p s
Provides step-by-step procedure on how to determine the net design wind pressures p s on MWFRSs.
Flowchart 5.4
C&C – Net Design Wind Pressure, p net
Provides step-by-step procedure on how to determine the net design wind pressures p net on C&C.
Table 5.1 Summary of Flowcharts Provided in Chapter 5 (continued)
ASCE/SEI 5.2.3 Method 2 – Analytical Procedure
Flowchart 5.5
Velocity Pressures, q z and q h
Outlines the procedure for determining velocity
pressures that are used in Method 2.
Flowchart 5.6
Gust Effect Factors, G and G f
Outlines methods for determining gust effect factors
that are used in Method 2.
Flowchart 5.7
Buildings, MWFRS
Provides step-by-step procedures on how to
determine design wind pressures on MWFRSs of
enclosed, partially enclosed and open buildings.
Flowchart 5.8
Buildings, C&C
Provides step-by-step procedures on how to
determine design wind pressures on C&C of
enclosed, partially enclosed and open buildings.
Flowchart 5.9
Structures Other than Buildings
Provides step-by-step procedures on how to
determine design wind forces on solid freestanding
walls, solid signs, rooftop structures and equipment,
and other structures that are not buildings.
IBC 1609.6 Alternate All-heights Method
Flowchart 5.10
Net Wind Pressure, pnet
Provides step-by-step procedure on how to
determine design wind pressures on MWFRSs and
C&C.
MWFRS = Main wind-force-resisting system
C&C = Components and cladding
Allowed Procedures
FLOWCHART 5.1
Allowed Procedures – Main Wind-force-resisting Systems
Does the building meet all of
the conditions of 6.4.1.1?
It is permitted to determine
design wind loads in
accordance with Method 1 –
Simplified Procedure (6.4)*
Does the building or
structure meet all of
the conditions of
IBC 1609.6.1?
Does the building or
structure meet all of
the conditions of
6.5.1 and 6.5.2?
Design wind loads must be
determined in accordance with
Method 3 – Wind Tunnel
Procedure (6.6)**
It is permitted to determine
design wind loads in
accordance with Alternate All-heights Method (IBC 1609.6)*
It is permitted to determine
design wind loads in
accordance with Method 2 –
Analytical Procedure (6.5)*
* Wind tunnel testing (ASCE/SEI 6.6.1) is permitted for any
building or structure subject to the limitations of IBC 1609.1.1.2.
** Limitations of IBC 1609.1.1.2 must also be satisfied.
FLOWCHART 5.2
Allowed Procedures – Components and Cladding
Does the building meet all of
the conditions of 6.4.1.2?
It is permitted to determine
design wind loads in
accordance with Method 1 –
Simplified Procedure (6.4)*
Does the building or
structure meet all of
the conditions of
IBC 1609.6.1?
Does the building or
structure meet all of
the conditions of
6.5.1 and 6.5.2?
Design wind loads must be
determined in accordance with
Method 3 – Wind Tunnel
Procedure (6.6)**
It is permitted to determine
design wind loads in
accordance with Alternate All-heights Method (IBC 1609.6)*
It is permitted to determine
design wind loads in
accordance with Method 2 –
Analytical Procedure (6.5)*
* Wind tunnel testing (ASCE/SEI 6.6.1) is permitted for any
building or structure subject to the limitations of IBC 1609.1.1.2.
** Limitations of IBC 1609.1.1.2 must also be satisfied.
Method 1 – Simplified Procedure
FLOWCHART 5.3
Method 1 – Main Wind-force-resisting Systems
Net Design Wind Pressure, ps
Determine basic wind speed V from
Fig. 6-1 or Fig. 1609 (6.5.4)*
Determine importance factor I from
Table 6-1 based on occupancy
category from IBC Table 1604.5
Determine exposure category (6.5.6)
Determine adjustment factor for
height and exposure λ from Fig. 6-2
Are all 5 conditions
of 6.5.7.1 met?
Topographic factor
Topographic factor K zt = 1.0
* See 6.5.4.1 and 6.5.4.2 for basic wind speed in
special wind regions and estimation of basic wind
speeds from regional climatic data. Tornadoes
have not been considered in developing basic
wind speed distributions shown in the figures.
Kzt = (1 + K1K2K3)², where K1,
K2 and K3 are given in Fig. 6-4
FLOWCHART 5.3
Method 1 – Main Wind-force-resisting Systems
Net Design Wind Pressure, ps (continued)
Determine simplified design wind
pressures ps30 from Fig. 6-2 for
Zones A through H on the building
Determine net design wind
pressures p s = λK zt Ip s30 by
Eq. 6-1 for Zones A through H**
** Notes:
1. For horizontal pressure zones, ps is the sum of the windward and leeward
net (sum of internal and external) pressures on vertical projection of Zones
A, B, C, and D. For vertical pressure zones, ps is the net (sum of internal
and external) pressure on horizontal projection of Zones E, F, G and H.
2. The load patterns shown in Fig. 6-2 shall be applied to each corner of the
building in turn as the reference corner. See other notes in Fig. 6-2.
3. Load effects of the design wind pressures determined by Eq. 6-1 shall not
be less than those from the minimum load case of 6.1.4.1. It is assumed
that the pressures ps for Zones A, B, C and D are equal to +10 psf and the
pressures ps for Zones E, F, G and H are equal to 0 psf.
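As a numerical sketch of Eq. 6-1 (a minimal illustration, not part of the standard): with the adjustment factor λ = 1.29 used later in Example 5.1, an assumed tabulated input ps30 = 12.8 psf yields a net pressure of 16.5 psf.

```python
# Sketch of Eq. 6-1: ps = lambda * Kzt * I * ps30 (all pressures in psf).
# The ps30 value below is an assumed tabulated input, not read from Fig. 6-2.

def net_design_pressure_mwfrs(lam, Kzt, I, ps30):
    """Adjust the simplified tabulated pressure ps30 per Eq. 6-1."""
    return lam * Kzt * I * ps30

ps = net_design_pressure_mwfrs(lam=1.29, Kzt=1.0, I=1.0, ps30=12.8)
print(round(ps, 1))  # 16.5 psf
```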
FLOWCHART 5.4
Method 1 – Components and Cladding
Net Design Wind Pressure, pnet
Determine basic wind speed V from
Fig. 6-1 or Fig. 1609 (6.5.4)*
Determine importance factor I from
Table 6-1 based on occupancy
category from IBC Table 1604.5
Determine exposure category (6.5.6)
Determine adjustment factor for
height and exposure λ from Fig. 6-3
Are all 5 conditions
of 6.5.7.1 met?
Topographic factor
Topographic factor K zt = 1.0
* See 6.5.4.1 and 6.5.4.2 for basic wind speed in
special wind regions and estimation of basic wind
speeds from regional climatic data. Tornadoes
have not been considered in developing basic
wind speed distributions shown in the figures.
Kzt = (1 + K1K2K3)², where K1,
K2, and K3 are given in Fig. 6-4
FLOWCHART 5.4
Method 1 – Components and Cladding
Net Design Wind Pressure, pnet (continued)
Determine net design wind pressure
pnet30 from Fig. 6-3 for Zones 1
through 5 on the building
Determine net design wind
pressure p net = λK zt Ip net 30 by
Eq. 6-2 for Zones 1 through 5**
** Notes:
1. pnet represents the net pressures (sum of internal and external) to be
applied normal to each building surface as shown in Fig. 6-3. See
notes in Fig. 6-3.
2. The positive design wind pressures pnet determined by Eq. 6-2 shall
not be less than +10 psf and the negative design wind pressures pnet
determined by Eq. 6-2 shall not be less than -10 psf.
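Note 2 above (the ±10 psf floors on pnet) can be sketched as follows; the numeric inputs are illustrative assumptions.

```python
# Sketch of Eq. 6-2 with the +/-10 psf minimums of note 2 applied.

def net_design_pressure_cc(lam, Kzt, I, pnet30):
    p = lam * Kzt * I * pnet30        # Eq. 6-2
    if p >= 0:
        return max(p, 10.0)           # positive pressure: not less than +10 psf
    return min(p, -10.0)              # negative pressure: not less than -10 psf

print(net_design_pressure_cc(1.29, 1.0, 1.0, 5.0))             # 10.0 (floor governs)
print(round(net_design_pressure_cc(1.29, 1.0, 1.0, -14.6), 1)) # -18.8
```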
Method 2 – Analytical Procedure
FLOWCHART 5.5
Method 2 – Velocity Pressures, q z and q h
Determine basic wind speed V from
Fig. 6-1 or Fig. 1609 (6.5.4)*
Determine wind directionality factor
K d from Table 6-4 (6.5.4.4)
Determine importance factor I from
Table 6-1 based on occupancy
category from IBC Table 1604.5
Determine exposure category (6.5.6)
Are all 5 conditions
of 6.5.7.1 met?
Topographic factor
Topographic factor K zt = 1.0
* See 6.5.4.1 and 6.5.4.2 for basic wind speed in
special wind regions and estimation of basic wind
speeds from regional climatic data. Tornadoes
have not been considered in developing basic
wind speed distributions shown in the figures.
Kzt = (1 + K1K2K3)², where K1,
K2, and K3 are given in Fig. 6-4
FLOWCHART 5.5
Method 2 – Velocity Pressures, q z and q h
Determine velocity pressure
exposure coefficients K z and K h
from Table 6-3 (6.5.6.6)
Determine velocity pressure at
height z and h by Eq. 6-15:
qz = 0.00256 Kz Kzt Kd V² I
qh = 0.00256 Kh Kzt Kd V² I **
** Notes:
1. qz = velocity pressure evaluated at height z
2. qh = velocity pressure evaluated at mean roof height h
3. The numerical constant of 0.00256 should be used except where
sufficient weather data are available to justify a different value (see 6.5.10).
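The velocity pressure computation of Eq. 6-15 can be sketched numerically; the Kz and Kd inputs below are illustrative assumptions, not values read from Tables 6-3 and 6-4.

```python
# Sketch of Eq. 6-15: q = 0.00256 * Kz * Kzt * Kd * V**2 * I (q in psf, V in mph).

def velocity_pressure(Kz, Kzt, Kd, V, I):
    return 0.00256 * Kz * Kzt * Kd * V**2 * I

# Illustrative inputs: Kz = 0.90, flat terrain (Kzt = 1.0), Kd = 0.85, V = 90 mph
qh = velocity_pressure(Kz=0.90, Kzt=1.0, Kd=0.85, V=90, I=1.0)
print(round(qh, 1))  # 15.9 psf
```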
FLOWCHART 5.6
Method 2 – Gust Effect Factors, G and Gf
Given: structure dimensions B, L and h, damping
ratio β , and building natural frequency n1 *
Is n1 ≥ 1 Hz?
Structure is flexible
Structure is rigid
G = 0.85
g Q = g v = 3.4
z = 0.6h
≥ z min
where z min is given in Table 6-2
* Notes:
1. Information on structural damping can be found in C6.5.8.
2. n1 can be determined from a rational analysis or estimated
from approximate equations given in C6.5.8.
FLOWCHART 5.6
Method 2 – Gust Effect Factors, G and Gf
Iz = c(33/z)^(1/6)   (Eq. 6-5)
where c is given in Table 6-2

Lz = ℓ(z/33)^ε   (Eq. 6-7)
where ℓ and ε are given in Table 6-2

Q = √{1/[1 + 0.63((B + h)/Lz)^0.63]}   (Eq. 6-6)

G = 0.925[(1 + 1.7gQIzQ)/(1 + 1.7gvIz)]   (Eq. 6-4)
FLOWCHART 5.6
Method 2 – Gust Effect Factors, G and Gf
gQ = gv = 3.4

gR = √[2 ln(3,600n1)] + 0.577/√[2 ln(3,600n1)]   (Eq. 6-9)

z = 0.6h ≥ zmin
where zmin is given in Table 6-2

Iz = c(33/z)^(1/6)   (Eq. 6-5)
where c is given in Table 6-2
FLOWCHART 5.6
Method 2 – Gust Effect Factors, G and Gf
Lz = ℓ(z/33)^ε   (Eq. 6-7)
where ℓ and ε are given in Table 6-2

Q = √{1/[1 + 0.63((B + h)/Lz)^0.63]}   (Eq. 6-6)
Determine basic wind speed V from
Fig. 6-1 or Fig. 1609 (6.5.4)*
Vz = b(z/33)^α (88/60)V   (Eq. 6-14)
where b and α are given in Table 6-2
* See 6.5.4.1 and 6.5.4.2 for basic wind speed in special wind regions
and estimation of basic wind speeds from regional climatic data.
Tornadoes have not been considered in developing basic wind speed
distributions shown in the figures.
FLOWCHART 5.6
Method 2 – Gust Effect Factors, G and Gf
N1 = n1Lz/Vz   (Eq. 6-12)

Rn = 7.47N1/(1 + 10.3N1)^(5/3)   (Eq. 6-11)

Rh = (1/η) − (1/2η²)(1 − e^(−2η)) for η > 0
Rh = 1 for η = 0
where η = 4.6n1h/Vz
FLOWCHART 5.6
Method 2 – Gust Effect Factors, G and Gf
RB = (1/η) − (1/2η²)(1 − e^(−2η)) for η > 0
RB = 1 for η = 0
where η = 4.6n1B/Vz

RL = (1/η) − (1/2η²)(1 − e^(−2η)) for η > 0
RL = 1 for η = 0
where η = 15.4n1L/Vz

R = √[(1/β)RnRhRB(0.53 + 0.47RL)]   (Eq. 6-10)

Gf = 0.925[(1 + 1.7Iz√(gQ²Q² + gR²R²))/(1 + 1.7gvIz)]   (Eq. 6-8)
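The rigid-building branch of Flowchart 5.6 (Eqs. 6-4 through 6-7) can be sketched numerically. The default terrain constants below are assumptions intended to represent one exposure category; take the governing values of c, ℓ, ε and zmin from Table 6-2 before use.

```python
import math

# Sketch of the calculated gust effect factor for a rigid structure.
# Terrain constants default to assumed Exposure C-like values; verify
# against Table 6-2 before relying on them.

def gust_effect_factor_rigid(h, B, c=0.20, ell=500.0, eps=0.2, zmin=15.0):
    z = max(0.6 * h, zmin)                                        # equivalent height (ft)
    Iz = c * (33.0 / z) ** (1.0 / 6.0)                            # Eq. 6-5
    Lz = ell * (z / 33.0) ** eps                                  # Eq. 6-7
    Q = math.sqrt(1.0 / (1.0 + 0.63 * ((B + h) / Lz) ** 0.63))    # Eq. 6-6
    gQ = gv = 3.4
    return 0.925 * (1 + 1.7 * gQ * Iz * Q) / (1 + 1.7 * gv * Iz)  # Eq. 6-4

G = gust_effect_factor_rigid(h=30.0, B=40.0)
print(round(G, 2))  # ~0.88, close to the G = 0.85 shortcut
```

Using the calculated G instead of the 0.85 shortcut is permitted for rigid structures and is often slightly more favorable or unfavorable depending on the terrain inputs.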
FLOWCHART 5.7
Method 2 – Buildings, Main Wind-force-resisting Systems
Is the building enclosed or
partially enclosed?
Determine velocity pressure qh from
Flowchart 5.5 using the exposure that
results in the highest wind loads for
any wind direction at the site
Determine gust effect factor G from
Flowchart 5.6
Determine net pressure coefficient
CN from Figs. 6-18A through 6-18D
Does the building
have a parapet?
Determine velocity pressure qp
from Flowchart 5.5 evaluated at
the top of the parapet
Determine combined net
pressure coefficient GC pn :
GC pn = +1.5 windward
Determine net design pressure
p = q h GC N (Eq. 6-25) for monoslope,
pitched, or troughed roofs*
GC pn = −1.0 leeward
Determine combined net design
pressure on the parapet
p p = q p GC pn (Eq. 6-20)**
* Minimum wind pressures of 6.1.4.1 must also be
considered. See 6.5.13.2 for provision on free roofs with an
angle of plane of roof from horizontal ≤ 5 degrees and
containing fascia panels.
** See C6.5.11.5 for discussion on application of design
wind pressures on parapets.
FLOWCHART 5.7
Method 2 – Buildings, Main Wind-force-resisting Systems
Is the building a low-rise
building as defined in 6.2?
Determine velocity pressure q h using
Flowchart 5.5
Determine external pressure coefficients
(GC pf ) from Fig. 6-10 for surfaces 1 – 6 and
1E, 2E, 3E and 4E as shown in Fig. 6-10
Determine internal pressure coefficients
(GC pi ) from Fig. 6-5 based on enclosure classification
1. See Fig. 6-10 for the load cases that must
be considered.
2. Minimum wind pressures of 6.1.4.1 must
also be considered.
Determine design wind pressure
p = q h [(GC pf ) − (GC pi )]
(Eq. 6-18) on surfaces 1 – 6 and
1E, 2E, 3E, and 4E†
FLOWCHART 5.7
Method 2 – Buildings, Main Wind-force-resisting Systems
Determine whether the building is rigid or flexible and the
corresponding gust effect factor G or G f from Flowchart 5.6
Is the building flexible?
Determine velocity pressure q z for windward walls
along the height of the building and q h for leeward
walls, side walls and roof using Flowchart 5.5
Determine pressure coefficients C p for the walls
and roof from Fig. 6-6 or 6-8
Determine qi for the walls and roof using
Flowchart 5.5‡
qi = qh or qi = qz depending on
enclosure classification (see 6.5.12.2.1).
qi may conservatively be evaluated at
height h (qi = qh) where applicable.
FLOWCHART 5.7
Method 2 – Buildings, Main Wind-force-resisting Systems
Determine internal pressure coefficients (GCpi) from
Fig. 6-5 based on enclosure classification
Determine design wind pressures by Eq. 6-19:
• Windward walls: p z = q z G f C p − q h (GC pi )
• Leeward walls, side walls, and roofs: p h = q h G f C p − q h (GC pi ) ª
1. See 6.5.12.3 and Fig. 6-9 for the load cases that must be considered.
2. Minimum wind pressures of 6.1.4.1 must also be considered.
FLOWCHART 5.7
Method 2 – Buildings, Main Wind-force-resisting Systems
Determine velocity pressure qz for windward walls
along the height of the building and qh for leeward
walls, side walls and roof using Flowchart 5.5
Determine pressure coefficients Cp for the walls and
roof from Fig. 6-6 or 6-8
Determine qi for the walls and roof using
Flowchart 5.5‡‡
Determine internal pressure coefficients (GCpi) from
Fig. 6-5 based on enclosure classification
Determine design wind pressures by Eq. 6-17:
• Windward walls: p z = q z GC p − q h (GC pi )
• Leeward walls, side walls, and roofs: p h = q h GC p − q h (GC pi ) ªª
qi = qh or qi = qz depending on enclosure classification (see 6.5.12.2.1). qi may
conservatively be evaluated at height h (qi = qh) where applicable.
1. See 6.5.12.3 and Fig. 6-9 for the load cases that must be considered.
2. Minimum wind pressures of 6.1.4.1 must also be considered.
FLOWCHART 5.8
Method 2 – Buildings, Components and Cladding
Is the building enclosed or
partially enclosed?
Determine velocity pressure qh from
Flowchart 5.5 using the exposure that
results in the highest wind loads for
any wind direction at the site
Determine gust effect factor G from
Flowchart 5.6
Determine net pressure coefficient
C N from Figs. 6-19A through 6-19C
Determine net design pressure
p = q h GC N (Eq. 6-26) for monoslope,
pitched or troughed roofs*
Does the building
have a parapet?
Determine velocity pressure qp
from Flowchart 5.5 evaluated at
the top of the parapet
Determine external pressure
coefficient GCp from Figs. 6-11
through 6-17 and internal pressure
coefficient GCpi from Fig. 6-5
Determine design wind pressure on
the parapet p = q p (GC p − GC pi )
(Eq. 6-24)**
* Minimum wind pressures of 6.1.4.2 must also be considered.
** See 6.5.12.4.4 for the two load cases that must be
considered. See C6.5.11.5 for discussion on application
of design wind pressures on parapets.
FLOWCHART 5.8
Method 2 – Buildings, Components and Cladding
Is the building a low-rise
building as defined in 6.2 or
a building with h ≤ 60 ft?
Determine velocity pressure q h using
Flowchart 5.5
Determine external pressure coefficients (GCp)
from Figs. 6-11 through 6-16 for the various
surfaces noted in the figures
Determine internal pressure coefficients (GCpi)
from Fig. 6-5
Determine design wind pressure
p = q h [(GC p ) − (GC pi )]
(Eq. 6-22) on the various surfaces†
Minimum wind pressures of 6.1.4.2 must
also be considered.
FLOWCHART 5.8
Method 2 – Buildings, Components and Cladding
Determine velocity pressure q z for windward walls
along the height of the building and q h for leeward
walls, side walls and roof using Flowchart 5.5
Determine external pressure coefficients (GC p ) for
the walls and roof from Fig. 6-17
Determine qi for the walls and roof using
Flowchart 5.5‡‡
Determine internal pressure coefficients (GC pi )
from Fig. 6-5 based on enclosure classification
Determine design wind pressures by Eq. 6-23:
• Windward walls: p z = q z (GC p ) − q h (GC pi )
• Leeward walls, side walls, and roofs: p h = q h (GC p ) − q h (GC pi ) ªª
qi = qh or qi = qz depending on enclosure classification (see 6.5.12.4.2).
qi may conservatively be evaluated at height h (qi = qh) where applicable.
1. Minimum wind pressures of 6.1.4.2 must also be considered.
2. An alternate method for C&C in buildings with 60 ft < h < 90 ft is given in 6.5.12.4.3.
FLOWCHART 5.9
Method 2 – Structures Other than Buildings
Is the structure a solid
freestanding wall or solid sign?
Determine velocity pressure q h from
Flowchart 5.5
Is the structure a
rooftop structure
or equipment on
a building with
h < 60 ft?
Determine gust effect factor G from
Flowchart 5.6
Determine net force coefficient C f
from Fig. 6-20
Determine design wind force
F = q h GC f As (Eq. 6-27) where As is
the gross area of the wall or sign
FLOWCHART 5.9
Method 2 – Structures Other than Buildings
Determine velocity pressure q z evaluated at height
z of the centroid of area A f based on the exposure
defined in 6.5.6.3 using Flowchart 5.5
Determine gust effect factor G from Flowchart 5.6
Determine force coefficient C f from
Figs. 6-21 through 6-23
Determine design wind force F = q z GC f A f
(Eq. 6-28) where A f is the projected area
normal to the wind except where C f is
specified for the actual surface area
FLOWCHART 5.9
Method 2 – Structures Other than Buildings
Determine velocity pressure q z evaluated at height
z of the centroid of area A f based on the exposure
defined in 6.5.6.3 using Flowchart 5.5
Determine gust effect factor G from Flowchart 5.6
Determine force coefficient C f from
Figs. 6-21 through 6-23
Is A f ≥ Bh ?
Is A f < 0.1Bh ?
Determine design wind force by increasing
force obtained by Eq. 6-28 by a factor that
decreases linearly from 1.9 to 1.0 as Af
increases from 0.1Bh to Bh
Determine design wind force by
increasing force obtained by Eq. 6-28
by 1.9: F = 1.9q z GC f A f
Determine design wind force by
Eq. 6-28: F = q z GC f A f
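The Af-dependent increase of the Eq. 6-28 force in this flowchart can be sketched as a simple interpolation (a sketch of the flowchart logic, not code from the standard):

```python
# Increase factor applied to F = qz*G*Cf*Af for rooftop structures and
# equipment on buildings with h < 60 ft: 1.9 for Af < 0.1*B*h, decreasing
# linearly to 1.0 as Af grows from 0.1*B*h to B*h.

def rooftop_force_factor(Af, B, h):
    ratio = Af / (B * h)
    if ratio < 0.1:
        return 1.9
    if ratio >= 1.0:
        return 1.0
    return 1.9 - (ratio - 0.1)  # linear between (0.1, 1.9) and (1.0, 1.0)

print(rooftop_force_factor(Af=50.0, B=100.0, h=10.0))            # 1.9
print(round(rooftop_force_factor(Af=550.0, B=100.0, h=10.0), 2)) # 1.45
```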
Alternate All-heights Method
FLOWCHART 5.10
Net Wind Pressure, pnet
Determine basic wind speed V from
Fig. 6-1 or Fig. 1609 (6.5.4)*
Determine wind stagnation pressure
qs from IBC Table 1609.6.2(1)**
Determine importance factor I from
Table 6-1 based on occupancy
category from IBC Table 1604.5
Determine exposure category (6.5.6)
Are all 5 conditions
of 6.5.7.1 met?
Topographic factor
Topographic factor K zt = 1.0
* See 6.5.4.1 and 6.5.4.2 for basic wind speed in
special wind regions and estimation of basic wind
speeds from regional climatic data. Tornadoes
have not been considered in developing basic
wind speed distributions shown in the figures.
** For basic wind speeds not shown, use
qs = 0.00256V².
Kzt = (1 + K1K2K3)², where K1,
K2 and K3 are given in Fig. 6-4
FLOWCHART 5.10
Net Wind Pressure, pnet
Determine velocity pressure
exposure coefficient K z from
Table 6-3 (6.5.6.6)
Determine net pressure coefficient
Cnet from IBC Table 1609.6.2(2)†
Determine net wind pressure at
height z by IBC Eq. 16-34: ††
pnet = qs K z Cnet IK zt
Where C net has more than one value in IBC Table 1609.6.2(2), the more
severe wind load condition shall be used for design.
Wind pressures are to be applied to the building envelope wall and roof
surfaces in accordance with IBC 1609.6.4.4.
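The alternate all-heights computation can be sketched end-to-end; the Kz and Cnet inputs below are illustrative assumptions (the real values come from Table 6-3 and IBC Table 1609.6.2(2)).

```python
# Sketch of the alternate all-heights method:
#   qs   = 0.00256 * V**2          (psf, V in mph; for speeds not tabulated)
#   pnet = qs * Kz * Cnet * I * Kzt

def stagnation_pressure(V):
    return 0.00256 * V**2

def net_wind_pressure(V, Kz, Cnet, I=1.0, Kzt=1.0):
    return stagnation_pressure(V) * Kz * Cnet * I * Kzt

print(round(stagnation_pressure(90), 1))                   # 20.7 psf
print(round(net_wind_pressure(90, Kz=0.90, Cnet=0.8), 1))  # 14.9 psf
```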
The following sections contain examples that illustrate the wind design provisions of
IBC 1609 and Chapter 6 in ASCE/SEI 7-05.
Example 5.1 – Warehouse Building using Method 1, Simplified Procedure
For the one-story warehouse illustrated in Figure 5.1, determine design wind pressures on
(1) the main wind-force-resisting system in both directions, (2) a solid precast wall panel
and (3) an open-web joist purlin using Method 1, Simplified Procedure.
7″ precast concrete wall (typ.)
4 @ 37′-0″ = 148′-0″
HSS column (typ.)
Open-web joist girders (typ.)
Metal roof deck
Open-web joist purlins @ 8′-0″
8 @ 32′-0″ = 256′-0″
Roof slope = 1/2 in./ft
Figure 5.1 Plan and Elevation of Warehouse Building
Location: St. Louis, MO
Surface Roughness: C (open terrain with scattered obstructions less than 30 ft in height)
Topography: Not situated on a hill, ridge or escarpment
Occupancy: Less than 300 people congregate in one area and the building is not
used to store hazardous or toxic materials
Part 1: Determine design wind pressures on MWFRS
Step 1: Use Flowchart 5.1 to check if the building meets all of the conditions of
6.4.1.1 so that Method 1 can be used to determine the wind pressures on the MWFRS.
1. The building is a simple diaphragm building as defined in 6.2, since the windward
and leeward wind loads are transmitted through the metal deck roof (diaphragm)
to the precast walls (MWFRS), and there are no structural separations in the
MWFRS. O.K.
2. Three conditions must be checked to determine if a building is a low-rise building:
a. Mean roof height = 20 ft < 60 ft O.K.1
b. Mean roof height = 20 ft < least horizontal dimension = 148 ft O.K.
c. The enclosure classification of the building, which depends on the number and
size of openings in the precast walls.
Assume that there are two Type A precast panels on each of the east and west
walls and two Type B precast panels on each of the north and south walls (see
Figure 5.2). The openings in these walls are door openings. All other precast
panels do not have door openings, and assume that there are no openings in
the roof.
By definition, an open building is a building having each wall at least 80
percent open. This building is not open, even when all of the doors are
open at the same time. Therefore, it is enclosed or partially enclosed. O.K.2
Thus, the building is a low-rise building. O.K.
For buildings with roof angles less than or equal to 10 degrees, the mean roof height is equal to the roof
eave height (see definition of mean roof height in 6.2); the roof angle in this example is approximately
2.4 degrees.
A more specific enclosure classification is determined in item 3 of this part of the solution.
12′ × 14′ opening
12′ × 14′ opening
3′ × 7′ opening
Ao = (12 × 14) + (3 × 7) = 189 sq ft
Ao = 2(12 × 14) = 336 sq ft
* Elevation of top of walls on north and south faces vary with roof slope.
Figure 5.2 Door Openings in Precast Wall Panels
3. A building is defined as enclosed when it does not comply with the requirements
for an open or partially enclosed building. It has been previously established that
the building is not open (see item 2 above).
If all of the doors are closed, the building is enclosed. Also, if all of the doors are
open at the same time, the building is enclosed: the first item under the definition
of a partially enclosed building is not satisfied, i.e., the total area of openings Ao
in a wall that receives positive external pressure is less than 1.1 times the sum of
the areas of openings Aoi in the balance of the building envelope.3
Assume that one 3 ft by 7 ft door is open and all other doors are closed. Check if
the building is partially enclosed per the definition in 6.2:
a. Ao = 3 × 7 = 21 sq ft > 1.1Aoi = 0, where Aoi = 0, since it is assumed that
there are no other doors that are open.
b. Ao = 21 sq ft > 4 sq ft (governs) or 0.01Ag = 0.01 × 20 × 148 = 29.6 sq ft
and Aoi / Agi = 0 < 0.20.
Since both of these conditions are satisfied, the building is partially enclosed
when one 3 ft by 7 ft door is open and all other doors are closed. However, to
illustrate the use of Figure 6-2, assume the building is enclosed.4
For example, Ao = 2 × 336 = 672 sq ft < 1.1 Aoi = 1.1[672 + (2 × 378)] = 1,571 sq ft
Although 6.4.1.1 and Figure 6-2 state that Figure 6-2 is for enclosed buildings, this figure can also be used
to determine the wall pressures on a partially enclosed building, since the internal pressures cancel out.
However, tabulated pressures must be modified to obtain the correct roof pressures for a partially enclosed building.
The wind-borne debris provisions of 6.5.9.3 are not applicable to buildings
located outside of wind-borne debris regions (i.e., specific areas within hurricane
prone regions). O.K.
4. The building is regularly-shaped, i.e., it does not have any unusual geometric
irregularities in spatial form. O.K.
5. A flexible building is defined in 6.2 as one in which the fundamental natural
frequency of the building n1 is less than 1 Hz. Although it is evident by inspection
that the building is not flexible, the natural frequency will be determined and
compared to 1 Hz.
In lieu of obtaining the natural frequency of the building from a dynamic analysis,
Eq. C6-16 in the commentary of ASCE/SEI 7 is used to determine an approximate
value of n1 in the N-S direction for concrete shearwall systems:
n1 = 385(Cw)^0.5/H
where

Cw = (100/AB) Σ(i = 1 to n) [(H/hi)² Ai/(1 + 0.83(hi/Di)²)]
AB = base area of the building = 148 × 256 = 37,888 sq ft
H , hi = building height and wall height, respectively = 20 ft
Ai = area of shearwall = (7 / 12) ×148 = 86.3 sq ft5
Di = length of shearwall = 148 ft
Cw = (100/37,888) × 2 × (20/20)² × 86.3/[1 + 0.83(20/148)²] = 0.45

n1 = 385(0.45)^0.5/20 = 12.9 Hz >> 1 Hz
Similar calculations in the E-W direction yield n1 = 17.1 Hz >> 1 Hz. Thus, the
building is not flexible. O.K.
Openings in precast wall panels are not considered.
6. The building does not have response characteristics that make it subject to across-wind loading or other similar effects, and it is not sited at a location where
channeling effects or buffeting in the wake of upwind obstructions need to be
considered. O.K.
7. The building has a symmetrical cross-section in each direction and has a relatively
flat roof. O.K.
8. The building is exempted from torsional load cases as indicated in Note 5 of
Figure 6-10 (the building is one-story with a height h less than 30 ft and it has a
flexible roof diaphragm). O.K.
Since all of the conditions of 6.4.1.1 are satisfied, Method 1 may be used to
determine the design wind pressures on the MWFRS.
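The approximate natural frequency check in item 5 (Eq. C6-16 with the shearwall coefficient Cw) can be reproduced numerically:

```python
import math

# Approximate natural frequency of a concrete shearwall building (Eq. C6-16).
# walls is a list of (hi, Ai, Di) tuples: wall height, area, and length in ft.

def shearwall_frequency(H, AB, walls):
    Cw = (100.0 / AB) * sum(
        (H / hi) ** 2 * Ai / (1.0 + 0.83 * (hi / Di) ** 2) for hi, Ai, Di in walls
    )
    return 385.0 * math.sqrt(Cw) / H

# N-S direction of the warehouse: two 148-ft long, 7-in. thick, 20-ft tall walls
walls = [(20.0, (7.0 / 12.0) * 148.0, 148.0)] * 2
n1 = shearwall_frequency(H=20.0, AB=148.0 * 256.0, walls=walls)
print(round(n1, 1))  # 12.9 Hz >> 1 Hz, so the building is rigid
```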
Step 2: Use Flowchart 5.3 to determine the net design wind pressures ps on the MWFRS.
1. Determine basic wind speed V from Figure 6-1 or Figure 1609.
From either of these figures, V = 90 mph for St. Louis, MO.
2. Determine importance factor I from Table 6-1 based on occupancy category from
IBC Table 1604.5.
From IBC Table 1604.5, the Occupancy Category is II, based on the occupancy
given in the design data. From Table 6-1, I = 1.0.
3. Determine exposure category.
In the design data, the surface roughness is given as C. It is assumed that
Exposures B and D are not applicable, so Exposure C applies (see 6.5.6.3).
4. Determine adjustment factor for height and exposure λ from Figure 6-2.
For a mean roof height of 20 ft and Exposure C, λ = 1.29.
5. Determine topographic factor K zt .
As noted in the design data, the building is not situated on a hill, ridge or
escarpment. Thus, topographic factor Kzt = 1.0 (6.5.7.2).
6. Determine simplified design wind pressures ps30 from Figure 6-2 for Zones A
through H on the building.
Wind pressures ps30 can be read directly from Figure 6-2 for V = 90 mph and a
roof angle between 0 and 5 degrees. Since the roof is essentially flat, only Load
Case 1 is considered (see Note 4 in Figure 6-2). These pressures, which are based
on Exposure B, h = 30 ft, Kzt = 1.0, and I = 1.0, are given in Table 5.2.
Table 5.2 Wind Pressures ps 30 on MWFRS
Horizontal pressures (psf)
Vertical pressures (psf)
7. Determine net design wind pressures p s = λK zt Ip s30 by Eq. 6-1 for Zones A
through H.
ps = 1.29 × 1.0 × 1.0 × p s30 = 1.29 ps30
The horizontal pressures in Table 5.3 represent the combination of the windward
and leeward net (sum of internal and external) pressures. Similarly, the vertical
pressures represent the net (sum of internal and external) pressures.
Table 5.3 Wind Pressures ps on MWFRS
Horizontal pressures (psf)
Vertical pressures (psf)
8. The net design pressures p s in Table 5.3 are to be applied to the surfaces of the
building in accordance with Figure 6-2.
According to Note 7 in Figure 6-2, the total horizontal load must not be less than
that determined by assuming ps = 0 in Zones B and D. Since the net pressures in
Zones B and D in this example act in the direction opposite to those in A and C,
they decrease the horizontal load. Thus, the pressures in Zones B and D are set
equal to 0 when analyzing the structure for wind in the transverse direction.
According to Note 2 in Figure 6-2, the load patterns for the transverse and
longitudinal directions are to be applied to each corner of the building, i.e., each
corner of the building must be considered a reference corner. Eight different load
cases need to be examined (four in the transverse direction and four in the
longitudinal direction). One load pattern in the transverse direction and one in the
longitudinal direction are illustrated in Figure 5.3.
8.8 psf
13.8 psf
11.4 psf
19.9 psf
11.0 psf
MWFRS Direction Being Evaluated
16.5 psf
8.8 psf
11.4 psf
13.8 psf
19.9 psf
MWFRS Direction Being Evaluated
11.0 psf
16.5 psf
Figure 5.3 Design Wind Pressures on MWFRS
The width of the end zone 2a in this example is equal to 16 ft, where a = lesser of
0.1(least horizontal dimension) = 0.1 × 148 = 14.8 ft or 0.4h = 0.4 × 20 = 8 ft
(governs). This value of a is greater than 0.04 (least horizontal dimension) = 0.04
× 148 = 5.9 ft or 3 ft (see Note 10a in Figure 6-2).
The minimum design wind load case of 6.4.2.1.1 must also be considered: the
load effects from the design wind pressures calculated above must not be less than
the load effects assuming that ps = + 10 psf in Zones A through D and ps = 0 psf
in Zones E through H (see Figure C6-1 for application of load).
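The end-zone width computation used above (Note 10a in Figure 6-2) can be sketched as:

```python
# End-zone dimension a: lesser of 0.1*(least horizontal dimension) and 0.4*h,
# but not less than 0.04*(least horizontal dimension) or 3 ft.

def end_zone_width(least_dim, h):
    a = min(0.1 * least_dim, 0.4 * h)
    return max(a, 0.04 * least_dim, 3.0)

a = end_zone_width(least_dim=148.0, h=20.0)
print(a, 2 * a)  # 8.0 16.0 -> end zone 2a = 16 ft, as in the example
```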
Part 2: Determine design wind pressures on a solid precast wall panel
Step 1: Use Flowchart 5.2 to check if the building meets all of the conditions of
6.4.1.2 so that Method 1 can be used to determine the wind pressures on the C&C.
1. Mean roof height h = 20 ft < 60 ft O.K.
2. The building is assumed to be enclosed (see Part 1, Step 1, item 3) and the wind-borne debris provisions of 6.5.9.3 are not applicable. O.K.
3. The building is regularly-shaped, i.e., it does not have any unusual geometric
irregularities in spatial form. O.K.
4. The building does not have response characteristics that make it subject to across-wind loading or other similar effects, and it is not sited at a location where
channeling effects or buffeting in the wake of upwind obstructions need to be
considered. O.K.
5. The roof is essentially flat. O.K.
Since all of the conditions of 6.4.1.2 are satisfied, Method 1 may be used to
determine the design wind pressures on the C&C.
Step 2: Use Flowchart 5.4 to determine the net design wind pressures pnet on the precast walls.
1. The basic wind speed V, the importance factor I, the exposure category, the
adjustment factor for height and exposure λ , and the topographic factor Kzt have
all been determined previously (Part 1, Step 2, items 1 through 5) and are used in
calculating the wind pressures on the precast walls, which are C&C.6
2. Determine net design wind pressures pnet30 from Figure 6-3 for Zones 4 and 5,
which are the interior and end zones of walls, respectively.
λ from Figure 6-2 for MWFRS is the same as that in Figure 6-3 for C&C.
Wind pressures pnet30 can be read directly from Figure 6-3 for V = 90 mph and an
effective wind area.
The effective wind area is defined as the span length multiplied by an effective
width that need not be less than one-third the span length: 20 × (20/3) =
133.3 sq ft.7
According to Note 4 in Figure 6-3, tabulated pressures may be interpolated for
effective wind areas between those given, or the value associated with the lower
effective wind area may be used. The latter of these two options is utilized in this
example. The pressures pnet30 in Table 5.4 are obtained from Figure 6-3 for V =
90 mph and an effective wind area of 100 sq ft, and are based on Exposure B, h =
30 ft, Kzt = 1.0, and I = 1.0.
Table 5.4 Wind Pressures pnet 30 on Precast Walls
pnet 30 (psf)
3. Determine net design wind pressures pnet = λK zt Ipnet 30 by Eq. 6-2 for Zones 4
and 5.
pnet = 1.29 × 1.0 × 1.0 × pnet 30 = 1.29 pnet 30
The pressures in Table 5.5 represent the net (sum of internal and external)
pressures that are applied normal to the precast walls.
Table 5.5 Wind Pressures pnet on Precast Walls
pnet (psf)
The width of the end zone (Zone 5) a = 8 ft (see Part 1, Step 2, item 8).
In Zones 4 and 5, the computed positive and negative (absolute) pressures are
greater than the minimum values prescribed in 6.4.2.2.1 of +10 psf and -10 psf,
respectively.
The smallest span length corresponding to the east and west walls is used, since this results in larger pressures.
Part 3: Determine design wind pressures on an open-web joist purlin
Step 1: Use Flowchart 5.2 to check if the building meets all of the conditions of
6.4.1.2 so that Method 1 can be used to determine the wind pressures on the C&C.
It was shown in Part 2, Step 1 that Method 1 may be used to determine the wind
forces on the C&C.
Step 2: Use Flowchart 5.4 to determine the net design wind pressures pnet on the open-web joist purlins.
1. The basic wind speed V, the importance factor I, the exposure category, the
adjustment factor for height and exposure λ , and the topographic factor Kzt have
all been determined previously (Part 1, Step 2, items 1 through 5) and are used in
calculating the wind pressures on the open-web joist purlins, which are subject to
C&C pressures.
2. Determine net design wind pressures pnet30 from Figure 6-3 for Zones 1, 2 and 3,
which are the interior, end and corner zones of the roof, respectively.
Effective wind area is equal to the larger of the purlin tributary area = 37 × 8 =
296 sq ft or the span length multiplied by an effective width that need not be less
than one-third the span length = 37 × (37/3) = 456.3 sq ft (governs).
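The effective-wind-area rule used above can be sketched as follows (illustrative helper; the numbers are the 37-ft purlin span at 8-ft spacing from this example):

```python
# Effective wind area per the 6.2 definition: the larger of the tributary
# area and span * (effective width), where the effective width need not be
# less than one-third of the span.
def effective_wind_area(span_ft, spacing_ft):
    tributary = span_ft * spacing_ft                 # span x tributary width
    min_width_area = span_ft * (span_ft / 3.0)       # width >= span/3
    return max(tributary, min_width_area)

print(round(effective_wind_area(37.0, 8.0), 1))  # 456.3 (span/3 width governs over 296)
```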
The pressures pnet30 in Table 5.6 are obtained from Figure 6-3 for V = 90 mph, a
roof angle between 0 and 7 degrees, and an effective wind area of 100 sq ft.8
These pressures are based on Exposure B, h = 30 ft, Kzt = 1.0, and I = 1.0.
Table 5.6 Wind Pressures pnet30 on Open-web Joist Purlins
pnet 30 (psf)
3. Determine net design wind pressures pnet = λ Kzt I pnet30 by Eq. 6-2 for Zones 1,
2 and 3.
pnet = 1.29 × 1.0 × 1.0 × pnet30 = 1.29 pnet30
(Footnote 8: Where actual effective wind areas are greater than 100 sq ft, the tabulated pressure values associated with an effective wind area of 100 sq ft are applicable.)
The pressures in Table 5.7 represent the net (sum of internal and external)
pressures that are applied normal to the open-web joist purlins and that act over
the tributary area of each purlin, which is equal to 37 × 8 = 296 sq ft.
Table 5.7 Wind Pressures pnet on Open-web Joist Purlins
pnet (psf)
The width of the end and corner zones (Zones 2 and 3) a = 8 ft (see Part 1, Step 2,
item 8). The positive net design pressures in Zones 1, 2 and 3 must be increased to
the minimum value of 10 psf in accordance with 6.4.2.2.1. Figure 5.4 contains the
loading diagrams for typical open-web joist purlins located within the various
zones of the roof.
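The line loads annotated in Figure 5.4 are simply the net pressures (psf) multiplied by the tributary width (ft); a minimal sketch using the 8-ft joist spacing:

```python
# Uniform line load (plf) on a joist = net pressure (psf) x tributary width (ft),
# reproducing the Figure 5.4 annotations. The 10 psf value is the 6.4.2.2.1
# minimum positive pressure; 17.2 and 20.4 psf are negative pressures from
# Table 5.7 (interior and end/corner zones, respectively).
def line_load_plf(pressure_psf, tributary_width_ft):
    return pressure_psf * tributary_width_ft

print(line_load_plf(10.0, 8.0))            # 80.0 plf (minimum positive pressure)
print(round(line_load_plf(17.2, 8.0), 1))  # 137.6 plf (Zone 1, negative)
print(round(line_load_plf(20.4, 8.0), 1))  # 163.2 plf (end zone, negative)
```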
[Figure content: roof zone plan showing Zone 1 (Type A joists, typ.), Zone 2 (Type B, typ.) and Zone 3 (Type C, typ.), with loading diagrams for each joist type: positive pressure of 10 × 8 = 80.0 plf on Types A, B, and C; negative pressures of 17.2 × 8 = 137.6 plf (Type A), 20.4 × 8 = 163.2 plf over the end zone and 137.6 plf elsewhere (Type B), and (20.4 + 17.2) × 4 = 150.4 plf (Type C).]
Figure 5.4 Open-web Joist Purlin Loading Diagrams
Example 5.2 – Warehouse Building using Low-rise Building Provisions of
Method 2, Analytical Procedure
For the one-story warehouse in Example 5.1, determine design wind pressures on (1) the
main wind-force-resisting system in both directions, (2) a solid precast wall panel, and
(3) an open-web joist purlin using the low-rise building provisions of Method 2,
Analytical Procedure. See Figure 5.1 for building dimensions.
Location: St. Louis, MO
Surface Roughness: C (open terrain with scattered obstructions less than 30 ft in height)
Topography: Not situated on a hill, ridge, or escarpment
Occupancy: Less than 300 people congregate in one area and the building is not used to store hazardous or toxic materials
Part 1: Determine design wind pressures on MWFRS
Step 1: Check if the low-rise building provisions of 6.5.12.2.2 can be used to
determine the design wind pressures on the MWFRS.
The provisions of 6.5.12.2.2 may be used to determine design wind pressures
provided the building is a regular-shaped low-rise building as defined in 6.2. It was
shown in Example 5.1 (Part 1, Step 1, item 2) that this warehouse building is a low-rise building, and the building is regularly-shaped. Also, the building does not have
response characteristics that make it subject to across-wind loading or other similar
effects, and it is not sited at a location where channeling effects or buffeting in the
wake of upwind obstructions need to be considered.
The low-rise building provisions of Method 2, Analytical Procedure, can be used
to determine the design wind pressures on the MWFRS.
Step 2: Use Flowchart 5.7 to determine the design wind pressures on the MWFRS.
1. It is assumed in Example 5.1 (Part 1, Step 1, item 3) that the building is enclosed.
2. Determine velocity pressure qh using Flowchart 5.5.
a. Determine basic wind speed V from Figure 6-1 or Figure 1609.
From either of these figures, V = 90 mph for St. Louis, MO.
b. Determine wind directionality factor K d from Table 6-4.
For the MWFRS of a building structure, K d = 0.85.
c. Determine importance factor I from Table 6-1 based on occupancy category
from IBC Table 1604.5.
From IBC Table 1604.5, the Occupancy Category is II. From Table 6-1, I = 1.0.
d. Determine exposure category.
In the design data, the surface roughness is given as C. It is assumed that
Exposures B and D are not applicable, so Exposure C applies (see 6.5.6.3).
e. Determine topographic factor K zt .
As noted in the design data, the building is not situated on a hill, ridge or
escarpment. Thus, topographic factor Kzt = 1.0 (6.5.7.2).
f. Determine velocity pressure exposure coefficient K h from Table 6-3.
For Exposure C and a mean roof height of 20 ft, K h = 0.90 from Table 6-3.
g. Determine velocity pressure qh at the mean roof height by Eq. 6-15.
qh = 0.00256 Kh Kzt Kd V^2 I
= 0.00256 × 0.90 × 1.0 × 0.85 × 90^2 × 1.0 = 15.9 psf
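Eq. 6-15 is straightforward to script; the sketch below reproduces the qh computed above (the function name and defaults are illustrative, with this example's Exposure C inputs):

```python
# Eq. 6-15: q = 0.00256 * K * Kzt * Kd * V^2 * I (psf, with V in mph).
# Defaults: Kzt = 1.0 (no topographic effect), Kd = 0.85 (MWFRS), I = 1.0.
def velocity_pressure(K, V_mph, K_zt=1.0, K_d=0.85, I=1.0):
    return 0.00256 * K * K_zt * K_d * V_mph ** 2 * I

# Kh = 0.90 for Exposure C at the 20-ft mean roof height:
print(round(velocity_pressure(0.90, 90.0), 1))  # 15.9 psf
```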
3. Determine external pressure coefficients (GCpf) from Figure 6-10 for building
surfaces 1 through 6, 1E, 2E, 3E and 4E.
External pressure coefficients (GCpf) can be read directly from the table in Figure 6-10 using a
roof angle between 0 and 5 degrees for wind in the transverse direction. For wind
in the longitudinal direction, the pressure coefficients corresponding to a roof
angle of 0 degrees are to be used (see Note 7 in Figure 6-10). The pressure
coefficients summarized in Table 5.8 are applicable in both the transverse and
longitudinal directions in this example.
4. Determine internal pressure coefficients (GCpi) from Figure 6-5.
For an enclosed building, (GCpi) = +0.18, −0.18.
Table 5.8 External Pressure Coefficients (GCpf) for MWFRS
(GCpf)
5. Determine design wind pressure p by Eq. 6-18 on surfaces 1 through 6, 1E, 2E,
3E and 4E.
p = qh[(GCpf) − (GCpi)] = 15.9[(GCpf) − (±0.18)]
Calculation of design wind pressures is illustrated for surface 1:
For positive internal pressure: p = 15.9(0.40 – 0.18) = 3.5 psf
For negative internal pressure: p = 15.9[0.40 – (–0.18)] = 9.2 psf
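A short sketch of Eq. 6-18 reproducing the surface 1 illustration above (hypothetical helper name; qh = 15.9 psf and (GCpf) = 0.40 from this example):

```python
# Eq. 6-18: p = qh * [(GCpf) - (GCpi)], evaluated for both internal
# pressure signs.
def mwfrs_pressure(q_h, GC_pf, GC_pi):
    return q_h * (GC_pf - GC_pi)

print(round(mwfrs_pressure(15.9, 0.40, +0.18), 1))  # 3.5 psf (positive internal)
print(round(mwfrs_pressure(15.9, 0.40, -0.18), 1))  # 9.2 psf (negative internal)
```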
A summary of the design wind pressures is given in Table 5.9. Pressures are
applicable to wind in the transverse and longitudinal directions and are provided
for both positive and negative internal pressures.
The end zone width 2a is equal to 16 ft, where a = least of 0.1 (least horizontal
dimension) = 0.1 × 148 = 14.8 ft or 0.4h = 0.4 × 20 = 8 ft (governs). This value of
a is greater than 0.04 (least horizontal dimension) = 0.04 × 148 = 5.9 ft or 3 ft (see
Note 9 in Figure 6-10).
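The end-zone width rule of Note 9 can be sketched as follows (illustrative helper, using this building's 148-ft least dimension and 20-ft mean roof height):

```python
# Note 9 of Figure 6-10: a is the least of 0.1*(least horizontal dimension)
# and 0.4*h, but not less than 0.04*(least horizontal dimension) or 3 ft.
# The end zone itself is 2a wide.
def end_zone_a(least_dim_ft, h_ft):
    a = min(0.1 * least_dim_ft, 0.4 * h_ft)
    return max(a, 0.04 * least_dim_ft, 3.0)

a = end_zone_a(148.0, 20.0)
print(round(a, 1), round(2 * a, 1))  # 8.0 ft and 16.0 ft
```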
The design wind pressures summarized in Table 5.9 act normal to the surface.
Table 5.9 Design Wind Pressures p on MWFRS
(GCpf)
Design Pressure, p (psf)
(GCpi) = +0.18
(GCpi) = −0.18
According to Note 8 in Figure 6-10, when the roof pressure coefficients (GCpf)
are negative in Zones 2 or 2E, they shall be applied in Zone 2/2E for a distance
from the edge of the roof equal to 50 percent of the horizontal dimension of the
building that is parallel to the direction of the MWFRS being designed or 2.5
times the eave height he at the windward wall, whichever is less. The remainder of
Zone 2/2E extending to the ridge line must use the pressure coefficients (GCpf) for
Zone 3/3E.
For this building:
Transverse direction: 0.5 × 256 = 128 ft
Longitudinal direction: 0.5 × 148 = 74 ft
2.5 he = 2.5 × 20 = 50 ft (governs in both directions)
Therefore, in the transverse direction, Zone 2/2E applies over a distance of 50 ft
from the edge of the windward roof, and Zone 3/3E applies over a distance of
128 – 50 = 78 ft in what is normally considered to be Zone 2/2E. In the
longitudinal direction, Zone 3/3E is applied over a distance of 74 – 50 = 24 ft.
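The Note 8 extent rule above reduces to a one-line minimum; a sketch using this building's dimensions:

```python
# Note 8 of Figure 6-10: negative Zone 2/2E roof coefficients apply over the
# lesser of half the building dimension parallel to the wind and 2.5 times
# the eave height he; Zone 3/3E coefficients apply beyond that distance.
def zone2_extent(parallel_dim_ft, eave_height_ft):
    return min(0.5 * parallel_dim_ft, 2.5 * eave_height_ft)

print(zone2_extent(256.0, 20.0))  # 50.0 ft (transverse; 2.5*he governs)
print(zone2_extent(148.0, 20.0))  # 50.0 ft (longitudinal; 2.5*he governs)
```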
The design pressures are to be applied on the building in accordance with the
eight load cases illustrated in Figure 6-10. As shown in the figure, each corner of
the building is considered a reference corner for wind loading in both the
transverse and longitudinal directions.
According to Note 4 in Figure 6-10, combinations of external and internal
pressures are to be evaluated to obtain the most severe loading. Thus, when both
positive and negative pressures are considered, a total of 16 separate loading
conditions must be evaluated for this building.9
Illustrated in Figures 5.5 and 5.6 are the design wind pressures for one load case
in the transverse direction and one load case in the longitudinal direction,
respectively, including positive and negative internal pressure.
[Figure content: (a) Positive Internal Pressure — design pressures ranging from 3.5 psf to 19.9 psf applied to the walls and roof; (b) Negative Internal Pressure — design pressures ranging from 1.8 psf to 14.2 psf. In both diagrams, dashed arrows represent uniformly distributed loads over surfaces 4E, 4, and 6 for the MWFRS direction being evaluated.]
Figure 5.5 Design Wind Pressures on MWFRS in Transverse Direction
(Footnote 9: In general, the number of load cases can be reduced for symmetrical buildings.)
[Figure content: (a) Positive Internal Pressure — design pressures ranging from 3.5 psf to 19.9 psf applied to the walls and roof; (b) Negative Internal Pressure — design pressures ranging from 1.8 psf to 14.2 psf. In both diagrams, dashed arrows represent uniformly distributed loads over surfaces 6, 4, and 4E for the MWFRS direction being evaluated.]
Figure 5.6 Design Wind Pressures on MWFRS in Longitudinal Direction
Torsional load cases, which are given in Figure 6-10, must be considered in
addition to the basic load cases noted above, unless one or more of the conditions
under the exception in Note 5 of the figure are satisfied. Since this building is one
story with a mean roof height h less than 30 ft, the first condition is satisfied, and
torsional load cases need not be considered. The building also satisfies the third
condition, as it is two stories or less in height and has a flexible diaphragm.
The minimum design loading of 6.1.4.1 must also be investigated (see Figure C6-1).
Part 2: Determine design wind pressures on solid precast wall panel
Step 1: Check if the low-rise provisions of 6.5.12.4.1 can be used to determine the
design wind pressures on the C&C.
The provisions of 6.5.12.4.1 may be used to determine design wind pressures on
C&C of low-rise buildings defined in 6.2 and for any regularly-shaped building with
a height less than or equal to 60 ft.
Use 6.5.12.4.1 to determine the design wind pressures on the C&C.
Step 2: Use Flowchart 5.8 to determine the design wind pressures on the precast
walls, which are C&C.
1. It is assumed in Example 5.1 (Part 1, Step 1, item 3) that the building is enclosed.
2. Determine velocity pressure qh using Flowchart 5.5.
Velocity pressure was determined in Part 1, Step 2, item 2 of this example and is
equal to 15.9 psf.
3. Determine external pressure coefficients (GCp) from Figure 6-11A for Zones 4
and 5.
Pressure coefficients for Zones 4 and 5 can be determined from Figure 6-11A
based on the effective wind area.
The effective wind area is defined as the span length multiplied by an effective
width that need not be less than one-third the span length: 20 × (20/3) =
133.3 sq ft.10
The pressure coefficients from the figure are summarized in Table 5.10.
Table 5.10 External Pressure Coefficients (GCp) for Precast Walls
(GCp)
(Footnote 10: The smallest span length corresponding to the east and west walls is used, since this results in larger pressures.)
Note 5 in Figure 6-11A states that values of (GCp) for walls are to be reduced by
10 percent when the roof angle is less than or equal to 10 degrees. Modified
values of (GCp) based on Note 5 are provided in Table 5.11.
Table 5.11 Modified External Pressure Coefficients (GCp) for Precast Walls
(GCp)
The width of the end zone a = least of 0.1 (least horizontal dimension) = 0.1 ×
148 = 14.8 ft or 0.4h = 0.4 × 20 = 8 ft (governs), which is greater than 0.04 (least
horizontal dimension) = 0.04 × 148 = 5.9 ft or 3 ft (see Note 6 in Figure 6-11A).
4. Determine internal pressure coefficients (GCpi) from Figure 6-5.
For an enclosed building, (GCpi) = +0.18, −0.18.
5. Determine design wind pressure p by Eq. 6-22 on Zones 4 and 5.
p = qh[(GCp) − (GCpi)] = 15.9[(GCp) − (±0.18)]
Calculation of design wind pressures is illustrated for Zone 4:
For positive (GCp): p = 15.9[0.72 − (−0.18)] = 14.3 psf
For negative (GCp): p = 15.9[−0.81 − (+0.18)] = −15.7 psf
These pressures act perpendicular to the face of the precast walls.
The maximum design wind pressures for positive and negative internal pressures
are summarized in Table 5.12.
Table 5.12 Design Wind Pressures p on Precast Walls
(GCp)
Design Pressure, p (psf)
In Zones 4 and 5, the computed positive and negative pressures are greater than
the minimum values prescribed in 6.1.4.2 of +10 psf and -10 psf, respectively.
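The Eq. 6-22 computation for Zone 4 above can be sketched as follows, using the modified coefficients from Table 5.11 and the internal pressure sign that governs in each case (helper name is illustrative):

```python
# Eq. 6-22: p = qh * [(GCp) - (GCpi)], with qh = 15.9 psf and the Zone 4
# coefficients already reduced 10 percent per Note 5 of Figure 6-11A.
def cc_pressure(q_h, GC_p, GC_pi):
    return q_h * (GC_p - GC_pi)

p_pos = cc_pressure(15.9, 0.72, -0.18)   # positive (GCp) with -(GCpi)
p_neg = cc_pressure(15.9, -0.81, +0.18)  # negative (GCp) with +(GCpi)
print(round(p_pos, 1), round(p_neg, 1))  # 14.3 -15.7
```

Both magnitudes exceed the 10 psf minimum of 6.1.4.2, consistent with the note above.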
Part 3: Determine design wind pressures on an open-web joist purlin
Step 1: Check if the low-rise provisions of 6.5.12.4.1 can be used to determine the
design wind pressures on the C&C.
As shown in Part 2, Step 1 of this example, 6.5.12.4.1 may be used to determine the
design wind pressures on the C&C.
Step 2: Use Flowchart 5.8 to determine the design wind pressures on the open-web
joist purlins, which are C&C.
1. It is assumed in Example 5.1 (Part 1, Step 1, item 3) that the building is enclosed.
2. Determine velocity pressure qh using Flowchart 5.5.
Velocity pressure was determined in Part 1, Step 2, item 2 of this example and is
equal to 15.9 psf.
3. Determine external pressure coefficients (GCp) from Figure 6-11B for Zones 1, 2
and 3 for gable roofs with a roof slope less than or equal to 7 degrees.
Pressure coefficients for Zones 1, 2 and 3 can be determined from Figure 6-11B
based on the effective wind area.
Effective wind area = larger of 37 × 8 = 296 sq ft or 37 × (37/3) = 456.3 sq ft
The pressure coefficients from the figure are summarized in Table 5.13.
Table 5.13 External Pressure Coefficients (GCp) for Open-web Joist Purlins
(GCp)
4. Determine internal pressure coefficients (GCpi) from Figure 6-5.
For an enclosed building, (GCpi) = +0.18, −0.18.
5. Determine design wind pressure p by Eq. 6-22 on Zones 1, 2 and 3.
p = qh[(GCp) − (GCpi)] = 15.9[(GCp) − (±0.18)]
The maximum design wind pressures for positive and negative internal pressures
are summarized in Table 5.14.
The pressures in Table 5.14 are applied normal to the open-web joist purlins and
act over the tributary area of each purlin, which is equal to 37 × 8 = 296 sq ft. If
the tributary area were greater than 700 sq ft, the purlins could have been
designed using the provisions for MWFRSs (6.5.12.1.3).
The positive pressures on Zones 1, 2 and 3 must be increased to the minimum
value of 10 psf per 6.1.4.2.
Table 5.14 Design Wind Pressures p on Open-web Joist Purlins
(GCp)
Design Pressure, p (psf)
The pressures determined by this method are the same as those determined by the
simplified method in Example 5.1. Thus, the loading diagrams in Figure 5.4 are
applicable in this example.
Example 5.3 – Warehouse Building using Provisions of Method 2,
Analytical Procedure
For the one-story warehouse in Example 5.1, determine design wind pressures on (1) the
main wind-force-resisting system in both directions, (2) a solid precast wall panel and (3)
an open-web joist purlin using the provisions of Method 2, Analytical Procedure. See
Figure 5.1 for building dimensions.
Location: St. Louis, MO
Surface Roughness: C (open terrain with scattered obstructions less than 30 ft in height)
Topography: Not situated on a hill, ridge or escarpment
Occupancy: Less than 300 people congregate in one area and the building is not used to store hazardous or toxic materials
Part 1: Determine design wind pressures on MWFRS
Step 1: Check if the provisions of 6.5 can be used to determine the design wind
pressures on the MWFRS.
The provisions of 6.5 may be used to determine design wind pressures provided the
conditions of 6.5.1 and 6.5.2 are satisfied. It is clear that these conditions are satisfied
for this regular-shaped building that does not have response characteristics that make
it subject to across-wind loading or other similar effects, and is not sited at a location
where channeling effects or buffeting in the wake of upwind obstructions need to be considered.
The provisions of Method 2, Analytical Procedure, can be used to determine the
design wind pressures on the MWFRS.
Step 2: Use Flowchart 5.7 to determine the design wind pressures on the MWFRS.
1. It is assumed in Example 5.1 (Part 1, Step 1, item 3) that the building is enclosed.
2. Determine whether the building is rigid or flexible and the corresponding gust
effect factor from Flowchart 5.6.
It was determined in Example 5.1, Step 1, item 5 that the building is rigid.
According to 6.5.8.1, gust effect factor G for rigid buildings may be taken as 0.85
or can be calculated by Eq. 6-4. For simplicity, use G = 0.85.
3. Determine velocity pressure qz for windward walls along the height of the
building and qh for leeward walls, side walls, and roof using Flowchart 5.5.
a. Determine basic wind speed V from Figure 6-1 or Figure 1609.
From either of these figures, V = 90 mph for St. Louis, MO.
b. Determine wind directionality factor K d from Table 6-4.
For the MWFRS of a building structure, K d = 0.85.
c. Determine importance factor I from Table 6-1 based on occupancy category
from IBC Table 1604.5.
From IBC Table 1604.5, the Occupancy Category is II. From Table 6-1, I = 1.0.
d. Determine exposure category.
In the design data, the surface roughness is given as C. It is assumed that
Exposures B and D are not applicable, so Exposure C applies (see 6.5.6.3).
e. Determine topographic factor K zt .
As noted in the design data, the building is not situated on a hill, ridge or
escarpment. Thus, topographic factor K zt = 1.0 (6.5.7.2).
f. Determine velocity pressure exposure coefficients Kz and Kh from
Table 6-3.
Values of K z and K h for Exposure C are summarized in Table 5.15.
Table 5.15 Velocity Pressure Exposure Coefficient K z
Height above ground level, z (ft)
g. Determine velocity pressures qz and qh by Eq. 6-15.
qz = 0.00256 Kz Kzt Kd V^2 I = 0.00256 × Kz × 1.0 × 0.85 × 90^2 × 1.0 = 17.63 Kz psf
A summary of the velocity pressures is given in Table 5.16.
Table 5.16 Velocity Pressure q z
Height above ground level, z (ft)
qz (psf)
4. Determine pressure coefficients C p for the walls and roof from Figure 6-6.
For wind in the E-W (transverse) direction:
Windward wall: Cp = 0.8 for use with qz
Leeward wall (L/B = 256/148 = 1.73): Cp = −0.35 (from linear
interpolation) for use with qh
Side wall: Cp = −0.7 for use with qh
Roof (normal to ridge with θ < 10 degrees and h/L = 20/256 = 0.08 < 0.5)11:
Cp = −0.9, −0.18 from windward edge to h = 20 ft for use with qh
Cp = −0.5, −0.18 from 20 ft to 2h = 40 ft for use with qh
Cp = −0.3, −0.18 from 40 ft to 256 ft for use with qh
For wind in the N-S (longitudinal) direction:
Windward wall: Cp = 0.8 for use with qz
Leeward wall (L/B = 148/256 = 0.58): Cp = −0.5 for use with qh
Side wall: Cp = −0.7 for use with qh
Roof (parallel to ridge with h/L = 20/148 = 0.14 < 0.5):
Cp = −0.9, −0.18 from windward edge to h = 20 ft for use with qh
Cp = −0.5, −0.18 from 20 ft to 2h = 40 ft for use with qh
Cp = −0.3, −0.18 from 40 ft to 148 ft for use with qh
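The leeward wall coefficient for L/B = 1.73 comes from linear interpolation between tabulated values; a sketch, assuming the Figure 6-6 breakpoints relevant to this example (Cp = −0.5 at L/B ≤ 1 and −0.3 at L/B = 2; values beyond L/B = 2 are not needed here):

```python
# Leeward wall Cp from Figure 6-6 by linear interpolation on L/B,
# using only the breakpoints needed for this building.
def leeward_cp(L_over_B):
    if L_over_B <= 1.0:
        return -0.5
    if L_over_B <= 2.0:  # interpolate between (1, -0.5) and (2, -0.3)
        return -0.5 + (L_over_B - 1.0) * 0.2
    return -0.3  # further breakpoints in the figure are omitted here

print(round(leeward_cp(256.0 / 148.0), 2))  # -0.35 (E-W direction)
print(leeward_cp(148.0 / 256.0))            # -0.5 (N-S direction)
```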
5. Determine qi for the walls and roof using Flowchart 5.5.
In accordance with 6.5.12.2.1, qi = qh = 15.9 psf for windward walls, side walls,
leeward walls and roofs of enclosed buildings.
6. Determine internal pressure coefficients (GCpi) from Figure 6-5.
For an enclosed building, (GCpi) = +0.18, −0.18.
7. Determine design wind pressures pz and ph by Eq. 6-17.
Windward walls:
pz = qz G Cp − qh(GCpi)
= (0.85 × 0.8 × qz) − 15.9(±0.18)
= (0.68 qz ∓ 2.9) psf (external ± internal pressure)
Leeward wall, side walls and roof:
ph = qh G Cp − qh(GCpi)
= (15.9 × 0.85 × Cp) − 15.9(±0.18)
= (13.5 Cp ∓ 2.9) psf (external ± internal pressure)
(Footnote 11: The smaller uplift pressures on the roof due to Cp = −0.18 may govern the design when combined with roof live load or snow loads. This pressure is not shown in this example, but in general must be considered.)
A summary of the maximum design wind pressures in the E-W and N-S
directions is given in Tables 5.17 and 5.18, respectively.
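The windward wall expression derived above can be checked numerically at the mean roof height, where qz = qh = 15.9 psf (illustrative helper; the 2.9-psf term is the internal pressure qh(GCpi) = 15.9 × 0.18 rounded as in the text):

```python
# Windward wall pressure pz = 0.68*qz -/+ 2.9 psf, with the internal
# pressure term already rounded to 2.9 psf as in the derivation above.
def windward_pressure(q_z, internal_psf):
    """internal_psf is +2.9 psf for positive internal pressure and
    -2.9 psf for negative internal pressure."""
    return 0.68 * q_z - internal_psf

print(round(windward_pressure(15.9, +2.9), 1))  # 7.9 psf
print(round(windward_pressure(15.9, -2.9), 1))  # 13.7 psf
```

These are the windward wall values at the eave shown in Figures 5.7 and 5.8.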
Table 5.17 Design Wind Pressures p in E-W (Transverse) Direction
Height above ground level, z (ft)
q (psf)
q GCp (psf)
qh(GCpi)
± 2.9
Side walls
Net pressure p (psf)
+(GCpi)
−(GCpi)
± 2.9
± 2.9
± 2.9
± 2.9
± 2.9
± 2.9
* from windward edge to 20 ft
from 20 ft to 40 ft
from 40 ft to 256 ft
Table 5.18 Design Wind Pressures p in N-S (Longitudinal) Direction
Height above ground level, z (ft)
q (psf)
q GCp (psf)
qh(GCpi)
± 2.9
Side walls
* from windward edge to 20 ft
from 20 ft to 40 ft
from 40 ft to 148 ft
Net pressure p (psf)
+(GCpi)
−(GCpi)
± 2.9
± 2.9
± 2.9
± 2.9
± 2.9
± 2.9
Illustrated in Figures 5.7 and 5.8 are the net design wind pressures in the E-W
(transverse) and N-S (longitudinal) directions, respectively, for positive and
negative internal pressure.
The MWFRS of buildings whose wind loads have been determined by 6.5.12.2.1
must be designed for the wind load cases defined in Figure 6-9 (6.5.12.3).
In Case 1, the full design wind pressures act on the projected area perpendicular
to each principal axis of the structure. These pressures are assumed to act
separately along each principal axis. The wind pressures on the windward and
leeward walls depicted in Figures 5.7 and 5.8 fall under Case 1.
According to the exception in 6.5.12.3, one-story buildings with h ≤ 30 ft need
only be designed for Load Case 1 and Load Case 3.
In Case 3, 75 percent of the wind pressures on the windward and leeward walls of
Case 1, which are shown in Figures 5.7 and 5.8, act simultaneously on the
building (see Figure 6-9). This load case, which needs to be considered in
addition to the load cases in Figures 5.7 and 5.8, accounts for the effects due to
wind along the diagonal of the building.
Finally, the minimum design wind loading prescribed in 6.1.4.1 must be
considered as a load case in addition to those load cases described above (see
Figure C6-1).
Part 2: Determine design wind pressures on solid precast wall panel
The design wind pressures on the precast walls panels are the same as those determined
in Part 2 of Example 5.2.
Part 3: Determine design wind pressures on an open-web joist purlin
The design wind pressures on the open-web joist purlins are the same as those determined
in Part 3 of Example 5.2.
[Figure content: (a) Positive Internal Pressure — design pressures ranging from 7.0 psf to 15.1 psf; (b) Negative Internal Pressure — design pressures ranging from 1.2 psf to 13.7 psf. In both diagrams, dashed arrows represent uniformly distributed loads over the leeward and side surfaces for the MWFRS direction being evaluated.]
Figure 5.7 Design Wind Pressures on MWFRS in E-W (Transverse) Direction
[Figure content: (a) Positive Internal Pressure — design pressures ranging from 7.0 psf to 15.1 psf; (b) Negative Internal Pressure — design pressures ranging from 1.2 psf to 13.7 psf. In both diagrams, dashed arrows represent uniformly distributed loads over the side and leeward surfaces for the MWFRS direction being evaluated.]
Figure 5.8 Design Wind Pressures on MWFRS in N-S (Longitudinal) Direction
Example 5.4 – Warehouse Building using Alternate All-heights Method
For the one-story warehouse in Example 5.1, determine design wind pressures on (1) the
main wind-force-resisting system in both directions, (2) a solid precast wall panel and (3)
an open-web joist purlin using the Alternate All-heights Method of IBC 1609.6. See
Figure 5.1 for building dimensions.
Location: St. Louis, MO
Surface Roughness: C (open terrain with scattered obstructions less than 30 ft in height)
Topography: Not situated on a hill, ridge or escarpment
Occupancy: Less than 300 people congregate in one area and the building is not used to store hazardous or toxic materials
Part 1: Determine design wind pressures on MWFRS
Step 1: Check if the provisions of IBC 1609.6 can be used to determine the design
wind pressures on this building.
The provisions of IBC 1609.6 may be used to determine design wind pressures on
this regularly-shaped building provided the conditions of IBC 1609.6.1 are satisfied:
1. The height of the building is 20 ft, which is less than 75 ft, and the height-to-least-width ratio = 20/148 = 0.14 < 4. Also, it was shown in Example 5.1 (Part 1,
Step 1, item 5) that the fundamental frequency n1 > 1 Hz in both directions.
2. As was discussed in Example 5.1 (Part 1, Step 1, item 6), this building is not
sensitive to dynamic effects. O.K.
3. This building is not located on a site where channeling effects or buffeting in the
wake of upwind obstructions need to be considered. O.K.
4. As was shown in Example 5.1 (Part 1, Step 1, item 1), the building meets the
requirements of a simple diaphragm building as defined in 6.2. O.K.
5. The fifth condition is not applicable in this example.
The provisions of the Alternate All-heights Method of IBC 1609.6 can be used to
determine the design wind pressures on the MWFRS.12
(Footnote 12: This method can also be used to determine design wind pressures on the C&C (IBC 1609.6).)
Step 2: Use Flowchart 5.10 to determine the design wind pressures on the MWFRS.
1. Determine basic wind speed V from Figure 6-1 or Figure 1609.
From either of these figures, V = 90 mph for St. Louis, MO.
2. Determine the wind stagnation pressure qs from IBC Table 1609.6.2(1).
For V = 90 mph, qs = 20.7 psf.
3. Determine importance factor I from Table 6-1 based on occupancy category from
IBC Table 1604.5.
From IBC Table 1604.5, the Occupancy Category is II. From Table 6-1, I = 1.0.
4. Determine exposure category.
In the design data, the surface roughness is given as C. It is assumed that
Exposures B and D are not applicable, so Exposure C applies (see 6.5.6.3).
5. Determine topographic factor K zt .
As noted in the design data, the building is not situated on a hill, ridge or
escarpment. Thus, topographic factor K zt = 1.0 (6.5.7.2).
6. Determine velocity pressure exposure coefficients K z from Table 6-3.
Values of K z for Exposure C are summarized in Table 5.19.
Table 5.19 Velocity Pressure Exposure Coefficient K z
Height above ground level, z (ft)
7. Determine net pressure coefficients Cnet for the walls and roof from IBC
Table 1609.6.2(2) assuming the building is enclosed.
For wind in the E-W (transverse) direction:
Windward wall:
Cnet = 0.43 for positive internal pressure
Cnet = 0.73 for negative internal pressure
Leeward wall:
Cnet = −0.51 for positive internal pressure
Cnet = −0.21 for negative internal pressure
Side walls:
Cnet = −0.66 for positive internal pressure
Cnet = −0.35 for negative internal pressure
Windward roof (wind perpendicular to ridge with roof slope < 2:12):
Cnet = −1.09, −0.28 for positive internal pressure
Cnet = −0.79, 0.02 for negative internal pressure
Leeward roof (wind perpendicular to ridge):
Cnet = −0.66 for positive internal pressure
Cnet = −0.35 for negative internal pressure
For wind in the N-S (longitudinal) direction:
Windward wall: Cnet = 0.43 for positive internal pressure
Cnet = 0.73 for negative internal pressure
Leeward wall: Cnet = −0.51 for positive internal pressure
Cnet = −0.21 for negative internal pressure
Side walls:
Cnet = −0.66 for positive internal pressure
Cnet = −0.35 for negative internal pressure
Roof (wind parallel to ridge):
Cnet = −1.09 for positive internal pressure
Cnet = −0.79 for negative internal pressure
8. Determine net design wind pressures pnet by IBC Eq. 16-34.
pnet = qs Kz Cnet I Kzt = 20.7 Kz Cnet
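IBC Eq. 16-34 as evaluated in this example can be sketched as follows (illustrative helper; qs = 20.7 psf is the tabulated stagnation pressure for V = 90 mph, and Kz = 0.90 at the 20-ft mean roof height):

```python
# IBC Eq. 16-34: pnet = qs * Kz * Cnet * I * Kzt. The stagnation pressure
# qs is tabulated in IBC Table 1609.6.2(1) and equals 0.00256 * V^2.
def alt_all_heights_pressure(K_z, C_net, q_s=20.7, I=1.0, K_zt=1.0):
    return q_s * K_z * C_net * I * K_zt

# Windward wall at the mean roof height (Kz = 0.90):
print(round(alt_all_heights_pressure(0.90, 0.43), 1))  # 8.0 psf (+ internal)
print(round(alt_all_heights_pressure(0.90, 0.73), 1))  # 13.6 psf (- internal)
print(round(0.00256 * 90 ** 2, 1))                     # 20.7, the qs check
```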
A summary of the net design wind pressures in the E-W and N-S directions is
given in Tables 5.20 and 5.21, respectively. Illustrated in Figures 5.9 and 5.10 are
the net design wind pressures in the E-W (transverse) and N-S (longitudinal)
directions, respectively, for positive and negative internal pressure.13
The MWFRS of buildings whose wind loads have been determined by
IBC 1609.6 must be designed for the wind load cases defined in ASCE/SEI
Figure 6-9 (IBC 1609.6.4.1). In Case 1, the full design wind pressures act on the
projected area perpendicular to each principal axis of the structure. These
pressures are assumed to act separately along each principal axis. The wind
pressures on the windward and leeward walls depicted in Figures 5.9 and 5.10 fall
under Case 1.
(Footnote 13: For wind in the E-W direction, only the Condition 1 wind pressures on the roof are illustrated in Figures 5.9 and 5.10. Although these pressures are not shown in the figures, they must be considered in the overall design.)
Table 5.20 Net Design Wind Pressures pnet in E-W (Transverse) Direction
Windward wall
Leeward wall
Side walls
level, z (ft)
+ Internal
- Internal
Net design pressure pnet
+ Internal
- Internal
Table 5.21 Net Design Wind Pressures pnet in N-S (Longitudinal) Direction
Windward wall
Leeward wall
Side walls
level, z (ft)
+ Internal
- Internal
Net design pressure pnet
+ Internal
- Internal
According to IBC 1609.6.4.2, windward wall pressures are based on height z, and
leeward walls, side walls and roof pressures are based on mean roof height h.
IBC 1609.6.4.1 requires consideration of torsional effects as indicated in Figure 6-9. According to the exception in 6.5.12.3, one-story buildings with h ≤ 30 ft need
only be designed for Load Case 1 and Load Case 3.
In Case 3, 75 percent of the wind pressures on the windward and leeward walls of
Case 1, which are shown in Figures 5.9 and 5.10, act simultaneously on the
building (see ASCE/SEI Figure 6-9). This load case, which needs to be
considered in addition to the load cases in Figures 5.9 and 5.10, accounts for the
effects due to wind along the diagonal of the building.
[Figure content: (a) Positive Internal Pressure — design pressures ranging from 7.6 psf to 20.3 psf; (b) Negative Internal Pressure — design pressures ranging from 3.9 psf to 14.7 psf. In both diagrams, dashed arrows represent uniformly distributed loads over the leeward and side surfaces for the MWFRS direction being evaluated.]
Figure 5.9 Design Wind Pressures on MWFRS in E-W (Transverse) Direction
[Figure content: (a) Positive Internal Pressure — design pressures ranging from 7.6 psf to 20.3 psf; (b) Negative Internal Pressure — design pressures ranging from 3.9 psf to 14.7 psf. In both diagrams, dashed arrows represent uniformly distributed loads over the side and leeward surfaces for the MWFRS direction being evaluated.]
Figure 5.10 Design Wind Pressures on MWFRS in N-S (Longitudinal) Direction
Finally, the minimum design wind loading prescribed in IBC 1609.6.3 must be
considered as a load case in addition to those load cases described above. The minimum
10 psf wind pressure acts on the area of the building projected on a plane normal to the
direction of wind (see ASCE/SEI 6.1.4 and ASCE/SEI Figure C6-1).
Part 2: Determine design wind pressures on solid precast wall panel
Step 1: Check if the provisions of IBC 1609.6 can be used to determine the design
wind pressures on the C&C.
It was shown in Part 1 of this example that the provisions of IBC 1609.6 can be used
to determine the design wind pressures on the C&C.
Step 2: Use Flowchart 5.10 to determine the design wind pressures on the C&C.
1. through 6. These items are the same as those shown in Part 1 of this example.
7. Determine net pressure coefficients Cnet for Zones 4 and 5 in ASCE/SEI
Figure 6-11A from IBC Table 1609.6.2(2).
The effective wind area is defined as the span length multiplied by an effective
width that need not be less than one-third the span length: 20 × (20/3) =
133.3 sq ft.14
The net pressure coefficients from IBC Table 1609.6.2(2) for C&C (walls) not in
areas of discontinuity (item 4, h ≤ 60 ft, Zone 4) and in areas of discontinuity
(item 5, h ≤ 60 ft, Zone 5) are summarized in Table 5.22.15
Table 5.22 Net Pressure Coefficients Cnet for Precast Walls
The width of the end zone a = least of 0.1 (least horizontal dimension) = 0.1 ×
148 = 14.8 ft or 0.4h = 0.4 × 20 = 8 ft (governs), which is greater than 0.04 (least
horizontal dimension) = 0.04 × 148 = 5.9 ft or 3 ft (see Note 6 in Figure 6-11A).
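As an arithmetic check, the end-zone rule of Note 6 can be sketched in a few lines (an illustrative Python sketch; the function name is ours, not part of the code text):

```python
# Width of the end zone "a" per Note 6 of ASCE/SEI Figure 6-11A:
# a = lesser of 0.1*(least horizontal dimension) and 0.4*h,
# but not less than the greater of 0.04*(least horizontal dimension) and 3 ft.
def end_zone_width(least_dim_ft, h_ft):
    a = min(0.1 * least_dim_ft, 0.4 * h_ft)
    return max(a, 0.04 * least_dim_ft, 3.0)

# Building in this example: least horizontal dimension = 148 ft, h = 20 ft
print(end_zone_width(148.0, 20.0))  # 8.0 (0.4h governs)
```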
The smallest span length, corresponding to the east and west walls, is used, since this results in larger design wind pressures.
Linear interpolation was used to determine the values of Cnet in Table 5.22 [see note a in IBC Table 1609.6.2(2)].
8. Determine net design wind pressures pnet by IBC Eq. 16-34.
pnet = qs Kz Cnet I Kzt = (20.7 × 0.90)Cnet = 18.6Cnet
where Kz = Kh = 0.90 from Table 5.19.
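As an illustration, IBC Eq. 16-34 can be evaluated for any tabulated coefficient (the Cnet values below are placeholders for demonstration only, since the coefficients of Table 5.22 come from IBC Table 1609.6.2(2)):

```python
# Net design wind pressure per IBC Eq. 16-34: pnet = qs*Kz*Cnet*I*Kzt.
# With qs = 20.7 psf, Kz = Kh = 0.90, I = 1.0 and Kzt = 1.0 (this example),
# pnet = 18.6*Cnet.
def p_net(c_net, qs=20.7, kz=0.90, importance=1.0, kzt=1.0):
    return qs * kz * c_net * importance * kzt

# Hypothetical Cnet values, for illustration only:
print(round(p_net(1.0), 1))   # 18.6
print(round(p_net(-1.1), 1))  # -20.5
```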
A summary of the design wind pressures on the precast walls is given in
Table 5.23. These pressures act perpendicular to the face of the precast walls.
Table 5.23 Design Wind Pressures on Precast Walls
Design Pressure,
pnet (psf)
In Zones 4 and 5, the computed positive and negative pressures are greater than
the minimum values prescribed in IBC 1609.6.3 of +10 psf and −10 psf,
respectively.
Part 3: Determine design wind pressures on an open-web joist purlin
Step 1: Check if the provisions of IBC 1609.6 can be used to determine the design
wind pressures on the C&C.
It was shown in Part 1 of this example that the provisions of IBC 1609.6 can be used
to determine the design wind pressures on the C&C.
Step 2: Use Flowchart 5.10 to determine the design wind pressures on the C&C.
1. through 6. These items are the same as those shown in Part 1 of this example.
7. Determine net pressure coefficients Cnet for Zones 1, 2 and 3 in ASCE/SEI
Figure 6-11C from IBC Table 1609.6.2(2).
Effective wind area = larger of 37 × 8 = 296 sq ft or 37 × (37/3) = 456.3 sq ft
The net pressure coefficients from IBC Table 1609.6.2(2) for C&C (roofs) not in
areas of discontinuity (item 2, gable roof with flat < slope < 6:12, Zone 1) and in
areas of discontinuity (item 3, gable roof with flat < slope < 6:12, Zones 2 and 3)
are summarized in Table 5.24.
Table 5.24 Net Pressure Coefficients Cnet for Open-web Joist Purlins
8. Determine net design wind pressures pnet by IBC Eq. 16-34.
pnet = qs Kz Cnet I Kzt = (20.7 × 0.90)Cnet = 18.6Cnet
where Kz = Kh = 0.90 from Table 5.19.
A summary of the design wind pressures on the open-web joist purlins is given in
Table 5.25.
Table 5.25 Design Wind Pressures on Open-web Joist Purlins
Design Pressure,
pnet (psf)
The pressures in Table 5.25 are applied normal to the open-web joist purlins and
act over the tributary area of each purlin, which is equal to 37 × 8 = 296 sq ft. If
the tributary area were greater than 700 sq ft, the purlins could have been
designed using the provisions for MWFRSs (6.5.12.1.3).
The positive pressures on Zones 1, 2 and 3 must be increased to the minimum
value of 10 psf in accordance with IBC 1609.6.3.
Example 5.5 – Residential Building using Method 2, Analytical Procedure
For the three-story residential building illustrated in Figure 5.11, determine design wind
pressures on (1) the main wind-force-resisting system in both directions, (2) a typical
wall stud in the third story, (3) a typical roof truss and (4) a typical roof sheathing panel
using Method 2, Analytical Procedure. Note that door and window openings are not
shown in the figure.
[Figure content: roof plan and elevations. Roof construction: 3/4″ plywood sheathing (4′ × 8′ sheets) on wood trusses @ 2′-0″ (typ.) with 2′-9″ overhangs (typ.); wall construction: 2 × 6 studs @ 16″ (typ.) with 1/2″ plywood sheathing (4′ × 8′ sheets); roof slopes θ = 44.4° and θ = 23°; plan dimensions 32′-0″ and 12′-0″; elevations marked for roof ridge, eave, third floor, second floor and ground floor; south, east and west elevations shown.]
Figure 5.11 Roof Plan and Elevations of Three-story Residential Building
Location: Sacramento, CA
Surface Roughness: B (suburban area with numerous closely spaced obstructions having the size of single-family dwellings and larger)
Topography: Not situated on a hill, ridge or escarpment
Occupancy: Residential building where less than 300 people congregate in one area
Part 1: Determine design wind pressures on MWFRS
Step 1: Check if the provisions of 6.5 can be used to determine the design wind
pressures on the MWFRS.
The provisions of 6.5 may be used to determine design wind pressures provided the
conditions of 6.5.1 and 6.5.2 are satisfied. It is clear that these conditions are satisfied
for this residential building that does not have response characteristics that make it
subject to across-wind loading or other similar effects, and that is not sited at a
location where channeling effects or buffeting in the wake of upwind obstructions
need to be considered.
The provisions of Method 2, Analytical Procedure, can be used to determine the
design wind pressures on the MWFRS.16
Step 2: Use Flowchart 5.7 to determine the design wind pressures on the MWFRS.
1. It is assumed in this example that the building is enclosed.
2. Determine whether the building is rigid or flexible and the corresponding gust
effect factor from Flowchart 5.6.
Assume that the fundamental frequency of the building n1 has been determined to
be greater than 1 Hz. Thus, the building is rigid.
According to 6.5.8.1, gust effect factor G for rigid buildings may be taken as 0.85
or can be calculated by Eq. 6-4. For simplicity, use G = 0.85.
3. Determine velocity pressure qz for windward walls along the height of the
building and qh for leeward walls, side walls and roof using Flowchart 5.5.
Even though the building is less than 60 ft in height, it is not recommended to use the low-rise provisions
of 6.5.12.2, since L-, T-, and U-shaped buildings are considered to be outside of the scope of that method.
a. Determine basic wind speed V from Figure 6-1 or Figure 1609.
From either of these figures, V = 85 mph for Sacramento, CA.
b. Determine wind directionality factor K d from Table 6-4.
For the MWFRS of a building structure, K d = 0.85.
c. Determine importance factor I from Table 6-1 based on occupancy category
from IBC Table 1604.5.
From IBC Table 1604.5, the Occupancy Category is II for a residential
building. From Table 6-1, I = 1.0.
d. Determine exposure category.
In the design data, the surface roughness is given as B. Assume that
Exposure B is applicable in all directions (6.5.6.3).
e. Determine topographic factor K zt .
As noted in the design data, the building is not situated on a hill, ridge or
escarpment. Thus, topographic factor Kzt = 1.0 (6.5.7.2).
f. Determine velocity pressure exposure coefficients Kz and Kh from
Table 6-3.
According to Note 1 in Table 6-3, values of Kz and Kh under Case 2 for
Exposure B must be used for MWFRSs in buildings that are not designed
using Figure 6-10 for low-rise buildings. Values of Kz and Kh are summarized
in Table 5.26.
Mean roof height h = (44 + 32)/2 = 38 ft
Table 5.26 Velocity Pressure Exposure Coefficient K z
Height above ground
level, z (ft)
g. Determine velocity pressure q z and qh by Eq. 6-15.
qz = 0.00256 Kz Kzt Kd V² I = 0.00256 × Kz × 1.0 × 0.85 × 85² × 1.0 = 15.72Kz psf
A summary of the velocity pressures is given in Table 5.27.
Table 5.27 Velocity Pressure q z
Height above ground
level, z (ft)
q z (psf)
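The velocity pressure computation of Eq. 6-15 can be sketched as follows (an illustrative sketch; the Kz values to use at each height come from Table 5.26):

```python
# Velocity pressure per ASCE/SEI Eq. 6-15: qz = 0.00256*Kz*Kzt*Kd*V^2*I.
# With Kzt = 1.0, Kd = 0.85, V = 85 mph and I = 1.0 (this example),
# qz = 15.72*Kz psf.
def q_z(kz, v_mph=85.0, kzt=1.0, kd=0.85, importance=1.0):
    return 0.00256 * kz * kzt * kd * v_mph**2 * importance

print(round(q_z(1.0), 2))  # 15.72, i.e., psf per unit Kz
```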
4. Determine pressure coefficients C p for the walls and roof from Figure 6-6.
Since the building is not symmetric, all four wind directions normal to the walls
must be considered.
Figure 5.12 provides identification marks for each surface of the building.
Tables 5.28 through 5.31 contain the external pressure coefficients for wind in all
four directions.
Figure 5.12 Identification Marks for Building Surfaces
Table 5.28 External Pressure Coefficients C p for Wind from West to East
Windward wall
2, 4, 6
Side wall
3, 5
Leeward wall
8, 9
10, 11
Windward roof
Leeward roof
Edge to 38′
-0.90, -0.18
38′ to 76′
-0.50, -0.18
76′ to end
-0.30, -0.18
* See 6.5.11.4.1
** Obtained by linear interpolation using L/B = 133.5/72 = 1.9
Normal to ridge with θ = 44.4 degrees and h/L = 38/133.5 = 0.3
The smaller uplift pressures on the roof due to Cp = −0.18 may govern the design when combined
with roof live load or snow loads. This pressure is not shown in this example, but in general must be
considered.
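The leeward-wall interpolation noted in Table 5.28 can be sketched as follows, assuming the tabulated breakpoints of ASCE/SEI Figure 6-6 (Cp = −0.5 at L/B ≤ 1, −0.3 at L/B = 2, −0.2 at L/B ≥ 4):

```python
# Linear interpolation of the leeward-wall Cp between the tabulated
# L/B breakpoints of ASCE/SEI Figure 6-6.
def leeward_cp(l_over_b):
    pts = [(1.0, -0.5), (2.0, -0.3), (4.0, -0.2)]
    if l_over_b <= pts[0][0]:
        return pts[0][1]
    if l_over_b >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= l_over_b <= x1:
            return y0 + (l_over_b - x0) / (x1 - x0) * (y1 - y0)

print(round(leeward_cp(1.9), 2))  # -0.32 for L/B = 133.5/72 ≈ 1.9
```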
Table 5.29 External Pressure Coefficients C p for Wind from East to West
Leeward wall
2, 4, 6
Side wall
3, 5
Windward wall
3, 5
Leeward roof
8, 9
Windward roof
10, 11
Edge to 38′
-0.90, -0.18
38′ to 76′
-0.50, -0.18
76′ to end
-0.30, -0.18
* Obtained by linear interpolation using L/B = 133.5/72 = 1.9
** See 6.5.11.4.1
Normal to ridge with θ = 44.4 degrees and h/L = 38/133.5 = 0.3
The smaller uplift pressures on the roof due to Cp = −0.18 may govern the design when combined
with roof live load or snow loads. This pressure is not shown in this example, but in general must be
considered.
Table 5.30 External Pressure Coefficients C p for Wind from North to South
1, 3, 5
Side wall
Windward wall
4, 6
Leeward wall
7, 8
Edge to 38′
-0.90, -0.18
38′ to 76′
-0.50, -0.18
76′ to end
-0.30, -0.18
9, 11
Leeward roof
Windward roof
0.11, -0.35‡
* See 6.5.11.4.1
**L/B = 72/133.5 = 0.54
For surface 8, Cp = -0.90 for the entire length. The smaller uplift pressures on the roof due to
Cp = -0.18 may govern the design when combined with roof live load or snow loads. This pressure is
not shown in this example, but in general must be considered.
Normal to ridge with θ = 23 degrees and h/L = 38/72 = 0.53. On windward roof, values were
obtained by linear interpolation.
Table 5.31 External Pressure Coefficients C p for Wind from South to North
1, 3, 5
Side wall
Leeward wall
4, 6
Windward wall
4, 6
7, 9
8, 10
Edge to 38′
-0.90, -0.18
38′ to 76′
-0.50, -0.18
76′ to end
-0.30, -0.18
Leeward roof
Windward roof
0.11, -0.35
*L/B = 72/133.5 = 0.54
** See 6.5.11.4.1
For surface 9, Cp = -0.90 for the entire length. The smaller uplift pressures on the roof due to
Cp = -0.18 may govern the design when combined with roof live load or snow loads. This pressure is
not shown in this example, but in general must be considered.
Normal to ridge with θ = 23 degrees and h/L = 38/72 = 0.53. Values were obtained by linear interpolation.
5. Determine qi for the walls and roof using Flowchart 5.5.
In accordance with 6.5.12.2.1, qi = qh = 11.8 psf for windward walls, side walls,
leeward walls and roofs of enclosed buildings.
6. Determine internal pressure coefficients (GC pi ) from Figure 6-5.
For an enclosed building, (GC pi ) = +0.18, − 0.18.
7. Determine design wind pressures p z and ph by Eq. 6-17.
Windward walls:
pz = qzGCp − qh(GCpi)
= (0.85 × 0.8 × qz) − 11.8(±0.18)
= (0.68qz ∓ 2.1) psf (external ± internal pressure)
Leeward wall, side walls and roof:
ph = qhGCp − qh(GCpi)
= (11.8 × 0.85 × Cp) − 11.8(±0.18)
= (10.0Cp ∓ 2.1) psf (external ± internal pressure)
For example, the external pressure on the windward wall at the mean roof height is
ph = qhGCp = 11.8 × 0.85 × 0.8 = 8.0 psf
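The two pressure equations above can be evaluated for any surface; a minimal sketch (function names are ours):

```python
# Design wind pressures per ASCE/SEI Eq. 6-17 for this example:
# G = 0.85, qh = 11.8 psf, GCpi = +/-0.18, windward Cp = 0.8.
G, QH, GCPI = 0.85, 11.8, 0.18

def p_windward(qz):
    ext = G * 0.8 * qz
    return ext - QH * GCPI, ext + QH * GCPI  # (+GCpi case, -GCpi case)

def p_other(cp):
    ext = QH * G * cp
    return ext - QH * GCPI, ext + QH * GCPI

# Windward wall at the mean roof height (qz = qh = 11.8 psf):
print(tuple(round(p, 1) for p in p_windward(11.8)))  # (5.9, 10.1)
```

The 10.1 psf value (external 8.0 psf plus 2.1 psf internal) is the windward pressure used later in the Case 2 torsional load calculations.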
A summary of the maximum design wind pressures in all four wind directions is
given in Tables 5.32 through 5.35.
The MWFRS of buildings whose wind loads have been determined by 6.5.12.2.1
must be designed for the wind load cases defined in Figure 6-9 (6.5.12.3). Since
the building in this example is not symmetrical, all four wind directions must be
considered when combining wind loads according to Figure 6-9.
In Case 1, the full design wind pressures act on the projected area perpendicular
to each principal axis of the structure. These pressures are assumed to act
separately along each principal axis. The wind pressures on the windward and
leeward walls given in Tables 5.32 through 5.35 fall under Case 1.
Table 5.32 Design Wind Pressures p for Wind from West to East
Height above
ground level,
z (ft)
q (psf)
2, 4, 6
3, 5
8, 9
10, 11
qGC p (psf)
qh(GCpi) = ±2.1 psf (all heights and surfaces)
Net pressure p (psf)
+ (GC pi )
− (GC pi )
* from windward edge to 38 ft
from 38 ft to 76 ft
from 76 ft to end
Table 5.33 Design Wind Pressures p for Wind from East to West
Height above
ground level,
z (ft)
q (psf)
2, 4, 6
3, 5
8, 9
10, 11
* from windward edge to 38 ft
from 38 ft to 76 ft
from 76 ft to end
qGC p (psf)
qh(GCpi) = ±2.1 psf (all heights and surfaces)
Net pressure p (psf)
+ (GC pi )
− (GC pi )
Table 5.34 Design Wind Pressures p for Wind from North to South
Height above
ground level,
z (ft)
q (psf)
9, 11
1, 3, 5
4, 6
7, 8
qGC p (psf)
qh(GCpi) = ±2.1 psf (all heights and surfaces)
Net pressure p (psf)
+ (GC pi )
− (GC pi )
* from windward edge to 38 ft
from 38 ft to 76 ft
from 76 ft to end
Table 5.35 Design Wind Pressures p for Wind from South to North
Height above
ground level,
z (ft)
q (psf)
8, 10
1, 3, 5
4, 6
7, 9
* from windward edge to 38 ft
from 38 ft to 76 ft
from 76 ft to end
qGC p (psf)
qh(GCpi) = ±2.1 psf (all heights and surfaces)
Net pressure p (psf)
+ (GC pi )
− (GC pi )
In Case 2, 75 percent of the design wind pressures on the windward and leeward
walls are applied on the projected area perpendicular to each principal axis of the
building along with a torsional moment. The wind pressures and torsional
moment are applied separately for each principal axis.
The wind pressures and torsional moment at the mean roof height for Case 2 are
as follows:17
For west-to-east wind: 0.75 PWX = 0.75 × 10.1 = 7.6 psf (surface 1)
0.75PLX = 0.75 × 1.1 = 0.8 psf (surfaces 3 and 5)
M T = 0.75( PWX + PLX ) B X e X
= 0.75(10.1 + 1.1) × 72 × (±0.15 × 72)
= ±6,532 ft-lb/ft
For north-to-south wind: 0.75 PWY = 0.75 × 10.1 = 7.6 psf (surface 2)
0.75 PLY = 0.75 × 2.9 = 2.2 psf (surfaces 4 and 6)
M T = 0.75( PWY + PLY ) BY eY
= 0.75(10.1 + 2.9) × 133.5 × (±0.15 × 133.5)
= ±26,065 ft-lb/ft
Similar calculations can be made for east-to-west wind and for south-to-north
wind. All four of these load combinations must be considered for Case 2, since
the building is not symmetrical. Similar calculations can also be made at other
elevations below the roof.
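The Case 2 arithmetic above can be sketched as follows (a minimal illustration; the function name is ours):

```python
# Case 2 of ASCE/SEI Figure 6-9: 75 percent of the windward and leeward
# wall pressures applied together with a torsional moment at an
# eccentricity of e = 0.15*B.
def case2(p_windward, p_leeward, b_ft):
    pw = 0.75 * p_windward
    pl = 0.75 * p_leeward
    mt = 0.75 * (p_windward + p_leeward) * b_ft * 0.15 * b_ft
    return pw, pl, mt  # psf, psf, ft-lb per ft of height

pw, pl, mt = case2(10.1, 1.1, 72.0)           # west-to-east wind
print(round(pw, 1), round(pl, 1), round(mt))  # 7.6 0.8 6532
```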
In Case 3, 75 percent of the wind pressures of Case 1 are applied to the building
simultaneously. This accounts for wind along the diagonal of the building. Like in
Case 2, four load combinations must be considered for Case 3.
In Case 4, 75 percent of the wind pressures and torsional moments defined in
Case 2 act simultaneously on the building. As with all of the other cases, four load
combinations must be considered for this load case as well.
Finally, the minimum design wind loading prescribed in 6.1.4.1 must be
considered as a load case in addition to those load cases described above.
Part 2: Determine design wind pressures on wall stud
Use Flowchart 5.8 to determine the design wind pressures on the wall studs in the third
story, which are C&C.
Since the internal pressures cancel out in the horizontal direction, it makes no difference whether the net
pressures based on positive or negative internal pressure are used in the load cases for the design of the
MWFRS. In this example, the net pressures based on negative internal pressure are used.
1. It is assumed that the building is enclosed.
2. Determine velocity pressure qh using Flowchart 5.5.
Velocity pressure was determined in Part 1, Step 2, item 3 of this example and is
equal to 11.8 psf.18
3. Determine external pressure coefficients (GC p ) from Figure 6-11A for Zones 4
and 5.
Pressure coefficients for Zones 4 and 5 can be determined from Figure 6-11A
based on the effective wind area.
The effective wind area is the larger of the tributary area of a wall stud and the
span length multiplied by an effective width that need not be less than one-third
the span length.
Effective wind area = larger of 10 × (16/12) = 13.3 sq ft or 10 × (10/3) =
33.3 sq ft (governs).
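The effective-wind-area rule can be sketched as follows (an illustrative helper, not part of the referenced standard's text):

```python
# Effective wind area for a C&C element: the larger of the tributary
# area and the span length times an effective width that need not be
# less than one-third the span length.
def effective_wind_area(span_ft, trib_width_ft):
    return max(span_ft * trib_width_ft, span_ft * span_ft / 3.0)

# Wall stud: 10-ft span at 16 in. on center
print(round(effective_wind_area(10.0, 16.0 / 12.0), 1))  # 33.3
```

The same helper reproduces the open-web joist value in Part 3 (37-ft span at 8 ft on center gives 456.3 sq ft).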
The pressure coefficients from the figure are summarized in Table 5.36.
Table 5.36 External Pressure Coefficients (GC p ) for Wall Studs
(GC p )
The width of the end zone a = least of 0.1 (least horizontal dimension) = 0.1 × 72
= 7.2 ft (governs) or 0.4h = 0.4 × 38 = 15.2 ft. This value of a is greater than 0.04
(least horizontal dimension) = 0.04 × 72 = 2.9 ft or 3 ft (see Note 6 in Figure 6-11A).
4. Determine internal pressure coefficients (GC pi ) from Figure 6-5.
For an enclosed building, (GC pi ) = +0.18, − 0.18.
In accordance with Note 1 of Table 6-3, values of Kh under Case 1 are to be used for C&C. At a mean
roof height of 38 ft, Kh under Cases 1 and 2 are the same. Thus, qh used in the design of the MWFRS can
be used in the design of the C&C in this example.
5. Determine design wind pressure p by Eq. 6-22 in Zones 4 and 5.
p = qh [(GC p ) − (GC pi )] = 11.8[(GC p ) − (±0.18)]
Calculation of design wind pressures is illustrated for Zone 4:
For positive (GC p ) : p = 11.8[0.9 – (–0.18)] = 12.7 psf
For negative (GC p ) : p = 11.8[–1.0 – (+0.18)] = –13.9 psf
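The Zone 4 calculation above generalizes to any (GCp); a minimal sketch:

```python
# C&C design pressure per ASCE/SEI Eq. 6-22: p = qh*[(GCp) - (GCpi)].
# Positive external pressure is combined with negative internal pressure,
# and vice versa, to maximize the pressure magnitude.
def p_cc(gcp, qh=11.8, gcpi=0.18):
    if gcp >= 0:
        return qh * (gcp + gcpi)
    return qh * (gcp - gcpi)

print(round(p_cc(0.9), 1))   # 12.7, Zone 4 positive
print(round(p_cc(-1.0), 1))  # -13.9, Zone 4 negative
```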
The maximum design wind pressures for positive and negative internal pressures
are summarized in Table 5.37.
Table 5.37 Design Wind Pressures p on Wall Studs
(GC p )
Design Pressure,
p (psf)
The pressures in Table 5.37 are applied normal to the wall studs and act over the
tributary area of each stud.
In Zones 4 and 5, the computed positive and negative pressures are greater than
the minimum values prescribed in 6.1.4.2 of +10 psf and -10 psf, respectively.
Part 3: Determine design wind pressures on roof trusses
Use Flowchart 5.8 to determine the design wind pressures on the roof trusses spanning in
the N-S and E-W directions, which are C&C.
1. It is assumed that the building is enclosed.
2. Determine velocity pressure qh using Flowchart 5.5.
Velocity pressure was determined in Part 1, Step 2, item 3 of this example and is
equal to 11.8 psf.19
In accordance with Note 1 of Table 6-3, values of Kh under Case 1 are to be used for C&C. At a mean
roof height of 38 ft, Kh under Cases 1 and 2 are the same. Thus, qh used in the design of the MWFRS can
be used in the design of the C&C in this example.
3. Determine external pressure coefficients (GCp) from appropriate figures for roof
trusses spanning in the N-S and E-W directions.
a. Trusses spanning in the N-S direction
Pressure coefficients for Zones 1, 2 and 3 can be determined from Figure 6-11C
(7° < θ = 23° < 27°) based on the effective wind area assuming a 4 ft by
2 ft panel size. Included are the pressure coefficients for the overhanging
portions of the trusses.
Effective wind area = larger of 56.5 × 2 = 113 sq ft or 56.5 × (56.5/3) =
1,064.1 sq ft (governs).
The pressure coefficients from Figure 6-11C are summarized in Table 5.38.
According to Note 5 in the table, values of (GCp) for roof overhangs include
pressure contributions from both the upper and lower surfaces of the overhang.
Table 5.38 External Pressure Coefficients (GC p ) for Roof Trusses Spanning in the N-S
(GC p )
b. Trusses spanning in the E-W direction
Pressure coefficients for Zones 1, 2 and 3 can be determined from Figure 6-11D
(27° < θ = 44.4° < 45°) based on the effective wind area. Included are the
pressure coefficients for the overhanging portions of the trusses.
Effective wind area = larger of 24.5 × 2 = 49 sq ft or 24.5 × (24.5/3) =
200 sq ft (governs).
The pressure coefficients from Figure 6-11D are summarized in Table 5.39.
According to Note 5 in the table, values of (GCp) for roof overhangs include
pressure contributions from both the upper and lower surfaces of the overhang.
Table 5.39 External Pressure Coefficients (GC p ) for Roof Trusses Spanning in the
E-W Direction
(GC p )
4. Determine internal pressure coefficients (GC pi ) from Figure 6-5.
For an enclosed building, (GC pi ) = +0.18, − 0.18.
5. Determine design wind pressure p by Eq. 6-22 in Zones 1, 2 and 3.
p = qh [(GC p ) − (GC pi )] = 11.8[(GC p ) − (±0.18)]
a. Trusses spanning in the N-S direction
The maximum design wind pressures for positive and negative internal
pressures are summarized in Table 5.40.
b. Trusses spanning in the E-W direction
The maximum design wind pressures for positive and negative internal
pressures are summarized in Table 5.41.
The pressures in Tables 5.40 and 5.41 are applied normal to the roof trusses
and act over the tributary area of each truss. If the tributary areas were greater
than 700 sq ft, the trusses could have been designed using the provisions for
MWFRSs (6.5.12.1.3).
The positive pressures in Zones 1, 2 and 3 for roof trusses spanning in the N-S
direction must be increased to the minimum value of 10 psf in accordance
with 6.1.4.2.
Table 5.40 Design Wind Pressures p on Roof Trusses Spanning in the N-S Direction
(GC p )
Design Pressure,
p (psf)
* Net overhang pressure = q h (GC p )
Table 5.41 Design Wind Pressures p on Roof Trusses Spanning in the E-W Direction
(GC p )
Design Pressure,
p (psf)
* Net overhang pressure = q h (GC p )
Figure 5.13 contains the loading diagrams for typical trusses located within
various zones of the roof.
Part 4: Determine design wind pressures on roof sheathing panel
Use Flowchart 5.8 to determine the design wind pressures on the roof sheathing panels,
which are C&C.
1. It is assumed that the building is enclosed.
2. Determine velocity pressure qh using Flowchart 5.5.
Velocity pressure was determined in Part 1, Step 2, item 3 of this example and is
equal to 11.8 psf.20
[Figure content: loading diagrams for typical truss types A, B, C and D located in roof Zones 1, 2 and 3 (end-zone width 7.2′ typ., overhangs 2.75′ typ.). Line loads are the design pressures multiplied by the 2-ft truss spacing; positive pressure 10 × 2 = 20.0 plf, and negative pressures ranging from 11.6 × 2 = 23.2 plf to 29.5 × 2 = 59.0 plf depending on truss type and zone.]
Figure 5.13 Roof Truss Loading Diagrams
In accordance with Note 1 of Table 6-3, values of Kh under Case 1 are to be used for C&C. At a mean
roof height of 38 ft, Kh under Cases 1 and 2 are the same. Thus, qh used in the design of the MWFRS can
be used in the design of the C&C in this example.
3. Determine external pressure coefficients (GCp) from appropriate figures for roof
panels on the east and west wings.
a. Roof panels on the east wing
Pressure coefficients for Zones 1, 2, and 3 can be determined from Figure 6-11C
(7° < θ = 23° < 27°) based on the effective wind area assuming a 4 ft by
2 ft panel size. Included are the pressure coefficients for the overhanging
portions of the panels.
Effective wind area = larger of 2 × 4 = 8 sq ft (governs) or 2 × (2/3) =
1.33 sq ft.
The pressure coefficients from Figure 6-11C are summarized in Table 5.42.
According to Note 5 in the table, values of (GCp) for roof overhangs include
pressure contributions from both the upper and lower surfaces of the overhang.
Table 5.42 External Pressure Coefficients (GC p ) for Roof Panels on the East Wing
(GC p )
b. Roof panels on the west wing
Pressure coefficients for Zones 1, 2, and 3 can be determined from Figure 6-11D
(27° < θ = 44.4° < 45°) based on the effective wind area. Included are the
pressure coefficients for the overhanging portions of the panels.
Effective wind area = larger of 2 × 4 = 8 sq ft (governs) or 2 × (2/3) =
1.33 sq ft.
The pressure coefficients from Figure 6-11D are summarized in Table 5.43.
According to Note 5 in the table, values of (GCp) for roof overhangs include
pressure contributions from both the upper and lower surfaces of the overhang.
Table 5.43 External Pressure Coefficients (GC p ) for Roof Panels on the West Wing
(GC p )
4. Determine internal pressure coefficients (GC pi ) from Figure 6-5.
For an enclosed building, (GC pi ) = +0.18, − 0.18.
5. Determine design wind pressure p by Eq. 6-22 in Zones 1, 2 and 3.
p = qh [(GC p ) − (GC pi )] = 11.8[(GC p ) − (±0.18)]
a. Roof panels on the east wing
The maximum design wind pressures for positive and negative internal
pressures are summarized in Table 5.44.
Table 5.44 Design Wind Pressures p on Roof Panels on the East Wing
(GC p )
Design Pressure,
p (psf)
* Net overhang pressure = q h (GC p )
b. Roof panels on the west wing
The maximum design wind pressures for positive and negative internal
pressures are summarized in Table 5.45.
Table 5.45 Design Wind Pressures p on Roof Panels on the West Wing
(GC p )
Design Pressure,
p (psf)
* Net overhang pressure = q h (GC p )
The pressures in Tables 5.44 and 5.45 are applied normal to the roof panels
and act over the tributary area of each panel.
The positive pressures in Zones 1, 2 and 3 for roof panels on the east wing
must be increased to the minimum value of 10 psf in accordance with 6.1.4.2.
Example 5.6 – Six-Story Hotel using Method 2, Analytical Procedure
For the six-story hotel illustrated in Figure 5.14, determine (1) design wind pressures on
the main wind-force-resisting system in both directions and (2) design wind forces on the
rooftop equipment using Method 2, Analytical Procedure. Note that door and window
openings are not shown in the figure.
Location: Miami, FL
Surface Roughness: C (adjacent to water in hurricane-prone region)
Topography: Not situated on a hill, ridge or escarpment
Occupancy: Residential building where less than 300 people congregate in one area
Part 1: Determine design wind pressures on MWFRS
Step 1: Check if the provisions of 6.5 can be used to determine the design wind
pressures on the MWFRS.
The provisions of 6.5 may be used to determine design wind pressures provided the
conditions of 6.5.1 and 6.5.2 are satisfied. It is clear that these conditions are satisfied
for this residential building that does not have response characteristics that make it
subject to across-wind loading or other similar effects, and that is not sited at a
location where channeling effects or buffeting in the wake of upwind obstructions
need to be considered.
The provisions of Method 2, Analytical Procedure, can be used to determine the
design wind pressures on the MWFRS.21
Structural system: conventionally reinforced
concrete moment frames in both directions
Reinforced concrete parapet (typ.)
Rooftop unit (L = 16′, W = 7′, H = 6′)
supported on a 6″ concrete pad
North/South Elevation
Figure 5.14 Plan and Elevation of Six-story Hotel
Step 2: Use Flowchart 5.7 to determine the design wind pressures on the MWFRS.
1. From Figure 6-1 or Figure 1609, the basic wind speed V is equal to 145 mph.
Thus, the building is located within a wind-borne debris region, since the basic
wind speed is greater than 120 mph.22
The mean roof height is greater than 60 ft, so the low-rise provisions of 6.5.12.2 cannot be used.
In this example, the building is located within one mile of the coastal mean high water line where V >
110 mph. This satisfies another condition of a wind-borne debris region.
According to 6.5.9.3, glazing in buildings located in wind-borne debris regions
must be protected with an approved impact-resistant covering or it must be
impact-resistant glazing.23 It is assumed that impact-resistant glazing is used over
the entire height of the building, so the building is classified as enclosed.
2. Determine the design wind pressure effects of the parapet on the MWFRS.
a. Determine the velocity pressure qp at the top of the parapet from
Flowchart 5.5.
Determine basic wind speed V from Figure 6-1 or Figure 1609.
As was determined in item 1, V = 145 mph for Miami, FL.
Determine wind directionality factor Kd from Table 6-4.
For building structures, Kd = 0.85
Determine importance factor I from Table 6-1 based on occupancy
category from IBC Table 1604.5.
From IBC Table 1604.5, the Occupancy Category is II for a residential
building. From Table 6-1, I = 1.0 for buildings in hurricane prone regions
with V > 100 mph.
Determine exposure category.
In the design data, the surface roughness is given as C. Assume that
Exposure C is applicable in all directions (6.5.6.3).
Determine topographic factor K zt .
As noted in the design data, the building is not situated on a hill, ridge or
escarpment. Thus, topographic factor Kzt = 1.0 (6.5.7.2).
Determine velocity pressure exposure coefficient Kh from
Table 6-3.
For Exposure C at a height of 68 ft at the top of the parapet, Kh = 1.16 by
linear interpolation.
Impact-resistant glazing must conform to ASTM E 1886 and ASTM E 1996 or other approved test
methods and performance criteria. Glazing in Occupancy Category II, III, or IV buildings located over
60 ft above the ground and over 30 ft above aggregate surface roofs located within 1,500 ft of the
building shall be permitted to be unprotected (6.5.9.3).
Determine velocity pressure qp evaluated at the top of the parapet by
Eq. 6-15.
qp = 0.00256 Kh Kzt Kd V² I
= 0.00256 × 1.16 × 1.0 × 0.85 × 145² × 1.0 = 53.1 psf
b. Determine combined net pressure coefficient GC pn for the parapets.
In accordance with 6.5.12.2.4, GCpn = 1.5 for windward parapet and
GCpn = –1.0 for leeward parapet.
c. Determine combined net design pressure p p on the parapet by Eq. 6-20.
p p = q p GC pn
= 53.1 × 1.5 = 79.7 psf on windward parapet
= 53.1 × (−1.0) = −53.1 psf on leeward parapet
The forces on the windward and leeward parapets can be obtained by
multiplying the pressures by the height of the parapet:
Windward parapet: F = 79.7 × 4.5 = 358.7 plf
Leeward parapet: F = 53.1 × 4.5 = 239.0 plf
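The parapet computations above can be sketched as follows (note that the text rounds the windward pressure to 79.7 psf before multiplying by the height, giving 358.7 plf):

```python
# Parapet design pressure per ASCE/SEI Eq. 6-20: pp = qp * GCpn, with
# GCpn = +1.5 (windward) and -1.0 (leeward) per 6.5.12.2.4.
QP = 53.1  # velocity pressure at the top of the parapet, psf
HP = 4.5   # parapet height, ft

def parapet_load(gcpn):
    pp = QP * gcpn      # combined net design pressure, psf
    return pp, pp * HP  # (pressure in psf, force per foot of length in plf)

pw, fw = parapet_load(1.5)    # windward: ~79.7 psf, ~358 plf
pl, fl = parapet_load(-1.0)   # leeward: -53.1 psf, ~-239 plf
```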
3. Determine whether the building is rigid or flexible and the corresponding gust
effect factor by Flowchart 5.6.
In lieu of determining the natural frequency n1 of the building from a dynamic
analysis, Eq. C6-15 in the commentary of ASCE/SEI 7 is used to compute n1 for
concrete moment-resisting frames:
n1 = 43.5/h^0.9 = 43.5/(63.5)^0.9 = 1.04 Hz
Since n1 > 1.0 Hz, the building is defined as a rigid building. For simplicity, use
G = 0.85 (6.5.8.1).24
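The frequency estimate can be reproduced directly (a one-line sketch of Eq. C6-15 for concrete moment-resisting frames):

```python
# Approximate fundamental frequency per ASCE/SEI Eq. C6-15 for concrete
# moment-resisting frames: n1 = 43.5 / h^0.9, with h in feet.
def n1_concrete_mrf(h_ft):
    return 43.5 / h_ft**0.9

print(round(n1_concrete_mrf(63.5), 2))  # 1.04, i.e., > 1 Hz, so rigid
```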
4. Determine velocity pressure qz for windward walls along the height of the
building and qh for leeward walls, side walls, and roof using Flowchart 5.5.
Most of the quantities needed to compute qz and qh were determined in item 2 above.
According to C6.2, most rigid buildings have a height to minimum width ratio less than 4. In this
example, 63.5/75.33 = 0.84 < 4.
The velocity exposure coefficients K z and K h are summarized in Table 5.46.
Table 5.46 Velocity Pressure Exposure Coefficient K z
Height above ground
level, z (ft)
Velocity pressures q z and qh are determined by Eq. 6-15:
qz = 0.00256 Kz Kzt Kd V² I
= 0.00256 × Kz × 1.0 × 0.85 × 145² × 1.0 = 45.75Kz psf
A summary of the velocity pressures is given in Table 5.47.
Table 5.47 Velocity Pressure q z
Height above ground
level, z (ft)
q z (psf)
5. Determine pressure coefficients C p for the walls and roof from Figure 6-6.
For wind in the E-W direction:
Windward wall: C p = 0.8 for use with q z
Leeward wall (L/B = 328.75/75.33 = 4.4): C p = −0.2 for use with qh
Side wall: C p = −0.7 for use with qh
Roof (normal to ridge with θ < 10 degrees and parallel to ridge for all θ with
h/L = 63.5/328.75 = 0.19 < 0.5)25:
C p = −0.9,−0.18 from windward edge to h = 63.5 ft for use with qh
C p = −0.5,−0.18 from 63.5 ft to 2h = 127.0 ft for use with qh
C p = −0.3,−0.18 from 127.0 ft to 328.75 ft for use with qh
For wind in the N-S direction:
Windward wall: C p = 0.8 for use with q z
Leeward wall (L/B = 75.33/328.75 = 0.23): C p = −0.5 for use with qh
Side wall: C p = −0.7 for use with qh
Roof (normal to ridge with θ < 10 degrees and parallel to ridge for all θ with
h/L = 63.5/75.33 = 0.84):
C p = −1.0,−0.18 from windward edge to h/2 = 31.75 ft for use with qh 26
C p = −0.76,−0.18 from 31.75 ft to h = 63.5 ft for use with qh
C p = −0.64,−0.18 from 63.5 ft to 75.33 ft for use with qh
6. Determine qi for the walls and roof using Flowchart 5.5.
In accordance with 6.5.12.2.1, qi = qh = 52.2 psf for windward walls, side walls,
leeward walls, and roofs of enclosed buildings.
7. Determine internal pressure coefficients (GC pi ) from Figure 6-5.
For an enclosed building, (GC pi ) = +0.18, − 0.18.
8. Determine design wind pressures p z and ph by Eq. 6-17.
Windward walls:
The smaller uplift pressures on the roof due to Cp = −0.18 may govern the design when combined with
roof live load or snow loads. This pressure is not shown in this example, but in general must be
considered.
Cp = −1.3 may be reduced based on the area over which it is applicable = (63.5/2) × 328.75 = 10,438 sq ft >
1,000 sq ft. Reduction factor = 0.8 from Figure 6-6. Thus, Cp = 0.8 × (−1.3) = −1.04 was used in the
linear interpolation to determine Cp for h/L = 0.84.
p z = q z GC p − qh (GC pi )
= (0.85 × 0.8 × q z ) − 52.2(±0.18)
= (0.68q z # 9.4) psf (external ± internal pressure)
Leeward wall, side walls and roof:
ph = qhGCp − qh(GCpi)
= (52.2 × 0.85 × Cp) − 52.2(±0.18)
= (44.4Cp ∓ 9.4) psf (external ± internal pressure)
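The external-plus-internal combination of Eq. 6-17 can be sketched as follows. The helper name is hypothetical; it simply evaluates the two internal-pressure signs used above:

```python
def net_pressures(q_ext, g, cp, qh, gcpi=0.18):
    """Net design pressure per ASCE/SEI 7-05 Eq. 6-17, p = qGCp - qh(GCpi),
    returned for the +GCpi and -GCpi cases (psf)."""
    external = q_ext * g * cp
    return external - qh * gcpi, external + qh * gcpi

# Leeward wall in the N-S direction: q = qh = 52.2 psf, G = 0.85, Cp = -0.5:
p_pos, p_neg = net_pressures(52.2, 0.85, -0.5, 52.2)
```

The two returned values correspond to the +(GCpi) and −(GCpi) columns of Tables 5.48 and 5.49.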
A summary of the maximum design wind pressures in the E-W and N-S
directions is given in Tables 5.48 and 5.49, respectively.
Table 5.48 Design Wind Pressures p in the E-W Direction
Height above ground level, z (ft) | q (psf) | qGCp (psf) | qh(GCpi) = ±9.4 psf | Net pressure p (psf): +(GCpi), −(GCpi)
*Roof: from windward edge to 63.5 ft; from 63.5 ft to 127.0 ft; from 127.0 ft to 328.75 ft
Illustrated in Figures 5.15 and 5.16 are the external design wind pressures in the
E-W and N-S directions, respectively. Included in the figures are the forces on the
windward and leeward parapets, which add to the overall wind forces in the
direction of analysis. When considering horizontal wind forces on the MWFRS, it
is clear that the effects from the internal pressure cancel out. On the roof, the
effects from internal pressure add directly to those from the external pressure.
Table 5.49 Design Wind Pressures p in the N-S Direction
Height above ground level, z (ft) | q (psf) | qGCp (psf) | qh(GCpi) = ±9.4 psf | Net pressure p (psf): +(GCpi), −(GCpi)
*Roof: from windward edge to 31.75 ft; from 31.75 ft to 63.5 ft; from 63.5 ft to 75.33 ft
Figure 5.15 Design Wind Pressures in the E-W Direction
Figure 5.16 Design Wind Pressures in the N-S Direction
The MWFRS of buildings whose wind loads have been determined by 6.5.12.2.1
must be designed for the wind load cases defined in Figure 6-9 (6.5.12.3). In
Case 1, the full design wind pressures act on the projected area perpendicular to
each principal axis of the structure. These pressures are assumed to act separately
along each principal axis. The wind pressures on the windward and leeward walls
depicted in Figures 5.15 and 5.16 fall under Case 1.
In Case 2, 75 percent of the design wind pressures on the windward and leeward
walls are applied on the projected area perpendicular to each principal axis of the
building along with a torsional moment. The wind pressures and torsional
moments, both of which vary over the height of the building, are applied
separately for each principal axis.
As an example of the calculations that need to be performed over the height of the
building, the wind pressures and torsional moment at the mean roof height for
Case 2 are as follows:
For E-W wind:
0.75PWX = 0.75 × 35.5 = 26.6 psf (windward wall)
0.75PLX = 0.75 × 8.9 = 6.7 psf (leeward wall)
MT = 0.75(PWX + PLX)BXeX = 0.75(35.5 + 8.9) × 75.33 × (±0.15 × 75.33) = ±28,345 ft-lb/ft
For N-S wind:
0.75PWY = 0.75 × 35.5 = 26.6 psf (windward wall)
0.75PLY = 0.75 × 22.2 = 16.7 psf (leeward wall)
MT = 0.75(PWY + PLY)BYeY = 0.75(35.5 + 22.2) × 328.75 × (±0.15 × 328.75) = ±701,552 ft-lb/ft
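The Case 2 torsional moment can be sketched in Python. The function name is ours; it evaluates MT = 0.75(PW + PL)B(0.15B) per unit height, as in Figure 6-9:

```python
def case2_torsion(p_w, p_l, b, ecc_ratio=0.15):
    """Figure 6-9 Case 2 torsional moment per unit height:
    MT = 0.75 * (Pw + Pl) * B * (0.15 * B), in ft-lb/ft for psf and ft inputs."""
    return 0.75 * (p_w + p_l) * b * (ecc_ratio * b)

mt_ew = case2_torsion(35.5, 8.9, 75.33)     # about 28,345 ft-lb/ft
mt_ns = case2_torsion(35.5, 22.2, 328.75)   # about 701,552 ft-lb/ft
```

Repeating the call at each level with that level's wall pressures gives the torsional moments over the building height.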
In Case 3, 75 percent of the wind pressures of Case 1 are applied to the building
simultaneously. This accounts for wind along the diagonal of the building.
In Case 4, 75 percent of the wind pressures and torsional moments defined in
Case 2 act simultaneously on the building.
Figure 5.17 illustrates Load Cases 1 through 4 for MWFRS wind pressures acting
on the projected area at the mean roof height. Note that internal pressures are
always equal and opposite to each other and therefore not included. Similar
loading diagrams can be obtained at other locations below the mean roof height.
Finally, the minimum design wind loading prescribed in 6.1.4.1 must be
considered as a load case in addition to those load cases described above.
Figure 5.17 Load Cases 1 through 4 at the Mean Roof Height
Part 2: Determine design wind forces on rooftop equipment
Use Flowchart 5.9 to determine the wind force on the rooftop equipment.
1. Determine velocity pressure qz evaluated at height z of the centroid of area Af of
the rooftop unit.
The distance from the ground level to the centroid of the rooftop unit = 63.5 + 0.5
+ (6/2) = 67 ft
From Table 6-3, Kz = 1.16 (by linear interpolation) for Exposure C at a height of
67 ft above the ground level.
Velocity pressure qz is determined by Eq. 6-15:
qz = 0.00256KzKztKdV²I = 0.00256 × 1.16 × 1.0 × 0.90 × 145² × 1.0 = 56.2 psf
where Kd = 0.90 for square-shaped rooftop equipment (see Table 6-4).
2. Determine gust effect factor G from Flowchart 5.6.
The gust effect factor G is equal to 0.85 (see Part 1, Step 2, item 3 of this example).
3. Determine force coefficient Cf from Figure 6-21 for rooftop equipment.
h = 67 ft
Least horizontal dimension D of rooftop unit = 7 ft
h/D = 67/7 = 9.6
From Figure 6-21, C f = 1.5 by linear interpolation for square cross-section.
4. Check if A f is less than 0.1Bh.
For the smaller face, A f = 6 × 7 = 42 sq ft
For the larger face, A f = 6 × 16 = 96 sq ft
0.1Bh = 0.1 × 75.33 × 63.5 = 478.4 sq ft > A f
5. Determine design wind force F by Eq. 6-28.
Since A f < 0.1Bh , F is equal to 1.9 times the value obtained by Eq. 6-28:27
F = 1.9q z GC f A f
= 1.9 × 56.2 × 0.85 × 1.5 × 42 / 1,000 = 5.7 kips on the smaller face
= 1.9 × 56.2 × 0.85 × 1.5 × 96 / 1,000 = 13.1 kips on the larger face
These forces act perpendicular to the respective faces of the equipment.
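The rooftop-equipment force computation can be sketched as follows. The helper name is ours; it applies the 1.9 amplifier to Eq. 6-28 as done in this example for Af < 0.1Bh:

```python
def rooftop_force_kips(qz, g, cf, af_sqft, amplifier=1.9):
    """Rooftop-equipment design force, 1.9x ASCE/SEI 7-05 Eq. 6-28:
    F = 1.9 * qz * G * Cf * Af, converted from lb to kips."""
    return amplifier * qz * g * cf * af_sqft / 1000.0

f_small = rooftop_force_kips(56.2, 0.85, 1.5, 42)   # about 5.7 kips
f_large = rooftop_force_kips(56.2, 0.85, 1.5, 96)   # about 13.1 kips
```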
Example 5.7 – Six-Story Hotel Located on an Escarpment using Method 2,
Analytical Procedure
For the six-story hotel in Example 5.6, determine (1) design wind pressures on the main
wind-force-resisting system in both directions and (2) design wind forces on the rooftop
equipment using Method 2, Analytical Procedure, assuming the structure is located on an escarpment.
Location: Miami, FL
Surface Roughness: C (adjacent to water in hurricane prone region)
Topography: 2-D escarpment (see Figure 5.18)
Occupancy: Residential building where less than 300 people congregate in one area
Part 1: Determine design wind pressures on MWFRS
Step 1: Check if the provisions of 6.5 can be used to determine the design wind
pressures on the MWFRS.
It was shown in Part 1, Step 1 of Example 5.6 that Method 2 can be used to determine
the design wind pressures on the MWFRS.
Footnote 27: The mean roof height of the building h is equal to 63.5 ft, which is slightly greater than the 60-ft limit prescribed in 6.5.15.1. The force is increased by 1.9 for conservatism.
Figure 5.18 Six-story Hotel on Escarpment (H = 30 ft, H/2 = 15 ft, Lh = 40 ft, x = 30 ft)
Step 2: Use Flowchart 5.7 to determine the design wind pressures on the MWFRS.
1. It was determined in Part 1, Step 2, item 1 of Example 5.6 that the basic wind
speed V is equal to 145 mph and that the building is enclosed.
2. Determine the design wind pressure effects of the parapet on the MWFRS.
a. Determine the velocity pressure qp at the top of the parapet from
Flowchart 5.5.
Determine basic wind speed V from Figure 6-1 or Figure 1609.
As noted in item 1, V = 145 mph for Miami, FL.
Determine wind directionality factor K d from Table 6-4.
For building structures, K d = 0.85.
Determine importance factor I from Table 6-1 based on occupancy
category from IBC Table 1604.5.
From Example 5.5, I = 1.0.
Determine exposure category.
In the design data, the surface roughness is given as C. Assume that
Exposure C is applicable in all directions (6.5.6.3).
Determine topographic factor K zt .
Check if all five conditions of 6.5.7.1 are satisfied:
- Assume that the topography is such that conditions 1 and 2 are satisfied.
- Condition 3 is satisfied since the building is located near the crest of the escarpment.
H / Lh = 30 / 40 = 0.75 > 0.2 , so condition 4 is satisfied.
H = 30 ft > 15 ft for Exposure C, so condition 5 is satisfied.
Since all five conditions of 6.5.7.1 are satisfied, wind speed-up effects at
the escarpment must be considered in the design, and Kzt must be
determined by Eq. 6-3:
Kzt = (1 + K1K2K3)²
where the multipliers K1 , K 2 and K 3 are given in Figure 6-4 for
Exposure C. Also given in the figure are parameters and equations that can
be used to determine the multipliers for any exposure category.
It was determined above that H / Lh = 0.75 . According to Note 2 in
Figure 6-4, where H / Lh > 0.5 , use H / Lh = 0.5 when evaluating K1 and
substitute 2 H for Lh when evaluating K 2 and K3 .
From Figure 6-4, K1 = 0.43 for a 2-D escarpment with H / Lh = 0.5. This
multiplier is related to the shape of the topographic feature and the
maximum wind speed-up near the crest.
The multiplier K 2 accounts for the reduction in speed-up with distance
upwind or downwind of the crest. Since x / 2 H = 30 / 60 = 0.5 , K 2 = 0.88
for a 2-D escarpment from Figure 6-4.
The multiplier K3 accounts for the reduction in speed-up with height z above the local ground surface. Even though the velocity pressure qp is evaluated at the top of the parapet, the multiplier K3 is conservatively determined at the height z corresponding to the centroid of the parapet, which is equal to 63.5 + (4.5/2) = 65.75 ft. Thus, z/2H = 65.75/60 = 1.1 and K3 = 0.07 from Figure 6-4 (by linear interpolation) for a 2-D escarpment.
Therefore, Kzt = [1 + (0.43 × 0.88 × 0.07)]² = 1.05.
Determine velocity pressure exposure coefficient Kh from
Table 6-3.
For Exposure C at a height of 68 ft at the top of the parapet, Kh = 1.16 by
linear interpolation.
Determine velocity pressure qp evaluated at the top of the parapet by
Eq. 6-15.
qp = 0.00256KhKztKdV²I = 0.00256 × 1.16 × 1.05 × 0.85 × 145² × 1.0 = 55.7 psf
This velocity pressure is 5 percent greater than that determined in
Example 5.6 where the building is not on an escarpment.
b. Determine combined net pressure coefficient GC pn .
In accordance with 6.5.12.2.4, GCpn = 1.5 for windward parapet and
GCpn = –1.0 for leeward parapet.
c. Determine combined net design pressure p p on the parapet by Eq. 6-20.
p p = q pGC pn
= 55.7 × 1.5 = 83.6 psf on windward parapet
= 55.7 × (−1.0) = −55.7 psf on leeward parapet
The forces on the windward and leeward parapets can be obtained by
multiplying the pressures by the height of the parapet:
Windward parapet: F = 83.6 × 4.5 = 376.2 plf
Leeward parapet: F = 55.7 × 4.5 = 250.7 plf
These forces are 5 percent greater than those determined in Example 5.6.
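The parapet pressure of Eq. 6-20 is a one-line product; a minimal sketch (hypothetical helper name):

```python
def parapet_pressure(qp, gcpn):
    """Combined net parapet pressure per ASCE/SEI 7-05 Eq. 6-20:
    pp = qp * GCpn (psf)."""
    return qp * gcpn

# GCpn = +1.5 windward, -1.0 leeward, at qp = 55.7 psf:
p_windward = parapet_pressure(55.7, 1.5)    # 83.55 psf (83.6 in the text)
p_leeward = parapet_pressure(55.7, -1.0)    # -55.7 psf
```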
3. Determine whether the building is rigid or flexible and the corresponding gust
effect factor by Flowchart 5.6.
It was determined in Part 1, Step 2, item 3 of Example 5.6 that the building is
rigid and G = 0.85.
4. Determine velocity pressure qz for windward walls along the height of the
building and qh for leeward walls, side walls and roof using Flowchart 5.5.
Velocity pressures qz and qh are determined by Eq. 6-15:
qz = 0.00256KzKztKdV²I = 0.00256 × Kz × Kzt × 0.85 × 145² × 1.0 = 45.75KzKzt psf
The velocity exposure coefficient K z and the topographic factor K zt vary with
height above the local ground surface. Values of K z were determined in
Example 5.6 (see Table 5.46) and are repeated in Table 5.50 for convenience.
From item 2 above, Kzt = [1 + (0.43 × 0.88 × K3)]² = (1 + 0.38K3)².
Values of K zt are given in Table 5.50 as a function of z / 2 H = z/60 where z is
taken midway between the height range.28 Also given in the table is a summary of
the velocity pressure q z over the height of the building.
Table 5.50 Velocity Pressure qz
Height above ground level, z (ft) | z/2H* | Kzt | qz (psf)
*z is taken midway between the height range
As an example, determine K zt at a height z = 60 ft above the local ground level.
The multiplier K 3 is determined based on a height z taken midway between the
range of 60 ft and 50 ft, i.e., z = 55 ft. Thus, z/2H = 55/60 = 0.92, and by linear interpolation from Figure 6-4, K3 = 0.10 for a 2-D escarpment.29 Then,
Kzt = [1 + (0.38 × 0.10)]² = 1.08
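The Kzt evaluation above can be sketched in Python. The function names are ours; the exponential form of K3 and the squared combination follow Figure 6-4 and Eq. 6-3, with γ = 2.5 for a 2-D escarpment and 2H substituted for Lh as Note 2 requires:

```python
import math

def k3_multiplier(z, lh, gamma=2.5):
    """Height-decay multiplier from Figure 6-4: K3 = exp(-gamma * z / Lh).
    gamma = 2.5 applies to a 2-D escarpment."""
    return math.exp(-gamma * z / lh)

def kzt(k1, k2, k3):
    """Topographic factor per ASCE/SEI 7-05 Eq. 6-3: Kzt = (1 + K1*K2*K3)^2."""
    return (1.0 + k1 * k2 * k3) ** 2

# z = 55 ft with 2H = 60 ft substituted for Lh, and K1 = 0.43, K2 = 0.88:
kzt_55 = kzt(0.43, 0.88, k3_multiplier(55, 60))   # about 1.08
```

The same helpers with K3 = 0.07 reproduce the parapet-height value Kzt = 1.05 computed in item 2.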
5. Determine pressure coefficients Cp for the walls and roof from Figure 6-6.
The pressure coefficients are the same as those determined in Part 1, Step 2, item 5 of Example 5.6.
Footnote 28: It is unconservative to use the top height of the range when determining K3.
Footnote 29: K3 may also be computed from the equation given in Figure 6-4, K3 = e^(−γz/Lh), with 2H substituted for Lh: K3 = e^(−2.5 × 0.92) = 0.10.
6. Determine qi for the walls and roof using Flowchart 5.5.
In accordance with 6.5.12.2.1, qi = qh = 55.3 psf for windward walls, side walls,
leeward walls and roofs of enclosed buildings.
7. Determine internal pressure coefficients (GC pi ) from Figure 6-5.
For an enclosed building, (GC pi ) = +0.18, − 0.18.
8. Determine design wind pressures p z and ph by Eq. 6-17.
Windward walls:
pz = qzGCp − qh(GCpi)
= (0.85 × 0.8 × qz) − 55.3(±0.18)
= (0.68qz ∓ 10.0) psf (external ± internal pressure)
Leeward wall, side walls and roof:
ph = qhGCp − qh(GCpi)
= (55.3 × 0.85 × Cp) − 55.3(±0.18)
= (47.0Cp ∓ 10.0) psf (external ± internal pressure)
A summary of the maximum design wind pressures in the E-W and N-S
directions is given in Tables 5.51 and 5.52, respectively.
The percent increase in the external pressure on the windward wall of the building
due to the escarpment is summarized in Table 5.53.
The external pressures on the leeward wall, side wall and roof as well as the
internal pressure increase by 6 percent, since these pressures depend on the
velocity pressure at the roof height qh.
Load Cases 1 through 4, depicted in Figure 6-9, must be investigated for the
windward and leeward pressures, similar to that shown in Example 5.6.
Table 5.51 Design Wind Pressures p in the E-W Direction
Height above ground level, z (ft) | q (psf) | qGCp (psf) | qh(GCpi) = ±10.0 psf | Net pressure p (psf): +(GCpi), −(GCpi)
*Roof: from windward edge to 63.5 ft; from 63.5 ft to 127.0 ft; from 127.0 ft to 328.75 ft
Table 5.52 Design Wind Pressures p in the N-S Direction
Height above ground level, z (ft) | q (psf) | qGCp (psf) | qh(GCpi) = ±10.0 psf | Net pressure p (psf): +(GCpi), −(GCpi)
*Roof: from windward edge to 31.75 ft; from 31.75 ft to 75.33 ft
Table 5.53 Comparison of External Design Wind Pressures on the Windward Wall with and without an Escarpment
Height above ground level, z (ft) | External pressure qGCp
Part 2: Determine design wind forces on rooftop equipment
The calculations in Part 2 of Example 5.6 can be modified to account for the speed-up at
the escarpment. In particular, the topographic factor Kzt must be determined at the
centroid of the rooftop unit, which is 67 ft above the local ground level.
In this case, z/2H = 67/60 = 1.12 and by linear interpolation from Figure 6-4, K3 = 0.07.
Thus, Kzt = [1 + (0.38 × 0.07)]² = 1.05.
The velocity pressure q z is equal to 1.05 times that determined in Example 5.6, i.e.,
q z = 1.05 × 56.2 = 59.0 psf.
Consequently, the design wind forces on the rooftop units supported on the building
located on the escarpment are 5 percent greater than those determined in Example 5.6:
F = 1.9q z GC f A f
= 1.05 × 5.7 = 6.0 kips on the smaller face
= 1.05 × 13.1 = 13.8 kips on the larger face
Example 5.8 – Six-Story Hotel using Alternate All-heights Method
For the six-story hotel in Example 5.6, determine (1) design wind pressures on the main
wind-force-resisting system in both directions and (2) design wind forces on the rooftop
equipment using the Alternate All-heights Method of IBC 1609.6.
Part 1: Determine design wind pressures on MWFRS
Step 1: Check if the provisions of IBC 1609.6 can be used to determine the design
wind pressures on this building.
The provisions of IBC 1609.6 may be used to determine design wind pressures on
this regularly-shaped building provided the conditions of IBC 1609.6.1 are satisfied:
1. The height of the building is 63 ft-6 in., which is less than 75 ft, and the height-to-least-width ratio = 63.5/75.33 = 0.84 < 4. Also, it was shown in Example 5.6 (Part 1, Step 2, item 3) that the fundamental frequency n1 = 1.04 Hz > 1 Hz. O.K.
2. As was discussed in Example 5.6 (Part 1, Step 1), this building is not sensitive to
dynamic effects. O.K.
3. This building is not located on a site where channeling effects or buffeting in the
wake of upwind obstructions need to be considered. O.K.
4. This building meets the requirements of a simple diaphragm building as defined
in 6.2, since the windward and leeward wind loads are transmitted through the
reinforced concrete floor slabs (diaphragms) to the reinforced concrete moment
frames (MWFRS), and there are no structural separations in the MWFRS. O.K.
5. The fifth condition is applicable to the rooftop equipment only.
The provisions of the Alternate All-heights Method of IBC 1609.6 can be used to
determine the design wind pressures on the MWFRS.30
Step 2: Use Flowchart 5.10 to determine the design wind pressures on the MWFRS.
1. Determine basic wind speed V from Figure 6-1 or Figure 1609.
From either of these figures, V = 145 mph for Miami, FL. Thus, the building is
located within a wind-borne debris region, since the basic wind speed is greater
than 120 mph.31 According to ASCE/SEI 6.5.9.3 and IBC 1609.1.2, glazing in
buildings located in wind-borne debris regions must be protected with an
approved impact-resistant covering or it must be impact-resistant glazing.32 It is assumed that impact-resistant glazing is used over the entire height of the building, so the building is classified as enclosed.
Footnote 30: This method can also be used to determine design wind pressures on the C&C (IBC 1609.6).
Footnote 31: In this example, the building is located within one mile of the coastal mean high water line where V > 110 mph. This satisfies another condition of a wind-borne debris region.
Footnote 32: Impact-resistant glazing must conform to ASTM E 1886 and ASTM E 1996 or other approved test methods and performance criteria. Glazing in Occupancy Category II, III, or IV buildings located over 60 ft above the ground and over 30 ft above aggregate surface roofs located within 1,500 ft of the building shall be permitted to be unprotected (6.5.9.3). Also see IBC 1609.1.2 for other exceptions.
2. Determine the wind stagnation pressure qs from the footnote to IBC Table 1609.6.2(1).
For V = 145 mph, qs = 0.00256 × 145² = 53.8 psf.
3. Determine importance factor I from Table 6-1 based on occupancy category from
IBC Table 1604.5.
From IBC Table 1604.5, the Occupancy Category is II for a residential building.
From Table 6-1, I = 1.0 for buildings in hurricane prone regions with V >
100 mph.
4. Determine exposure category.
In the design data, the surface roughness is given as C. Assume that Exposure C is
applicable in all directions (see 6.5.6.3).
5. Determine topographic factor K zt .
As noted in the design data, the building is not situated on a hill, ridge or
escarpment. Thus, topographic factor K zt = 1.0 (6.5.7.2).
6. Determine velocity pressure exposure coefficients K z from Table 6-3.
Values of K z for Exposure C are summarized in Table 5.54.
Table 5.54 Velocity Pressure Exposure Coefficient Kz
Height above ground level, z (ft) | Kz
7. Determine net pressure coefficients Cnet for the walls, roof and parapet from IBC Table 1609.6.2(2) assuming the building is enclosed.
Windward wall: Cnet = 0.43 for positive internal pressure; Cnet = 0.73 for negative internal pressure
Leeward wall: Cnet = −0.51 for positive internal pressure; Cnet = −0.21 for negative internal pressure
Side walls: Cnet = −0.66 for positive internal pressure; Cnet = −0.35 for negative internal pressure
Roof: Cnet = −1.09 for positive internal pressure; Cnet = −0.79 for negative internal pressure
Parapet: Cnet = 1.28 for windward; Cnet = −0.85 for leeward
These net pressure coefficients are applicable for wind in both the N-S and E-W directions.
8. Determine net design wind pressures pnet by IBC Eq. 16-34: pnet = qsKzCnetIKzt
Windward walls: pnet = 53.8KzCnet
Leeward walls, side walls and roof: pnet = 53.8 × 1.14 × Cnet = 61.3Cnet
Parapet: pnet = 53.8 × 1.16 × Cnet = 62.4Cnet
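The Alternate All-heights pressure of IBC Eq. 16-34 can be sketched as follows (helper name is ours):

```python
def p_net(qs, kz, cnet, importance=1.0, kzt=1.0):
    """Net design wind pressure per IBC Eq. 16-34:
    pnet = qs * Kz * Cnet * I * Kzt (psf)."""
    return qs * kz * cnet * importance * kzt

# Windward parapet: qs = 53.8 psf, Kz = 1.16 at the parapet top, Cnet = 1.28:
p_parapet_ww = p_net(53.8, 1.16, 1.28)    # about 79.9 psf
p_parapet_lw = p_net(53.8, 1.16, -0.85)   # about -53.0 psf
```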
A summary of the maximum design wind pressures is given in Table 5.55.
Illustrated in Figure 5.19 are the net design wind pressures in the N-S direction with positive internal pressure.33 Included in the figure are the following forces on
the windward and leeward parapets, which add to the overall wind forces in the
direction of analysis:
On the windward parapet: pnet = 62.4 × 1.28 = 79.9 psf
F = 79.9 × 4.5 = 359.6 plf
On the leeward parapet:
p net = 62.4 × (−0.85) = −53.0 psf
F = −53.0 × 4.5 = −238.5 plf
Wind pressures in the E-W direction are the same as in the N-S direction.
Table 5.55 Net Design Wind Pressures pnet
Height above ground level, z (ft) | Net design pressure pnet (psf): + Internal, − Internal
Figure 5.19 Net Design Wind Pressures in the N-S Direction
The MWFRS of buildings whose wind loads have been determined by
IBC 1609.6 must be designed for the wind load cases defined in ASCE/SEI
Figure 6-9 (IBC 1609.6.4.1). In Case 1, the full design wind pressures act on the
projected area perpendicular to each principal axis of the structure. These
pressures are assumed to act separately along each principal axis. The wind pressures on the windward and leeward walls depicted in Figure 5.19 fall under Case 1.
The calculations for the additional load cases that need to be considered in this
example are similar to those shown in Example 5.6.
Part 2: Determine design wind forces on rooftop equipment
According to item 5 under IBC 1609.6.1, the applicable provisions in ASCE/SEI
Chapter 6 are to be used to determine the design wind forces on rooftop equipment.
Calculations for the rooftop equipment in this example are provided in Part 2 of
Example 5.6.
Example 5.9 – Fifteen-Story Office Building using Method 2, Analytical Procedure
For the 15-story office building depicted in Figure 5.20, determine (1) design wind
pressures on the main wind-force-resisting system in both directions and (2) design wind
forces on the cladding using Method 2, Analytical Procedure.
Structural system: structural steel moment
frames in both directions
Typical floor height: 15 ft
Cladding consists of mullions that span 15 ft
between floor slabs and glazing panels that
are supported by the mullions, which are
spaced 5 ft on center. Glazing panels are 5 ft
wide by 7 ft-6 in. high.
Figure 5.20 Fifteen-story Office Building
Location: Chicago, IL
Surface Roughness: B (suburban area with numerous closely spaced obstructions having the size of single-family dwellings and larger)
Topography: Not situated on a hill, ridge or escarpment
Occupancy: Business occupancy where less than 300 people congregate in one area
Part 1: Determine design wind pressures on MWFRS
Step 1: Check if the provisions of 6.5 can be used to determine the design wind
pressures on the MWFRS.
The provisions of 6.5 may be used to determine design wind pressures provided the
conditions of 6.5.1 and 6.5.2 are satisfied. It is clear that these conditions are satisfied
for this office building that does not have response characteristics that make it subject
to across-wind loading or other similar effects, and that is not sited at a location
where channeling effects or buffeting in the wake of upwind obstructions need to be considered.
The provisions of Method 2, Analytical Procedure, can be used to determine the design wind pressures on the MWFRS.
Step 2: Use Flowchart 5.7 to determine the design wind pressures on the MWFRS.
1. For illustrative purposes, it is assumed in this example that the building is
partially enclosed.34
2. Determine whether the building is rigid or flexible and the corresponding gust
effect factor from Flowchart 5.6.
In lieu of determining the natural frequency n1 of the building from a dynamic
analysis, Eq. C6-14 in the commentary of ASCE/SEI 7 is used to compute n1 for
steel moment-resisting frames:
n1 = 22.2/h^0.8 = 22.2/225^0.8 = 0.3 Hz < 1.0 Hz
Since n1 < 1.0 Hz, the building is defined as a flexible building. Thus, the gust
effect factor Gf for flexible buildings must be determined by Eq. 6-8 in 6.5.8.2.
a. Determine g Q and g v
g Q = g v = 3.4
Footnote 34: For office buildings of this type, it is common to assume that the building is enclosed. Where buildings have operable windows or where it is anticipated that debris may compromise some of the windows during a windstorm, it may be more appropriate to assume that the building is partially enclosed.
b. Determine gR
gR = √[2 ln(3,600n1)] + 0.577/√[2 ln(3,600n1)]  (Eq. 6-9)
= √[2 ln(3,600 × 0.3)] + 0.577/√[2 ln(3,600 × 0.3)] = 3.9
c. Determine Iz
z = 0.6h = 0.6 × 225 = 135 ft > zmin = 30 ft  (Table 6-2 for Exposure B)
Iz = c(33/z)^(1/6) = 0.30(33/135)^(1/6) = 0.24  (Eq. 6-5 and Table 6-2 for Exposure B)
d. Determine Q
Lz = ℓ(z/33)^ε = 320(135/33)^(1/3) = 511.8 ft  (Eq. 6-7 and Table 6-2 for Exposure B)
Q = √{1/[1 + 0.63((B + h)/Lz)^0.63]} = √{1/[1 + 0.63((150 + 225)/511.8)^0.63]} = 0.81  (Eq. 6-6)
e. Determine R
From Figure 6-1 or Figure 1609, the basic wind speed V is equal to 90 mph for Chicago, IL.
Vz = b(z/33)^α × V × (88/60) = 0.45(135/33)^(1/4) × 90 × (88/60) = 84.5 ft/sec  (Eq. 6-14 and Table 6-2 for Exposure B)
N1 = n1Lz/Vz = (0.3 × 511.8)/84.5 = 1.8  (Eq. 6-12)
Rn = 7.47N1/(1 + 10.3N1)^(5/3) = (7.47 × 1.8)/[1 + (10.3 × 1.8)]^(5/3) = 0.10  (Eq. 6-11)
ηh = 4.6n1h/Vz = (4.6 × 0.3 × 225)/84.5 = 3.7
Rh = 1/ηh − [1/(2ηh²)](1 − e^(−2ηh)) = 1/3.7 − [1/(2 × 3.7²)](1 − e^(−2 × 3.7)) = 0.23  (Eq. 6-13(a))
ηB = 4.6n1B/Vz = (4.6 × 0.3 × 150)/84.5 = 2.5
RB = 1/ηB − [1/(2ηB²)](1 − e^(−2ηB)) = 1/2.5 − [1/(2 × 2.5²)](1 − e^(−2 × 2.5)) = 0.32  (Eq. 6-13(a))
ηL = 15.4n1L/Vz = (15.4 × 0.3 × 150)/84.5 = 8.2
RL = 1/ηL − [1/(2ηL²)](1 − e^(−2ηL)) = 1/8.2 − [1/(2 × 8.2²)](1 − e^(−2 × 8.2)) = 0.12  (Eq. 6-13(a))
Assume damping ratio β = 0.01 (see C6.5.8 for suggested values for steel buildings).
R = √[(1/β)RnRhRB(0.53 + 0.47RL)] = √{(1/0.01) × 0.10 × 0.23 × 0.32 × [0.53 + (0.47 × 0.12)]} = 0.66  (Eq. 6-10)
f. Determine Gf
Gf = 0.925{[1 + 1.7Iz√(gQ²Q² + gR²R²)]/(1 + 1.7gvIz)}  (Eq. 6-8)
= 0.925{[1 + (1.7 × 0.24)√((3.4² × 0.81²) + (3.9² × 0.66²))]/[1 + (1.7 × 3.4 × 0.24)]} = 0.98
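The full Gf chain of Eqs. 6-5 through 6-14 can be collected into one Python sketch. The function name and keyword defaults are ours; the terrain constants default to the Exposure B values of Table 6-2, and the depth L is taken equal to B for this square-plan building:

```python
import math

def gust_effect_flexible(n1, h, b, v_mph, beta=0.01,
                         c=0.30, ell=320.0, eps=1/3.0,
                         b_bar=0.45, alpha_bar=0.25, z_min=30.0):
    """Gust effect factor Gf for flexible buildings, ASCE/SEI 7-05 Eqs. 6-5 to 6-14.
    Defaults are the Table 6-2 constants for Exposure B; h, b in ft, V in mph."""
    zbar = max(0.6 * h, z_min)                      # equivalent height
    i_z = c * (33.0 / zbar) ** (1 / 6.0)            # Eq. 6-5
    l_z = ell * (zbar / 33.0) ** eps                # Eq. 6-7
    q = math.sqrt(1.0 / (1.0 + 0.63 * ((b + h) / l_z) ** 0.63))       # Eq. 6-6
    v_z = b_bar * (zbar / 33.0) ** alpha_bar * v_mph * 88.0 / 60.0    # Eq. 6-14, ft/s
    n_1 = n1 * l_z / v_z                            # Eq. 6-12
    r_n = 7.47 * n_1 / (1.0 + 10.3 * n_1) ** (5 / 3.0)                # Eq. 6-11

    def r_sub(eta):                                 # Eq. 6-13(a)
        return 1.0 / eta - (1.0 - math.exp(-2.0 * eta)) / (2.0 * eta ** 2)

    r_h = r_sub(4.6 * n1 * h / v_z)
    r_b = r_sub(4.6 * n1 * b / v_z)
    r_l = r_sub(15.4 * n1 * b / v_z)                # depth L = B for a square plan
    r = math.sqrt(r_n * r_h * r_b * (0.53 + 0.47 * r_l) / beta)       # Eq. 6-10
    g_q = g_v = 3.4
    g_r = math.sqrt(2 * math.log(3600 * n1)) \
        + 0.577 / math.sqrt(2 * math.log(3600 * n1))                  # Eq. 6-9
    return 0.925 * (1 + 1.7 * i_z * math.sqrt((g_q * q) ** 2 + (g_r * r) ** 2)) \
        / (1 + 1.7 * g_v * i_z)                                       # Eq. 6-8

# The 15-story building: n1 = 0.3 Hz, h = 225 ft, B = 150 ft, V = 90 mph:
gf = gust_effect_flexible(0.3, 225, 150, 90)   # about 0.98
```

Working from the unrounded intermediate values, the function reproduces the hand calculation above to within rounding.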
3. Determine velocity pressure qz for windward walls along the height of the
building and qh for leeward walls, side walls and roof using Flowchart 5.5.
a. Determine basic wind speed V from Figure 6-1 or Figure 1609.
As was determined above, V = 90 mph for Chicago, IL.
b. Determine wind directionality factor K d from Table 6-4.
For the MWFRS of a building structure, K d = 0.85.
c. Determine importance factor I from Table 6-1 based on occupancy category
from IBC Table 1604.5.
From IBC Table 1604.5, the Occupancy Category is II for an office building.
From Table 6-1, I = 1.0.
d. Determine exposure category.
In the design data, the surface roughness is given as B. Assume that
Exposure B is applicable in all directions (6.5.6.3).
e. Determine topographic factor K zt .
As noted in the design data, the building is not situated on a hill, ridge or
escarpment. Thus, topographic factor K zt = 1.0 (6.5.7.2).
f. Determine velocity pressure exposure coefficients Kz and Kh from
Table 6-3.
According to Note 1 in Table 6-3, values of K z and K h under Case 2 for
Exposure B must be used for MWFRSs in buildings that are not designed
using Figure 6-10 for low-rise buildings. Values of K z and K h are
summarized in Table 5.56.
Table 5.56 Velocity Pressure Exposure Coefficient Kz for MWFRS
Height above ground level, z (ft) | Kz
g. Determine velocity pressures qz and qh by Eq. 6-15.
qz = 0.00256KzKztKdV²I = 0.00256 × Kz × 1.0 × 0.85 × 90² × 1.0 = 17.63Kz psf
A summary of the velocity pressures is given in Table 5.57.
4. Determine pressure coefficients C p for the walls and roof from Figure 6-6.
The pressure coefficients will be the same in both the N-S and E-W directions,
since the building is square in plan.
Windward wall: C p = 0.8 for use with q z
Table 5.57 Velocity Pressure qz for MWFRS
Height above ground level, z (ft) | qz (psf)
Leeward wall (L/B = 150/150 = 1.0): C p = −0.5 for use with qh
Side wall: C p = −0.7 for use with qh
Roof (normal to ridge with θ < 10 degrees and parallel to ridge for all θ with h/L = 225/150 = 1.5 > 1.0)35:
Cp = −1.04, −0.18 from windward edge to h/2 = 112.5 ft for use with qh36
Cp = −0.7, −0.18 from 112.5 ft to 150 ft for use with qh
5. Determine qi for the walls and roof using Flowchart 5.5.
According to 6.5.12.2.1, it is permitted to take qi = qh = 21.9 psf for windward
walls, side walls, leeward walls and roofs of partially enclosed buildings.
6. Determine internal pressure coefficients (GC pi ) from Figure 6-5.
For a partially enclosed building, (GC pi ) = +0.55, − 0.55.
7. Determine design wind pressures p z and ph by Eq. 6-19.
Windward walls:
pz = qzGfCp − qh(GCpi)
= (0.98 × 0.8 × qz) − 21.9(±0.55)
= (0.78qz ∓ 12.1) psf (external ± internal pressure)
Leeward wall, side walls and roof:
ph = qhGfCp − qh(GCpi)
= (21.9 × 0.98 × Cp) − 21.9(±0.55)
= (21.5Cp ∓ 12.1) psf (external ± internal pressure)
A summary of the maximum design wind pressures that are valid in both the N-S
and E-W directions is given in Table 5.58.
Illustrated in Figure 5.21 are the external design wind pressures in the N-S or E-W directions. When considering horizontal wind forces on the MWFRS, it is clear that the effects from the internal pressure cancel out. On the roof, the effects from internal pressure add directly to those from the external pressure.
Footnote 35: The smaller uplift pressures on the roof due to Cp = −0.18 may govern the design when combined with roof live load or snow loads. This pressure is not shown in this example, but in general must be considered.
Footnote 36: Cp = −1.3 may be reduced based on the area over which it is applicable = (225/2) × 150 = 16,875 sq ft > 1,000 sq ft. Reduction factor = 0.8 from Figure 6-6. Thus, Cp = 0.8 × (−1.3) = −1.04.
The MWFRS of buildings whose wind loads have been determined by 6.5.12.2.1
must be designed for the four wind load cases defined in Figure 6-9 (6.5.12.3). In
Case 1, the full design wind pressures act on the projected area perpendicular to
each principal axis of the structure. These pressures are assumed to act separately
along each principal axis. The wind pressures on the windward and leeward walls depicted in Figure 5.21 fall under Case 1.
In Case 2, 75 percent of the design wind pressures on the windward and leeward
walls are applied on the projected area perpendicular to each principal axis of the
building along with a torsional moment. The wind pressures and torsional
moments, both of which vary over the height of the building, are applied
separately for each principal axis.
Table 5.58 Design Wind Pressures p for MWFRS
Height above ground level, z (ft) | q (psf) | qGCp (psf) | qh(GCpi) = ±12.1 psf | Net pressure p (psf): +(GCpi), −(GCpi)
*Roof: from windward edge to 112.5 ft; from 112.5 ft to 150.0 ft
Figure 5.21 Design Wind Pressures in the N-S or E-W Directions
As an example of the calculations that need to be performed over the height of the
building, the wind pressures and torsional moment at the mean roof height for
Case 2 are as follows:
0.75 PWX = 0.75 × 17.1 = 12.8 psf (windward wall)
0.75PLX = 0.75 × 10.8 = 8.1 psf (leeward wall)
For flexible buildings, the eccentricity e that is used to determine the torsional
moment MT is given by Eq. 6-21. Assuming that the elastic shear center and
the center of mass coincide (i.e., eR = 0),
e = [eQ + 1.7Iz√((gQQeQ)² + (gRReR)²)]/[1 + 1.7Iz√((gQQ)² + (gRR)²)]
= [(0.15 × 150) + (1.7 × 0.24)√([3.4 × 0.81 × (0.15 × 150)]² + 0)]/[1 + (1.7 × 0.24)√((3.4 × 0.81)² + (3.9 × 0.66)²)]
= 18.8 ft
The eccentricity determined by Eq. 6-21 is less than that for a rigid building,
which is equal to 0.15 × 150 = 22.5 ft (see Figure 6-9). For conservatism, an
eccentricity of 22.5 ft is used in this example.
M T = 0.75( PWX + PLX ) B X e X
= 0.75(17.1 + 10.8) × 150 × (±0.15 × 150)
= ±70,622 ft-lb/ft
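The eccentricity and torsional-moment calculations above can be reproduced numerically as a check (a minimal sketch using the gust-response quantities gQ = 3.4, Q = 0.81, gR = 3.9, R = 0.66 and Iz = 0.24 from Part 1 of the example, with eR = 0 as assumed in the text):

```python
import math

# Gust-response quantities from Part 1 of the example (flexible building)
g_Q, Q = 3.4, 0.81
g_R, R = 3.9, 0.66
I_z = 0.24               # turbulence intensity at the equivalent height
B = 150.0                # building width perpendicular to the wind, ft
e_Q, e_R = 0.15 * B, 0.0  # eR = 0: elastic shear center and mass center coincide

# Eq. 6-21: eccentricity for flexible buildings
num = e_Q + 1.7 * I_z * math.hypot(g_Q * Q * e_Q, g_R * R * e_R)
den = 1 + 1.7 * I_z * math.hypot(g_Q * Q, g_R * R)
e = num / den
print(round(e, 1))  # 18.8 ft < 22.5 ft, so 22.5 ft is used conservatively

# Case 2 torsional moment with the conservative eccentricity of 0.15B
P_WX, P_LX = 17.1, 10.8  # windward and leeward pressures at mean roof height, psf
M_T = 0.75 * (P_WX + P_LX) * B * 0.15 * B
print(round(M_T))  # 70622 ft-lb/ft of height
```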
In Case 3, 75 percent of the wind pressures of Case 1 are applied to the building
simultaneously. This accounts for wind along the diagonal of the building. In
Case 4, 75 percent of the wind pressures and torsional moments defined in Case 2
act simultaneously on the building.
Figure 5.22 illustrates Load Cases 1 through 4 for MWFRS wind pressures acting
on the projected area at the mean roof height. Similar loading diagrams can be
obtained at other locations below the mean roof height. The minimum design
wind loading prescribed in 6.1.4.1 must be considered as a load case in addition to
those load cases described above.
[Figure values: Case 1 applies the full pressures of 17.1 psf (windward) and 10.8 psf (leeward) separately along each principal axis; Case 2 applies 12.8 psf and 8.1 psf together with a torsional moment of 70,622 ft-lb/ft; Case 3 applies 12.8 psf and 8.1 psf simultaneously along both axes; Case 4 applies 9.6 psf and 6.1 psf along both axes together with a combined torsional moment of 105,933 ft-lb/ft.]
Figure 5.22 Load Cases 1 through 4 at the Mean Roof Height
The above wind pressure calculations assume a uniform design pressure over the
incremental heights above ground level, which are given in Table 6-3. Alternatively,
wind pressures can be computed at each floor level, and a uniform pressure is assumed
between midstory heights above and below the floor level under consideration.
Shown in Figure 5.23 are the wind pressures computed at each floor level for this
example building. It can be shown that the base shears in Figures 5.21 and 5.23 are
virtually the same.
[Figure values: windward wall pressures computed at the floor levels increase with height from 7.9 psf to 17.1 psf at the mean roof height; the leeward wall pressure is 10.8 psf; the internal pressure is ±12.1 psf; the roof pressures are 22.4 psf and 15.1 psf.]
Figure 5.23 Design Wind Pressures Computed at the Floor Levels in the N-S or E-W Directions
Part 2: Determine design wind pressures on cladding
Use Flowchart 5.8 to determine the design wind pressures on the cladding.
1. It is assumed that the building is partially enclosed.
2. Determine velocity pressures q z and qh using Flowchart 5.5.
For buildings with a mean roof height greater than 60 ft, the velocity pressures on
the C&C on the windward walls vary with height above the base of the building.
Most of the quantities needed to determine qz and qh were determined in Part 1,
Step 2, item 3 of this example.
The velocity exposure coefficients K z and K h are summarized in Table 5.59.
Table 5.59 Velocity Pressure Exposure Coefficient Kz for C&C
[Tabulated Kz values for each height above ground level z (ft) not reproduced.]
According to Note 1 in Table 6-3, values of Kz under Case 1 must be used for
C&C in Exposure B. That is why the values of Kz in Table 5.59 (Case 1) differ
from those in Table 5.47 (Case 2) for the MWFRS up to a height of 25 ft.
Velocity pressures q z and qh are determined by Eq. 6-15:
q z = 0.00256 K z K zt K d V 2 I = 0.00256 × K z × 1.0 × 0.85 × 90 2 × 1.0 = 17.63K z psf
A summary of the velocity pressures is given in Table 5.60.
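With the constants of this example, Eq. 6-15 collapses to qz = 17.63Kz. A minimal sketch (the Kz argument is whatever tabulated Case 1 value applies at height z):

```python
def q_z(K_z, K_zt=1.0, K_d=0.85, V=90.0, I=1.0):
    """Velocity pressure by Eq. 6-15, in psf (V in mph)."""
    return 0.00256 * K_z * K_zt * K_d * V**2 * I

print(round(q_z(1.0), 2))  # 17.63 psf per unit Kz
```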
3. Determine external pressure coefficients (GCp) from Figure 6-17 for Zones 4 and 5.
Pressure coefficients for Zones 4 and 5 can be determined from Figure 6-17 based
on the effective wind area.
Table 5.60 Velocity Pressure qz for C&C
[Tabulated qz values (psf) for each height above ground level z (ft) not reproduced.]
In general, the effective wind area is the larger of the tributary area and the span
length multiplied by an effective width that need not be less than one-third the
span length.
Effective wind area for the mullions = larger of 15 × 5 = 75 sq ft or 15 × (15/3) =
75 sq ft.
Effective wind area for the glazing panels = larger of 7.5 × 5 = 37.5 sq ft
(governs) or 5 × (5/3) = 8.3 sq ft.
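The effective-wind-area rule above can be written directly as a small helper (a sketch; the arguments are the tributary area and span of the element):

```python
def effective_wind_area(tributary_area, span):
    """Larger of the tributary area and the span length multiplied by an
    effective width that need not be less than one-third the span length."""
    return max(tributary_area, span * span / 3.0)

print(effective_wind_area(15 * 5, 15.0))   # mullions: 75.0 sq ft
print(effective_wind_area(7.5 * 5, 5.0))   # glazing panels: 37.5 sq ft governs
```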
The pressure coefficients from Figure 6-17 for the cladding are summarized in
Table 5.61.
Table 5.61 External Pressure Coefficients (GCp) for C&C
[Tabulated (GCp) values for the mullions and glazing panels in Zones 4 and 5 not reproduced.]
The width of the end zone (Zone 5) a = 0.1(least horizontal dimension) = 0.1 ×
150 = 15 ft, which is greater than 3 ft (see Note 8 in Figure 6-17).
4. Determine internal pressure coefficients (GC pi ) from Figure 6-5.
For a partially enclosed building, (GC pi ) = +0.55, − 0.55.
5. Determine design wind pressure p by Eq. 6-23 in Zones 4 and 5.
p = q(GCp) − qh(GCpi)
  = q(GCp) − 21.9(±0.55)
  = [q(GCp) ∓ 12.1] psf (external ± internal pressure)
where q = q z for positive external pressure and q = qh for negative external
pressure (6.5.12.4.2). Note that q i = q h for negative internal pressure in partially
enclosed buildings. Also, qi = qh may be conservatively used for positive internal pressure.
The maximum design wind pressures for positive and negative internal pressures
are summarized in Table 5.62. The maximum positive pressure, which varies with
height, is obtained with negative internal pressure. Similarly, the maximum
negative pressure, which is a constant over the height of the building, is obtained
with positive internal pressure. The pressures in Table 5.62 are applied normal to
the cladding and act over the respective tributary area. The computed positive and
negative pressures are greater than the minimum values prescribed in 6.1.4.2 of
+10 psf and -10 psf, respectively.
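The pressure combination described above can be sketched as a small helper (the (GCp) values in the example calls are illustrative placeholders, not the Table 5.61 values):

```python
def cc_pressure(q, GCp, q_h=21.9, GCpi=0.55):
    """Eq. 6-23: p = q(GCp) - qh(GCpi), psf, for a partially enclosed
    building. The controlling positive pressure combines positive external
    with negative internal pressure; the controlling negative pressure
    combines negative external (q = qh) with positive internal pressure."""
    if GCp > 0:
        return q * GCp + q_h * GCpi
    return q_h * GCp - q_h * GCpi

# hypothetical coefficients for illustration only
print(round(cc_pressure(21.9, 0.9), 1))    # controlling positive pressure
print(round(cc_pressure(21.9, -0.9), 1))   # controlling negative pressure
```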
Table 5.62 Design Wind Pressures p on C&C
[Tabulated design pressures p (psf) for the mullions and glazing panels in Zones 4 and 5 at each height above ground level z not reproduced.]
Example 5.10 – Agricultural Building using Method 2, Analytical
For the agricultural building depicted in Figure 5.24, determine (1) design wind pressures
on the main wind-force-resisting system in both directions and (2) design wind pressures
on the roof trusses using Method 2, Analytical Procedure.
5 @ 15′ = 75′
Structural system: wood frames (no walls)
Wood trusses spaced 3′ on center
3/4″ plywood sheathing (4′ × 8′ sheets)
Figure 5.24 Agricultural Building
Location: Ames, IA
Surface roughness: C (open terrain with scattered obstructions having heights less than 30 ft)
Topography: Not situated on a hill, ridge or escarpment
Occupancy: Utility and miscellaneous occupancy
Part 1: Determine design wind pressures on MWFRS
Step 1: Check if the provisions of 6.5 can be used to determine the design wind
pressures on the MWFRS.
The provisions of 6.5 may be used to determine design wind pressures provided the
conditions of 6.5.1 and 6.5.2 are satisfied. It is clear that these conditions are satisfied
for this agricultural building that does not have response characteristics that make it
subject to across-wind loading or other similar effects, and that is not sited at a
location where channeling effects or buffeting in the wake of upwind obstructions
need to be considered.
The provisions of Method 2, Analytical Procedure, can be used to determine the
design wind pressures on the MWFRS.*
Step 2: Use Flowchart 5.7 to determine the design wind pressures on the MWFRS.
1. Since the building does not have any walls, it is classified as open.
2. Determine velocity pressure qh using Flowchart 5.5.
a. Determine basic wind speed V from Figure 6-1 or Figure 1609.
From Figure 6.1 or Figure 1609, V = 90 mph for Ames, IA.
b. Determine wind directionality factor K d from Table 6-4.
For the MWFRS of a building structure, K d = 0.85.
*Even though the building is less than 60 ft in height, Method 1, Simplified Procedure, cannot be used to determine the wind pressures, since the building is not enclosed.
c. Determine importance factor I from Table 6-1 based on occupancy category
from IBC Table 1604.5.
From IBC Table 1604.5, the Occupancy Category is I for an agricultural
facility. From Table 6-1, I = 0.87.
d. Determine exposure category.
In the design data, the surface roughness is given as C. Assume that
Exposure C is applicable in all directions (6.5.6.3).
e. Determine topographic factor K zt .
As noted in the design data, the building is not situated on a hill, ridge or
escarpment. Thus, topographic factor Kzt = 1.0 (6.5.7.2).
f. Determine velocity pressure exposure coefficient K h from Table 6-3.
Mean roof height = (20 + 30)/2 = 25 ft
From Table 6-3, K h = 0.94 for Exposure C.
g. Determine velocity pressure qh by Eq. 6-15.
qh = 0.00256 K h K zt K d V 2 I = 0.00256 × 0.94 × 1.0 × 0.85 × 90 2 × 0.87 = 14.4 psf
3. Determine gust effect factor G from Flowchart 5.6.
Assuming the building is rigid, G may be computed by Eq. 6-4 or may be taken as
0.85. For simplicity, use G = 0.85.
4. Determine net pressure coefficients C N for the pitched roof.
a. Wind in the E-W direction (γ = 0°, 180°)
Figure 6-18B is used to determine the net pressure coefficients CNW and CNL
on the windward and leeward portions of the roof surface for wind in the E-W
direction. These net pressure coefficients include contributions from both the
top and bottom surfaces of the roof (see Note 1 in Figure 6-18B).
The wind pressures on the roof depend on the level of wind flow restriction
below the roof. Clear wind flow implies that little (less than or equal to 50
percent) or no portion of the cross-section below the roof is blocked by goods
or materials (see Note 2 in Figure 6-18B). Obstructed wind flow means that a
significant portion (more than 50 percent) of the cross-section is blocked.
Since the usage of the space below the roof is not known, wind pressures will
be determined for both situations.
A summary of the net pressure coefficients is given in Table 5.63. The roof
angle in this example is equal to approximately 18.4 degrees, and the values
of CNW and CNL were obtained by linear interpolation (see Note 3 in Figure 6-18B).
Table 5.63 Windward and Leeward Net Pressure Coefficients CNW and CNL for Wind in the E-W Direction
[Tabulated coefficients for clear and obstructed wind flow not reproduced.]
b. Wind in the N-S direction (γ = 90°)
Figure 6-18D is used to determine the net pressure coefficients CN at various
distances from the windward edge of the roof.
Net pressure coefficients are given in Table 5.64.
Table 5.64 Net Pressure Coefficients CN for Wind in the N-S Direction
Distance from
Windward Edge
≤ h = 25 ′
Clear Wind
Wind Flow
> h = 25′, ≤ 2h = 50′
≥ 2h = 50′
5. Determine net design pressure p by Eq. 6-25.
p = q h GC N = 14.4 × 0.85 × C N = 12.2C N psf
A summary of the net design wind pressures is given in Table 5.65 for wind in the
E-W direction and in Table 5.66 for wind in the N-S direction. These pressures
act perpendicular to the roof surface.
Table 5.65 Net Design Wind Pressures (psf) for Wind in the E-W Direction
[Tabulated pressures for clear and obstructed wind flow not reproduced.]
Table 5.66 Net Design Wind Pressures (psf) for Wind in the N-S Direction
[Tabulated pressures for clear and obstructed wind flow at distances from the windward edge of ≤ h = 25 ft, > h to ≤ 2h = 50 ft, and > 2h = 50 ft not reproduced.]
The minimum design wind loading prescribed in 6.1.4.1 must be considered as a
load case in addition to Load Cases A and B described above.
Part 2: Determine design wind pressures on roof trusses
Use Flowchart 5.8 to determine the design wind pressures on the roof trusses, which are components and cladding.
1. Determine velocity pressure qh using Flowchart 5.5.
Velocity pressure was determined in Part 1, Step 2, item 2 of this example and is
equal to 14.4 psf.
2. Determine gust effect factor G from Flowchart 5.6.
The gust effect factor was determined in Part 1, Step 2, item 3 of this example and
is equal to 0.85.
3. Determine net pressure coefficients C N for the pitched roof.
Figure 6-19B is used to determine the net pressure coefficients CN for Zones 1, 2
and 3 on the roof. In this example, h / L = 25 / 60 = 0.42 , which is between the
limits of 0.25 and 1.0.
The magnitude of the net pressure coefficient depends on the effective wind area,
which is the larger of the tributary area and the span length multiplied by an
effective width that need not be less than one-third the span length.
Effective wind area = larger of 60 × 3 = 180 sq ft or 60 × (60/3) = 1,200 sq ft (governs)
The width of the zones a = least of 0.1 (least horizontal dimension) = 0.1 × 60 =
6.0 ft (governs) or 0.4h = 0.4 × 25 = 10.0 ft. This value of a is greater than 0.04
(least horizontal dimension) = 0.04 × 60 = 2.4 ft or 3 ft (see Note 6 in Figure 6-19B).
The effective wind area is greater than 4.0a 2 = 4.0 × 6.0 2 = 144 sq ft; this
information is also needed to determine the net pressure coefficients.
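The edge-zone width rule used here (and in the cladding example of Example 5.9 above) can be expressed as a small helper (a sketch of the rule as applied in these examples):

```python
def zone_width(least_horiz_dim, mean_roof_height):
    """Edge-zone width a: smaller of 10% of the least horizontal dimension
    and 0.4h, but not less than 4% of the least horizontal dimension or 3 ft."""
    a = min(0.1 * least_horiz_dim, 0.4 * mean_roof_height)
    return max(a, 0.04 * least_horiz_dim, 3.0)

print(zone_width(60.0, 25.0))    # 6.0 ft (this example)
print(zone_width(150.0, 150.0))  # 15.0 ft (cladding example, Part 2 above)
```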
Like in the case of the MWFRS, the magnitude of C N depends on whether the
wind flow is clear or obstructed. Both situations are examined in this example.
For clear wind flow, CN = 1.15, −1.06 in Zones 1, 2 and 3. Linear interpolation
was used to determine these values for a roof angle of 18.4 degrees and an
effective wind area > 4.0a 2 (see Note 3 in Figure 6-19B).
For obstructed wind flow, CN = 0.50, −1.51 in Zones 1, 2 and 3 by linear interpolation.
4. Determine net design wind pressure p by Eq. 6-26.
p = q h GC N = 14.4 × 0.85 × C N = 12.2C N psf
For clear wind flow: p = 14.0 psf, -12.9 psf in Zones 1, 2 and 3
For obstructed wind flow: p = 6.1 psf, -18.4 psf in Zones 1, 2 and 3
In the case of obstructed wind flow, the net positive pressure must be increased to
10 psf to satisfy the minimum requirements of 6.1.4.2.
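The net pressures and the 6.1.4.2 minimum can be checked with a short sketch (using the rounded qhG = 12.2 psf from the calculation above):

```python
def net_pressure(C_N, qhG=12.2):
    """Eq. 6-26 net design pressure, psf, with the minimum design
    pressures of +/-10 psf per 6.1.4.2 applied."""
    p = qhG * C_N
    return max(p, 10.0) if p > 0 else min(p, -10.0)

print(round(net_pressure(1.15), 1), round(net_pressure(-1.06), 1))  # clear: 14.0 -12.9
print(round(net_pressure(0.50), 1), round(net_pressure(-1.51), 1))  # obstructed: 10.0 -18.4
```

Note that the obstructed-flow positive pressure (6.1 psf) is governed by the 10 psf minimum, as stated above.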
Illustrated in Figure 5.25 are the loading diagrams on a typical interior roof truss
for clear wind flow. Similar loading diagrams can be obtained for obstructed wind flow.
14.0 × 3 = 42.0 plf
12.9 × 3 = 38.7 plf
Figure 5.25 Roof Truss Loading Diagrams for Clear Wind Flow
Example 5.11 – Freestanding Masonry Wall using Method 2, Analytical
Determine the design wind forces on the architectural freestanding masonry screen wall
depicted in Figure 5.26 using Method 2, Analytical Procedure (6.5.14).
Screen wall is 40% open
Figure 5.26 Freestanding Masonry Wall
Basic wind speed: 90 mph
Exposure category: B
Topography: Not situated on a hill, ridge or escarpment
Occupancy Category: I (low hazard to human life)
Use Flowchart 5.9 to determine the design wind force on the freestanding wall.
1. Determine velocity pressure qh using Flowchart 5.5.
a. Determine basic wind speed V from Figure 6-1 or Figure 1609.
The basic wind speed was given in the design data as V = 90 mph.
b. Determine wind directionality factor K d from Table 6-4.
For solid signs or solid freestanding walls, K d = 0.85.
c. Determine importance factor I from Table 6-1 based on occupancy category
from IBC Table 1604.5.
The Occupancy Category is given as I in the design data. From Table 6-1, I = 0.87.
d. Determine exposure category.
Exposure B is given in the design data.
e. Determine topographic factor K zt .
As noted in the design data, the building is not situated on a hill, ridge or
escarpment. Thus, topographic factor K zt = 1.0 (6.5.7.2).
f. Determine velocity pressure exposure coefficient K h from Table 6-3.
From Table 6-3, K h = 0.70 in Case 2 under Exposure B at a height of 30 ft.
g. Determine velocity pressure qh by Eq. 6-15.
qh = 0.00256 K h K zt K d V 2 I = 0.00256 × 0.70 × 1.0 × 0.85 × 90 2 × 0.87 = 12.3 psf
2. Determine gust effect factor G from Flowchart 5.6.
Assuming the wall is rigid, G may be computed by Eq. 6-4 or may be taken as
0.85. For simplicity, use G = 0.85.
3. Determine net force coefficient C f from Figure 6-20.
In general, Cases A, B and C must be considered for freestanding walls. A
different loading condition is considered in each case.
In this example, the aspect ratio B/s = 45/30 = 1.5 < 2. Therefore, according to
Note 3 in Figure 6-20, only Cases A and B need to be considered.
From Figure 6-20, Cf = 1.43 (by linear interpolation) for Cases A and B with s/h =
1 and B/s = 1.5 (see Note 5 in Figure 6-20).
According to Note 2 in Figure 6-20, force coefficients for solid freestanding walls
with openings may be multiplied by the reduction factor [1 − (1 − ε)^1.5], where ε
is equal to the ratio of the solid area to gross area of the wall. In this example,
ε = 0.6 , and the reduction factor = 0.75. Therefore, C f = 0.75 × 1.43 = 1.07.
4. Determine design wind force F from Eq. 6-27 (6.5.14).
F = qhGC f As = 12.3 × 0.85 × 1.07 × (30 × 45) = 15,102 lbs
In Case A, this force acts at a distance equal to 5 percent of the height of the wall
above the geometric center of the wall, i.e., the resultant force is located at (30/2)
+ (0.05 × 30) = 16.5 ft above the ground level (see Figure 6-20).
In Case B, the resultant force is located 16.5 ft above the ground level and (45/2)
– (0.2 × 45) = 13.5 ft from the windward edge of the wall (see Figure 6-20).
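The porosity reduction and the design force of Eq. 6-27 can be verified numerically (a sketch using the example's values, with Cf rounded to two decimals as in the text):

```python
# Design wind force on the freestanding wall (Eq. 6-27)
q_h, G = 12.3, 0.85
B, s = 45.0, 30.0        # wall length and height, ft
eps = 0.60               # solidity ratio (the screen wall is 40% open)

Cf_solid = 1.43          # Figure 6-20, s/h = 1 and B/s = 1.5
reduction = 1 - (1 - eps) ** 1.5
Cf = round(reduction * Cf_solid, 2)
F = q_h * G * Cf * (s * B)

print(round(reduction, 2), Cf, round(F))  # 0.75 1.07 15102
```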
The minimum design wind loading prescribed in 6.1.4.1 must be considered as a load
case in addition to Load Cases A and B described above.
CHAPTER 6
Earthquake Loads
According to IBC 1613.1, the effects of earthquake motion on structures and their
components are to be determined in accordance with ASCE/SEI 7-05, excluding
Chapter 14 (Material Specific Seismic Design and Detailing Requirements) and
Appendix 11A (Quality Assurance Provisions). These chapters from ASCE/SEI 7 have
been excluded because the IBC includes quality assurance provision in Chapter 17 and
structural material provisions in Chapters 19 through 23.
A summary of the chapters of ASCE/SEI 7 that contain earthquake load provisions that
are referenced by the 2009 IBC is provided in Table 6.1. The primary focus of the
discussion in this chapter of the publication is on Chapters 11, 12, 13, 15, 20, 21 and 22
of ASCE/SEI 7.
Table 6.1 Summary of Chapters in ASCE/SEI 7-05 that Are Referenced by the 2009
IBC for Earthquake Load Provisions
Chapter 11: Seismic Design Criteria
Chapter 12: Seismic Design Requirements for Building Structures
Chapter 13: Seismic Design Requirements for Nonstructural Components
Chapter 15: Seismic Design Requirements for Nonbuilding Structures
Chapter 16: Seismic Response History Procedures
Chapter 17: Seismic Design Requirements for Seismically Isolated Structures
Chapter 18: Seismic Design Requirements for Structures with Damping Systems
Chapter 19: Soil Structure Interaction for Seismic Design
Chapter 20: Site Classification Procedure for Seismic Design
Chapter 21: Site-Specific Ground Motion Procedures for Seismic Design
Chapter 22: Seismic Ground Motion and Long Period Transition Maps
Chapter 23: Seismic Design Reference Documents
IBC 1613.6 contains modifications to the ASCE/SEI 7-05 earthquake load provisions. Note that the
modifications are optional rather than mandatory.
IBC 1613.1 lists four exemptions to the seismic design requirements presented in this section:
1. Detached one- and two-family dwellings that are assigned to Seismic Design Category (SDC) A, B or C (i.e., SDS < 0.5 and SD1 < 0.2), or located where SS < 0.4.
2. Conventional light-frame wood construction that conforms to IBC 2308.
3. Agricultural storage structures where human occupancy is incidental.
4. Structures that are covered under other regulations, such as vehicular bridges,
electrical transmission towers, hydraulic structures, buried utility lines and
nuclear reactors.
ASCE/SEI 11.1.2 contains essentially the same exemptions as the IBC; the only
difference occurs in the second exemption, which is stated in ASCE/SEI 11.1.2 as
follows: detached one- and two-family wood-frame dwellings not included in Exception
1 that are less than or equal to two stories, satisfying the limitations of, and constructed in
accordance with the International Residential Code® (IRC®).
The seismic requirements of the IBC need not be applied to structures that meet at least
one of these four criteria.
Seismic requirements for existing buildings are contained in IBC 1613.3. In general,
additions, alterations, repairs or change of occupancy of structures are governed by the
provisions of IBC Chapter 34, Existing Structures. The seismic resistance requirements
of IBC 3403.4.1 and 3404.4.1 must be satisfied where additions and alterations are made
to an existing building, respectively.
IBC 3408 provides guidance with respect to the impact a change of occupancy has on an
existing building.
Seismic Ground Motion Values
Mapped Acceleration Parameters
IBC Figures 1613.5(1), 1613.5(2) and ASCE/SEI Figures 22-1, 22-2 contain contour
maps of the conterminous United States giving SS and S1, which are the mapped
maximum considered earthquake (MCE) spectral response accelerations at periods of
0.2 sec and 1.0 sec, respectively, for a Site Class B soil profile and 5-percent damping.
Definitions of Seismic Design Category and spectral response accelerations SS, SDS and SD1 are given in
subsequent sections of this publication.
Limitations for conventional light-frame wood construction are given in IBC 2308.2.
The MCE spectral response accelerations, which are directly related to base shear, reflect the maximum
level of earthquake ground shaking that is considered reasonable for the design of new structures.
IBC Figures 1613.5(3) through 1613.5(14) and ASCE/SEI Figures 22-3 through 22-14
contain similar contour maps for specific regions of the conterminous United States,
Alaska, Hawaii, Puerto Rico and U.S. commonwealths and territories.
In lieu of the maps, MCE spectral response accelerations can be obtained from the
Ground Motion Parameter Calculator that can be accessed on the United States
Geological Survey (USGS) website. Accelerations are output for a specific latitude-longitude or zip code, which is input by the user. More accurate spectral accelerations for
a given site are typically obtained by inputting a latitude-longitude, especially in areas
where the mapped ground motions are highly variable or where a zip code encompasses a
large area.
Where SS ≤ 0.15 and S1 ≤ 0.04, the structure is permitted to be assigned to SDC A
(IBC 1613.5.1). These areas are considered to have very low seismic risk based solely on
the mapped ground motions.
Site Class
Six site classes are defined in IBC Table 1613.5.2 and ASCE/SEI Table 20.3-1. A site is
to be classified as one of these six site classes based on one of three soil properties (soil
shear wave velocity, standard penetration resistance or blow count and soil undrained
shear strength) measured over the top 100 ft of the site. Steps for classifying a site are
given in IBC 1613.5.5.1 and ASCE/SEI 20.3. Methods of determining the site class
where the soil is not homogeneous over the top 100 ft are provided in IBC 1613.5.5 and
ASCE/SEI 20.4.
Site Class A is hard rock, which is typically found in the eastern United States, while Site
Class B is a softer rock that is commonly found in western parts of the country. Site
Classes C, D and E indicate progressively softer soils, while Site Class F indicates soil so
poor that a site-specific geotechnical investigation and dynamic site response analysis is
required to determine site coefficients. Site-specific ground motion procedures for
seismic design are given in ASCE/SEI Chapter 21.
At locations or in cases where soil property measurements to a depth of 100 ft are not
feasible, the registered design professional that is responsible for preparing the
geotechnical report may estimate soil properties from geological conditions. When soil
properties are not known in sufficient detail to determine the site class in accordance with
code provisions, Site Class D must be used, unless the building official requires that
Site Class E or F be used at the site.
Site Coefficients and Adjusted MCE Spectral Response Acceleration Parameters
Once the mapped spectral accelerations and site class have been established, the MCE
spectral response acceleration for short periods SMS and at 1-second period SM1 adjusted
for site class effects are determined by IBC Eqs. 16-36 and 16-37, respectively, or
ASCE/SEI Eqs. 11.4-1 and 11.4-2, respectively:
SMS = Fa SS
SM1 = Fv S1
where Fa = short-period site coefficient determined from IBC Table 1613.5.3(1) or
ASCE/SEI Table 11.4-1 and Fv = long-period site coefficient determined from
IBC Table 1613.5.3(2) or ASCE/SEI Table 11.4-2. For site classes other than B, an
adjustment to the mapped spectral response accelerations is necessary.
Typically, ground motion is amplified in softer soils (Site Classes C through E) and
attenuated in stiffer soils (Site Class A). This can be observed in the tables where the
magnitudes of Fa and Fv increase going from Site Class A to F for a given mapped
ground motion acceleration. The only exception to this occurs for short periods where
S S ≥ 1.0 and the Site Class changes from D to E. Very soft soils are not capable of
amplifying the short-period components of subsurface rock motion; in fact,
deamplification occurs in such cases.
Design Spectral Response Acceleration Parameters
Five-percent damped design spectral response accelerations at short periods SDS and at 1-sec period SD1 are determined by IBC Eqs. 16-38 and 16-39, respectively, or ASCE/SEI
Eqs. 11.4-3 and 11.4-4, respectively:
SDS = (2/3)SMS
SD1 = (2/3)SM1
The design ground motion is 2/3 = 1/1.5 times the soil-modified MCE ground motion; the
basis of this factor is that it is highly unlikely that a structure designed by the code
provisions will collapse when subjected to ground motion that is 1.5 times as strong as
the design ground motion. More information on the two-thirds adjustment factor can be
found in Chapter 3 of the NEHRP Commentary (FEMA 450).
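The chain from mapped to design accelerations can be sketched in a few lines (the site coefficients in the example call, Fa = 1.0 and Fv = 1.5, correspond to Site Class D with SS ≥ 1.25 and S1 ≥ 0.5; the mapped values themselves are illustrative, not tied to a particular site):

```python
def design_accels(S_S, S_1, F_a, F_v):
    """SMS = Fa*SS and SM1 = Fv*S1 (Eqs. 11.4-1, 11.4-2); the design
    values SDS and SD1 are two-thirds of these (Eqs. 11.4-3, 11.4-4)."""
    S_MS, S_M1 = F_a * S_S, F_v * S_1
    return (2.0 / 3.0) * S_MS, (2.0 / 3.0) * S_M1

# illustrative mapped accelerations for a Site Class D site
S_DS, S_D1 = design_accels(1.5, 0.6, 1.0, 1.5)
print(round(S_DS, 2), round(S_D1, 2))  # 1.0 0.6
```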
Occupancy Category and Importance Factor
Occupancy categories are defined in IBC Table 1604.5 and ASCE/SEI Table 1-1. An
importance factor I is assigned to a building or structure in accordance with ASCE/SEI
Table 11.5-1 based on its occupancy category. Larger values of I are assigned to high
occupancy and essential facilities to increase the likelihood that such structures would
suffer less damage and continue to function during and following a design earthquake.
Seismic Design Category
All buildings and structures must be assigned to a Seismic Design Category (SDC) in
accordance with IBC 1613.5.6 or ASCE/SEI 11.6. In general, a SDC is a function of
occupancy or use and the design spectral accelerations at the site.
The SDC is determined twice: first as a function of S DS by IBC Table 1613.5.6(1) or
ASCE/SEI Table 11.6-1 and second as a function of S D1 by IBC Table 1613.5.6(2) or
ASCE/SEI Table 11.6-2. The more severe seismic design category of the two governs.
Where S1 is less than 0.75, the SDC may be determined by IBC Table 1613.5.6(1) or
ASCE/SEI Table 11.6-1 based solely on S DS provided all of the four conditions listed
under IBC 1613.5.6.1 or ASCE/SEI 11.6 are satisfied.
Where S1 is greater than or equal to 0.75, conditions under which SDC E and SDC F are
to be assigned are also given in IBC 1613.5.6 and ASCE/SEI 11.6.
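The two-table lookup can be sketched as follows (a simplified sketch: the thresholds are those of ASCE/SEI Tables 11.6-1 and 11.6-2, the E/F assignment for S1 ≥ 0.75 is included, and the four conditions of 11.6 that permit using Table 11.6-1 alone are not checked):

```python
def sdc(S_DS, S_D1, S_1, occ_cat):
    """Seismic Design Category per IBC 1613.5.6 / ASCE/SEI 11.6."""
    essential = occ_cat == "IV"
    if S_1 >= 0.75:                       # SDC E/F assignment
        return "F" if essential else "E"

    def from_table(x, limits):
        # limits: ascending thresholds -> (OC I-III category, OC IV category)
        for lim, (cat, cat4) in limits:
            if x < lim:
                return cat4 if essential else cat
        return "D"

    t1 = from_table(S_DS, [(0.167, ("A", "A")), (0.33, ("B", "C")),
                           (0.50, ("C", "D"))])
    t2 = from_table(S_D1, [(0.067, ("A", "A")), (0.133, ("B", "C")),
                           (0.20, ("C", "D"))])
    return max(t1, t2)                    # the more severe category governs

print(sdc(0.45, 0.15, 0.25, "II"))  # C
```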
The SDC is a trigger mechanism for many seismic requirements, including
permissible seismic force-resisting systems
limitations on building height
consideration of structural irregularities
the need for additional special inspections, structural testing and structural
observation for seismic resistance
Design Requirements for SDC A
Structures assigned to SDC A need only comply with the requirements of 11.7. To
ensure general structural integrity, the lateral force-resisting system must be proportioned
to resist a lateral force at each floor level equal to 1 percent of the total dead load at that
floor level, as depicted in Figure 6.1.
According to 11.7.2, the lateral forces are to be applied independently in each of two
orthogonal directions.
Requirements for load path connections, connection to supports, and anchorage of
concrete or masonry walls are given in 11.7.3, 11.7.4 and 11.7.5, respectively.
From this point on in this chapter, referenced section, table and figure numbers are from ASCE/SEI 7-05
unless noted otherwise.
V = 0.01(w1 + w2 + w3 + wr)
Figure 6.1 Design Seismic Force Distribution for Structures Assigned to SDC A
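The force distribution of Figure 6.1 can be sketched directly (the story weights below are assumed values for illustration only):

```python
# SDC A (11.7): lateral force at each level = 1 percent of the dead load
# at that level, applied independently in two orthogonal directions.
story_weights = [400.0, 400.0, 400.0, 300.0]  # w1, w2, w3, wr in kips (assumed)

forces = [0.01 * w for w in story_weights]
V = sum(forces)   # base shear = 0.01*(w1 + w2 + w3 + wr)
print(forces, V)  # [4.0, 4.0, 4.0, 3.0] 15.0
```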
Basic Requirements
Basic requirements for seismic analysis and design of building structures are contained in
12.1. In general, the structure must have a complete lateral and vertical force-resisting
system that is capable of providing adequate strength, stiffness and energy dissipation
capacity when subjected to the design ground motion. Also, a continuous load path with
adequate strength and stiffness must be provided to transfer all forces from the point of
application to the final point of resistance.
Design seismic forces and their distribution over the height of a structure are to be
established in accordance with one of the procedures in 12.6. A simplified design
procedure may also be used, provided all of the limitations of 12.14 are satisfied. These
methods are covered in subsequent sections of this publication.
General requirements for connections between members and connection to supports are
given in 12.1.3 and 12.1.4, respectively. Basic requirements for foundation design are
contained in 12.1.5.
Seismic Force-Resisting Systems
The basic seismic force-resisting systems are listed in Table 12.2-1. Included in the table
are the response modification coefficients R to be used in determining the base shear V,
the system overstrength factors Ω0 to be used in determining element design forces, and
the deflection amplification factors Cd to be used in determining design story drift.
Table 12.2-1 also contains important information on system limitations with respect to
SDC and height. Some systems, such as special steel moment frames and special
reinforced concrete moment frames, can be utilized in structures assigned to any SDC
with no height limitations. Other less ductile systems can be utilized with no height
limitations in some SDCs while others are permitted in structures up to certain heights.
The least ductile systems are not permitted in the higher SDCs under any circumstances.
Information on combinations of framing systems in different and in the same direction is
given in 12.2.2 and 12.2.3, respectively. Different seismic force-resisting systems may be
used along two orthogonal axes of the structure. The more stringent system limitations of
Table 12.2-1 apply where different seismic force-resisting systems are used in the same
direction. Occupancy Category I or II buildings that are two stories or less in height and
that use light-frame construction or flexible diaphragms are permitted to be designed
using the least value of R for the different structural systems in each independent line of
resistance (12.2.3.2). Also, the value of R used for the design of the diaphragms in such
structures must be less than or equal to the least value for any of the systems utilized in
that same direction.
Specific requirements for dual systems, cantilever column systems, inverted pendulum-type structures, special moment frames and other systems are given in 12.2.5. In shear
wall-frame interactive systems, which are permitted in structures assigned to SDC B
only, the shear walls must be able to resist at least 75 percent of the design story shear at
each story (12.2.5.10).
Diaphragm Flexibility, Configuration Irregularities, and Redundancy
Diaphragm Flexibility
The relative flexibility of diaphragms must be considered in the structural analysis
(12.3.1). Floor and roof systems that can be idealized as flexible or rigid diaphragms are
given in 12.3.1.1 and 12.3.1.2, respectively. Floor/roof systems that do not satisfy the
conditions of these sections are permitted to be idealized as flexible diaphragms where
the computed in-plane deflection of the diaphragm under lateral load is greater than two
times the average story drift of adjoining vertical elements of the seismic force-resisting
system (see 12.3.1.3 and Figure 12.3-1).
Irregular and Regular Classifications
Structures are classified as regular or irregular based on the criteria in 12.3.2. In general,
buildings having irregular configurations in plan and/or elevation suffer greater damage
than those having regular configurations when subjected to design earthquakes.
Table 12.3-1 contains the types and descriptions of horizontal structural irregularities that
must be considered as a function of SDC. Similar information is provided in Table 12.3-2
for vertical structural irregularities. Tables 6.2 and 6.3 summarize and illustrate the
information provided in Tables 12.3-1 and 12.3-2, respectively.
Limitations and additional requirements for systems with particular types of structural
irregularities are given in 12.3.3 and are summarized in Table 6.4.
The equivalent lateral force procedure using elastic analysis is not capable of accurately
predicting the earthquake effects on certain types of irregular buildings. As indicated in
Section 6.3.6 of this publication, more comprehensive structural analyses must be
performed for structures with specific types of horizontal and vertical irregularities.
Table 6.2 Horizontal Structural Irregularities (12.3.2.1)

• Type 1a, Torsional Irregularity: maximum story drift at one end of the structure, computed including accidental torsion, Δ2 > 1.2[(Δ1 + Δ2)/2], where Δ1 and Δ2 are the story drifts at the two ends of the structure. Torsional irregularity requirements apply only to structures in which the diaphragms are rigid or semirigid.
• Type 1b, Extreme Torsional Irregularity: Δ2 > 1.4[(Δ1 + Δ2)/2].
• Type 2, Re-entrant Corner Irregularity: both plan projections of the structure beyond a re-entrant corner exceed 15 percent of the plan dimension in the corresponding direction (projection b > 0.15a and projection d > 0.15c).
• Type 3, Diaphragm Discontinuity Irregularity: diaphragms with abrupt discontinuities or variations in stiffness, including an area of opening > 0.5ab (one-half of the gross enclosed diaphragm area), or changes in effective diaphragm stiffness > 50% from one story to the next.
• Type 4, Out-of-Plane Offsets Irregularity: discontinuities in the lateral-force-resisting path, such as out-of-plane offsets of vertical elements.
• Type 5, Nonparallel Systems Irregularity: vertical lateral force-resisting elements are not parallel to or symmetric about the major orthogonal axes of the seismic force-resisting system.

The SDC application of each irregularity type (ranging from SDC B through F, depending on the irregularity and the referenced requirement, including Table 12.6-1) is given in Table 12.3-1.
Table 6.3 Vertical Structural Irregularities (12.3.2.2)

• Type 1a, Stiffness Irregularity (Soft Story): soft story stiffness < 70% of the story stiffness above, or < 80% of the average stiffness of the three stories above.
• Type 1b, Stiffness Irregularity (Extreme Soft Story): soft story stiffness < 60% of the story stiffness above, or < 70% of the average stiffness of the three stories above.
• Type 2, Weight (Mass) Irregularity: story mass > 150% of an adjacent story mass (a roof that is lighter than the floor below need not be considered).
• Type 3, Vertical Geometric Irregularity: horizontal dimension of the seismic force-resisting system in a story > 130% of that in an adjacent story.
• Type 4, In-Plane Discontinuity Irregularity: in-plane offset of the lateral force-resisting elements > the lengths of those elements, or a reduction in stiffness of the resisting elements in the story below.
• Type 5a, Discontinuity in Lateral Strength (Weak Story): weak story strength < 80% of the story strength above, where story strength is the total strength of the seismic-resisting elements sharing the story shear for the direction under consideration.
• Type 5b, Discontinuity in Lateral Strength (Extreme Weak Story): weak story strength < 65% of the story strength above.

As in Table 6.2, the SDC application of each irregularity type (SDC B through F, depending on the irregularity and the referenced requirement, including Table 12.6-1) is given in Table 12.3-2.
Table 6.4 Limitations and Additional Requirements for Systems with Structural Irregularities (12.3.3)

• SDC D — Vertical irregularity Type 5b: Not permitted.
• SDC E or F — Horizontal irregularity Type 1b; vertical irregularities Type 1b, 5a or 5b: Not permitted.
• SDC B or C — Vertical irregularity Type 5b: Height limited to two stories or 30 ft.*
• SDC B through F — Horizontal irregularity Type 4 or vertical irregularity Type 4: Design supporting members and their connections to resist the load combinations with overstrength factor of 12.4.3.2.
• SDC D through F — Horizontal irregularities Type 1a, 1b, 2, 3 or 4; vertical irregularity Type 4: Design forces determined by 12.8.1 shall be increased by 25 percent for connections of diaphragms to vertical elements and to collectors and for connections of collectors to vertical elements.**

* The limit does not apply where the weak story is capable of resisting a total seismic force equal to Ωo times the design force prescribed in 12.8.
** Collectors and their connections must also be designed for these increased forces unless they are designed for the load combinations with overstrength factor of 12.4.3.2 in accordance with 12.10.2.1.
The redundancy factor ρ is a measure of the redundancy inherent in the seismic force-resisting system. The value of this factor is either 1.0 or 1.3. Conditions where ρ is equal to 1.0 are given in 12.3.4. The redundancy factor is 1.0 for structures assigned to SDC B or C.
In essence, the redundancy factor has the effect of reducing the response modification
coefficient R for less redundant structures, which, in turn, increases the seismic demand
on the system.
Seismic Load Effects and Combinations
The seismic load effect E that is used in the load combinations defined in 2.3 and 2.4 consists of the effects of horizontal (Eh = ρQE) and vertical (Ev = 0.2SDSD) seismic forces, where QE represents the effects (bending moments, shear forces, axial forces) on the structural members obtained from the structural analysis in which the base shear V is distributed over the height of the structure (12.4).
Basic load combinations for strength design and allowable stress design are summarized
in 12.4.2.3 and Table 6.5. Additional information on these load combinations can be
found in Chapter 2 of this publication.
Table 6.5 Seismic Load Combinations (12.4.2.3)
Strength Design*
5. (1.2 + 0.2SDS)D + ρQE + L + 0.2S
7. (0.9 − 0.2SDS)D + ρQE + 1.6H
Allowable Stress Design
5. (1.0 + 0.14SDS)D + H + F + 0.7ρQE
6. (1.0 + 0.105SDS)D + H + F + 0.525ρQE + 0.75L + 0.75(Lr or S or R)
8. (0.6 − 0.14SDS)D + 0.7ρQE + H
* Load factor on L in combination 5 is permitted to equal 0.5 for all occupancies in which Lo in
Table 4-1 is less than or equal to 100 psf, with the exception of garages or areas occupied as places of
public assembly. The load factor on H in combination 7 shall be set equal to zero if the structural action
due to H counteracts that due to E. Where lateral earth pressure provides resistance to structural actions
from other forces, it shall not be included in H but shall be included in the design resistance.
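As an illustration, the two strength-design seismic combinations in Table 6.5 can be evaluated numerically. This is a minimal sketch; the function names are illustrative and not from ASCE/SEI 7-05:

```python
# Sketch of the strength-design seismic load combinations of 12.4.2.3
# (Table 6.5).  D, QE, L, S and H are the load effects; S_DS is the
# design spectral response acceleration at short periods; rho is the
# redundancy factor (1.0 or 1.3).

def strength_combo_5(D, QE, L, S, S_DS, rho):
    """Combination 5: (1.2 + 0.2*S_DS)*D + rho*QE + L + 0.2*S"""
    return (1.2 + 0.2 * S_DS) * D + rho * QE + L + 0.2 * S

def strength_combo_7(D, QE, H, S_DS, rho):
    """Combination 7: (0.9 - 0.2*S_DS)*D + rho*QE + 1.6*H"""
    return (0.9 - 0.2 * S_DS) * D + rho * QE + 1.6 * H
```

For example, with D = 100, QE = 50, L = 20, S = 10, SDS = 1.0 and ρ = 1.3, combination 5 gives (1.4)(100) + (1.3)(50) + 20 + 2 = 227.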
The seismic load effect including overstrength factor Em that is used in the load combinations defined in 12.4.3.2 consists of the horizontal seismic load effect with overstrength factor (Emh = ΩoQE) and the effects of vertical (Ev = 0.2SDSD) seismic forces, where the overstrength factor Ωo is given in Table 12.2-1 as a function of the seismic force-resisting system (12.4.3).
Basic combinations for strength design with overstrength factor and allowable stress
design with overstrength factor are summarized in 12.4.3.2 and Table 6.6. These
combinations pertain only to those structural elements that are listed in 12.3.3.3 (elements
supporting discontinuous walls or frames in structures assigned to SDC B through F) and
12.10.2.1 (collector elements, splices and their connections to resisting elements in
structures assigned to SDC C through F). See Chapter 2 of this publication for additional
information on these load combinations.
Direction of Loading
According to 12.5.1, seismic forces must be applied to the structure in directions that
produce the most critical load effects on the structural members. The requirements are
based on the SDC and are summarized in Table 6.7.
Table 6.6 Seismic Load Combinations with Overstrength Factor (12.4.3.2)
Strength Design*
5. (1.2 + 0.2SDS)D + ΩoQE + L + 0.2S
7. (0.9 − 0.2SDS)D + ΩoQE + 1.6H
Allowable Stress Design
5. (1.0 + 0.14SDS)D + H + F + 0.7ΩoQE
6. (1.0 + 0.105SDS)D + H + F + 0.525ΩoQE + 0.75L + 0.75(Lr or S or R)
8. (0.6 − 0.14SDS)D + 0.7ΩoQE + H
* Load factor on L in combination 5 is permitted to equal 0.5 for all occupancies in which Lo in
Table 4-1 is less than or equal to 100 psf, with the exception of garages or areas occupied as places of
public assembly. The load factor on H in combination 7 shall be set equal to zero if the structural action
due to H counteracts that due to E. Where lateral earth pressure provides resistance to structural actions
from other forces, it shall not be included in H but shall be included in the design resistance.
Table 6.7 Direction of Loading Requirements (12.5)

SDC B:
• Design seismic forces are permitted to be applied independently in each of two orthogonal directions, and orthogonal interaction effects are permitted to be neglected.

SDC C:
• Conform to the requirements for SDC B.
• Structures with horizontal irregularity Type 5:
− Orthogonal combination procedure: Apply 100 percent of the seismic forces for one direction plus 30 percent of the seismic forces for the perpendicular direction on the structure simultaneously, where the forces are computed in accordance with 12.8 (equivalent lateral force analysis procedure), 12.9 (modal response spectrum analysis procedure) or 16.1 (linear response history procedure), or
− Simultaneous application of orthogonal ground motion: Apply orthogonal pairs of ground motion acceleration histories simultaneously to the structure using 16.1 (linear response history procedure) or 16.2 (nonlinear response history procedure).

SDC D, E and F:
• Conform to the requirements for SDC C.
• Any column or wall that forms part of two or more intersecting seismic force-resisting systems and is subjected to axial load due to seismic forces along either principal axis greater than or equal to 20 percent of the axial design strength of the column or wall must be designed for the most critical load effect due to application of seismic forces in any direction.*

* Either of the procedures of 12.5.3a or b is permitted to be used to satisfy this requirement.
Analysis Procedure Selection
Requirements on the type of procedure that can be used to analyze the structure for
seismic loads are given in 12.6 and are summarized in Table 12.6-1. As noted previously,
the permitted analytical procedures depend on the SDC, the occupancy of the structure,
characteristics of the structure (height and period) and the presence of any structural irregularities.
Modeling Criteria
Requirements pertaining to the construction of an adequate model for the purposes of
seismic load analysis are given in 12.7. It is permitted to assume that the base of the
structure is fixed. However, if the flexibility of the foundation is considered, the
requirements of 12.13.3 (foundation load-deformation characteristics) or Chapter 19 (soil
structure interaction for seismic design) must be satisfied.
A three-dimensional analysis is required for structures that have horizontal irregularity
Types 1a, 1b, 4 or 5. In such cases, a minimum of three dynamic degrees of freedom
(translation in two orthogonal directions and torsional rotation about the vertical axis)
must be included at each level of the structure.
Cracked section properties must be used when analyzing concrete and masonry
structures. In steel moment frames, the contribution of panel zone deformations to overall
story drift must be included.
The definition of the effective seismic weight W that is used in determining the base
shear V is given in 12.7.2. In addition to the dead load of the structure, a portion of the
storage live load, the partition load, the weight of permanent operating equipment and a
portion of the snow load, where applicable, must be included in W.
Equivalent Lateral Force Procedure
The provisions of the Equivalent Lateral Force Procedure are contained in 12.8. This
analysis procedure can be used for all structures assigned to SDC B and C as well as
some types of structures assigned to SDC D, E and F (see Table 12.6-1).
Seismic Base Shear, V
In short, the seismic base shear V is determined as a function of the design response accelerations SDS and SD1, the response modification coefficient R, the importance factor I, the fundamental period of the structure T and the effective seismic weight W.
The design spectrum defined by the equations in 12.8 is depicted in Figure 6.2. The long-period transition period TL is given in Figures 22-15 through 22-20.
[Figure 6.2 plots the design base shear V against the period T: V = SDSW/(R/I) on the short-period plateau T ≤ TS, where TS = SD1/SDS; V = SD1W/[T(R/I)] for TS < T ≤ TL; V = SD1TLW/[T²(R/I)] for T > TL; and the lower limit V = 0.044SDSIW.]

Figure 6.2 Design Response Spectrum According to the Equivalent Lateral Force
Procedure (12.8)
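The branches of the design spectrum in 12.8 translate directly into a seismic response coefficient Cs. The following is a minimal sketch; for brevity it omits the additional minimum for sites where S1 ≥ 0.6g (Eq. 12.8-6), and the function names are illustrative:

```python
def seismic_response_coefficient(S_DS, S_D1, T, T_L, R, I):
    """Cs per 12.8.1.1: Eq. 12.8-2 capped by Eqs. 12.8-3/12.8-4 and
    floored at 0.044*S_DS*I, as shown in Figure 6.2."""
    Cs = S_DS / (R / I)                                 # short-period plateau
    if 0.0 < T <= T_L:
        Cs = min(Cs, S_D1 / (T * (R / I)))              # velocity-controlled branch
    elif T > T_L:
        Cs = min(Cs, S_D1 * T_L / (T ** 2 * (R / I)))   # long-period branch
    return max(Cs, 0.044 * S_DS * I)                    # lower limit

def base_shear(S_DS, S_D1, T, T_L, R, I, W):
    # V = Cs * W (Eq. 12.8-1)
    return seismic_response_coefficient(S_DS, S_D1, T, T_L, R, I) * W
```

For SDS = 1.0, SD1 = 0.6, T = 1.0 s, TL = 8 s, R = 8 and I = 1, the velocity-controlled branch governs and Cs = 0.075.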
Vertical Distribution of Seismic Forces
Once the seismic base shear V has been determined, it is distributed over the height of the
building in accordance with 12.8.3. For structures with a fundamental period less than or
equal to 0.5 sec, V is distributed linearly over the height, varying from zero at the base to
a maximum value at the top (see Figure 6.3a). When T is greater than 2.5 sec, a parabolic
distribution is to be used (see Figure 6.3b). For a period between these two values, a
linear interpolation between a linear and parabolic distribution is permitted or a parabolic
distribution may be utilized.
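The vertical distribution described above can be sketched as follows, using the distribution exponent k of 12.8.3 (k = 1 for T ≤ 0.5 s, k = 2 for T ≥ 2.5 s, linear interpolation between); the function names are illustrative:

```python
def distribution_exponent(T):
    # k = 1 gives the linear distribution, k = 2 the parabolic one (12.8.3)
    if T <= 0.5:
        return 1.0
    if T >= 2.5:
        return 2.0
    return 1.0 + (T - 0.5) / 2.0   # linear interpolation between the two

def vertical_distribution(V, weights, heights, T):
    """Lateral force at each level: Fx = Cvx*V with
    Cvx = wx*hx**k / sum(wi*hi**k) (Eqs. 12.8-11 and 12.8-12)."""
    k = distribution_exponent(T)
    terms = [w * h ** k for w, h in zip(weights, heights)]
    total = sum(terms)
    return [V * t / total for t in terms]
```

For a short-period (T = 0.3 s) two-story building with equal story weights and heights of 10 ft and 20 ft, the upper level receives two-thirds of the base shear.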
Horizontal Distribution of Forces
The seismic design story shear Vx in story x is the sum of the lateral forces acting at the
floor or roof level supported by that story and all of the floor levels above, including the
roof. The story shear is distributed to the vertical elements of the seismic force-resisting
system in the story based on the lateral stiffness of the diaphragm.
For flexible diaphragms, Vx is distributed to the vertical elements of the seismic force-resisting system based on the area of the diaphragm tributary to each line of resistance.
a. T ≤ 0.5 sec
b. T ≥ 2.5 sec
Figure 6.3 Vertical Distribution of Seismic Forces (12.8.3)
For diaphragms that are not flexible, Vx is distributed based on the relative stiffness of the
vertical resisting elements and the diaphragm. Inherent and accidental torsion must be
considered in the overall distribution (12.8.4.1 and 12.8.4.2). Where Type 1a or 1b
irregularity is present in structures assigned to SDC C, D, E or F, the accidental torsional
moment is to be amplified in accordance with 12.8.4.3 (see Figure 12.8-1).
The structure must be designed to resist the overturning effects caused by the seismic
forces. In such cases, the critical load combinations are typically those where the effects
from gravity and seismic loads counteract.
Story Drift Determination
Design story drift Δ is determined in accordance with 12.8.6 and is computed as the
difference of the deflections δ x at the center of mass of the diaphragms at the top and
bottom of the story under consideration (see Figure 12.8-2).
The deflections δ x at each floor level are obtained by multiplying the deflections δ xe
(the deflections determined by an elastic analysis using the code-prescribed forces
applied at each floor level) by the deflection amplification factor Cd in Table 12.2-1 and
dividing by the importance factor I (see Eq. 12.8-15). Limits on the design story drifts are
given in 12.12 and are covered in Section 6.3.12 of this publication.
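The amplification step above is a one-line calculation; the following sketch uses illustrative function names:

```python
def amplified_deflection(delta_xe, Cd, I):
    # Eq. 12.8-15: delta_x = Cd * delta_xe / I, where delta_xe is the
    # elastic deflection under the code-prescribed forces
    return Cd * delta_xe / I

def design_story_drift(delta_x_top, delta_x_bottom):
    # Design story drift: difference of the amplified deflections at the
    # top and bottom of the story under consideration (12.8.6)
    return delta_x_top - delta_x_bottom
```

For example, an elastic deflection of 1.2 in. amplified by Cd = 5.5 with I = 1.0 gives δx = 6.6 in.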
P-Delta Effects
Member forces and story drifts induced by P-delta effects must be considered in member
design and in the evaluation of overall stability of a structure where such effects are
significant. Equation 12.8-16 can be used to evaluate the need to consider P-delta effects.
Equation 12.8-17 is used to check whether the structure is potentially unstable; this equation must be satisfied even where computer software is utilized to determine second-order effects.
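The two checks of 12.8.7 can be sketched as follows (illustrative function names; β is the ratio of shear demand to shear capacity for the story, conservatively taken as 1.0):

```python
def stability_coefficient(P_x, drift, V_x, h_sx, Cd):
    # Eq. 12.8-16: theta = Px*Delta / (Vx*hsx*Cd); P-delta effects need
    # not be considered where theta <= 0.10
    return P_x * drift / (V_x * h_sx * Cd)

def max_stability_coefficient(beta, Cd):
    # Eq. 12.8-17: theta_max = 0.5/(beta*Cd), not to exceed 0.25;
    # theta must not exceed theta_max, else the structure is potentially
    # unstable and must be redesigned
    return min(0.5 / (beta * Cd), 0.25)
```

With Px = 1000 kips, Δ = 1.0 in., Vx = 100 kips, hsx = 144 in. and Cd = 5, θ ≈ 0.014, well below both 0.10 and θmax.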
Modal Response Spectrum Analysis
Requirements on how to conduct a modal analysis of a structure are given in 12.9. Such
an analysis is permitted for any structure assigned to SDC B through F and is required for
regular and irregular structures where T ≥ 3.5TS and certain types of irregular structures
assigned to SDC D through F (see Table 12.6-1). The number of modes that need to be
considered, methods on how to combine the results from the different modes and
provisions on scaling the design values of the combined response are contained in 12.9.
6.3.10 Diaphragms, Chords, and Collectors
Diaphragm Design Forces
In structures assigned to SDC B and higher, floor and roof diaphragms must be designed
to resist seismic forces from base shear determined from the structural analysis or the
force Fpx determined by Eq. 12.10-1, whichever is greater. Upper and lower limits of Fpx
are provided in 12.10.1.1.
An example of an offset in the vertical resisting elements of the seismic force-resisting
system is illustrated in Figure 6.4. In such cases, the diaphragm that is required to transfer
design forces from the vertical elements above the diaphragm to the vertical elements
below the diaphragm must be designed for this additional force.
Figure 6.4 Example of Vertical Offsets in the Seismic Force-Resisting System (12.10.1)
The redundancy factor ρ applies to the design of diaphragms in structures assigned to
SDC D, E and F; it is equal to 1.0 for inertial forces calculated by Eq. 12.10-1. Where the
diaphragm transfers design forces from the vertical elements above the diaphragm to the
vertical elements below the diaphragm, the redundancy factor ρ for the structure applies
to these forces. Also, the requirements of 12.3.3.4 must be satisfied for structures having
horizontal or vertical irregularities indicated in that section.
Collector Elements
The provisions of 12.10.2.1 apply to collector elements in structures assigned to SDC C
and higher. Collectors, which are also commonly referred to as drag struts, are elements
in a structure that are used to transfer diaphragm loads from the diaphragm to the
elements of the lateral force-resisting system where the lengths of the vertical elements in
the lateral force-resisting system are less than the length of the diaphragm at that location.
For example, the collector beams in Figure 6.5 collect the force from the diaphragm and
distribute it to the shear wall.
Collector beam
Shear wall
Shear wall (collector
beams not required)
Collector beam
Horizontal seismic force
Figure 6.5 Example of Collector Beams and Shear Walls
In general, collector elements, splices and connections to resisting elements must all be
designed to resist the load combinations with overstrength factor of 12.4.3.2 in addition
to the applicable strength design or allowable stress design load combinations. Structures
or portions of structures braced entirely by light-frame shear walls, collector elements,
splices and connections need only be designed to resist the diaphragm forces prescribed
in 12.10.1.1.
6.3.11 Structural Walls and Their Anchorage
Out-of-Plane Forces
In addition to forces in the plane of the wall, structural walls and their anchorage in
structures assigned to SDC B and higher must be designed for an out-of-plane force equal
to 0.4SDSI times the weight of the structural wall or 0.10 times the weight of the structural
wall, whichever is greater (12.11.1).
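The out-of-plane force of 12.11.1 is simply the larger of the two coefficients applied to the wall weight; a minimal sketch with an illustrative function name:

```python
def wall_out_of_plane_force(S_DS, I, W_wall):
    # 12.11.1: design force = greater of 0.4*S_DS*I and 0.10, times the
    # weight of the structural wall
    return max(0.4 * S_DS * I, 0.10) * W_wall
```

For SDS = 1.0 and I = 1.0, the coefficient is 0.4; for a low-seismicity site with SDS = 0.2, the 0.10 minimum governs.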
Anchorage of Concrete or Masonry Structural Walls
Concrete or masonry structural walls in structures assigned to SDC B and higher must be
anchored to the roof and floor members that provide lateral support for the walls. The
anchorage must be designed to resist the greater of three forces specified in 12.11.2.
Where the anchor spacing is greater than 4 ft, the walls must be designed to resist
bending between the anchors.
The requirements of 12.11.2.1 must be satisfied for concrete or masonry walls anchored
to flexible diaphragms. Additional requirements for diaphragms in structures assigned to
SDC C through F are given in 12.11.2.2.
6.3.12 Drift and Deformation
Once the design drifts have been determined in accordance with 12.8.6, they are
compared to the allowable story drift Δ a in Table 12.12-1. The drift limits depend on the
occupancy category and are generally more restrictive for categories III and IV. These
lower limits on drift are meant to provide a higher level of performance for more
important occupancies. Drift limits also depend on the type of structure.
Provisions for moment frames in structures assigned to SDC D through F, diaphragm
deflection, and building separation are contained in 12.12.1.1, 12.12.2 and 12.12.3, respectively.
For structures assigned to SDC D through F, structural members that are not part of the
seismic force-resisting system must be designed for deformation compatibility. These
members must be designed to adequately resist applicable gravity load effects when
subjected to the design story drift Δ determined in accordance with 12.8.6. The
deformation compatibility requirements are given in 12.12.4.
6.3.13 Foundation Design
General requirements for the design of various types of foundation systems are given in
12.13. The special seismic requirements are driven by SDC.
The requirements in ASCE/SEI 7 apply to structural separations. IBC 1613.6.7 provides requirements for
separation between adjacent buildings on the same property and buildings adjacent to a property line not
on a public way in SDC D, E or F. The minimum separation provisions were in the 1997 UBC and the
2000 and the 2003 editions of the IBC but not the 2006 IBC because it references ASCE/SEI 7 for seismic
design requirements.
For other than cantilever column and inverted pendulum systems, overturning effects at
the soil-foundation interface are permitted to be reduced by 25 percent where the
equivalent lateral force procedure is used. A 10-percent reduction is permitted for foundations of structures designed in accordance with the modal analysis requirements of 12.9.
6.3.14 Simplified Alternative Structural Design Criteria for Simple Bearing Wall or
Building Frame Systems
The simplified design procedure is entirely self-contained in 12.14. The simplified
method is permitted to be used in lieu of other analytical procedures in Chapter 12 for the
analysis and design of simple bearing wall or building frame systems provided the 12
limitations of 12.14.1.1 are satisfied.
The provisions of 12.14 are intended to apply to a defined set of essentially regularly-configured structures where a reduction in requirements is deemed to be warranted. Only
those systems specifically listed in Table 12.14-1 are permitted to be used. Note that drift
controlled structural systems such as moment resisting frames are not permitted. Some of
the more noteworthy characteristics of this simplified method are as follows:
1. The simplified procedure applies to Occupancy Category I or II structures up to
three stories in height, which are founded on Site Class A, B, C or D soils and
which are assigned to SDC B, C, D or E.
2. The procedure is limited to bearing wall or building frame systems. Design
coefficients for these structural systems are given in Table 12.14-1, which
contains essentially the same information as Table 12.2-1, except that the system
overstrength factor Ωo, the deflection amplification factor Cd, and system height
limitations have been omitted in Table 12.14-1, based on the limitations and
requirements of the simplified method. For seismic load combinations involving
overstrength, the system overstrength factor is 2.5 (12.14.3.2). Since the
permissible types of lateral force-resisting systems are relatively stiff when
compared to moment-resisting frames in structures within the prescribed
limitations, design drift need not be calculated (12.14.8.5).
3. Given the prescriptive requirements for system configuration, definitions of and
design provisions for system irregularities are not needed.
4. Design and detailing requirements are independent of the SDC.
5. The redundancy coefficient ρ does not apply.
6. The seismic base shear V is determined by Eq. 12.14-11 and is a function of the
design spectral response acceleration at short periods SDS (short-period plateau),
the number of stories in the structure and the response modification coefficient R.
Determination of the period of the structure T is not needed.
7. Vertical distribution of the seismic base shear V is based on tributary weight
(Eq. 12.14-12). Diaphragms must be designed to resist the design seismic forces
calculated by the provisions of 12.14.8.2 (12.14.7.4).
8. For flexible diaphragms, horizontal distribution of seismic forces to the vertical
elements of the seismic force-resisting system is based on tributary area. For
diaphragms that are not flexible, distribution is based on the relative stiffness of
the seismic force-resisting elements considering any torsional moment resulting
from eccentricity between the center of mass and the center of rigidity (12.14.8.3).
Accidental torsion and dynamic amplification of torsion need not be considered.
9. Calculations for P-delta effects need not be considered.
10. Load combinations are prescribed in 12.14.3 for strength design, allowable stress
design and seismic load effect including a 2.5 overstrength factor.
11. Design seismic forces are permitted to be applied separately in each orthogonal
direction and the combination of effects from the two directions need not be
considered (12.14.6).
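Item 6 above can be sketched numerically. This is a hedged illustration of Eq. 12.14-11, assuming the story-count factor F takes the values 1.0, 1.1 and 1.2 for one-, two- and three-story buildings; the function name is illustrative:

```python
def simplified_base_shear(S_DS, W, R, n_stories):
    # Sketch of Eq. 12.14-11: V = F * S_DS * W / R, where F depends only
    # on the number of stories (assumed values below), so the period T
    # never needs to be computed
    F = {1: 1.0, 2: 1.1, 3: 1.2}[n_stories]
    return F * S_DS * W / R
```

For a three-story building frame system with SDS = 1.0, W = 1000 kips and R = 6.5, V = 1.2(1000)/6.5 ≈ 185 kips.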
Additional requirements of the simplified method can be found in 12.14.
Chapter 13 establishes minimum design criteria for nonstructural components that are permanently attached to structures and for their supports and attachments. Included are provisions for architectural components and for mechanical and electrical components.
Nonstructural components that weigh greater than or equal to 25 percent of the effective
seismic weight W of the structure must be designed as nonbuilding structures in
accordance with 15.3.2 (13.1.1). Nonstructural components are assigned to the same SDC
as the structure that they occupy or to which they are attached (13.1.2).
The component importance factor Ip is equal to 1.0, except in the following cases where it is equal to 1.5 (13.1.3):
1. The component is required to function for life-safety purposes after an earthquake
(including fire protection sprinkler systems).
2. The component contains hazardous materials.
3. The component is in or attached to an Occupancy Category IV structure and is
needed for continued operation of the facility or its failure could impair the
continued operation of the facility.
A list of nonstructural components that are exempt from the requirements of this section
is given in 13.1.4. Also see Table 13.2-1 for a summary of applicable requirements.
Seismic Demands on Nonstructural Components
The horizontal seismic design force that is to be applied to the center of gravity of the component Fp is determined by Eq. 13.3-1. This force is a function of the following:
ap = component amplification factor given in Table 13.5-1 for architectural components and Table 13.6-1 for mechanical and electrical components.
SDS = design spectral response acceleration at short periods determined in accordance with 11.4.4.
Wp = component operating weight.
Rp = component response modification factor given in Table 13.5-1 for architectural components and Table 13.6-1 for mechanical and electrical components.
Ip = component importance factor (1.0 or 1.5).
z = height in structure of point of attachment of component with respect to the base. For items at or below the base, z = 0. The value of z/h need not exceed 1.0.
h = average roof height of structure with respect to the base.
Component seismic forces Fp are to be applied independently in at least two orthogonal horizontal directions in combination with the service loads associated with the component. Requirements are also given for vertically cantilevered systems. Minimum and maximum values of Fp are determined by Eqs. 13.3-2 and 13.3-3, respectively. Equation 13.3-4 can be used to calculate Fp based on accelerations determined by a modal analysis. Requirements for seismic relative displacements are given in 13.3.2.
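Putting Eq. 13.3-1 together with its floor (Eq. 13.3-2) and cap (Eq. 13.3-3) gives a short calculation; a minimal sketch with an illustrative function name:

```python
def component_force(a_p, S_DS, W_p, R_p, I_p, z, h):
    """Fp per Eq. 13.3-1: 0.4*ap*S_DS*Wp*(1 + 2z/h)/(Rp/Ip),
    clamped between 0.3*S_DS*Ip*Wp and 1.6*S_DS*Ip*Wp."""
    z_over_h = min(max(z, 0.0) / h, 1.0)   # z = 0 at or below base; z/h <= 1.0
    Fp = 0.4 * a_p * S_DS * W_p * (1.0 + 2.0 * z_over_h) / (R_p / I_p)
    Fp = max(Fp, 0.3 * S_DS * I_p * W_p)   # minimum, Eq. 13.3-2
    Fp = min(Fp, 1.6 * S_DS * I_p * W_p)   # maximum, Eq. 13.3-3
    return Fp
```

For a roof-mounted component (z = h) with ap = 1.0, Rp = 2.5, Ip = 1.0, SDS = 1.0 and Wp = 10, Eq. 13.3-1 gives Fp = 4.8, within the prescribed limits.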
Nonstructural Component Anchorage
Components and their supports are to be attached or anchored to the structure in
accordance with the requirements of 13.4. Forces in the attachments are to be determined
using the prescribed forces and displacements specified in 13.3.1 and 13.3.2. Anchors in
concrete or masonry elements must satisfy the provisions of 13.4.2.
Architectural Components
General design and detailing requirements for architectural components are contained in
13.5. All architectural components and attachments must be designed for the seismic
forces defined in 13.3.1. Specific requirements are stipulated for:
• Exterior nonstructural wall panels
• Glass
• Suspended ceilings
• Access floors
• Partitions
• Glass in glazed curtain walls, glazed storefronts and glazed partitions
Mechanical and Electrical Components
The requirements of 13.6 are to be satisfied for mechanical and electrical components and
their supports.
Equation 13.6-1 can be used to determine the fundamental period Tp of the mechanical or
electrical component.
Requirements are provided for the following systems:
• Utility and service lines
• HVAC ductwork
• Piping systems
• Boilers and pressure vessels
• Elevators and escalators
Nonbuilding structures supported by the ground or by other structures must be designed
and detailed to resist the minimum seismic forces set forth in Chapter 15.
The selection of a structural analysis procedure for a nonbuilding structure is based on its
similarity to buildings. Nonbuilding structures that are similar to buildings exhibit
behavior similar to that of building structures; however, their function and performance
are different. According to 15.1.3, structural analysis procedures for such structures are to
be selected in accordance with 12.6 and Table 12.6-1, which are applicable to building
structures. Guidelines and recommendations on the use of these methods are given in
C15.1.3. In short, the provisions for building structures need to be carefully examined
before they are applied to nonbuilding structures.
Nonbuilding structures that are not similar to buildings exhibit behavior that is markedly
different than that of building structures. Most of these types of structures have reference
documents that address their unique structural performance and behavior. Such reference
documents are permitted to be used to analyze the structure (15.1.3). In addition, the
following procedures may be used: equivalent lateral force procedure (12.8), modal
analysis procedure (12.9), linear response history analysis procedure (16.1) and nonlinear
response history analysis procedure (16.2). In the case of nonbuilding structures similar
to buildings, guidelines and recommendations on the proper analysis method to utilize for
nonbuilding structures that are not similar to buildings are given in C15.1.3.
Reference Documents
As noted above, reference documents may be used to design nonbuilding structures for
earthquake load effects. References that have seismic requirements based on the same
force and displacement levels used in ASCE/SEI 7-05 are listed in Chapter 23 (15.2). The
provisions in the reference documents are subject to the amendments given in 15.4.1. See
C15.2 for additional references that cannot be referenced directly by ASCE/SEI 7-05.
It is important to note that the provisions of an industry standard or document must not be
used unless the seismic ground accelerations and seismic coefficients are in conformance
with the requirements of 11.4. The values for total lateral force and total base overturning
moment from the reference documents must be taken greater than or equal to 80 percent
of the corresponding values determined by the seismic provisions of ASCE/SEI 7-05.
Nonbuilding Structures Supported by Other Structures
Provisions are given in 15.3 for the nonbuilding structures in Table 15.4-2 (i.e.,
nonbuilding structures that are not similar to buildings) that are supported by other
structures and that are not part of the primary seismic force-resisting system. The design
method depends on the weight of the nonbuilding structure relative to the weight of the
combined nonbuilding and supporting structure (see 15.3.1 and 15.3.2).
Structural Design Requirements
Specific design requirements for nonbuilding structures are given in 15.4. As noted
previously, provisions in referenced documents are amended by the requirements of this section.
For nonbuilding structures that are similar to buildings, the permitted structural systems,
design values and limitations are given in Table 15.4-1. Similar information is provided
in Table 15.4-2 for nonbuilding structures that are not similar to buildings. Requirements
on the determination of the base shear, vertical distribution of seismic forces, importance
factor and load combinations are contained in 15.4.1.
Additional provisions for rigid buildings, loads, fundamental period, drift limitations,
deflection limits and other requirements can be found in 15.4.2 through 15.4.8.
Nonbuilding Structures Similar to Buildings
Additional requirements are given in 15.5 for pipe racks, steel storage racks, electrical
power generating facilities, structural towers for tanks and vessels, and piers and wharves.
Nonbuilding Structures Not Similar to Buildings
Additional requirements are given in 15.6 for earth-retaining structures, stacks and
chimneys, amusement structures, special hydraulic structures, secondary containment
systems and telecommunication towers.
Tanks and Vessels
Comprehensive seismic design requirements are given in 15.7 for tanks and vessels. As
noted in C15.7, most, if not all, industry standards that contain seismic design
requirements are based on earlier seismic codes. Many of the provisions of 15.7 show
how to modify existing standards to get to the same force levels as ASCE/SEI 7-05.
A summary of the flowcharts provided in this chapter is given in Table 6.7. Included is a
description of the content of each flowchart.
All referenced section numbers and equations in the flowcharts are from ASCE/SEI 7-05
unless noted otherwise.
Table 6.7 Summary of Flowcharts Provided in Chapter 6
Section 6.6.1 Seismic Design Criteria
Flowchart 6.1
Consideration of Seismic Design
Summarizes conditions where the seismic
requirements of ASCE/SEI 7-05 must be considered
and need not be considered.
Flowchart 6.2
Seismic Ground Motion Values
Provides step-by-step procedure on how to
determine the design spectral accelerations for a site
in accordance with 11.4.
Flowchart 6.3
Site Classification Procedure for
Seismic Design
Provides step-by-step procedure on how to
determine the Site Class of a site in accordance with
Chapter 20.
Flowchart 6.4
Seismic Design Category
Provides step-by-step procedure on how to
determine the seismic design category of a structure.
Flowchart 6.5
Design Requirements for SDC A
Summarizes the seismic design requirements for
structures assigned to SDC A.
Section 6.6.2 Seismic Design Requirements for Building Structures
Flowchart 6.6
Diaphragm Flexibility
Provides methods on how to determine whether a
diaphragm is flexible, rigid or semi-rigid.
Flowchart 6.7
Permitted Analytical Procedures
Summarizes analytical procedures that are permitted
in determining design seismic forces for building structures.
Flowchart 6.8
Equivalent Lateral Force Procedure
Provides step-by-step procedure on how to
determine the design seismic forces and their
distribution based on the requirements of this procedure.
Flowchart 6.9
Alternate Simplified Design
Provides step-by-step procedure on how to
determine the design seismic forces and their
distribution based on the requirements of this procedure.
Section 6.6.3 Seismic Design Requirements for Nonstructural Components
Flowchart 6.10
Seismic Demands on Nonstructural Components
Provides step-by-step procedure on how to
determine design seismic forces on nonstructural components.
Section 6.6.4 Seismic Design Requirements for Nonbuilding Structures
Flowchart 6.11
Seismic Design Requirements for
Nonbuilding Structures
Provides step-by-step procedure on how to
determine design seismic forces on nonbuilding
structures that are similar to buildings and that are
not similar to buildings.
Seismic Design Criteria
FLOWCHART 6.1
Consideration of Seismic Design Requirements
Is the structure a detached
one- or two-family dwelling?
Determine S S in accordance
with 11.4.1*
Is S S < 0.4 ?
Determine the SDC in accordance
with 11.6 (see Flowchart 6.4)
Is the structure
assigned to SDC
A, B or C?
Seismic requirements of
ASCE/SEI 7-05 must be considered.
Structure is exempt from the
seismic requirements of
ASCE/SEI 7-05 (11.1.2).
Is the number of stories less
than or equal to two and will
the structure be designed and
constructed per the IRC?
* Values of SS may be obtained from the USGS website
for a particular site.
FLOWCHART 6.1
Consideration of Seismic Design Requirements (11.1.2)
Is the structure an agricultural
storage structure intended only
for incidental human occupancy?
Structure is exempt from the
seismic requirements of
ASCE/SEI 7-05 (11.1.2).
Does the structure require special
consideration with respect to response
characteristics and environment that are
not addressed in Chapter 15 and for which
other regulations provide seismic criteria?†
Seismic requirements of
ASCE/SEI 7-05 must be considered.
Examples of such structures are vehicular bridges,
electrical transmission towers, hydraulic structures,
buried utility lines and nuclear reactors.
FLOWCHART 6.2
Seismic Ground Motion Values
Determine S S and S1 from Figs. 22-1
through 22-14 (11.4.1)*
Determine ground motion values using
site-specific ground motion procedures
in accordance with Chapter 21 (11.4.7).
Is S S ≤ 0.15 and S1 ≤ 0.04 ?
Structure is permitted to be assigned
to SDC A and must only comply with
11.7 (11.4.1; see Flowchart 6.5).
Is the structure seismically
isolated or does it have damping
systems on sites with S1 ≥ 0.6 ?
* Values of SS and S1 may be obtained from the USGS website
(http://earthquake.usgs.gov/research/hazmaps/design/) for a
particular site.
Determine ground motion from a
ground motion hazard analysis in
accordance with 21.2 (11.4.7).
FLOWCHART 6.2
Seismic Ground Motion Values (11.4)
Determine the site class of the soil in
accordance with 11.4.2 and Chapter 20
(11.4.2; see Flowchart 6.3)
Is the site classified as Site Class F?
Determine ground motion values
using a site response analysis in
accordance with 21.1 (11.4.7).**
Determine S MS and S M 1 by Eqs. 11.4-1 and 11.4-2, respectively:
S MS = Fa S S
S M 1 = Fv S1
where site coefficients Fa and Fv are given in Tables 11.4-1 and 11.4-2,
respectively (11.4.3).†
Determine S DS and S D1 by Eqs. 11.4-3 and 11.4-4, respectively:†
S DS = 2 S MS / 3
S D1 = 2S M 1 / 3
** A site response analysis in accordance with 21.1 is required for structures on Site Class F
sites unless the exception in 20.3.1(1) is satisfied for structures with periods T ≤ 0.5 sec.
Where the simplified design procedure of 12.14 is used, only the values of Fa and SDS
must be determined in accordance with 12.14.8.1 (11.4.3, 11.4.4).
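The arithmetic of Eqs. 11.4-1 through 11.4-4 can be sketched as follows (a minimal helper, not part of the standard; the site coefficients Fa and Fv are inputs that must still be read from Tables 11.4-1 and 11.4-2):

```python
# Sketch of the ground-motion calculation in Flowchart 6.2.

def design_accelerations(ss, s1, fa, fv):
    """Return (S_MS, S_M1, S_DS, S_D1) per 11.4.3 and 11.4.4."""
    s_ms = fa * ss            # Eq. 11.4-1
    s_m1 = fv * s1            # Eq. 11.4-2
    s_ds = 2.0 * s_ms / 3.0   # Eq. 11.4-3
    s_d1 = 2.0 * s_m1 / 3.0   # Eq. 11.4-4
    return s_ms, s_m1, s_ds, s_d1

# Values from Example 6.1 (Charleston, SC, Site Class D):
s_ms, s_m1, s_ds, s_d1 = design_accelerations(1.37, 0.34, 1.0, 1.72)
print(round(s_ds, 2), round(s_d1, 2))  # → 0.91 0.39
```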
FLOWCHART 6.3
Site Classification Procedure for Seismic Design
(Chapter 20)
Is site-specific geotechnical
data available to 100 feet
below the surface of the site?
Soil properties are permitted to be estimated
by the registered design professional
preparing the soil investigation report based
on known geological conditions (20.1)
Site Class D may be used unless the
authority having jurisdiction determines
Site Class E or F is present (20.1).
Are any of the four
conditions listed under
20.3.1 satisfied?
Use Site Class F and perform a site-response analysis in accordance
with 21.1 (20.3.1, 20.2).
FLOWCHART 6.3
Site Classification Procedure for Seismic Design (Chapter 20)
Does the soil profile have a total
thickness of soft clay > 10 ft?*
Use Site Class E (20.3.2).
Is there rock present within
the soil profile at the site?
Is there less than or equal to 10
feet of soil between the rock
surface and the bottom of spread
footings or a mat foundation?
Determine the shear wave velocity of the soil
vs in accordance with 20.3.4 and 20.3.5 and
classify the soil as either Site Class A or B
based on the criteria in Table 20.3-1 for v s .
* A soft clay layer is defined by su < 500 psf, w > 40 percent, and PI > 20 (20.3.2).
FLOWCHART 6.3
Site Classification Procedure for Seismic Design (Chapter 20)
Determine average shear wave
velocity vs for the top 100 feet
Determine average field
penetration resistance N for
the top 100 feet (20.3.3)**
Determine average standard penetration resistance N ch
for cohesionless soil layers (PI < 20) in the top 100 ft
and average undrained shear strength su for cohesive
soil layers (PI > 20) in the top 100 feet (20.3.3)**, †
Classify the soil as either Site Class C, D or E
based on the criteria in Table 20.3-1 for v s ,
N , N ch or su .
** Values of v s , N and su are computed in accordance with 20.4 where soil profiles
contain distinct soil and rock layers (20.3.3).
Where the N ch and su criteria differ, the site shall be assigned to the category with the
softer soil [20.3.3(3)].
FLOWCHART 6.4
Seismic Design Category
Determine S S and S1 from Figs. 22-1
through 22-14 (11.4.1)*
Is S S ≤ 0.15 and S1 ≤ 0.04 ?
Determine the Occupancy
Category from IBC Table 1604.5
Is S1 ≥ 0.75 ?
Structure is permitted to be assigned
to SDC A and must only comply with
11.7 (11.4.1; see Flowchart 6.5).
Is the Occupancy Category I, II
or III?
Is S1 ≥ 0.75 ?
Structure is assigned to SDC F.**
Structure is assigned to SDC E.**
* Values of SS and S1 may be obtained from the USGS website (http://earthquake.usgs.gov/research/hazmaps/design/)
for a particular site.
** A structure assigned to SDC E or F shall not be located where there is a known potential for an active fault to cause
rupture of the ground surface at the structure (11.8).
FLOWCHART 6.4
Seismic Design Category (11.6)
Is the simplified design procedure
of 12.14 permitted to be used?
Are all 4 conditions
of 11.6 satisfied?
Determine S DS = 2 S MS / 3 by
Eq. 11.4-3 (see Flowchart 6.2)
Determine S DS = 2 Fa S S / 3
(12.14.8.1) †
Determine the SDC from Table 11.6-1
based on S DS and the Occupancy Category.
Determine S DS and S D1 from Flowchart 6.2
Determine the SDC as the more severe of the two
SDCs from Tables 11.6-1 and 11.6-2 based on the
Occupancy Category and S DS and S D1 , respectively.
Short-period site coefficient Fa is permitted to be taken as 1.0 for rock sites, 1.4 for soil sites or may be
determined in accordance with 11.4.3. Rock sites have no more than 10 feet of soil between the rock
surface and the bottom of spread footing or mat foundation. Mapped spectral response acceleration SS is
determined in accordance with 11.4.1 and need not be taken larger than 1.5 (12.14.8.1).
FLOWCHART 6.5
Design Requirements for SDC A
Determine the seismic force Fx applied at each floor
level by Eq. 11.7-1: Fx = 0.01wx where wx = portion
of the total dead load of the structure located or assigned
to level x (11.7.2)*
Provide load path connections and connection to
supports in accordance with 11.7.3 and 11.7.4, respectively.
Anchor any concrete or masonry walls to roof and
floors in accordance with 11.7.5.
* These forces are applied simultaneously at all levels in one direction. The structure is analyzed for
the effects of these forces applied independently in each of two orthogonal directions (11.7.2). The
effects from these forces on the structure and its components shall be taken as E and combined
with the effects of other loads in accordance with 2.3 or 2.4 (11.7.1).
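The SDC A force of Eq. 11.7-1 is simple enough to state directly (the level weights below are hypothetical):

```python
# Eq. 11.7-1 for SDC A: F_x = 0.01 w_x at each level, where w_x is the
# portion of the total dead load located or assigned to level x.

def sdc_a_forces(dead_loads):
    """dead_loads: list of w_x (kips) per level; returns F_x per level."""
    return [0.01 * wx for wx in dead_loads]

print(sdc_a_forces([1500.0, 1500.0, 1200.0]))
```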
Seismic Design Requirements for Building Structures
FLOWCHART 6.6
Diaphragm Flexibility
Are the vertical elements in the structure
steel or composite steel and concrete
braced frames or concrete, masonry,
steel or composite shear walls?
Are the diaphragms constructed
of untopped steel decking or
wood structural panels?
Diaphragms are permitted to
be idealized as flexible
Is the building a one- or two-family
residential building of light-frame
construction with diaphragms of
wood structural panels or untopped
steel decks?
FLOWCHART 6.6
Diaphragm Flexibility (12.3)
Are the diaphragms concrete slabs
or concrete filled metal deck with
span-to-depth ratios of 3 or less in
structures that have no horizontal irregularities?
Diaphragms are permitted to
be idealized as rigid
Is the computed maximum in-plane
deflection of the diaphragm under
lateral load more than 2 times the
average story drift of adjoining vertical
elements of the seismic force-resisting
system as shown in Figure 12.3-1?*
The structural analysis must
include consideration of the
stiffness of the diaphragm (semi-rigid modeling assumption).**
Diaphragms are permitted to
be idealized as flexible
* The loading used for this calculation shall be that prescribed in 12.8.
** IBC 1602 provides definitions for rigid and flexible diaphragms only; semi-rigid diaphragms are
not defined in the IBC. See IBC 1613.6.1 for modifications made to 12.3.1.
FLOWCHART 6.7
Permitted Analytical Procedures*
Determine the SDC of the structure from Flowchart 6.4**
Is the structure assigned to
SDC B or C?
Determine the Occupancy Category from IBC Table 1604.5
Is the structure an Occupancy
Category I or II building of light-frame construction that is less
than or equal to three stories?
Is the structure an Occupancy
Category I or II building that is
less than or equal to two stories?
* The simplified alternative structural design method of 12.14 may be used for simple bearing wall
or building frame systems that satisfy the 12 limitations in 12.14.1.1.
** This flowchart is applicable to buildings assigned to SDC B and higher. See Flowchart 6.5 for
design requirements for SDC A.
FLOWCHART 6.7
Permitted Analytical Procedures (12.6)
The following analysis procedures can be used:
Equivalent lateral force procedure (12.8)
Modal response spectrum analysis (12.9)
Seismic response history procedures (Chapter 16)
Determine S DS and S D1 from Flowchart 6.2
Determine TS = S D1 / S DS
Determine structure fundamental period T
Is T < 3.5TS ?
FLOWCHART 6.7
Permitted Analytical Procedures (12.6)
The following analysis procedures can be used:
Modal response spectrum analysis (12.9)
Seismic response history procedures (Chapter 16)
Is the structure regular?
Does the structure possess only Type 2, 3, 4 or
5 horizontal irregularities of Table 12.3-1 or
only Type 4, 5a or 5b vertical irregularities of
Table 12.3-2?
FLOWCHART 6.8
Equivalent Lateral Force Procedure
Determine S S , S1 , S DS , S D1 and the SDC from
Flowchart 6.4
Determine the response modification coefficient R from
Table 12.2-1 for the appropriate structural system based
on SDC
Determine the importance factor I from Table 11.5-1
based on the Occupancy Category
Determine the fundamental period of the
structure T by a substantiated analysis that
considers the structural properties and
deformational characteristics of the resisting elements (12.8.2)
Determine the approximate fundamental
period of the structure Ta by Eq. 12.8-7:
Ta = Ct hn^x where values of approximate
period parameters Ct and x are given in
Table 12.8-2*,**
* hn = height in feet above the base to the highest level of the structure.
** Alternate equations for Ta are given in 12.8.2.1 for concrete or steel moment resisting frames and
masonry or concrete shear wall structures.
FLOWCHART 6.8
Equivalent Lateral Force Procedure (12.6)
Determine upper limit on calculated
period = CuTa †
Is T > CuTa ?
T from substantiated
analysis may be used
Use T = CuTa
Determine TL from Figures 22-15
through 22-20
Is T > TL ?
† Cu = coefficient for upper limit on calculated period given in Table 12.8-1.
FLOWCHART 6.8
Equivalent Lateral Force Procedure (12.6)
Determine C s by Eqs. 12.8-3 and 12.8-2:‡
C s = S D1 / [T ( R / I )] ≤ S DS / ( R / I )
Is C s < larger of 0.044 S DS I and 0.01?
C s = S D1 / [T ( R / I )] ≤ S DS / ( R / I )
C s = larger of 0.044 S DS I and 0.01
Is S1 ≥ 0.6 ?
C s = S D1 / [T ( R / I )] ≤ S DS / ( R / I )
Is C s < larger of 0.044 S DS I , 0.01 and 0.5S1 /( R / I ) ?
C s = larger of 0.044 S DS I , 0.01 and 0.5S1 /( R / I )
‡ For regular structures five stories or less in height and having a period T less than or equal to 0.5 sec,
Cs is permitted to be calculated using a value of 1.5 for S S (12.8.1.3).
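For illustration only, the C s logic above (Eqs. 12.8-2 through 12.8-6, for the case T ≤ TL) can be collected into a short function; R and I come from Tables 12.2-1 and 11.5-1:

```python
# Hedged sketch of the seismic response coefficient logic in Flowchart 6.8.

def seismic_response_coefficient(s_ds, s_d1, s1, t, r, i):
    cs = s_d1 / (t * (r / i))             # Eq. 12.8-3
    cs = min(cs, s_ds / (r / i))          # upper limit, Eq. 12.8-2
    cs_min = max(0.044 * s_ds * i, 0.01)  # minimum, Eq. 12.8-5 (as amended)
    if s1 >= 0.6:
        cs_min = max(cs_min, 0.5 * s1 / (r / i))  # Eq. 12.8-6
    return max(cs, cs_min)

# Example 6.2 values: S_DS = 0.91, S_D1 = 0.39, S_1 = 0.34, T = 0.73 s, R = 6, I = 1.0
print(round(seismic_response_coefficient(0.91, 0.39, 0.34, 0.73, 6.0, 1.0), 2))  # → 0.09
```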
FLOWCHART 6.8
Equivalent Lateral Force Procedure (12.6)
Determine C s by Eq. 12.8-4:
C s = S D1TL / [T² ( R / I )]
Is C s < larger of 0.044 S DS I and 0.01?
C s = S D1TL / [T² ( R / I )]
C s = larger of 0.044 S DS I and 0.01
Is S1 ≥ 0.6 ?
C s = S D1TL / [T² ( R / I )]
Is C s < larger of 0.044 S DS I , 0.01 and 0.5S1 /( R / I ) ?
C s = larger of 0.044 S DS I , 0.01 and 0.5S1 /( R / I )
FLOWCHART 6.8
Equivalent Lateral Force Procedure (12.6)
Determine effective seismic
weight W in accordance with 12.7.2
Determine base shear V by
Eq. 12.8-1: V = C sW
Is T ≥ 2.5 sec?
Is T ≤ 0.5 sec?
Exponent related to structure
period k = 2 (12.8.3)
Exponent related to structure
period k = 1 (12.8.3)
Exponent related to structure
period k = 0.75 + 0.5T (12.8.3)
Determine lateral seismic force Fx at level x by
Eqs. 12.8-11 and 12.8-12: Fx = Cvx V where
Cvx = wx hx^k / Σ wi hi^k (sum over i = 1 to n)
and wx = portion of W located at level x
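The vertical distribution of Eqs. 12.8-11 and 12.8-12 can be sketched as follows (the story weights and heights below are hypothetical; k is determined in 12.8.3):

```python
# Sketch of the vertical force distribution: F_x = C_vx * V with
# C_vx = w_x * h_x^k / sum(w_i * h_i^k).

def distribute_base_shear(v, weights, heights, k):
    terms = [w * h ** k for w, h in zip(weights, heights)]
    total = sum(terms)
    return [v * t / total for t in terms]

# Hypothetical 3-story example: equal story weights, k = 1 (T <= 0.5 s)
forces = distribute_base_shear(120.0, [100.0, 100.0, 100.0], [10.0, 20.0, 30.0], 1.0)
print([round(f, 1) for f in forces])  # → [20.0, 40.0, 60.0]
```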
FLOWCHART 6.8
Equivalent Lateral Force Procedure (12.6)
Determine seismic design story
shear Vx by Eq. 12.8-13:
Vx = Σ Fi (sum over i = x to n)
Is the diaphragm flexible
in accordance with 12.3?
Determine inherent torsional moment
M t resulting from eccentricity between
the locations of the center of mass and
the center of rigidity
Distribute Vx to the vertical
elements of the seismic force-resisting system in accordance
with 12.8.4.1
Determine accidental torsional moment
M ta in accordance with 12.8.4.2
Is the structure assigned to SDC C, D, E,
or F and does it possess Type 1a or 1b
torsional irregularity per Table 12.3-1?
FLOWCHART 6.8
Equivalent Lateral Force Procedure (12.6)
Distribute Vx to the vertical elements of
the seismic force-resisting system
considering the relative lateral stiffness of
the vertical resisting elements and the
diaphragm, including M t + M ta
Determine the torsional amplification factor
Ax by Eq. 12.8-14: Ax = [δ max / (1.2δ avg )]² ≤ 3
where δ max and δ avg are defined in 12.8.4.3
Determine M′ta = Ax M ta
Distribute Vx to the vertical elements of the
seismic force-resisting system considering the
relative lateral stiffness of the vertical
resisting elements and the diaphragm,
including M t + M ta
Design structure to resist overturning effects
caused by the seismic forces Fx (12.8.5)
Determine the deflection amplification factor
Cd from Table 12.2-1
Determine the deflection δ x at level x by
Eq. 12.8-15: δ x = Cd δ xe / I where δ xe are the
deflections at level x based on an elastic analysis
of the structure subjected to the seismic forces
Fx ††
†† It is permitted to determine δ xe using seismic design forces based
on the computed fundamental period of the structure without the
upper limit CuTa specified in 12.8.2 (12.8.6.2).
FLOWCHART 6.8
Equivalent Lateral Force Procedure (12.6)
Determine the design story drift Δ as the
difference of the deflections δ x at the center of
mass at the top and bottom of the story under
consideration (see Figure 12.8-2)
Check that the allowable story drifts Δ a given in
Table 12.12-1 are satisfied at each story
Ensure that the other applicable drift and
deformation requirements of 12.12 are satisfied
Determine stability coefficient θ at each story by Eq. 12.8-16:
θ = Px Δ / (Vx hsx Cd ) where Px = total vertical design load at and
above level x ( Px is determined using load factors no greater
than 1.0) and hsx = story height below level x
Is θ ≤ 0.1 ?
Determine θ max by Eq. 12.8-17:
θ max = 0.5 / (β Cd ) ≤ 0.25 where β = ratio of shear
demand to shear capacity for the story‡‡
P-delta effects need not be
considered on the structure
‡‡ β can conservatively be taken as 1.0. Where P-delta effects
are included in an automated analysis, the value of θ
computed by Eq. 12.8-16 is permitted to be divided by
(1 + θ) before checking Eq. 12.8-17 (12.8.7).
FLOWCHART 6.8
Equivalent Lateral Force Procedure (12.6)
Is θ > θ max ?
Structure is potentially
unstable and must be
redesigned so that θ ≤ θ max .
Determine displacements
and member forces including
P-delta effects from a
rational analysis.
Determine P-delta effects
by multiplying first-order
displacements and member
forces by 1.0 /(1 − θ) .
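The stability check of 12.8.7 can be sketched as follows (the story values passed in are hypothetical; β may conservatively be taken as 1.0):

```python
# Sketch of the P-delta check of Eqs. 12.8-16 and 12.8-17.

def p_delta_check(px, delta, vx, hsx, cd, beta=1.0):
    theta = px * delta / (vx * hsx * cd)       # Eq. 12.8-16
    theta_max = min(0.5 / (beta * cd), 0.25)   # Eq. 12.8-17
    if theta <= 0.1:
        return "P-delta effects need not be considered"
    if theta > theta_max:
        return "potentially unstable; redesign required"
    return "include P-delta effects in the analysis"

# Hypothetical story values (kips, in.):
print(p_delta_check(px=5000.0, delta=1.0, vx=400.0, hsx=120.0, cd=5.0))
```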
FLOWCHART 6.9
Alternate Simplified Design Procedure
Are the 12 limitations
of 12.14.1.1 satisfied?
Use another analytical
procedure in Chapter 12.
Determine S S , S DS and the SDC from Flowchart 6.4
Determine the response modification coefficient R from
Table 12.14-1 for the appropriate structural system based on the SDC
Determine effective seismic weight W in accordance with 12.14.8.1
Determine base shear V by Eq. 12.14-11: V = F S DS W / R where
F = 1.0 for one-story buildings
= 1.1 for two-story buildings
= 1.2 for three-story buildings
S DS = 2 Fa S S / 3
Fa = 1.0 for rock sites
= 1.4 for soil sites, or
= value determined in accordance with 11.4.3
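The base shear of Eq. 12.14-11 can be sketched directly from the definitions above (the building values in the example call are hypothetical; per 12.14.8.1, S S need not be taken larger than 1.5):

```python
# Sketch of the simplified base shear, V = F * S_DS * W / R, with F and
# Fa per 12.14.8.1 (Fa may also be determined per 11.4.3 instead).

def simplified_base_shear(ss, w, r, n_stories, rock_site=True):
    fa = 1.0 if rock_site else 1.4
    s_ds = 2.0 * fa * min(ss, 1.5) / 3.0
    f = {1: 1.0, 2: 1.1, 3: 1.2}[n_stories]
    return f * s_ds * w / r

# Hypothetical two-story bearing-wall building on rock:
# SS = 1.2, W = 800 kips, R = 5
print(round(simplified_base_shear(1.2, 800.0, 5.0, 2), 1))  # → 140.8
```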
FLOWCHART 6.9
Alternate Simplified Design Procedure (12.14)
Determine lateral seismic force Fx at level x by
Eq. 12.14-12: Fx = (wx / W )V where wx = portion of
W located at level x
Determine seismic design story shear Vx by
Eq. 12.14-13: Vx = Σ Fi (sum over i = x to n)
Is the diaphragm flexible in
accordance with 12.14.5?
Determine inherent torsional moment M t
resulting from eccentricity between the
locations of the center of mass and the
center of rigidity (12.14.8.3.2.1)
Distribute Vx to the vertical elements
of the seismic force-resisting system
based on tributary area (12.14.8.3.1)
Distribute Vx to the vertical elements of
the seismic force-resisting system based on
relative stiffness of the vertical elements
and the diaphragm, including M t
FLOWCHART 6.9
Alternate Simplified Design Procedure (12.14)
Design structure to resist overturning effects
caused by the seismic forces Fx (12.14.8.4)
Foundations of structures shall be designed for
not less than 75 percent of the foundation
overturning design moment.
Structural drift need not be calculated. If
drift is required for other design
requirements, it shall be taken as 1
percent of building height unless it is
computed to be less (12.14.8.5).
Seismic Design Requirements for Nonstructural Components
FLOWCHART 6.10
Seismic Demands on Nonstructural Components
Is the weight of the nonstructural
component ≥ 25 percent of the
effective seismic weight W?
Determine S DS , S D1 and the SDC
from Flowchart 6.4*
Component shall be classified as a
nonbuilding structure and shall be
designed in accordance with 15.3.2
Determine component amplification factor ap from
Table 13.5-1 for architectural components or Table 13.6-1
for mechanical and electrical components
Determine component response modification factor Rp
from Table 13.5-1 for architectural components or
Table 13.6-1 for mechanical and electrical components
* Nonstructural components shall be assigned to the same SDC as the
structure that they occupy or to which they are attached (13.1.2).
FLOWCHART 6.10
Seismic Demands on Nonstructural Components (13.3)
Determine component importance
factor I p in accordance with 13.1.3
Determine horizontal seismic design force F p applied at component’s
center of gravity by Eqs. 13.3-1, 13.3-2 and 13.3-3:**
F p = 0.4a p S DS W p (1 + 2 z / h) / ( R p / I p )
0.3S DS I pW p ≤ F p ≤ 1.6 S DS I pW p
where W p = component operating weight
z = height in structure of point of attachment of component with
respect to the base†
h = average roof height of structure with respect to the base
F p shall be applied independently in at least two orthogonal horizontal directions in combination with service
loads. For vertically cantilevered systems, F p shall be assumed to act in any horizontal direction, and the
component shall be designed for a concurrent vertical force ± 0.2 S DS W p . Redundancy factor ρ is permitted to
be taken equal to 1.0 and the overstrength factor Ω o does not apply (13.3.1).
† For items at or below the base, z shall be taken as zero. The value of z / h need not exceed 1.0.
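The component force of Eqs. 13.3-1 through 13.3-3, including both bounds, can be sketched as follows (the rooftop-unit values in the example call are hypothetical; ap, Rp and Ip come from Tables 13.5-1/13.6-1 and 13.1.3):

```python
# Sketch of the nonstructural component design force F_p.

def component_force(ap, rp, ip, s_ds, wp, z, h):
    z_h = min(max(z, 0.0) / h, 1.0)  # z taken as 0 at or below base; z/h <= 1
    fp = 0.4 * ap * s_ds * wp * (1.0 + 2.0 * z_h) / (rp / ip)  # Eq. 13.3-1
    fp = min(fp, 1.6 * s_ds * ip * wp)   # upper bound, Eq. 13.3-2
    fp = max(fp, 0.3 * s_ds * ip * wp)   # lower bound, Eq. 13.3-3
    return fp

# Hypothetical rooftop mechanical unit: ap = 2.5, Rp = 6.0, Ip = 1.0,
# S_DS = 0.91, Wp = 10 kips, attached at the roof (z = h)
print(round(component_force(2.5, 6.0, 1.0, 0.91, 10.0, 120.0, 120.0), 2))  # → 4.55
```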
Seismic Design Requirements for Nonbuilding Structures
FLOWCHART 6.11
Seismic Design Requirements for Nonbuilding Structures*
(Chapter 15)
Is the nonbuilding structure
similar to buildings?
Is the nonbuilding structure supported
by another structure and not part of the
primary seismic force-resisting system?
Is the weight of the nonbuilding
structure less than 25 percent of the
combined weight of the nonbuilding
structure and the supporting structure?
Determine the period T of
the nonbuilding structure in
accordance with 15.4.4
Nonbuilding structures are permitted to be
designed in accordance with reference
documents listed in Chapter 23, subject to the
amendments in 15.4 (15.2). Tanks and vessels
(15.7) are not considered here.
Determine the design seismic forces of the
nonbuilding structure in accordance with
Chapter 13 where the values of Rp and ap are
determined in accordance with 13.1.5. Design
the supporting structure in accordance with
Chapter 12 if it is a building structure or in
accordance with 15.5 if it is a nonbuilding
structure similar to buildings.
FLOWCHART 6.11
Seismic Design Requirements for Nonbuilding Structures*
Determine S DS , S D1 and the SDC from
Flowchart 6.4
Determine the importance factor I in
accordance with 15.4.1.1
Determine the period T of the nonbuilding
structure in accordance with 15.4.4
Is the period of the nonbuilding
structure T < 0.06 sec?
Determine the response modification
coefficient R, the overstrength factor Ω o
and the deflection amplification factor Cd
from Table 15.4-1
Determine total design lateral seismic base
shear V by Eq. 15.4-5: V = 0.30S DS WI
where W = nonbuilding structure operating
weight defined in 15.4.3
Select an appropriate structural analysis
procedure in accordance with 12.6 and
determine the design seismic forces as
required (15.1.3; see Flowchart 6.7)
Distribute V over the height of the
nonbuilding structure in accordance with
12.8.3 (15.4.2; see Flowchart 6.8)
Satisfy the requirements of 15.4.5
and 15.5 where applicable.
FLOWCHART 6.11
Seismic Design Requirements for Nonbuilding Structures*
Determine S DS , S D1 and the SDC from
Flowchart 6.4
Determine the importance factor I in
accordance with 15.4.1.1
Determine the period T of the nonbuilding
structure in accordance with 15.4.4
Is the period of the nonbuilding
structure T < 0.06 sec?
Determine the response modification
coefficient R, the overstrength factor Ω o
and the deflection amplification factor Cd
from Table 15.4-2
Determine total design lateral seismic base
shear V by Eq. 15.4-5: V = 0.30S DS WI
where W = nonbuilding structure operating
weight defined in 15.4.3
Select an appropriate structural analysis
procedure in accordance with 15.1.3 and
determine the design seismic forces as
required, subject to 15.4.1(2)
Distribute V over the height of the
nonbuilding structure in accordance with
12.8.3 (15.4.2; see Flowchart 6.8)
Satisfy the requirements of 15.4.5
and 15.6 where applicable.
FLOWCHART 6.11
Seismic Design Requirements for Nonbuilding Structures*
Is the period of the nonbuilding
structure T < 0.06 sec?
Determine the design seismic forces of the
nonbuilding structure based on a combined
model of the nonbuilding structure and
supporting structure. Design the combined
structure in accordance with 15.5 using an R
value of the combined system as the lesser R
value of the nonbuilding structure or the
supporting structure.
Determine the design seismic forces of the
nonbuilding structure in accordance with
Chapter 13 where the value of Rp shall be
taken as the R value of the nonbuilding
structure in accordance with Table 15.4-2 and
ap shall be taken as 1.0. Design the supporting
structure in accordance with Chapter 12 if it is
a building structure or in accordance with 15.5
if it is a nonbuilding structure similar to
buildings. The R-value of the combined
system is permitted to be taken as the R-value
of the supporting system.
The following examples illustrate the IBC and ASCE/SEI 7 requirements for seismic
design loads.
Example 6.1 – Residential Building, Seismic Design Category
A typical floor plan and elevation of a 12-story residential building is depicted in
Figure 6.6. Given the design data below, determine the Seismic Design Category (SDC).
Location: Charleston, SC (Latitude: 32.74°, Longitude: -79.93°)
Soil classification: Site Class D
Occupancy: Residential occupancy where less than 300 people congregate in one area
Structural system: Cast-in-place reinforced concrete building frame system
Step 1: Determine the seismic ground motion values from Flowchart 6.2.
1. Determine the mapped accelerations S S and S1 .
In lieu of using Figures 22-1 and 22-2, the mapped accelerations are determined
by inputting the latitude and longitude of the site into the USGS Ground Motion
Parameter Calculator. The output is as follows: S S = 1.37 and S1 = 0.34 .
2. Determine the site class of the soil.
The site class of the soil is given in the design data as Site Class D.
3. Determine soil-modified accelerations S MS and S M 1 .
Site coefficients Fa and Fv are determined from Tables 11.4-1 and 11.4-2, respectively:
For Site Class D and S S > 1.25 : Fa = 1.0
For Site Class D and 0.3 < S1 < 0.4 : Fv = 1.72 from linear interpolation
Member sizes:
• Slab: 9 in.
• Columns: 24 x 24 in.
• Walls: 12 in. thick
Superimposed dead loads:
• Roof: 10 psf
• Floors: 20 psf (includes 10 psf
for partitions per 12.7.2)
• Glass curtain wall: 8 psf
12 @ 10′-0″ = 120′-0″
Figure 6.6 Typical Floor Plan and Elevation of 12-story Residential Building
S MS = 1.0 × 1.37 = 1.37
S M 1 = 1.72 × 0.34 = 0.59
4. Determine design accelerations S DS and S D1 .
From Eqs. 11.4-3 and 11.4-4:
S DS = (2/3) × 1.37 = 0.91
S D1 = (2/3) × 0.59 = 0.39
Step 2: Determine the SDC from Flowchart 6.4.
1. Determine if the building can be assigned to SDC A in accordance with 11.4.1.
Since S S = 1.37 > 0.15 and S1 = 0.34 > 0.04 , the building cannot be
automatically assigned to SDC A.
2. Determine the Occupancy Category from IBC Table 1604.5.
For a residential occupancy where less than 300 people congregate in one area,
the Occupancy Category is II.
3. Since S1 < 0.75 , the building is not assigned to SDC E or F.
4. Check if all four conditions of 11.6 are satisfied.
Check if the approximate period Ta is less than 0.8TS .
Use Eq. 12.8-7 with approximate period parameters for “other structural systems”:
Ta = Ct hn^x = 0.02(120)^0.75 = 0.73 sec
where Ct and x are given in Table 12.8-2.
TS = S D1 / S DS = 0.39 / 0.91 = 0.43 sec
0.73 sec > 0.8 × 0.43 = 0.34 sec
Since this condition is not satisfied, the SDC cannot be determined by Table
11.6-1 alone (11.6).
5. Determine the SDC from Tables 11.6-1 and 11.6-2.
From Table 11.6-1, with S DS > 0.50 and Occupancy Category II, the SDC is D.
From Table 11.6-2, with S D1 > 0.20 and Occupancy Category II, the SDC is D.
Therefore, the SDC is D for this building.
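The final lookup in Step 2 can be sketched as a small function (the thresholds are transcribed here for Occupancy Categories I through III only and should be verified against Tables 11.6-1 and 11.6-2):

```python
# Sketch of the SDC determination of Example 6.1: take the more severe
# SDC from the short-period (S_DS) and 1-second (S_D1) tables.

def sdc_occ_i_to_iii(s_ds, s_d1):
    def from_sds(x):
        if x < 0.167: return "A"
        if x < 0.33:  return "B"
        if x < 0.50:  return "C"
        return "D"
    def from_sd1(x):
        if x < 0.067: return "A"
        if x < 0.133: return "B"
        if x < 0.20:  return "C"
        return "D"
    return max(from_sds(s_ds), from_sd1(s_d1))  # more severe of the two

print(sdc_occ_i_to_iii(0.91, 0.39))  # → D
```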
Example 6.2 – Residential Building, Permitted Analytical Procedure
For the 12-story residential building in Example 6.1, determine the analytical procedure
that can be used in calculating the seismic forces.
Use Flowchart 6.7 to determine the permitted analytical procedure.
1. Determine the SDC from Flowchart 6.4.
It was determined in Step 2 of Example 6.1 that the SDC is D.
2. Determine S DS and S D1 from Flowchart 6.2.
The design accelerations were determined in Step 1, item 4 of Example 6.1:
S DS = 0.91 and S D1 = 0.39.
3. Determine TS .
TS = S D1 / S DS was determined in Step 2, item 4 of Example 6.1 as 0.43 sec.
4. Determine the fundamental period of the building T.
It was determined in Step 2, item 4 of Example 6.1 that T = Ta = 0.73 sec.
5. Check if T < 3.5TS .
T = 0.73 sec < 3.5TS = 3.5 × 0.43 = 1.5 sec
6. Determine if the structure is regular or not.
a. Determine if the structure has any horizontal structural irregularities in
accordance with Table 12.3-1.
i. Torsional irregularity
In accordance with Table 12.3-1, Type 1a torsional irregularity and Type 1b
extreme torsional irregularity for rigid or semirigid diaphragms exist where
the ratio of the maximum story drift at one end of a structure to the average
story drifts at two ends of the structure exceeds 1.2 and 1.4, respectively. The
story drifts are to be determined using code-prescribed forces, including
accidental torsion. In this example, the floors and roof are cast-in-place
reinforced concrete slabs, which are considered to be rigid diaphragms.
At this point in the analysis, it is not yet evident which method is
required to be used to determine the prescribed seismic forces. In lieu of using
a more complicated higher order analysis, the equivalent lateral force
procedure may be used to determine the lateral seismic forces. These forces
are applied to the building and the subsequent analysis yields the story drifts
Δ , which are used in determining whether Type 1a or 1b torsional irregularity
exists. The results from the equivalent lateral force procedure will be needed
if it is subsequently determined that a modal analysis is required (see 12.9.4).
Use Flowchart 6.8 to determine the lateral seismic forces from the equivalent
lateral force procedure.
a) The design accelerations and the SDC have been determined in
Example 6.1.
b) Determine the response modification coefficient R from Table 12.2-1.
The walls in this building frame system must be special reinforced
concrete shear walls, since the building is assigned to SDC D (system B5
in Table 12.2-1). In this case, R = 6. Note that the height of the building,
which is 120 ft, is less than the limiting height of 160 ft for this type of
system in SDC D.8
c) Determine the importance factor I from Table 11.5-1.
For Occupancy Category II, I = 1.0.
d) Determine the period of the structure T.
The increased building height limit of 12.2.5.4 is not considered.
It was determined in Step 2, item 4 of Example 6.1 that the approximate
period of the structure Ta, which is permitted to be used in the equivalent
lateral force procedure, is equal to 0.73 sec.
e) Determine long-period transition period TL from Figure 22-15.
For Charleston, SC, TL = 8 sec > Ta = 0.73 sec.
f) Determine seismic response coefficient Cs.
The seismic response coefficient Cs is determined by Eq. 12.8-3:
Cs = SD1 / [T(R/I)] = 0.39 / [0.73(6/1.0)] = 0.09
The value of Cs need not exceed that from Eq. 12.8-2:
Cs = SDS / (R/I) = 0.91 / (6/1.0) = 0.15
Also, Cs must not be less than the larger of 0.044SDS·I = 0.04 (governs)
and 0.01 (Eq. 12.8-5).
Thus, the value of Cs from Eq. 12.8-3 governs.
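The three-way limit on Cs (Eqs. 12.8-2, 12.8-3 and 12.8-5) can be sketched in a short Python helper; this is an illustration only (the function name is ours), valid for T ≤ TL since the Eq. 12.8-4 branch is omitted:

```python
def seismic_response_coefficient(s_ds, s_d1, t, r, i):
    """Cs per Eqs. 12.8-2, 12.8-3 and 12.8-5 (valid for T <= TL;
    the Eq. 12.8-4 branch for T > TL is omitted)."""
    cs = s_d1 / (t * (r / i))              # Eq. 12.8-3
    cs_max = s_ds / (r / i)                # Eq. 12.8-2 upper limit
    cs_min = max(0.044 * s_ds * i, 0.01)   # Eq. 12.8-5 lower limit
    return max(min(cs, cs_max), cs_min)

# Example 6.2 values: S_DS = 0.91, S_D1 = 0.39, T = 0.73 sec, R = 6, I = 1.0
print(round(seismic_response_coefficient(0.91, 0.39, 0.73, 6, 1.0), 2))  # 0.09
```

For short periods the Eq. 12.8-2 cap governs, and for long periods the Eq. 12.8-5 floor takes over, matching the checks in the text.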
g) Determine effective seismic weight W in accordance with 12.7.2.
The member sizes and superimposed dead loads are given in Figure 6.6
and the effective weights at each floor level are given in Table 6.8. The
total weight W is the summation of the effective dead loads at each level.
h) Determine seismic base shear V.
Seismic base shear is determined by Eq. 12.8-1:
V = CsW = 0.09 × 19,920 = 1,793 kips
i) Determine exponent related to structure period k.
Since 0.5 sec < T = 0.73 sec < 2.5 sec, k is determined as follows:
k = 0.75 + 0.5T = 1.12
Table 6.8 Seismic Forces and Story Shears
[Columns: wx (kips), hx (ft), wx·hx^k, Lateral force Fx (kips), Story Shear Vx (kips); tabulated values not reproduced]
j) Determine lateral seismic force Fx at each level x.
Fx is determined by Eqs. 12.8-11 and 12.8-12. A summary of the lateral
forces Fx and the story shears V x is given in Table 6.8.
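The vertical distribution of V (Eqs. 12.8-11 and 12.8-12) can be sketched in Python. The story weights and level heights below are hypothetical stand-ins, since the tabulated values of Table 6.8 are not reproduced here:

```python
def lateral_forces(v, weights, heights, t):
    """Vertical distribution of base shear V per Eqs. 12.8-11 and
    12.8-12: Fx = Cvx*V with Cvx = wx*hx**k / sum(wi*hi**k)."""
    # k per 12.8.3: 1.0 for T <= 0.5 s, 2.0 for T >= 2.5 s, else 0.75 + 0.5*T
    k = 1.0 if t <= 0.5 else 2.0 if t >= 2.5 else 0.75 + 0.5 * t
    wh = [w * h**k for w, h in zip(weights, heights)]
    return [v * x / sum(wh) for x in wh]

# Hypothetical story weights (kips) and level heights (ft), top level first.
fx = lateral_forces(1793, [1700, 1650, 1650, 1600], [120, 90, 60, 30], 0.73)
print(round(sum(fx)))  # 1793 -- the forces sum back to the base shear
# Story shears Vx are cumulative sums of Fx from the top down.
```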
Three-dimensional analyses were performed independently in the N-S and E-W directions for the seismic forces in Table 6.8 using a commercial computer
program. In the model, rigid diaphragms were assigned at each floor level.
The stiffness properties of the shear walls were input assuming cracked
sections (12.7.3): Ieff = 0.5Ig, where Ig is the gross moment of inertia of the
section. In accordance with 12.8.4.2, the center of mass was displaced each
way from its actual location a distance equal to 5 percent of the building
dimension perpendicular to the applied forces to account for accidental torsion
in seismic design.
A summary of the elastic displacements δ xe at each end of the building in
both the N-S and E-W directions due to the code-prescribed forces in
Table 6.8 is given in Table 6.9 at all floor levels. Also provided in the table
are the story drifts Δ at each end of the building in both directions.
According to Table 12.3-1, a torsional irregularity occurs where the maximum
story drift at one end of the structure is greater than 1.2 times the average of the
story drifts at the two ends of the structure. The average story drift Δ avg and
the ratio of the maximum story drift to the average story drift Δ max / Δ avg are
also provided in Table 6.9.
Table 6.9 Lateral Displacements and Story Drifts due to Seismic Forces
[Columns for both the N-S and E-W directions: δxe at each end, Δ at each end, Δavg, Δmax/Δavg; tabulated values not reproduced]
For example, at the 12th story in the N-S direction:
Δ1 = 10.79 − 9.63 = 1.16 in.
Δ2 = 5.72 − 5.12 = 0.60 in.
Δavg = (1.16 + 0.60)/2 = 0.88 in.
Δmax/Δavg = 1.16/0.88 = 1.32 > 1.2
Therefore, a Type 1a torsional irregularity exists at all floor levels in the
N-S direction.9
According to 12.8.4.3, where torsional irregularity exists at floor level x, the
accidental torsional moments M ta must be increased by the torsional
amplification factor Ax given by Eq. 12.8-14:
Ax = [δmax / (1.2δavg)]²
A Type 1b extreme torsional irregularity does not exist, since the ratio of maximum drift to average drift is
less than 1.4 (see Table 12.3-1).
For example, at the 12th story in the N-S direction:
A12 = [10.79 / (1.2 × (10.79 + 5.72)/2)]² = 1.19 > 1.0
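Note that the two torsional checks use different quantities: the Type 1a/1b test uses story drifts, while Ax (Eq. 12.8-14) uses the displacements at the ends of the structure. A minimal Python sketch (function names are ours):

```python
def drift_ratio(drift_max, drift_min):
    """Ratio for the Type 1a/1b torsional irregularity test
    (Table 12.3-1): maximum story drift over the average of the
    drifts at the two ends of the structure."""
    return drift_max / ((drift_max + drift_min) / 2)

def torsional_amplification(delta_max, delta_min):
    """Torsional amplification factor Ax per Eq. 12.8-14, computed
    from the displacements at the two ends of level x (not drifts);
    Ax is never taken less than 1.0."""
    delta_avg = (delta_max + delta_min) / 2
    return max((delta_max / (1.2 * delta_avg)) ** 2, 1.0)

# 12th story, N-S direction (Example 6.2):
print(round(drift_ratio(1.16, 0.60), 2))               # 1.32 > 1.2 -> Type 1a
print(round(torsional_amplification(10.79, 5.72), 2))  # 1.19 > 1.0
```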
ii. Re-entrant corner irregularity
According to Table 12.3-1, a re-entrant corner irregularity exists where both
plan projections of the structure beyond a re-entrant corner are greater than
15 percent of the plan dimension in a given direction.
Using Table 6.2:
Projection b = 56 ft > 0.15a = 0.15 × 112 = 16.8 ft
Projection d = 70 ft > 0.15c = 0.15 × 120 = 18.0 ft
Therefore, a Type 2 re-entrant corner irregularity exists.
iii. Diaphragm discontinuity irregularity
This type of irregularity does not exist, since the area of any of the openings is
much less than 50 percent of the area of the diaphragm. Also, the diaphragm
has the same effective stiffness on all of the floor levels.
iv. Out-of-plane offsets irregularity
There are no out-of-plane offsets of the shear walls, so this irregularity does
not exist.
v. Nonparallel systems irregularity
This discontinuity does not exist, since all of the shear walls are parallel to a
major orthogonal axis of the building.
b. Determine if the structure has any vertical structural irregularities in accordance
with Table 12.3-2.
By inspection, none of the vertical irregularities defined in Table 12.3-2 exist for
this building (also see Table 6.3).
In summary, the building is not regular and has the following horizontal irregularities:
Type 1a torsional irregularity and Type 2 re-entrant corner irregularity.
7. Determine the permitted analytical procedure from Table 12.6-1.
The structure is irregular with T < 3.5 TS. If the structure had only a Type 2 re-entrant
corner irregularity, the equivalent lateral force procedure could be used to analyze the
structure. However, since a Type 1a torsional irregularity also exists, the equivalent
lateral force procedure is not permitted; either a modal response spectrum analysis
(12.9) or a seismic response history procedure (Chapter 16) must be utilized.10
Example 6.3 – Office Building, Seismic Design Category
Typical floor plans and elevations of a seven-story office building are depicted in Figure
6.7. Given the design data below, determine the Seismic Design Category (SDC).
Location: Memphis, TN (Latitude: 35.13°, Longitude: -90.05°)
Soil classification: Site Class D
Occupancy: Office occupancy where less than 300 people congregate in one area
Material: Structural steel
Structural system: Moment-resisting frame and building frame systems
Step 1: Determine the seismic ground motion values from Flowchart 6.2.
1. Determine the mapped accelerations S S and S1 .
In lieu of using Figures 22-1 and 22-2, the mapped accelerations are determined
by inputting the latitude and longitude of the site into the USGS Ground Motion
Parameter Calculator. The output is as follows: S S = 1.35 and S1 = 0.37 .
2. Determine the site class of the soil.
The site class of the soil is given in the design data as Site Class D.
See the reference sections in Table 12.3-1 that must be satisfied for these types of irregularities in
structures assigned to SDC D. Design forces shall be increased 25 percent for connections of diaphragms
to vertical elements and to collectors and for connection of collectors to the vertical elements (12.3.3.4).
Members that are not part of the seismic force-resisting system must satisfy the deformational
compatibility requirements of 12.12.4.
Member sizes:
• Metal deck: 3 in. deck + 2.5 in.
lightweight concrete (39 psf)
Superimposed dead loads:
• Roof: 10 psf
• Floors: 20 psf (includes 10 psf
for partitions per 12.7.2).
• Glass curtain wall: 8 psf
Figure 6.7 Typical Floor Plans and Elevations of Seven-story Office Building
[Plan and elevation annotations: braces (dashed) and moment connections; floors 2–7 typical; floor 1 at El. 18′-0″ with 6 stories @ 13′-0″ = 78′-0″ above; North/South and East/West elevations shown]
3. Determine soil-modified accelerations SMS and SM1.
Site coefficients Fa and Fv are determined from Tables 11.4-1 and 11.4-2, respectively:
For Site Class D and SS > 1.25: Fa = 1.0
For Site Class D and 0.3 < S1 < 0.4: Fv = 1.66 from linear interpolation
SMS = 1.0 × 1.35 = 1.35
SM1 = 1.66 × 0.37 = 0.61
4. Determine design accelerations SDS and SD1.
From Eqs. 11.4-3 and 11.4-4:
SDS = (2/3) × 1.35 = 0.90
SD1 = (2/3) × 0.61 = 0.41
Step 2: Determine the SDC from Flowchart 6.4.
1. Determine if the building can be assigned to SDC A in accordance with 11.4.1.
Since S S = 1.35 > 0.15 and S1 = 0.37 > 0.04 , the building cannot be
automatically assigned to SDC A.
2. Determine the Occupancy Category from IBC Table 1604.5.
For a business occupancy where less than 300 people congregate in one area, the
Occupancy Category is II.
3. Since S1 < 0.75 , the building is not assigned to SDC E or F.
4. Check if all four conditions of 11.6 are satisfied.
Check if the approximate period Ta is less than 0.8TS .
In the N-S direction, the concentrically braced steel frames fall under “other
structural systems” in Table 12.8-2. Using Eq. 12.8-7, Ta is determined as
Ta = Ct·hn^x = 0.02(96)^0.75 = 0.61 sec
In the E-W direction, values of Ct and x for steel moment-resisting frames are
used from Table 12.8-2:
Ta = Ct·hn^x = 0.028(96)^0.8 = 1.1 sec
TS = S D1 / S DS = 0.41 / 0.90 = 0.46 sec
0.8 × 0.46 = 0.37 sec is less than the approximate periods in both the N-S and E-W directions.
Since this condition is not satisfied, the SDC cannot be determined by Table 11.6-1 alone (11.6).
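The approximate period calculation (Eq. 12.8-7 with the Table 12.8-2 coefficients) can be sketched as follows; the dictionary keys are our labels for a subset of the table rows:

```python
def approximate_period(hn_ft, system):
    """Approximate fundamental period Ta per Eq. 12.8-7,
    Ta = Ct * hn**x, with Ct and x from Table 12.8-2."""
    params = {
        "steel_moment_frame": (0.028, 0.8),
        "concrete_moment_frame": (0.016, 0.9),
        "eccentrically_braced_steel_frame": (0.03, 0.75),
        "other": (0.02, 0.75),  # includes concentrically braced frames
    }
    ct, x = params[system]
    return ct * hn_ft**x

# Example 6.3: hn = 96 ft
print(round(approximate_period(96, "other"), 2))               # 0.61 (N-S)
print(round(approximate_period(96, "steel_moment_frame"), 1))  # 1.1 (E-W)
```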
5. Determine the SDC from Tables 11.6-1 and 11.6-2.
From Table 11.6-1, with S DS > 0.50 and Occupancy Category II, the SDC is D.
From Table 11.6-2, with S D1 > 0.20 and Occupancy Category II, the SDC is D.
Therefore the SDC is D for this building.
Example 6.4 – Office Building, Permitted Analytical Procedure
For the seven-story office building in Example 6.3, determine the analytical procedure
that can be used in calculating the seismic forces.
Use Flowchart 6.6 to determine the permitted analytical procedure.
1. Determine the SDC from Flowchart 6.4.
It was determined in Step 2 of Example 6.3 that the SDC is D.
2. Determine S DS and S D1 from Flowchart 6.2.
The design accelerations were determined in Step 1, item 4 of Example 6.3:
S DS = 0.90 and S D1 = 0.41.
3. Determine TS .
It was determined in Step 2, item 4 of Example 6.3 that TS = S D1 / S DS = 0.46 sec.
4. Determine the fundamental period of the building T.
The periods T were determined in Step 2, item 4 of Example 6.3 as 0.61 sec in the
N-S direction and 1.1 sec in the E-W direction.
5. Check if T < 3.5TS .
3.5TS = 3.5 × 0.46 = 1.6 sec, which is greater than the periods in both directions.
6. Determine if the structure is regular or not.
a. Determine if the structure has any horizontal structural irregularities in
accordance with Table 12.3-1.
i. Torsional irregularity
In accordance with Table 12.3-1, Type 1a torsional irregularity and Type 1b
extreme torsional irregularity for rigid or semirigid diaphragms exist where
the ratio of the maximum story drift at one end of a structure to the average
story drifts at two ends of the structure exceeds 1.2 and 1.4, respectively. The
story drifts are to be determined using code-prescribed forces, including
accidental torsion. In this example, the floor and roof are metal deck with
concrete, which is considered to be a rigid diaphragm (12.3.1.2).
At this point in the analysis, it is not yet evident which method is
required to determine the prescribed seismic forces. In lieu of using
a more complicated higher order analysis, the equivalent lateral force
procedure may be used to determine the lateral seismic forces. These forces
are applied to the building and the subsequent analysis yields the story drifts
Δ , which are used in determining whether Type 1a or 1b torsional irregularity
exists. The results from the equivalent lateral force procedure will be needed
if it is subsequently determined that a modal analysis is required (see 12.9.4).
Use Flowchart 6.8 to determine the lateral seismic forces from the equivalent
lateral force procedure:
a) The design accelerations and the SDC have been determined in
Example 6.3.
b) Determine the response modification coefficient R from Table 12.2-1.
In the N-S direction, special steel concentrically braced frames are
required, since the building is assigned to SDC D (system B3 in
Table 12.2-1). In this case, R = 6. Note that the height of the building, which
is 96 ft, is less than the limiting height of 160 ft for this type of system in
SDC D.11
The increased building height limit of 12.2.5.4 is not considered.
In the E-W direction, special steel moment frames are required (system C1
in Table 12.2-1). In this case, R = 8, and there is no height limit.
c) Determine the importance factor I from Table 11.5-1.
For Occupancy Category II, I = 1.0.
d) Determine the period of the structure T.
It was determined in Step 2, item 4 of Example 6.3 that the approximate
period of the structure Ta, which is permitted to be used in the equivalent
lateral force procedure, is 0.61 sec in the N-S direction and 1.1 sec in the
E-W direction.
e) Determine long-period transition period TL from Figure 22-15.
For Memphis, TN, TL = 12 sec, which is greater than the periods in both directions.
f) Determine seismic response coefficients C s in both directions.
N-S direction:
The seismic response coefficient C s is determined by Eq. 12.8-3:
Cs = SD1 / [T(R/I)] = 0.41 / [0.61(6/1.0)] = 0.11
The value of Cs need not exceed that from Eq. 12.8-2:
Cs = SDS / (R/I) = 0.90 / (6/1.0) = 0.15
Also, Cs must not be less than the larger of 0.044SDS·I = 0.04 (governs)
and 0.01 (Eq. 12.8-5).
Thus, the value of Cs from Eq. 12.8-3 governs in the N-S direction.
E-W direction:
Cs = SD1 / [T(R/I)] = 0.41 / [1.1(8/1.0)] = 0.05
The value of Cs need not exceed that from Eq. 12.8-2:
Cs = SDS / (R/I) = 0.90 / (8/1.0) = 0.11
Also, Cs must not be less than the larger of 0.044SDS·I = 0.04 (governs)
and 0.01 (Eq. 12.8-5).
Thus, the value of Cs from Eq. 12.8-3 governs in the E-W direction.
g) Determine effective seismic weight W in accordance with 12.7.2.
The member sizes and superimposed dead loads are given in Figure 6.7
and the effective weights at each floor level are given in Tables 6.10 and
6.11. The total weight W is the summation of the effective dead loads at
each level.
h) Determine seismic base shear V.
Seismic base shear is determined by Eq. 12.8-1 in both the N-S and E-W directions:
N-S direction: V = CsW = 0.11 × 9,960 = 1,096 kips
E-W direction: V = CsW = 0.05 × 9,960 = 498 kips
i) Determine exponent related to structure period k in both directions.
Since 0.5 sec < T < 2.5 sec in both directions, k is determined as follows:
N-S direction: k = 0.75 + 0.5T = 1.06
E-W direction: k = 0.75 + 0.5T = 1.30
j) Determine lateral seismic force Fx at each level x.
Lateral forces Fx are determined by Eqs. 12.8-11 and 12.8-12.
A summary of the lateral forces Fx and the story shears Vx is given in
Tables 6.10 and 6.11 for the N-S and E-W directions, respectively.
Table 6.10 Seismic Forces and Story Shears in the N-S Direction
[Columns: wx (kips), hx (ft), wx·hx^k, Lateral force Fx (kips), Story Shear Vx (kips); tabulated values not reproduced]
Table 6.11 Seismic Forces and Story Shears in the E-W Direction
[Columns: wx (kips), hx (ft), wx·hx^k, Lateral force Fx (kips), Story Shear Vx (kips); tabulated values not reproduced]
Three-dimensional analyses were performed independently in the N-S and E-W directions for the seismic forces in Tables 6.10 and 6.11 using a commercial
computer program. In the model, rigid diaphragms were assigned at each floor
level. In accordance with 12.8.4.2, the center of mass was displaced each way
from its actual location a distance equal to 5 percent of the building dimension
perpendicular to the applied forces to account for accidental torsion in seismic design.
A summary of the elastic displacements δ xe at each end of the building in
both the N-S and E-W directions due to the code-prescribed forces in
Tables 6.10 and 6.11 is given in Table 6.12 at all floor levels.
According to Table 12.3-1, a torsional irregularity occurs where the maximum
story drift at one end of the structure is greater than 1.2 times the average of the
story drifts at the two ends of the structure. The average story drift Δ avg and
the ratio of the maximum story drift to the average story drift Δ max / Δ avg are
also provided in Table 6.12.
Table 6.12 Lateral Displacements and Story Drifts due to Seismic Forces
[Columns for both the N-S and E-W directions: δxe at each end, Δ at each end, Δavg, Δmax/Δavg; tabulated values not reproduced]
For example, at the first story in the N-S direction:
Δ1 = 0.46 in.
Δ2 = 0.18 in.
Δavg = (0.46 + 0.18)/2 = 0.32 in.
Δmax/Δavg = 0.46/0.32 = 1.44 > 1.4
Therefore, a Type 1b extreme torsional irregularity exists at the first
story in the N-S direction.
According to 12.8.4.3, where torsional irregularity exists at floor level x, the
accidental torsional moments Mta must be increased by the torsional
amplification factor Ax given by Eq. 12.8-14:
Ax = [δmax / (1.2δavg)]²
At the first story in the N-S direction:
A1 = [0.46 / (1.2 × (0.46 + 0.18)/2)]² = 1.44 > 1.0
Therefore, the accidental torsional moment at the first story is:12
(Mta)1 = A1F1e = 1.44 × 71 × (0.05 × 180) = 920 ft-kips
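The amplified accidental torsional moment can be reproduced in a one-line Python helper (the function name is ours; the 5 percent eccentricity is per 12.8.4.2):

```python
def amplified_accidental_torsion(ax, fx, plan_dim, ecc=0.05):
    """Amplified accidental torsional moment per 12.8.4.2 and 12.8.4.3:
    Mta = Ax * Fx * (ecc * plan dimension perpendicular to Fx)."""
    return ax * fx * ecc * plan_dim

# First story, N-S direction: Ax = 1.44, F1 = 71 kips, 180-ft plan dimension
print(round(amplified_accidental_torsion(1.44, 71, 180)))  # 920 ft-kips
```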
ii. Re-entrant corner irregularity
By inspection, this irregularity does not exist.
iii. Diaphragm discontinuity irregularity
This irregularity does not exist in this building, since the only diaphragm
openings are those for typical elevators and stairs.
iv. Out-of-plane offsets irregularity
In the first story, the seismic force-resisting system consists of special steel
concentrically braced frames along column lines 1 and 7. Above the first
floor, there is a 30-ft offset of the braced frames, which occur along column
lines 2 and 6.
Therefore, a Type 4 out-of-plane offset irregularity exists.
Note that the forces from the braced frames along column lines 2 and 6 must
be transferred through the structure to the braced frames along column lines 1
and 7, respectively.
v. Nonparallel systems irregularity
This discontinuity does not exist, since all of the braced frames and moment-resisting frames are parallel to a major orthogonal axis of the building.
b. Determine if the structure has any vertical structural irregularities in accordance
with Table 12.3-2.
i. Stiffness–Soft Story Irregularity
A soft story is defined in Table 12.3-2 based on the relative lateral stiffness of
stories in a building. In general, it is not practical to determine story stiffness.
Instead, this type of irregularity can be evaluated using drift ratios due to the
code-prescribed lateral forces.13 A soft story exists when one of the following
conditions is satisfied:
It is assumed that the center of mass and center of rigidity are at the same location in this building.
Story displacements based on the code-prescribed lateral forces can be used to evaluate soft stories when
the story heights are equal.
Soft story irregularity:
0.7(δ1e/h1) > (δ2e − δ1e)/h2
or
0.8(δ1e/h1) > (1/3)[(δ2e − δ1e)/h2 + (δ3e − δ2e)/h3 + (δ4e − δ3e)/h4]
Extreme soft story irregularity:
0.6(δ1e/h1) > (δ2e − δ1e)/h2
or
0.7(δ1e/h1) > (1/3)[(δ2e − δ1e)/h2 + (δ3e − δ2e)/h3 + (δ4e − δ3e)/h4]
Check for a soft story in the first story:
In the N-S direction, the displacements of the center of mass at the first,
second, third and fourth floors are δ1e = 0.31 in., δ 2e = 0.51 in.,
δ3e = 0.69 in., and δ 4e = 0.91 in.
0.7(δ1e/h1) = 0.7 × 0.31/(18 × 12) = 0.0010 < (δ2e − δ1e)/h2 = (0.51 − 0.31)/(13 × 12) = 0.0013
0.8(δ1e/h1) = 0.8 × 0.31/(18 × 12) = 0.0011 < (1/3)[(δ2e − δ1e)/h2 + (δ3e − δ2e)/h3 + (δ4e − δ3e)/h4]
= (1/3)[(0.51 − 0.31)/(13 × 12) + (0.69 − 0.51)/(13 × 12) + (0.91 − 0.69)/(13 × 12)] = 0.0013
In the E-W direction, the displacements of the center of mass at the first,
second, third and fourth floors are δ1e = 0.61 in., δ 2e = 1.49 in.,
δ3e = 2.68 in., and δ 4e = 3.75 in.
0.7(δ1e/h1) = 0.7 × 0.61/(18 × 12) = 0.0020 < (δ2e − δ1e)/h2 = (1.49 − 0.61)/(13 × 12) = 0.0056
0.8(δ1e/h1) = 0.8 × 0.61/(18 × 12) = 0.0023 < (1/3)[(δ2e − δ1e)/h2 + (δ3e − δ2e)/h3 + (δ4e − δ3e)/h4]
= (1/3)[(1.49 − 0.61)/(13 × 12) + (2.68 − 1.49)/(13 × 12) + (3.75 − 2.68)/(13 × 12)] = 0.0067
Therefore, a soft story irregularity does not exist in the first story.
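The drift-ratio form of the soft-story test can be sketched in Python; displacements and story heights are in consistent units (inches here), and the function signature is ours:

```python
def soft_story_check(disp, heights):
    """Drift-ratio test for a Stiffness-Soft Story irregularity
    (Table 12.3-2) in the first story. `disp` holds the elastic
    displacements of the center of mass at floors 1..4 and `heights`
    the corresponding story heights. Returns (soft, extreme)."""
    d1, d2, d3, d4 = disp
    h1, h2, h3, h4 = heights
    r1 = d1 / h1                      # first-story drift ratio
    r2 = (d2 - d1) / h2               # story above
    r_avg = ((d2 - d1) / h2 + (d3 - d2) / h3 + (d4 - d3) / h4) / 3
    soft = 0.7 * r1 > r2 or 0.8 * r1 > r_avg
    extreme = 0.6 * r1 > r2 or 0.7 * r1 > r_avg
    return soft, extreme

# Example 6.4, N-S direction (in.; 18-ft first story, 13-ft stories above)
print(soft_story_check([0.31, 0.51, 0.69, 0.91], [216, 156, 156, 156]))
# (False, False) -> no soft story
```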
ii. Weight (mass) irregularity.
Check the weight ratio of the first and second stories: 2,037/1,381 = 1.48 <
1.50. Thus, this irregularity is not present.
iii. Vertical geometric irregularity.
A vertical geometric irregularity is considered to exist where the horizontal
dimension of the seismic force-resisting system in any story is 1.3 times that
in an adjacent story.
In this case, the setbacks at the first floor level must be investigated for the
moment-resisting frames along column lines A and E:
Width of floor 1/Width of floor 2 = 180/120 = 1.5 > 1.3
Thus, a Type 3 vertical geometric irregularity exists.
iv. In-plane discontinuity in vertical lateral force-resisting element irregularity.
There are no in-plane offsets of this type, so this irregularity does not exist.
v. Discontinuity in lateral strength-weak story irregularity.
This type of irregularity exists where a story lateral strength is less than 80
percent of that in the story above. The story strength is considered to be the
total strength of all seismic-resisting elements that share the story shear for the
direction under consideration.
E-W Direction
Determine whether a weak story exists in the first story in the E-W direction.
In this case, the story strength is equal to the sum of the column shears in the
moment-resisting frames in that story when the member moment capacity is
developed by lateral loading. It is assumed in this example that the same
column and beam sections are used in the moment-resisting frames in the first
and second stories.
Assume the following nominal flexural strengths:14
Columns: M nc = 550 ft-kips
Beams: M nb = 525 ft-kips
The assumed nominal flexural strengths of the columns and beams are based on preliminary member
sizes and are provided for illustration purposes only.
First story shear strength
Corner columns A1/E1 and A7/E7 are checked for strong column-weak
beam considerations:
2M nc = 2 × 550 = 1,100 ft-kips > M nb = 525 ft-kips
The maximum shear force that can develop in each exterior column is
based on the moment capacity of the beam (525/2 = 263 ft-kips), since it is
less than the moment capacity of the column (550 ft-kips) at the top of the column:
V1 = V7 = (263 + 550)/18 = 45 kips
Interior columns A2 through A6/E2 through E6 are checked for strong
column-weak beam considerations:
2M nc = 2 × 550 = 1,100 ft-kips > 2 M nb = 1,050 ft-kips
The maximum shear force that can develop in each interior column is
based on the moment capacity of the beam (525 ft-kips), since it is less
than the moment capacity of the column (550 ft-kips) at the top of the column:
V2 = V3 = V4 = V5 = V6 = (525 + 550)/18 = 60 kips
Total first story strength = 2(V1 + V2 + V3 + V4 + V5 + V6 + V7 ) = 780 kips
Second story shear strength
V1 = V7 = (263 + 263)/13 = 41 kips
V2 = V3 = V4 = V5 = V6 = (525 + 525)/13 = 81 kips
Total second story strength = 2(V1 + V2 + V3 + V4 + V5 + V6 + V7) = 974 kips
780 kips > 0.80 × 974 = 779 kips
At the bottom of the column, it is assumed that the full moment capacity of the column can be developed.
Therefore, a weak story irregularity does not exist in the first story in the E-W direction.
N-S Direction
Assuming the same beam, column and brace sizes in the first and second
floors, the shear strengths of these floors are essentially the same, and no
weak story irregularity exists in the N-S direction.
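The story-strength bookkeeping above can be sketched in Python. With unrounded column shears the totals come out near 778 and 969 kips (the text's 780 and 974 kips round each column shear first); the conclusion is unchanged:

```python
def story_column_shear(m_top, m_bot, h):
    """Column shear (kips) when end moments m_top and m_bot (ft-kips)
    develop over the story height h (ft)."""
    return (m_top + m_bot) / h

M_COL, M_BEAM = 550.0, 525.0  # assumed nominal strengths (ft-kips)

# First story (h = 18 ft): the fixed base develops the column capacity;
# the top joint is limited by the beam moment framing in (half a beam at
# the corner columns, a full beam at interior columns).
v1_ext = story_column_shear(min(M_COL, M_BEAM / 2), M_COL, 18)
v1_int = story_column_shear(min(M_COL, M_BEAM), M_COL, 18)
first = 2 * (2 * v1_ext + 5 * v1_int)   # ~778 kips (780 in the text)

# Second story (h = 13 ft): beams limit both joints.
v2_ext = story_column_shear(M_BEAM / 2, M_BEAM / 2, 13)
v2_int = story_column_shear(M_BEAM, M_BEAM, 13)
second = 2 * (2 * v2_ext + 5 * v2_int)  # ~969 kips (974 in the text)

# Weak story exists if the first-story strength is less than 80 percent
# of the strength of the story above.
print(first > 0.80 * second)  # True -> no weak story
```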
In summary, the building is not regular and has the following irregularities: horizontal
Type 1b extreme torsional irregularity, horizontal Type 4 out-of-plane offsets
irregularity and vertical Type 3 vertical geometric irregularity.
7. Determine the permitted analytical procedure from Table 12.6-1.
The structure is irregular with T < 3.5TS. If the structure had only a Type 4 out-of-plane offsets irregularity, the equivalent lateral force procedure could be used to
analyze the structure. However, since a Type 1b extreme torsional irregularity and a
Type 3 vertical geometric irregularity also exist, the equivalent lateral force procedure
is not permitted; either a modal response spectrum analysis (12.9) or a seismic
response history procedure (Chapter 16) must be utilized.16
Example 6.5 – Office Building, Allowable Story Drift
For the seven-story office building in Example 6.3, check the story drift limits in both the
N-S and E-W directions. For illustration purposes, use the lateral deflections determined
by the equivalent lateral force procedure.
1. Drift limits in N-S direction.
To check drift limits, the deflections determined by Eq. 12.8-15 must be used:
δx = Cdδxe/I
See the reference sections in Tables 12.3-1 and 12.3-2 that must be satisfied for these types of
irregularities in structures assigned to SDC D. Columns B2, C2, D2, B6, C6 and D6 must be designed to
resist the load combinations with overstrength factor of 12.4.3.2, since they support the discontinued
braced frames along column lines 2 and 6 (12.3.3.2). Design forces shall be increased 25 percent for
connections of diaphragms to vertical elements and to collectors and for connection of collectors to the
vertical elements (12.3.3.4). Members that are not part of the seismic force-resisting system must satisfy
the deformational compatibility requirements of 12.12.4.
The maximum story displacements δ xe in the N-S direction are summarized in
Table 6.12. For special steel concentrically braced frames, the deflection
amplification factor Cd is equal to 5 from Table 12.2-1.
A summary of the displacements at each floor level in the N-S direction is given in
Table 6.13.
The interstory drifts Δ computed from the δ x are also given in the table. The drift at
story level x is determined by subtracting the design earthquake displacement at the
bottom of the story from the design earthquake displacement at the top of the story:
Δ = δ x − δ x −1
Table 6.13 Lateral Displacements and Story Drifts due to Seismic Forces in the N-S Direction [tabulated values not reproduced]
The design story drifts Δ shall not exceed the allowable story drift Δ a given in
Table 12.12-1. For Occupancy Category II and “all other structures,” Δ a = 0.020hsx
where hsx is the story height below level x.
For the 18-ft story height, Δ a = 0.020 × 18 × 12 = 4.32 in. > 2.30 in.
For the 13-ft story heights, Δ a = 0.020 × 13 × 12 = 3.12 in., which is greater than the
values of Δ at floor levels 2 through 7.
Thus, drift limits are satisfied in the N-S direction.
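The drift-limit check (Eq. 12.8-15 with the Table 12.12-1 allowables) can be sketched in Python. The elastic displacements below are hypothetical stand-ins for the Table 6.12 values, chosen so the first-story amplified drift matches the 2.30 in. quoted above:

```python
def story_drifts(delta_xe, cd, i):
    """Amplified displacements per Eq. 12.8-15 (delta_x = Cd*delta_xe/I)
    and the design story drifts between successive levels, bottom first."""
    dx = [cd * d / i for d in delta_xe]
    return [top - bot for bot, top in zip([0.0] + dx[:-1], dx)]

# Hypothetical elastic displacements (in.), bottom story first; Cd = 5
# (special steel concentrically braced frames), I = 1.0.
drifts = story_drifts([0.46, 0.72, 0.99, 1.27, 1.55, 1.82, 2.05], cd=5, i=1.0)

# Allowable drifts 0.020*hsx (Table 12.12-1): 18-ft first story, then 13-ft
limits = [0.020 * h * 12 for h in [18, 13, 13, 13, 13, 13, 13]]
print(all(d <= a for d, a in zip(drifts, limits)))  # True
```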
2. Drift limits in the E-W direction.
The maximum story displacements δ xe in the E-W direction are summarized in
Table 6.12. For special steel moment frames, the deflection amplification factor Cd
is equal to 5.5 from Table 12.2-1.
A summary of the displacements at each floor level in the E-W direction is given in
Table 6.14. The interstory drifts Δ computed from the δ x are also given in the table.
Table 6.14 Lateral Displacements and Story Drifts due to Seismic Forces in the E-W Direction [tabulated values not reproduced]
In accordance with 12.12.1.1, design story drifts Δ must not exceed Δ a / ρ for
seismic force-resisting systems comprised solely of moment frames in structures
assigned to SDC D, E or F where ρ is the redundancy factor determined in
accordance with 12.3.4.2.
Due to the Type 1b extreme torsional irregularity, ρ must be equal to 1.3. Therefore,
for the 18-ft story height, Δ a / ρ = 0.020 × 18 × 12 / 1.3 = 3.32 in. < 3.41 in.
For the 13-ft story heights, Δ a / ρ = 0.020 × 13 × 12 / 1.3 = 2.40 in., which is less than
the design drifts at stories 2 through 6.
Thus, drift limits are not satisfied in the E-W direction. Increasing member sizes may
not be sufficient to reduce the design drift; including additional members in the
seismic force-resisting system may be needed to control drift. This, in turn, may help
reduce the torsional effects.
Example 6.6 – Office Building, P-delta Effects
For the seven-story office building in Example 6.3, determine the P-delta effects in both
the N-S and E-W directions.
For illustration purposes, use the lateral deflections determined by the equivalent lateral
force procedure.
Assume a 10 psf live load on the roof and a 50 psf live load on the floors.
1. P-delta effects in the N-S direction.
In lieu of automatically considering P-delta effects in a computer analysis, the
following procedure can be used to determine whether P-delta effects need to be
considered in accordance with 12.8.7.
P-delta effects need not be considered when the stability coefficient θ determined by
Eq. 12.8-16 is less than or equal to 0.10:
θ = PxΔ / (Vx hsx Cd)
where
Px = total unfactored vertical design load at and above level x
Δ = design story drift occurring simultaneously with Vx
Vx = seismic shear force acting between level x and x−1
hsx = story height below level x
Cd = deflection amplification factor in Table 12.2-1
The stability coefficient θ must not exceed θ max determined by Eq. 12.8-17:
θmax = 0.5/(βCd) ≤ 0.25
where β is the ratio of shear demand to shear capacity between level x and x−1,
which may be taken equal to 1.0 when it is not calculated.
The P-delta calculations for the N-S direction are shown in Table 6.15. It is clear that
P-delta effects need not be considered at any of the floor levels. Note that θ max is
equal to 0.1000 in the N-S direction using β = 1.0.
Table 6.15 P-delta Effects in the N-S Direction [tabulated values not reproduced]
2. P-delta effects in the E-W direction.
The P-delta calculations for the E-W direction are shown in Table 6.16. Note that
θ max is equal to 0.0909 in the E-W direction using β = 1.0, and, since θ is greater
than θ max at levels 2 through 4, the structure is potentially unstable and needs to be
redesigned. It was determined in Example 6.4 that the shear capacity of the second
floor is equal to 780 kips. Thus, β = 474 / 780 = 0.61 , and θ max = 0.5 /(0.61 × 5.5)
= 0.15 . Assuming the same shear capacities at levels 3 and 4, θ max = 0.16 at level 3
and θ max = 0.18 at level 4. Therefore, the structure is still potentially unstable.
Table 6.16 P-delta Effects in the E-W Direction [tabulated values not reproduced]
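Eqs. 12.8-16 and 12.8-17 can be sketched in Python (function names are ours); the θmax values quoted above follow directly:

```python
def stability_coefficient(px, drift, vx, hsx, cd):
    """Stability coefficient theta per Eq. 12.8-16 (consistent units)."""
    return px * drift / (vx * hsx * cd)

def theta_max(cd, beta=1.0):
    """Upper limit on theta per Eq. 12.8-17, capped at 0.25; beta is
    the shear demand-to-capacity ratio, taken as 1.0 when not computed."""
    return min(0.5 / (beta * cd), 0.25)

print(round(theta_max(5.0), 4))                  # 0.1    (N-S, beta = 1.0)
print(round(theta_max(5.5), 4))                  # 0.0909 (E-W, beta = 1.0)
print(round(theta_max(5.5, beta=474 / 780), 2))  # 0.15   (E-W, level 2)
```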
Example 6.7 – Health Care Facility, Diaphragm Design Forces
Determine the diaphragm design forces for the three-story health care facility depicted in
Figure 6.8 given the design data below.
Location: St. Louis, MO (Latitude: 38.63°, Longitude: -90.20°)
Soil classification: Site Class C
Occupancy: Health care facility without surgery or emergency treatment
Material: Cast-in-place concrete
Structural system: Moment-resisting frames in both directions
In order to determine the diaphragm design forces in accordance with 12.10.1.1, the
design seismic forces must be determined at each floor level.
Assuming that the building is regular, the equivalent lateral force procedure can be used
to determine the design seismic forces (see Table 12.6-1).
Figure 6.8 Plan and Elevation of Health Care Facility
[Elevation shows a reinforced concrete parapet and story heights 3 @ 8′-4″ = 25′-0″; conventionally reinforced concrete moment frames in both directions]
Step 1: Determine the seismic ground motion values from Flowchart 6.2.
1. Determine the mapped accelerations S S and S1 .
In lieu of using Figures 22-1 and 22-2, the mapped accelerations are determined
by inputting the latitude and longitude of the site into the USGS Ground Motion
Parameter Calculator. The output is as follows: S S = 0.58 and S1 = 0.17 .
2. Determine the site class of the soil.
The site class of the soil is given in the design data as Site Class C.
3. Determine soil-modified accelerations S MS and S M 1 .
Site coefficients Fa and Fv are determined from Tables 11.4-1 and 11.4-2,
For Site Class C and 0.5 < S S < 0.75 : Fa = 1.17 from linear interpolation
For Site Class C and 0.1 < S1 < 0.2 : Fv = 1.63 from linear interpolation
S MS = 1.17 × 0.58 = 0.68
S M 1 = 1.63 × 0.17 = 0.28
4. Determine design accelerations S DS and S D1 .
From Eqs. 11.4-3 and 11.4-4:
SDS = (2/3) × 0.68 = 0.45
SD1 = (2/3) × 0.28 = 0.19
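The site-coefficient interpolation and the 2/3 scaling of Steps 1.3 and 1.4 can be sketched in Python; the Site Class C anchor values are taken from Tables 11.4-1 and 11.4-2, and the intermediate rounding follows the text:

```python
def interp(x, x1, x2, y1, y2):
    """Linear interpolation between two table anchor points."""
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

# Site Class C anchor values from Tables 11.4-1 and 11.4-2
Ss, S1 = 0.58, 0.17
Fa = interp(Ss, 0.50, 0.75, 1.2, 1.1)   # 1.17 after rounding
Fv = interp(S1, 0.10, 0.20, 1.7, 1.6)   # 1.63

# Round the soil-modified accelerations first, as the example does
SMS = round(Fa * Ss, 2)                 # 0.68
SM1 = round(Fv * S1, 2)                 # 0.28
SDS = round(2 * SMS / 3, 2)             # 0.45
SD1 = round(2 * SM1 / 3, 2)             # 0.19
print(round(Fa, 2), round(Fv, 2), SDS, SD1)
```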
Step 2: Determine the SDC from Flowchart 6.4.
1. Determine if the building can be assigned to SDC A in accordance with 11.4.1.
Since S S = 0.58 > 0.15 and S1 = 0.17 > 0.04 , the building cannot be
automatically assigned to SDC A.
2. Determine the Occupancy Category from IBC Table 1604.5.
For a health care facility, the Occupancy Category is III.
3. Since S1 < 0.75 , the building is not assigned to SDC E or F.
4. Check if all four conditions of 11.6 are satisfied.
Check if the approximate period Ta is less than 0.8TS .
From Eq. 12.8-7 for a concrete moment-resisting frame:
Ta = Ct hn^x = 0.016(25.0)^0.9 = 0.29 sec
where Ct and x are given in Table 12.8-2.
TS = S D1 / S DS = 0.19 / 0.45 = 0.42 sec
0.29 sec < 0.8 × 0.42 = 0.34 sec
The fundamental period used to calculate the design drift is taken as 0.29 sec,
which is less than TS = 0.42 sec.
Equation 12.8-2 will be used to determine the seismic response coefficient
Cs .
Since the roof and the floors are cast-in-place concrete, the diaphragms are
considered to be rigid.
Since all four conditions are satisfied, the SDC can be determined by Table 11.6-1
alone (11.6).
5. Determine the SDC from Table 11.6-1.
From Table 11.6-1, with 0.33 < S DS = 0.45 < 0.50 and Occupancy Category III,
the SDC is C.
Step 3: Determine the design seismic forces of the equivalent lateral force procedure
from Flowchart 6.8.
1. The design accelerations and the SDC have been determined in Steps 1 and 2
2. Determine the response modification coefficient R from Table 12.2-1.
The moment-resisting frames in this building must be intermediate reinforced
concrete moment frames, since the building is assigned to SDC C (system C6 in
Table 12.2-1). In this case, R = 5. There is no height limit for this system in
SDC C.
3. Determine the importance factor I from Table 11.5-1.
For Occupancy Category III, I = 1.25.
4. Determine the period of the structure T.
It was determined in Step 2, item 4 above that the approximate period of the
structure Ta, which is permitted to be used in the equivalent lateral force
procedure, is equal to 0.29 sec.
Note: If Table 11.6-2 were also used, the SDC would also be C.
5. Determine long-period transition period TL from Figure 22-15.
For St. Louis, MO, TL = 12 sec > Ta = 0.29 sec
6. Determine seismic response coefficient C s .
The value of C s must be determined by Eq. 12.8-2 (see Step 2, item 4):
Cs = SDS / (R/I) = 0.45 / (5/1.25) = 0.11
7. Determine effective seismic weight W in accordance with 12.7.2.
The effective weights at each floor level are given in Table 6.17. The total weight
W is the summation of the effective dead loads at each level.
8. Determine seismic base shear V.
Seismic base shear is determined by Eq. 12.8-1:
V = CsW = 0.11 × 16,320 = 1,795 kips
9. Determine exponent related to structure period k.
Since T < 0.5 sec, k = 1.0.
10. Determine lateral seismic force Fx at each level x.
Fx is determined by Eqs. 12.8-11 and 12.8-12. A summary of the lateral forces
Fx and the story shears V x is given in Table 6.17.
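The vertical distribution of Eqs. 12.8-11 and 12.8-12 can be sketched as below. Only the total W = 16,320 kips, V = 1,795 kips, and the story heights come from the example; the equal per-floor weights are placeholders, since the actual Table 6.17 values are not reproduced here:

```python
def vertical_distribution(V, w, h, k=1.0):
    """Eqs. 12.8-11/12.8-12: Fx = Cvx * V, with Cvx = wx*hx^k / sum(wi*hi^k)."""
    wh = [wi * hi ** k for wi, hi in zip(w, h)]
    return [V * x / sum(wh) for x in wh]

w = [5440.0, 5440.0, 5440.0]   # assumed equal floor weights summing to 16,320 kips
h = [8.33, 16.67, 25.0]        # story heights for 3 @ 8'-4"
F = vertical_distribution(1795.0, w, h, k=1.0)
print([round(f) for f in F])   # story forces increase with height; sum equals V
```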
Table 6.17 Seismic Forces and Story Shears (column headings: wx (kips), hx (ft), wx·hx^k, Lateral force Fx (kips), Story shear Vx (kips))
Step 4: Determine the diaphragm design seismic forces using Eq. 12.10-1.
Diaphragm design force Fpx = (Σ_{i=x}^{n} Fi / Σ_{i=x}^{n} wi) × wpx
where wi = weight tributary to level i and wpx = weight tributary to the diaphragm at level x.
Minimum Fpx = 0.2 S DS Iw px = 0.2 × 0.45 × 1.25 × w px = 0.1125w px
Maximum F px = 0.4 S DS Iw px = 0.2250 w px
Assuming that the exterior walls are primarily glass, which weigh significantly less
than the diaphragm weight at each level, the weight that is tributary to each
diaphragm is identical to the weight of the structure at that level (i.e., wpx = wx).
A summary of the diaphragm forces is given in Table 6.18.
Table 6.18 Design Seismic Diaphragm Forces (column headings: wpx, Σwx, ΣFx, ΣFx/Σwx, Fpx; * Minimum value governs)
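Eq. 12.10-1 with its 0.2·SDS·I·wpx floor and 0.4·SDS·I·wpx cap can be sketched as below; the story forces and weights are placeholder values, not the Table 6.17/6.18 data:

```python
def diaphragm_forces(F, w, SDS, I):
    """Eq. 12.10-1, clamped to [0.2*SDS*I*wpx, 0.4*SDS*I*wpx]; wpx taken as wx.
    F and w are ordered from the lowest level to the roof."""
    out = []
    for x in range(len(F)):
        fp = sum(F[x:]) / sum(w[x:]) * w[x]
        lo, hi = 0.2 * SDS * I * w[x], 0.4 * SDS * I * w[x]
        out.append(min(max(fp, lo), hi))
    return out

F = [299.0, 599.0, 898.0]      # placeholder story forces (kips)
w = [5440.0, 5440.0, 5440.0]   # placeholder story weights (kips)
Fpx = diaphragm_forces(F, w, SDS=0.45, I=1.25)
print([round(f) for f in Fpx])  # the 0.1125*wpx = 612-kip floor governs at level 1
```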
Example 6.8 – Health Care Facility, Nonstructural Component
Determine the design seismic force on the parapet of the health care facility in
Example 6.7.
Use Flowchart 6.10 to determine the seismic force on the parapet.
1. Determine S DS , S D1 and the SDC.
The design accelerations and the SDC are determined in Example 6.7.
The parapet is assigned to SDC C, which is the same SDC as the building to which it
is attached (13.1.2).
2. Determine the component amplification factor a p and the component response
modification factor R p from Table 13.5-1 for architectural components.
Assuming that the parapet is not braced, a p = 2.5 and R p = 2.5 from Table 13.5-1.
3. Determine component importance factor I p in accordance with 13.1.3.
Since the parapet does not meet any of the three criteria that require I p = 1.5, then
I p = 1 .0 .
4. Determine the horizontal seismic design force Fp by Eq. 13.3-1.
Fp = [0.4 ap SDS Wp / (Rp/Ip)] × (1 + 2z/h)
Assuming that the thickness of the parapet is 8 in. and that normal weight concrete is
utilized, W p = 8 × 150 / 12 = 100 psf.
Since the parapet is attached to the top of the structure, z / h = 1 .
Fp = [0.4 × 2.5 × 0.45 × 100 / (2.5/1.0)] × (1 + 2) = 54 psf
Minimum Fp = 0.3S DS I pW p = 13.5 psf
Maximum Fp = 1.6 S DS I pW p = 72.0 psf
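Eq. 13.3-1 together with the Eq. 13.3-2 cap and Eq. 13.3-3 floor can be sketched as:

```python
def component_force(ap, Rp, Ip, SDS, Wp, z_over_h):
    """Eq. 13.3-1, bounded below by 0.3*SDS*Ip*Wp (Eq. 13.3-3)
    and above by 1.6*SDS*Ip*Wp (Eq. 13.3-2)."""
    Fp = 0.4 * ap * SDS * Wp * (1 + 2 * z_over_h) / (Rp / Ip)
    return min(max(Fp, 0.3 * SDS * Ip * Wp), 1.6 * SDS * Ip * Wp)

# Unbraced parapet attached at the roof (z/h = 1), Wp = 100 psf
Fp = component_force(ap=2.5, Rp=2.5, Ip=1.0, SDS=0.45, Wp=100.0, z_over_h=1.0)
print(Fp)  # 54.0 psf, between the 13.5-psf floor and the 72.0-psf cap
```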
The 54 psf seismic load is applied to the parapet as shown in Figure 6.9.
Figure 6.9 Design Seismic Force on Parapet (the 54-psf load acting over the 4.5-ft parapet height produces a base shear of 243 plf and a moment M = 243 × 4.5/2 = 547 ft-lb/ft)
Example 6.9 – Residential Building, Vertical Combination of Structural Systems
Determine the design seismic forces on the residential building depicted in Figure 6.10
given the design data below.
Location: Philadelphia, PA (Latitude: 39.92°, Longitude: -75.23°)
Soil classification: Site Class D
Occupancy: Residential occupancy where less than 300 people congregate in one area
Structural systems: Light-frame wood bearing walls with shear panels rated for shear resistance, above a cast-in-place reinforced concrete building frame system with ordinary reinforced concrete shear walls
Step 1: Determine the seismic ground motion values from Flowchart 6.2.
Using the USGS Ground Motion Parameter Calculator, S S = 0.27 and S1 = 0.06 .
Using Tables 11.4-1 and 11.4-2, the soil-modified accelerations are S MS = 0.44 and
S M 1 = 0.15 .
10″ reinforced concrete wall (typ.)
First Floor Plan (El. +10′-0″)
4 @ 8′-0″
= 32′-0″
Light-framed wood walls
Reinforced concrete
building frame
North or South Elevation
Figure 6.10 First Floor Plan and Elevation of Residential Building
Design accelerations: SDS = (2/3) × 0.44 = 0.29 and SD1 = (2/3) × 0.15 = 0.10
Step 2: Determine the SDC from Flowchart 6.4.
From IBC Table 1604.5, the Occupancy Category is II.
From Table 11.6-1 with 0.167 < S DS < 0.33 and Occupancy II, the SDC is B.
From Table 11.6-2 with 0.067 < SD1 < 0.133 and Occupancy Category II, the SDC is B.
Therefore, the SDC is B for this building.
Step 3: Determine the response modification coefficients R in accordance with
12.2.3.1 for vertical combinations of structural systems.
The vertical combination of structural systems in this building is a flexible wood
frame upper portion above a rigid concrete lower portion. Thus, a two-stage
equivalent lateral force procedure is permitted to be used provided the design of the
structure complies with the four criteria listed in 12.2.3.1:
a. The stiffness of the lower portion must be at least 10 times the stiffness of the upper portion.
It can be shown that the stiffness of the lower portion of this structure is more
than 10 times that of the upper portion. O.K.
b. The period of the entire structure shall not be greater than 1.1 times the period of
the upper portion considered as a separate structure fixed at the base.
Determine the period of the upper portion by Eq. 12.8-7 using the approximate
period coefficients in Table 12.8-2 for “all other structural systems”:
Ta = Ct hn^x = 0.02(32)^0.75 = 0.27 sec
Determine the period of the lower portion in the N-S direction by Eqs. 12.8-9 and
12.8-10 for concrete shear wall structures:
Ta = (0.0019/√Cw) × hn   (Eq. 12.8-9)

Cw = (100/AB) Σ_{i=1}^{x} (hn/hi)² × Ai / [1 + 0.83(hi/Di)²]   (Eq. 12.8-10)

AB = area of base of structure = 118 × 80 = 9,440 sq ft
hn = height of lower portion of building = 10 ft
hi = height of shear wall i = 10 ft
Ai = area of shear wall i = (10/12) × 30 = 25 sq ft
Di = length of shear wall i = 30 ft

Cw = (100/9,440) × 4 × 25 / [1 + 0.83 × (10/30)²] = 0.97

Ta = (0.0019/√0.97) × 10 = 0.02 sec
The period of the lower portion in the E-W direction is equal to 0.01 sec.
The period of the combined structure is approximately 0.28 sec.
0.28 sec < 1.1 × 0.27 = 0.30 sec O.K.
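The Cw and Ta calculations of Eqs. 12.8-9 and 12.8-10 can be sketched as:

```python
import math

def approx_period_shear_walls(AB, hn, walls):
    """Eqs. 12.8-9/12.8-10; walls is a list of (hi, Ai, Di) tuples."""
    Cw = (100.0 / AB) * sum(
        (hn / hi) ** 2 * Ai / (1.0 + 0.83 * (hi / Di) ** 2) for hi, Ai, Di in walls
    )
    return Cw, 0.0019 / math.sqrt(Cw) * hn

# Lower portion, N-S direction: four 30-ft-long, 10-in.-thick, 10-ft-high walls
Cw, Ta = approx_period_shear_walls(AB=9440.0, hn=10.0, walls=[(10.0, 25.0, 30.0)] * 4)
print(round(Cw, 2), round(Ta, 2))  # 0.97, 0.02 sec
```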
c. The flexible upper portion shall be designed as a separate structure using the
appropriate values of R and ρ .
From Table 12.2-1 for a bearing wall system with light-framed walls with wood
structural panels rated for shear resistance (system A13), R = 6.5 with no height
limitation for SDC B.
For SDC B, ρ = 1.0 (12.3.4.1).
d. The rigid lower portion shall be designed as a separate structure using the
appropriate values of R and ρ . Amplified reactions from the upper portion are
applied to the lower portion where the amplification factor is equal to the ratio of
the R / ρ of the upper portion divided by R / ρ of the lower portion and must be
greater than or equal to one.
From Table 12.2-1 for a building frame system with ordinary reinforced concrete
shear walls (system B6), R = 5 with no height limitation for SDC B.
For SDC B, ρ = 1.0 (12.3.4.1).
Amplification factor = (6.5/1)/(5/1) = 1.3.
Therefore, a two-stage equivalent lateral force procedure is permitted to be used.
Note: The period of the combined structure was obtained from a commercial computer program.
Step 4: Determine the design seismic forces on the upper and lower portions of the
structure using the equivalent lateral force procedure.
1. Use Flowchart 6.8 to determine the lateral seismic forces on the flexible upper
portion of the structure.
a. The design accelerations and the SDC have been determined in Steps 1 and 2
above, respectively.
b. Determine the response modification coefficient R from Table 12.2-1.
The response modification coefficient was determined in Step 3 as 6.5.
c. Determine the importance factor I from Table 11.5-1.
For Occupancy Category II, I = 1.0.
d. Determine the period of the structure T.
It was determined in Step 3 that the approximate period of the structure Ta =
0.27 sec.
e. Determine long-period transition period TL from Figure 22-15.
For Philadelphia, PA, TL = 6 sec > Ta = 0.27 sec.
f. Determine seismic response coefficient C s .
The seismic response coefficient Cs is determined by Eq. 12.8-3:

Cs = SD1 / [T(R/I)] = 0.10 / [0.27 × (6.5/1.0)] = 0.06

The value of Cs need not exceed that from Eq. 12.8-2:

Cs = SDS / (R/I) = 0.29 / (6.5/1.0) = 0.05

Also, Cs must not be less than the larger of 0.044·SDS·I = 0.013 (governs) and 0.01 (Eq. 12.8-5).

Thus, the value of Cs from Eq. 12.8-2 governs.
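The Cs bookkeeping of Eqs. 12.8-2, 12.8-3, 12.8-5, and 12.8-6 can be sketched as below (the T > TL branch is omitted, since it does not arise in these examples; the unrounded result is ~0.045, carried as 0.05 in the text):

```python
def seismic_response_coeff(SDS, SD1, T, R, I, S1=0.0):
    """Cs per Eq. 12.8-2, capped by Eq. 12.8-3 (valid for T <= TL) and
    floored by Eq. 12.8-5 (and Eq. 12.8-6 where S1 >= 0.6)."""
    Cs = SDS / (R / I)                          # Eq. 12.8-2
    Cs = min(Cs, SD1 / (T * (R / I)))           # Eq. 12.8-3
    floor = max(0.01, 0.044 * SDS * I)          # Eq. 12.8-5
    if S1 >= 0.6:
        floor = max(floor, 0.5 * S1 / (R / I))  # Eq. 12.8-6
    return max(Cs, floor)

# Flexible upper portion of the residential building
print(round(seismic_response_coeff(SDS=0.29, SD1=0.10, T=0.27, R=6.5, I=1.0), 3))
```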
g. Determine effective seismic weight W in accordance with 12.7.2.
The effective weights at each floor level are given in Table 6.19. The total
weight W is the summation of the effective dead loads at each level.
h. Determine seismic base shear V.
Seismic base shear is determined by Eq. 12.8-1:
V = CsW = 0.05 × 761 = 38 kips
i. Determine exponent related to structure period k.
Since T = 0.27 sec < 0.5 sec, k = 1.0.
Table 6.19 Seismic Forces and Story Shears on Flexible Upper Portion (column headings: wx (kips), hx (ft), wx·hx^k, Lateral force Fx (kips), Story shear Vx (kips))
j. Determine lateral seismic force Fx at each level x.
Fx is determined by Eqs. 12.8-11 and 12.8-12. A summary of the lateral
forces Fx and the story shears V x is given in Table 6.19.
2. Use Flowchart 6.8 to determine the lateral seismic forces on the rigid lower
portion of the structure in the N-S direction.
a. The design accelerations and the SDC have been determined in Steps 1 and 2
above, respectively.
b. Determine the response modification coefficient R from Table 12.2-1.
It was determined in Step 3 that the response modification coefficient R = 5.
c. Determine the importance factor I from Table 11.5-1.
For Occupancy Category II, I = 1.0.
d. Determine the period of the structure T.
It was determined in Step 3 that the approximate period of the structure
Ta = 0.02 sec in the N-S direction.
e. Determine long-period transition period TL from Figure 22-15.
For Philadelphia, PA, TL = 6 sec > Ta = 0.02 sec.
f. Determine seismic response coefficient C s .
The seismic response coefficient C s is determined by Eq. 12.8-3:
Cs = SD1 / [T(R/I)] = 0.10 / [0.02 × (5/1.0)] = 1.0

The value of Cs need not exceed that from Eq. 12.8-2:

Cs = SDS / (R/I) = 0.29 / (5/1.0) = 0.06

Also, Cs must not be less than the larger of 0.044·SDS·I = 0.013 (governs) and 0.01 (Eq. 12.8-5).

Thus, the value of Cs from Eq. 12.8-2 governs.
g. Determine effective seismic weight W in accordance with 12.7.2.
The effective weight W is equal to 1,458 kips for the lower portion.
h. Determine seismic base shear V.
Seismic base shear is determined by Eq. 12.8-1:
V = CsW = 0.06 × 1,458 = 88 kips
i. Determine total lateral seismic forces on the lower portion.
For the one-story lower portion, the total seismic force is equal to the lateral
force due to the base shear of the lower portion plus the amplified seismic
force from the upper portion:
Vtotal = 88 + (1.3 × 38) = 137 kips
Since the base shear is independent of the period, the total seismic force in the
E-W direction is also equal to 137 kips.
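The two-stage combination of 12.2.3.1(d) can be sketched as:

```python
def two_stage_lower_shear(V_lower, V_upper, R_up, rho_up, R_low, rho_low):
    """12.2.3.1(d): the reaction from the upper portion is amplified by
    (R/rho)_upper / (R/rho)_lower, taken as not less than 1.0."""
    amp = max(1.0, (R_up / rho_up) / (R_low / rho_low))
    return V_lower + amp * V_upper

V = two_stage_lower_shear(V_lower=88.0, V_upper=38.0,
                          R_up=6.5, rho_up=1.0, R_low=5.0, rho_low=1.0)
print(round(V))  # 137 kips
```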
The distribution of the lateral seismic forces in the upper and lower parts of the
structure is shown in Figure 6.11.
Figure 6.11 Distribution of Lateral Seismic Forces in the Upper and Lower Portions of the Structure (upper-portion story forces of 15, 11, 8, and 4 kips totaling 38 kips; amplified reaction of 1.3 × 38 = 49 kips; lower-portion shear of 88 kips; total of 137 kips at the base)
Example 6.10 – Warehouse Building, Design of Roof Diaphragm,
Collectors, and Wall Panels
For the one-story warehouse illustrated in Figure 6.12, determine (1) design seismic
forces on the diaphragm, including diaphragm shear forces in both directions, (2) design
seismic forces on the steel collector beam in the N-S direction, and (3) out-of-plane
design seismic forces on the precast concrete wall panels, given the design data below.
Location: San Francisco, CA (Latitude: 37.75°, Longitude: -122.43°)
Soil classification: Site Class D
Occupancy: Less than 300 people congregate in one area and the building is not used to store hazardous or toxic materials
Structural system: Building frame system with intermediate precast concrete shear walls
Weight of roof framing = 15 psf
7″ precast concrete wall (typ.)
5 @ 37′-0″ = 185′-0″
HSS column (typ.)
Open-web joist girders (typ.)
2X wood subpurlins with ½″ wood sheathing
Open-web joist purlins @ 8′-0″
Steel beam collector (typ.)
8 @ 32′-0″ = 256′-0″
Roof slope = 1/2 in./ft
Figure 6.12 Plan and Elevation of Warehouse Building
Part 1: Determine design seismic forces on the diaphragm
Step 1: Determine the seismic ground motion values from Flowchart 6.2.
Using the USGS Ground Motion Parameter Calculator, S S = 1.51 and S1 = 0.76 .
Using Tables 11.4-1 and 11.4-2, the soil-modified accelerations are S MS = 1.51 and
S M 1 = 1.14 .
Design accelerations: SDS = (2/3) × 1.51 = 1.01 and SD1 = (2/3) × 1.14 = 0.76
Step 2: Determine the SDC from Flowchart 6.4.
From IBC Table 1604.5, the Occupancy Category is II.
Since S1 > 0.75 , the SDC is E for this building (11.6).
Step 3: Use Flowchart 6.8 to determine the seismic base shear using the equivalent
lateral force procedure.
1. Check if equivalent lateral force procedure can be used (see Flowchart 6.6).
a. The building has a Type 2 re-entrant corner irregularity, since 37.0 ft >
0.15 × 185.0 = 27.8 ft and 192.0 ft > 0.15 × 256.0 = 38.4 ft (see Table 12.3-1
and Table 6.2).
b. Determine the period in the N-S direction by Eqs. 12.8-9 and 12.8-10 for
concrete shear wall structures:
Ta = (0.0019/√Cw) × hn   (Eq. 12.8-9)

Cw = (100/AB) Σ_{i=1}^{x} (hn/hi)² × Ai / [1 + 0.83(hi/Di)²]   (Eq. 12.8-10)

AB = area of base of structure = 40,256 sq ft
hn = average height of building = 22.67 ft
hi = average height of shear wall i = 22.67 ft
Ai = area of shear wall i: A1 = (7/12) × 185 = 107.9 sq ft, A2 = (7/12) × 37 = 21.6 sq ft, A3 = (7/12) × 148 = 86.3 sq ft
Di = length of shear wall i: D1 = 185 ft, D2 = 37 ft, D3 = 148 ft

Cw = (100/40,256) × {107.9/[1 + 0.83(22.67/185)²] + 21.6/[1 + 0.83(22.67/37)²] + 86.3/[1 + 0.83(22.67/148)²]}
   = (100/40,256) × (106.6 + 16.5 + 84.7) = 0.52

Ta = (0.0019/√0.52) × 22.67 = 0.06 sec
c. Determine the period in the E-W direction.
For the walls of length 64 ft, 192 ft and 256 ft, Ai = (7/12) × 64 = 37.3 sq ft, (7/12) × 192 = 112.0 sq ft and (7/12) × 256 = 149.3 sq ft, respectively.

Cw = (100/40,256) × {37.3/[1 + 0.83(22.67/64)²] + 112.0/[1 + 0.83(22.67/192)²] + 149.3/[1 + 0.83(22.67/256)²]}
   = (100/40,256) × (33.8 + 110.7 + 148.3) = 0.73

Ta = (0.0019/√0.73) × 22.67 = 0.05 sec
3.5 Ts = 3.5 × 0.76 / 1.01 = 2.63 sec > period in both directions
In accordance with Table 12.6-1, the equivalent lateral force procedure is
permitted to be used for this irregular structure with a Type 2 re-entrant corner
irregularity and with T < 3.5Ts .
2. Use Flowchart 6.8 to determine the lateral seismic forces in the N-S and E-W directions.
a. The design accelerations and the SDC have been determined in Steps 1 and 2
above, respectively.
b. Determine the response modification coefficient R from Table 12.2-1.
For a building frame system with intermediate precast concrete shear walls
(system B9), R = 5. Note that the average building height of 22.67 ft is less
than the 45 ft height limit for this system assigned to SDC E (see footnote k in
Table 12.2-1).
c. Determine the importance factor I from Table 11.5-1.
For Occupancy Category II, I = 1.0.
d. Determine the period of the structure T.
It was determined above that Ta = 0.06 sec in the N-S direction and
Ta = 0.05 sec in the E-W direction.
e. Determine long-period transition period TL from Figure 22-15.
For San Francisco, CA, TL = 12 sec > Ta in both directions.
f. Determine seismic response coefficient C s .
The value of C s from Eq. 12.8-2 is:
= 0.20
C s = DS =
R / I 5 / 1.0
Also, C s must not be less than the larger of 0.044S DS I = 0.013 and 0.044
(Eq. 12.8-5). or the value obtained by Eq. 12.8-6 (governs), since S1 > 0.6 :
Cs =
0.5S1 0.5 × 0.76
= 0.08
¨ ¸
¨ ¸
Thus, the value of C s from Eq. 12.8-2 governs.
g. Determine effective seismic weight W in accordance with 12.7.2.
The effective weight W is equal to the weight of the roof framing plus the
weight of the walls tributary to the roof:
Note: The weight of the walls is conservatively calculated assuming that there are no openings in the walls.
Weight of roof framing = 0.015 × 40,256 = 604 kips
Weight of walls = (7/12) × 0.15 × (22.67/2) × [2 × (256 + 185)] = 875 kips
W = 604 + 875 = 1,479 kips
h. Determine seismic base shear V.
Seismic base shear is determined by Eq. 12.8-1 and is the same in both the N-S and E-W directions, since the governing Cs is independent of the period:
V = C sW = 0.20 × 1,479 = 296 kips
3. Determine the design seismic forces in the diaphragm in both directions by
Eq. 12.10-1.
Diaphragm design force Fpx = (Σ_{i=x}^{n} Fi / Σ_{i=x}^{n} wi) × wpx
where wi = weight tributary to level i and w px = weight tributary to the
diaphragm at level x.
Since this is a one-story building, Eq. 12.10-1 reduces to F px = 0.20w px
Minimum F px = 0.2 S DS Iw px = 0.2 × 1.01 × 1.0 × w px = 0.20 w px
Maximum F px = 0.4S DS Iw px = 0.40w px
The wood sheathing is permitted to be idealized as a flexible diaphragm in
accordance with 12.3.1.1. Seismic forces are computed from the tributary weight
of the roof and the walls oriented perpendicular to the direction of analysis.
N-S direction
Uniform diaphragm loads wN1 and wN 2 are computed as follows (see
Figure 6.13):
Note: Walls parallel to the direction of the seismic forces are typically not considered in the tributary weight, since these walls do not obtain support from the diaphragm in the direction of the seismic force.
wN1 = (0.20 × 15 × 185) + [0.20 × 87.5 × 2 × (22.67/2)] = 952 plf
wN2 = (0.20 × 15 × 148) + [0.20 × 87.5 × 2 × (22.67/2)] = 841 plf
Also shown in Figure 6.13 is the shear diagram for the diaphragm.
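The uniform diaphragm loads can be sketched as below; the 22.67/2-ft tributary wall height is inferred from the 952- and 841-plf results rather than stated explicitly in the text:

```python
def diaphragm_line_load(coeff, roof_psf, depth_ft, wall_psf, trib_wall_ft, n_walls=2):
    """Uniform diaphragm load (plf): tributary roof strip plus the weight of
    the perpendicular walls, each contributing its tributary height."""
    return coeff * (roof_psf * depth_ft + wall_psf * n_walls * trib_wall_ft)

trib = 22.67 / 2.0   # inferred tributary height of the 87.5-psf walls (ft)
wN1 = diaphragm_line_load(0.20, 15.0, 185.0, 87.5, trib)
wN2 = diaphragm_line_load(0.20, 15.0, 148.0, 87.5, trib)
print(round(wN1), round(wN2))  # 952, 841 plf
```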
Figure 6.13 Design Seismic Forces and Shear Forces in the Diaphragm in the N-S Direction (uniform loads of 952 plf and 841 plf; diaphragm reactions of 80.7 kips and 30.5 kips; unit shears of 80,700/148 = 545 plf and 30,500/185 = 165 plf)
E-W direction
Uniform diaphragm loads wE1 and wE 2 are computed as follows (see
Figure 6.14):
wE1 = (0.20 × 15 × 64) + [0.20 × 87.5 × 2 × (22.67/2)] = 589 plf
wE2 = (0.20 × 15 × 256) + [0.20 × 87.5 × 2 × (22.67/2)] = 1,165 plf
Also shown in Figure 6.14 is the shear diagram for the diaphragm.
Figure 6.14 Design Seismic Forces and Shear Forces in the Diaphragm in the E-W Direction (uniform loads of 589 plf and 1,165 plf; diaphragm reactions of 10.9 kips and 86.2 kips; unit shears of 10,900/64 = 170 plf and 86,200/256 = 337 plf)
4. Determine connection forces between the diaphragm and the shear walls.
Since the building has a Type 2 horizontal irregularity, the diaphragm connection
design forces must be increased by 25 percent in accordance with 12.3.3.4. Thus,
F px = 1.25 × 0.20 w px = 0.25w px . This force increase applies to the row of
diaphragm nailing that transfers the above diaphragm shears directly to the shear
walls (and to the collectors) and to the bolts between the ledger beams and the
shear walls.
Part 2: Determine design seismic forces on the steel collector beam in the N-S direction
From the diaphragm shear diagram in Figure 6.13, the maximum collector load is equal
to 30.5(148/185) + 80.7 = 105.1 kips tension or compression.
The uniform axial load can be approximated by dividing the maximum load by the length
of the collector: 105,100/148 = 710 plf.
The uniform axial load can be used to determine the axial force in any of the beams at
any point along their length. For example, at the midspan of the northernmost collector
beam, the axial force is equal to 710 × (148 − 37 / 2) / 1,000 = 92 kips tension or
compression. In accordance with 12.3.3.4, this force must be increased by 25 percent
unless the collector is designed for the load combinations with overstrength factor of
12.4.3.2 (see 12.10.2.1).
The collector beams are subsequently designed for the combined effects of gravity and
axial loads due to the design seismic forces.
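The collector bookkeeping above can be sketched as (the shear values are read from the Figure 6.13 diagram):

```python
# Values from the Figure 6.13 shear diagram
max_load = 30.5 * (148.0 / 185.0) + 80.7             # kips at the re-entrant corner
uniform = max_load * 1000.0 / 148.0                  # ~710 plf along the 148-ft collector
axial_mid = uniform * (148.0 - 37.0 / 2.0) / 1000.0  # kips at midspan of the north beam
print(round(max_load, 1), round(uniform), round(axial_mid))  # 105.1 710 92
```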
Part 3: Determine out-of-plane design seismic forces on the precast concrete wall panels
1. Solid wall panels
According to 12.11.1, structural walls shall be designed for a force normal to the
surface equal to 0.4SDSI times the weight of the wall. The minimum normal force is
10 percent of the weight of the wall.
For a solid precast concrete wall panel:
Weight W p = (7 / 12) × 0.15 × 22.67 = 2.0 kips/ft
0.1 × 2.0 = 0.2 kips/ft
0.4 × 1.01 × 1.0 × 2.0 = 0.8 kips/ft (governs)
This force governs over the three minimum anchorage forces specified in 12.11.2 as well.
Distributed load = 0.8/22.67 = 0.04 kips/ft/ft width of wall
This uniformly distributed load is applied normal to the wall in either direction.
Anchorage of the precast walls to the flexible diaphragms must develop the out-of-plane force given by Eq. 12.11-1:
F p = 0.8S DS IW p = 1.6 kips/ft
Note that the 25 percent increase in the design force for diaphragm connections is not
applied to out-of-plane wall anchorage force to the diaphragm (12.3.3.4).
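The 12.11.1 out-of-plane wall force and the Eq. 12.11-1 anchorage force can be sketched as:

```python
def wall_out_of_plane(SDS, I, Wp):
    """12.11.1: 0.4*SDS*I*Wp, but not less than 0.1*Wp."""
    return max(0.4 * SDS * I * Wp, 0.1 * Wp)

def wall_anchorage_to_flexible_diaphragm(SDS, I, Wp):
    """Eq. 12.11-1: 0.8*SDS*I*Wp."""
    return 0.8 * SDS * I * Wp

Wp = (7.0 / 12.0) * 0.15 * 22.67   # ~2.0 kips per ft of wall length
print(round(wall_out_of_plane(1.01, 1.0, Wp), 1))                      # 0.8 kips/ft
print(round(wall_anchorage_to_flexible_diaphragm(1.01, 1.0, Wp), 1))   # 1.6 kips/ft
```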
2. Wall panels with openings
A typical wall panel on the east or west faces of the building is shown in Figure 6.15.
12′ × 14′ opening
3′ × 7′ opening
Figure 6.15 Typical Precast Wall Panel on East and West Faces
In lieu of a more rigorous analysis, the pier width between the two openings is commonly defined as a design strip. The total weight used in determining the out-of-plane design seismic force is taken as the weight of the design strip plus the weight of the wall tributary to the design strip above each adjacent opening (see Figure 6.16):
Wp1 = (7/12) × 150 × 4 = 350 plf
Wp2 = (7/12) × 150 × 6 = 525 plf
Wp3 = (7/12) × 150 × 1.5 = 131 plf
The out-of-plane seismic forces are determined by 12.11.1:
Fp1 = 0.4·SDS·I·Wp1 = 0.4 × 1.01 × 1.0 × 350 = 141.4 plf from 0 ft to 20 ft
Fp2 = 0.4·SDS·I·Wp2 = 0.4 × 1.01 × 1.0 × 525 = 212.1 plf from 14 ft to 20 ft
Fp3 = 0.4·SDS·I·Wp3 = 0.4 × 1.01 × 1.0 × 131 = 52.9 plf from 7 ft to 20 ft
Figure 6.16 Design Strip and Tributary Weights
The wall is designed for the combination of axial load from the gravity forces and
bending and shear from the out-of-plane seismic forces.
Example 6.11 – Retail Building, Simplified Design Method
For the one-story retail building illustrated in Figure 6.17, determine the seismic base
shear using the simplified alternative structural design criteria of 12.14.
Location: Seattle, WA (Latitude: 47.60°, Longitude: -122.33°)
Soil classification: Site Class C
Occupancy: Business occupancy
Structural system: Bearing wall system with special reinforced masonry shear walls
Step 1: Determine the seismic ground motion values from Flowchart 6.2.
Using the USGS Ground Motion Parameter Calculator, S S = 1.47 and S1 = 0.50 .
8″ CMU (typ.)
8″ reinforced concrete slab
North Elevation
South Elevation
Figure 6.17 Plan and Elevations of One-story Retail Building
Using Tables 11.4-1 and 11.4-2, the soil-modified accelerations are S MS = 1.47 and
S M 1 = 0.65 .
Design accelerations: SDS = (2/3) × 1.47 = 0.98 and SD1 = (2/3) × 0.65 = 0.43
Step 2: Determine if the simplified method of 12.14 can be used for this building.
The simplified method is permitted to be used if the following 12 limitations are met:
1. The structure shall qualify for Occupancy I or II in accordance with Table 1-1.
From Table 1-1, the Occupancy Category is II. O.K.
2. The Site Class shall not be E or F.
The Site Class is C in accordance with the design data. O.K.
3. The structure shall not exceed three stories in height.
The structure is one story. O.K.
4. The seismic force-resisting system shall be either a bearing wall system or
building frame system in accordance with Table 12.14-1.
The seismic force-resisting system is a bearing wall system. O.K.
5. The structure has at least two lines of lateral resistance in each of the two major
axis directions.
Masonry shear walls are provided along two lines at the perimeter in both
directions. O.K.
6. At least one line of resistance shall be provided on each side of the center of mass
in each direction.
The center of mass is approximately located at the geometric center of the
building and walls are provided on all four sides at the perimeter. O.K.
7. For structures with flexible diaphragms, overhangs beyond the outside line of
shear walls or braced frames shall satisfy: a ≤ d / 5 .
The diaphragm in this building is rigid, so this limitation is not applicable.
8. For buildings with diaphragms that are not flexible, the distance between the
center of rigidity and the center of mass parallel to each major axis shall not
exceed 15 percent of the greatest width of the diaphragm parallel to that axis.
Assume that the center of mass is at the geometric center of this building. This
limitation is satisfied with respect to the east-west direction, since the center of
rigidity and center of mass are on the same line due to the symmetrical layout of
the walls on the east and west elevations.
The center of rigidity must be located in the north-south direction, since the north
and south walls are not identical. By inspection, the center of rigidity is located closer to the south wall, since the stiffness of that wall is greater than the stiffness of the north wall. (Note: The exact location of the center of mass should be computed where it is anticipated to be offset from the geometric center of the building.)
To locate the center of rigidity, the stiffnesses of the north and south walls must be determined. Assuming that the piers are fixed at the top and bottom ends, the stiffnesses (or rigidities) of the walls and piers can be determined by the following equations:
Total displacement of pier or wall i: δi = δfi + δvi
δfi = displacement due to bending = (hi/ℓi)³ / Et
δvi = displacement due to shear = 3(hi/ℓi) / Et
where hi = height of pier or wall
ℓi = length of pier or wall
t = thickness of pier or wall
E = modulus of elasticity of pier or wall
Stiffness of pier or wall: ki = 1/δi
In lieu of a more rigorous analysis, the stiffness of a wall with openings is
determined as follows: first, the deflection of the wall is obtained as though it
were a solid wall with no openings. Next, the deflection of a solid strip of wall
that contains the openings is subtracted from the total deflection. Finally, the
deflection of each pier surrounded by the openings is added back.
Table 6.20 contains a summary of the stiffness calculations for the north wall.
Similar calculations for the south wall are given in Table 6.21. The pier
designations are provided in Figure 6.18.
North wall stiffness = 0.635 Et
South wall stiffness = 0.818 Et
East and west wall stiffness = 1.65 Et .
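The fixed-fixed pier stiffness expression can be sketched as below; the 8-ft by 12-ft pier is a hypothetical example, not one of the Table 6.20/6.21 piers:

```python
def pier_stiffness(h, L, fixed_fixed=True):
    """Relative stiffness k/(E*t): delta*E*t = (h/L)**3 + 3*(h/L) for a pier
    fixed top and bottom; a cantilever pier would use 4*(h/L)**3 + 3*(h/L)."""
    r = h / L
    delta = (r ** 3 if fixed_fixed else 4.0 * r ** 3) + 3.0 * r
    return 1.0 / delta

# Hypothetical 8-ft-high, 12-ft-long pier
print(round(pier_stiffness(8.0, 12.0), 3))
```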
Table 6.20 Stiffness Calculations for North Wall (column headings: hi (ft), ℓi (ft), δfi·Et = (hi/ℓi)³, δvi·Et = 3(hi/ℓi), δi·Et, ki/Et)
Table 6.21 Stiffness Calculations for South Wall (column headings: hi (ft), ℓi (ft), δfi·Et = (hi/ℓi)³, δvi·Et = 3(hi/ℓi), δi·Et, ki/Et)
North Wall
South Wall
Figure 6.18 Pier Designations for Stiffness Calculations
The location of the center of rigidity in the north-south direction can be
determined from the following equation:
yr = Σ(ki·yi) / Σki
where yi is the distance from a reference point to wall i.
Using the centerline of the south wall as the reference line (see Figure 6.19),
yr = [0.635Et × (60 − 7.625/12)] / (0.635Et + 0.818Et) = 25.9 ft

e1 = [30 − 7.625/(2 × 12)] − 25.9 = 3.8 ft

0.15 × 60 = 9.0 ft > 3.8 ft O.K.
Figure 6.19 Locations of Center of Mass and Center of Rigidity (e1 = 3.8 ft; xr = 19.9 ft; yr = 25.9 ft)
In addition, Eqs. 12.14-2A and 12.14-2B must be satisfied:

Σ_{i=1}^{m} k1i·d1i² + Σ_{j=1}^{n} k2j·d2j² ≥ 2.5(0.05 + e1/b1) × b1² × Σ_{i=1}^{m} k1i   (Eq. 12.14-2A)

Σ_{i=1}^{m} k1i·d1i² + Σ_{j=1}^{n} k2j·d2j² ≥ 2.5(0.05 + e2/b2) × b2² × Σ_{j=1}^{n} k2j   (Eq. 12.14-2B)

where the notation is defined in Figure 12.14-1 and 12.14.1.1.

Σk1i·d1i² + Σk2j·d2j² = (0.635Et × 33.5²) + (0.818Et × 25.9²) + (2 × 1.65Et × 19.9²) = 2,568.2Et

2.5(0.05 + e1/b1) × b1² × Σk1i = 2.5 × (0.05 + 3.8/60) × 60² × (0.635Et + 0.818Et)
= 1,482.1Et < 2,568.2Et O.K.

2.5(0.05 + e2/b2) × b2² × Σk2j = 2.5 × (0.05 + 0/40.5) × 40.5² × (2 × 1.65Et)
= 676.6Et < 2,568.2Et O.K.
Thus, all conditions of the eighth limitation are satisfied.
Note that Eqs. 12.14-2A and 12.14-2B need not be checked where the following three conditions are met:
1. The arrangement of walls is symmetric about each major axis.
2. The distance between the two most separated wall lines is at least 90 percent
of the structure dimension perpendicular to that axis direction.
3. The stiffness along each of the lines of resistance considered in item 2 above
is at least 33 percent of the total stiffness in that direction.
In this example, only the second and third conditions are met.
9. Lines of resistance of the lateral force-resisting system shall be oriented at angles of no more than 15 degrees from alignment with the major orthogonal axes of the building.
The shear walls in both directions are parallel to the major axes. O.K.
10. The simplified design procedure shall be used for each major orthogonal
horizontal axis direction of the building.
The simplified design procedure is used in both directions (see Step 3). O.K.
11. System irregularities caused by in-plane or out-of-plane offsets of lateral force-resisting elements shall not be permitted.
This building does not have any irregularities. O.K.
12. The lateral load resistance of any story shall not be less than 80 percent of the
story above.
Since this is a one-story building, this limitation is not applicable. O.K.
Since all 12 limitations of 12.14.1.1 are satisfied, the simplified procedure may be used.
Step 3: Determine the seismic base shear from Flowchart 6.9.
1. Determine S S , S DS and the SDC from Flowchart 6.4.
From Step 1, S S = 1.47 and S DS = 0.98 .
According to 11.6, the SDC is permitted to be determined from Table 11.6-1
alone where the simplified design procedure is used.
For S DS > 0.50 and Occupancy Category II, the SDC is D.
2. Determine the response modification factor R from Table 12.14-1.
For SDC D, a bearing wall system with special reinforced masonry shear walls is
required (system A7). For this system, R = 5.
3. Determine the effective seismic weight W in accordance with 12.14.8.1.
Conservatively assume that the masonry walls are fully grouted and neglect any
wall openings. Also assume a 10 psf superimposed dead load on the roof.
Weight of masonry walls tributary to roof diaphragm
= 0.081 × 6.0 × 2 × (60.0 + 40.5) = 98 kips
Weight of roof slab = (8/12) × 0.150 × 60 × 40.5 = 243 kips
Superimposed dead load = 0.010 × 60 × 40.5 = 24 kips
W = 98 + 243 + 24 = 365 kips
4. Determine base shear V by Eq. 12.14-11.
V = F S DS W / R = (1 × 0.98 × 365) / 5 = 72 kips
where F = 1 for a one-story building (see 12.14.8.1).
Since this is a one-story building, the story shear Vx = V.
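Steps 3 and 4 above can be collected into a short script. This is a minimal sketch, not part of the text: the 6.0-ft tributary wall height and the 8-in. slab thickness are assumptions inferred from the stated weights of 98 and 243 kips, and Eq. 12.14-11 is V = F·S DS·W/R.

```python
# Simplified-procedure seismic base shear, ASCE/SEI 7-05 Eq. 12.14-11.
# F = 1 for a one-story building; R = 5 for special reinforced masonry
# shear walls. The 6.0-ft tributary wall height and 8-in. slab thickness
# are assumptions back-calculated from the stated weights.

def simplified_base_shear(F, S_DS, W, R):
    """Seismic base shear V = F * S_DS * W / R (kips)."""
    return F * S_DS * W / R

# Effective seismic weight W (kips)
W_walls = 0.081 * 6.0 * 2 * (60.0 + 40.5)  # masonry walls tributary to roof
W_roof = (8 / 12) * 0.150 * 60 * 40.5      # roof slab
W_sdl = 0.010 * 60 * 40.5                  # 10 psf superimposed dead load
W = W_walls + W_roof + W_sdl               # about 365 kips

V = simplified_base_shear(F=1, S_DS=0.98, W=W, R=5)
print(round(W), round(V))  # 365 72
```

The result matches the hand calculation: V ≈ 72 kips, which is also the story shear Vx for this one-story building.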
5. Distribute story shear to the shear walls.
Since the building has a rigid diaphragm, the design story shear is distributed to
the shear walls based on the relative stiffness of the walls, including the effects
from the torsional moment Mt resulting from eccentricity between the locations of
the center of mass and the center of rigidity. Note that the simplified procedure
does not require accidental torsion (12.14.8.3.2.1).
For lateral forces in the N-S direction, there is no torsional moment, since there is
no eccentricity between the center of mass and center of rigidity in that direction.
The east and west walls have the same stiffness, so each wall must resist 72/2 =
36 kips.
For lateral forces in the E-W direction, the torsional moment is equal to
72 × 3.8 = 274 ft-kips.
The total lateral force to be resisted by the north and south shear walls can be
determined from the following equation:
V1i = [k1i / Σ k1i] Vx + [d1i k1i / (Σ k1i d1i² + Σ k2j d2j²)] Mt

For the north shear wall:

V11 = [0.635Et / (0.635Et + 0.818Et)] × 72 + [(33.5 × 0.635Et) / 2,568.2Et] × 274
= (0.437 × 72) + (0.0083 × 274)
= 31.5 + 2.3 = 33.8 kips
For the south shear wall:

V12 = [0.818Et / (0.635Et + 0.818Et)] × 72 − [(25.9 × 0.818Et) / 2,568.2Et] × 274
= (0.563 × 72) − (0.0082 × 274)
= 40.5 − 2.3 = 38.2 kips
The east and west shear walls are subjected to a shear force due to the torsional
moment for lateral forces in the east-west direction, but that force is less than the
36 kip force that is required for lateral forces in the N-S direction.
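The rigid-diaphragm distribution above can be sketched as a small function. The wall stiffnesses are in units of E·t, so E·t cancels out of every ratio; the torsional constant Σk·d² = 2,568.2·E·t and the distances are the values used in the example.

```python
# Rigid-diaphragm story shear distribution: direct shear in proportion to
# wall stiffness, plus (or minus) the shear induced by the torsional
# moment Mt. Stiffnesses are in units of E*t, which cancels.

def wall_shear(k_wall, d_wall, k_sum, J, Vx, Mt, sign):
    """Shear in one wall (kips): direct share of Vx plus torsional share of Mt."""
    direct = k_wall / k_sum * Vx
    torsional = d_wall * k_wall / J * Mt
    return direct + sign * torsional

Vx, Mt, J = 72.0, 274.0, 2568.2          # kips, ft-kips, (E*t)*ft^2
k_north, d_north = 0.635, 33.5           # stiffness (E*t), distance to CR (ft)
k_south, d_south = 0.818, 25.9
k_sum = k_north + k_south

V_north = wall_shear(k_north, d_north, k_sum, J, Vx, Mt, +1)
V_south = wall_shear(k_south, d_south, k_sum, J, Vx, Mt, -1)
print(round(V_north, 1), round(V_south, 1))
```

The script reproduces the hand values of about 33.8 and 38.2 kips (small differences in the last digit come from the intermediate rounding used in the text).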
Example 6.12 – Nonbuilding Structure
Determine the seismic base shear for the nonbuilding structure illustrated in Figure 6.20 using
(1) 2L4x4x1/2 braces and (2) 2L4x4x1/4 braces, given the design data below.
Location: Phoenix, AZ (Latitude: 33.42°, Longitude: -112.05°)
Soil classification: Site Class D
Structural system: Ordinary steel concentrically braced frame
Part 1: Determine seismic base shear using 2L4x4x1/2 braces
Determine the seismic base shear from Flowchart 6.11.
This nonbuilding structure is similar to buildings and the appropriate design requirements
from Chapter 15 are used to determine the seismic base shear.
W10 column (typ.)
W18 beam (typ.)
Storage bin = 30 kips
2L4x4 brace (typ.)
Figure 6.20 Plan and Elevation of Nonbuilding Structure
1. Determine S DS , S D1 and the SDC from Flowchart 6.4.
Using the USGS Ground Motion Parameter Calculator, S S = 0.18 and S1 = 0.06 .
Using Tables 11.4-1 and 11.4-2, the soil-modified accelerations are S MS = 0.28 and
S M 1 = 0.15 .
Design accelerations: S DS = (2/3) × 0.28 = 0.19 and S D1 = (2/3) × 0.15 = 0.10
From IBC Table 1604.5, the Occupancy Category is I, assuming that the contents of
the storage bin are not hazardous and that the structure represents a low hazard to
human life in the event of failure.
From Table 11.6-1, for 0.167 < S DS < 0.33 , the SDC is B.
From Table 11.6-2, for 0.067 < S D1 < 0.133 , the SDC is B.
Therefore, the SDC is B for this nonbuilding structure.
2. Determine the importance factor I in accordance with 15.4.1.1.
Based on Occupancy Category I, the importance factor I is equal to 1.0 from
Table 11.5-1.
3. Determine the period T in accordance with 15.4.4.
In lieu of a more rigorous analysis, Eq. 15.4-6 is used to determine the period T:
T = 2π √[ Σ wi δi² / (g Σ fi δi) ]
where δi are the elastic deflections due to the forces fi, which represent any lateral
force distribution in accordance with the principles of structural mechanics.
For this one-story nonbuilding structure, this equation reduces to
T = 2π √(W / gk)
where k is the lateral stiffness of the structure.
The stiffness can be obtained by applying a unit horizontal load to the top of the
frame. This load does not produce any forces in the columns. Assuming that the
elastic shortening of the beams is negligible, only the braces in a given direction
contribute to the stiffness of the frame.
From statics, the force in one of the four braces due to a horizontal load of 1 applied
to the top of the frame is equal to 0.5592. The horizontal deflection δ due to this unit
load can be obtained from the following equation from the virtual work method:
δ = Σ u²L / AE
where u = force in a brace due to the virtual (unit) load = 0.5592
L = length of a brace = √(6² + 12²) = 13.4 ft = 161 in.
The approximate fundamental period equations in 12.8.2.1 are not permitted to be used to determine the
period of a nonbuilding structure (15.4.4).
A = area of a 2L4x4x1/2 brace = 7.49 sq in.
E = modulus of elasticity = 29,000 ksi
δ = (4 × 0.5592² × 161) / (7.49 × 29,000) = 0.0009 in.
The stiffness k = 1/δ = 1,079 kips/in.
Therefore, the period T is
T = 2π √[30 / (386 × 1,079)] = 0.05 sec
4. Determine the base shear V.
Since the period is less than 0.06 sec, use Eq. 15.4-5 to determine V:
V = 0.30S DS WI = 0.30 × 0.19 × 30 × 1.0 = 1.7 kips
Part 2: Determine seismic base shear using 2L4x4x1/4 braces
The calculations are similar to those in Part 1, except the stiffness and the period of the
structure are different due to the use of lighter braces.
δ = (4 × 0.5592² × 161) / (3.87 × 29,000) = 0.0018 in.
Stiffness k = 1/δ = 556 kips/in.
T = 2π √[30 / (386 × 556)] = 0.07 sec > 0.06 sec
Therefore, the base shear V can be determined by the equivalent lateral force procedure.
The weight of the steel framing is negligible.
Determine the seismic response coefficient C s .
The value of C s from Eq. 12.8-2 is:
C s = S DS / (R / I) = 0.19 / (1.5 / 1.0) = 0.13
where the response modification coefficient R = 1.5 from Table 15.4-1 for an ordinary steel
concentrically braced frame with unlimited height, which is permitted to be designed by
AISC 360, Specification for Structural Steel Buildings (i.e., without any special seismic detailing requirements).
Also, C s must not be less than the larger of 0.044S DS I = 0.008 and 0.01 (governs)
(Eq. 12.8-5).
Thus, the value of C s from Eq. 12.8-2 governs.
V = C sW = 0.13 × 30 = 3.9 kips
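Both parts of the example can be sketched in a few lines. This is only an illustration of the steps above: W = 30 kips, g = 386 in/sec², and the lower-bound checks on Cs are omitted in the code because Eq. 12.8-2 governs in this example.

```python
import math

# Period and base shear for the braced-frame nonbuilding structure, for
# both brace sizes. delta = 4*u^2*L/(A*E) from virtual work, k = 1/delta,
# T = 2*pi*sqrt(W/(g*k)). Lower-bound Cs checks (Eq. 12.8-5) are omitted
# because Eq. 12.8-2 governs here.

def brace_frame_period(u, L, A, E, W, g=386.0):
    """Fundamental period T (sec) of the one-story braced frame."""
    delta = 4 * u**2 * L / (A * E)   # deflection under a unit load (in.)
    k = 1.0 / delta                  # lateral stiffness (kips/in.)
    return 2 * math.pi * math.sqrt(W / (g * k))

W, S_DS, R, I = 30.0, 0.19, 1.5, 1.0
results = {}
for A in (7.49, 3.87):               # 2L4x4x1/2 and 2L4x4x1/4 areas (sq in.)
    T = brace_frame_period(u=0.5592, L=161.0, A=A, E=29000.0, W=W)
    if T < 0.06:
        V = 0.30 * S_DS * W * I      # rigid nonbuilding structure, Eq. 15.4-5
    else:
        V = S_DS / (R / I) * W       # equivalent lateral force, Eq. 12.8-2
    results[A] = (T, V)
    print(f"A = {A} sq in.: T = {T:.3f} sec, V = {V:.1f} kips")
```

With the 1/2-in. angles the frame is rigid (T ≈ 0.05 sec < 0.06 sec) and V ≈ 1.7 kips; with the 1/4-in. angles T ≈ 0.07 sec and V ≈ 3.8 kips, which the text rounds to 3.9 kips by first rounding Cs to 0.13.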
CHAPTER 7
All structures and portions of structures located in flood hazard areas must be designed
and constructed to resist the effects of flood hazards and flood loads (IBC 1612.1). Flood
hazards may include erosion and scour whereas flood loads include flotation, lateral
hydrostatic pressures, hydrodynamic pressures (due to moving water), wave impact and
debris impact.
In cases where a building or structure is located in more than one flood zone or is
partially located in a flood zone, the entire building or structure must be designed and
constructed according to the requirements of the more restrictive flood zone.
The following sections address the hazards and loads that need to be considered for
structures located in flood hazard areas.
By definition, a flood hazard area is the greater of the following two areas:
1. The area within a floodplain subject to a 1-percent or greater chance of flooding
in any year.
2. The area designated as a flood hazard area on a community’s flood hazard map,
or otherwise legally designated.
The first of these two areas is typically acquired from Flood Insurance Rate Maps
(FIRMs), which are prepared by the Federal Emergency Management Agency (FEMA)
through the National Flood Insurance Program (NFIP).1 FIRMs show flood hazard areas
along bodies of water where there is a risk of flooding by a base flood, i.e., a flood having
a 1-percent chance of being equaled or exceeded in any given year.2
The NFIP is a voluntary program whose goal is to reduce the loss of life and the damage caused by floods,
to help victims recover from floods and to promote an equitable distribution of costs among those who are
protected by flood insurance and the general public. Conducting flood hazard studies and providing
FIRMs and Flood Insurance Studies (FISs) for participating communities are major activities undertaken
by the NFIP.
The term “100-year flood” is a misnomer. Contrary to popular belief, it is not the flood that will occur
once every 100 years, but the flood elevation that has a 1-percent chance of being equaled or exceeded in
any given year. The “100-year flood” could occur more than once in a relatively short period of time. The
flood elevation that has a 1-percent chance of being equaled or exceeded in any given year is the standard
used by most government agencies and the NFIP for floodplain management and to determine the need for
flood insurance.
In addition to showing the extent of flood hazards, the maps also show base flood
elevations (the heights to which flood waters will rise during passage or occurrence of the
base flood) and floodways.
Floodways (which are channels of a river, creek or other watercourse) and adjacent land
areas must be kept clear of encroachments so that the base flood can discharge without
increasing the water surface elevations by more than a designated height.3
Some local jurisdictions develop and subsequently adopt flood hazard maps that are more
extensive than FEMA maps. In such cases, flood design and construction requirements
must be satisfied in the areas delineated by the more extensive maps. Thus, a design flood
is a flood associated with the greater of the area of a base flood or the area legally
designated as a flood hazard area by a community.
The NFIP divides flood hazard areas into flood hazard zones beginning with the letter
“A” or “V.” “A” zones are those areas within inland or coastal floodplains where high-velocity wave action is not expected during the base flood. In contrast, “V” zones are
those areas within a coastal floodplain where high-velocity wave action can occur during
the base flood event. Table 7.1 contains general descriptions of these zones. Such zone
designations are contained in FIRMs and essentially indicate the magnitude and severity
of flood hazards.
The concept of a Coastal A Zone is introduced in Chapter 5 of ASCE/SEI 7-05 and in
ASCE/SEI 24-05, Flood Resistant Design and Construction, to facilitate application of
load combinations in Chapter 2 of ASCE/SEI 7.4 In general, a Coastal A Zone is an area
located within a flood hazard area that is landward of a V Zone or landward of an open
coast without mapped V Zones (such as the shorelines of the Great Lakes). Wave forces
and erosion potential should be taken into consideration when designing a structure for
the effects of flood loads in such zones.
To be classified as a Coastal A Zone, the principal source of flooding must be from
astronomical tides, storm surges, seiches or tsunamis and not from riverine flooding.
Additionally, stillwater flood depths must be greater than or equal to 1.9 ft and breaking
wave heights must be greater than or equal to 1.5 ft during the base flood conditions.5
Designated heights are found in floodway data tables in FISs. Also included in FISs are the FIRM, the
Flood Boundary and Floodway Map (FBFM), the base flood elevation (BFE) and supporting technical data.
IBC 1612.4 references Chapter 5 of ASCE/SEI 7-05 and ASCE/SEI 24-05 for the design and construction
of buildings and structures located in flood hazard areas. The requirements in these documents are covered
in the next section of this publication. The NFIP regulations do not differentiate between Coastal and Non-Coastal A Zones.
See Section 4.1.1 of ASCE/SEI 24. Stillwater depth is the vertical distance between the ground and the
stillwater elevation, which is the elevation that the surface of water would assume in the absence of waves.
The stillwater elevation is referenced to the North American Vertical Datum (NAVD), the National
Geodetic Vertical Datum (NGVD) or other datum and it is documented in FIRMs.
Table 7.1 FEMA Flood Hazard Zones (Flood Insurance Zones)6

Moderate to Low Risk Areas (Zones B*, C* and X)
These zones identify areas outside of the flood hazard area.
• Shaded Zone X identifies areas that have a 0.2-percent probability of
being equaled or exceeded during any given year.
• Unshaded Zone X identifies areas where the annual exceedance
probability of flooding is less than 0.2 percent.

High Risk Areas (Zones A, AE, A1-30, A99, AR, AO and AH)
These zones identify areas of flood hazard that are not within the Coastal
High Hazard Area.

High Risk − Coastal Areas (Zones V, VE and V1-V30)
These zones identify the Coastal High Hazard Area, which extends from
offshore to the inland limit of a primary frontal dune along an open coast
and any other portion of the flood hazard zone that is subject to high-velocity
wave action from storms or seismic sources and to the effects of
severe erosion and scour. Such zones are generally based on wave
heights (3 ft or greater) or wave runup depths (3 ft or greater).

* Zone B on older FIRMs corresponds to shaded Zone X on more recent FIRMs. Zone C on older FIRMs
corresponds to unshaded Zone X on more recent FIRMs.
The principal sources of flooding in Non-Coastal A Zones are runoff from rainfall,
snowmelt or a combination of both.
It is recommended to check with the local building official for the most current
information on flood hazard areas prior to designing a structure in a flood-prone area.
According to IBC 1612.4, the design and construction of buildings and structures located
in flood hazard areas shall be in accordance with Chapter 5 of ASCE/SEI 7-05 and
ASCE/SEI 24-05, Flood Resistant Design and Construction. Section 1.6 of ASCE/SEI 24
requires that design flood loads and their combination with other loads be determined by
ASCE/SEI 7.
The provisions of ASCE/SEI 24 are intended to meet or exceed the requirements of the
NFIP. Figures 1-1 and 1-2 in ASCE/SEI 24 illustrate the application of the standard and
the application of the sections in the standard, respectively.
Comprehensive definitions for each zone can be found on the FEMA Map Service Center website.
The provisions contained in IBC Appendix G, Flood-Resistant Construction, are intended
to fulfill the floodplain management and administrative requirements of NFIP that are not
included in the IBC. IBC appendix chapters are not mandatory unless they are
specifically referenced in the adopting ordinance of the jurisdiction.
Other provisions related to construction in flood hazard areas worth noting are found in
IBC Chapter 18. Section 1804.4 prohibits grading in flood hazard areas unless the
specific requirements given in the section are met. Section 1805.1.2.1 requires raised
floor buildings in flood hazard areas to have the finished grade elevation under the floor
such as at a crawl space to be equal to or higher than outside finished grade on at least
one side. The exception permits under floor spaces in Group R-3 residential buildings to
comply with the FEMA technical bulletin FEMA/FIA-TB-11, Crawlspace Construction
for Buildings Located in Special Flood Hazard Areas.7 This bulletin provides guidance
on crawlspace construction and gives the minimum NFIP requirements for crawlspaces
constructed in Special Flood Hazard Areas.
Flood Loads
Flood waters can create the following loads, which are referenced in ASCE/SEI 5.4:
• Hydrostatic loads (ASCE/SEI 5.4.2)
• Hydrodynamic loads (ASCE/SEI 5.4.3)
• Wave loads (ASCE/SEI 5.4.4)
• Impact loads (ASCE/SEI 5.4.5)
Determination of these loads is based on the design flood, which is defined in Section 7.2
of this publication. The design flood elevation (DFE) is the elevation of the design flood
including wave height. For communities that have adopted minimum NFIP requirements,
the DFE is identical to the base flood elevation (BFE). The DFE exceeds the BFE in
communities that have adopted requirements that exceed minimum NFIP requirements.8
The equations in Table 7.2 can be used to determine flood loads in accordance with
Chapter 5 of ASCE/SEI 7. Figure 7.1 illustrates the relationships among the various flood
parameters that are used in the equations. Additional information on each type of load is given in the sections that follow.
Loads on walls that are required by ASCE/SEI 24 to break away (i.e., breakaway walls)
are given in ASCE/SEI 5.3.3. The minimum design load must be the largest of the
following loads: (1) wind load in accordance with ASCE/SEI Chapter 6, (2) seismic load
in accordance with ASCE/SEI Chapters 11 through 23, and (3) 10 psf. The maximum
permitted collapse load is 20 psf, unless the design meets the conditions of ASCE/SEI 5.3.3.
FEMA/FIA-TB-11 is available from FEMA at http://www.fema.gov/library/viewRecord.do?id=1724.
Communities that have chosen to exceed minimum NFIP requirements typically require a specified
freeboard above the BFE.
Table 7.2 Flood Loads*

Hydrostatic**
• Lateral: Fsta = γw ds² w / 2
• Vertical upward (buoyancy): Fbouy = γw × Volume

Hydrodynamic
• V ≤ 10 ft/sec: Fdyn = γw ds dh w, where dh = aV² / 2g [Eq. 5-1]
• V > 10 ft/sec: Fdyn = aρV² ds w / 2

Wave
• Breaking wave loads on vertical pilings and columns:
FD = γw CD D Hb² / 2, where Hb = 0.78 ds and ds = 0.65(BFE − G) [Eqs. 5-4, 5-2, 5-3]
• Breaking wave loads on vertical walls, space behind the vertical wall is dry:
Pmax = Cp γw ds + 1.2 γw ds
Ft = 1.1 Cp γw ds² + 2.4 γw ds² [Eqs. 5-5, 5-6]
• Breaking wave loads on vertical walls, free water exists behind the vertical wall:
Pmax = Cp γw ds + 1.2 γw ds
Ft = 1.1 Cp γw ds² + 1.9 γw ds² [Eqs. 5-5, 5-7]
• Breaking wave loads on nonvertical walls: Fnv = Ft sin²α [Eq. 5-8]
• Breaking wave loads from obliquely incident waves: Foi = Ft sin²α [Eq. 5-9]

Impact
• Normal: F = πW Vb CI CO CD CB Rmax / (2g Δt) [Eq. C5-3]
• Special: F = CD ρAV² / 2 [Eq. C5-4]
γ w = unit weight of water, which is equal to 62.4 pcf for fresh water and 64.0 pcf for salt water.
V = water velocity in ft/sec; see ASCE/SEI C5.4.3 for equations on how to determine V.
Additional information on these equations is given in ASCE/SEI Chapter 5 and in this section.
** In communities that participate in the NFIP, it is required that buildings in V Zones be elevated above the
BFE on an open foundation; thus, hydrostatic loads are not applicable. It is also required that the
foundation walls of buildings in A Zones be equipped with openings that allow flood water to enter so that
internal and external hydrostatic pressure will equalize.
Figure 7.1 Flood Parameters (legend):
BFE = Base Flood Elevation
DFE = Design Flood Elevation
ds = Design stillwater flood depth
G = Ground elevation
GS = Lowest eroded ground elevation adjacent to structure
Hb = Breaking wave height = 0.78ds
Hydrostatic Loads
Hydrostatic loads occur when stagnant or slowly moving (velocity less than 5 ft/sec; see
ASCE/SEI C5.4.2) water comes into contact with a building or building component. The
water can be above or below the ground surface.
Hydrostatic loads are commonly subdivided into lateral loads, vertical downward loads
and vertical upward loads (uplift or buoyancy). The hydrostatic pressure at any point on
the surface of a structure or component is equal in all directions and acts perpendicular to
the surface.
Lateral hydrostatic pressure is equal to zero at the surface of the water and increases
linearly to γw ds at the stillwater depth ds, where γw is the unit weight of water. The total
force Fsta on the width w of a vertical element acts at a point located two-thirds of the
stillwater depth below the surface of the water. See the second footnote in Table 7.2 for
more information on lateral hydrostatic pressure in V Zones and A Zones.
Buoyant forces on a building can be of concern where the actual stillwater flood depth
exceeds the design stillwater flood depth. Such forces are also of concern for tanks and
swimming pools. The buoyant force Fbouy is calculated based on the volume of flood
water displaced by a submerged object.
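The buoyancy calculation above is direct: Fbouy = γw × displaced volume. A minimal sketch, with an assumed (hypothetical) 10 ft × 10 ft × 4 ft tank fully submerged in fresh water:

```python
# Buoyant (uplift) force on a submerged object: Fbouy = gamma_w * Volume.
# The tank dimensions below are illustrative assumptions, not from the text.

GAMMA_FRESH = 62.4  # unit weight of fresh water, pcf

def buoyant_force(volume_cf, gamma_w=GAMMA_FRESH):
    """Uplift (lb) from the volume of flood water displaced (cu ft)."""
    return gamma_w * volume_cf

F_b = buoyant_force(10 * 10 * 4)   # fully submerged 10 x 10 x 4 ft tank
print(F_b)  # about 24,960 lb of uplift
```

For salt water, γw = 64.0 pcf would be used instead (see the Table 7.2 footnote).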
Hydrodynamic Loads
Hydrodynamic loads are caused by water moving at a moderate to high velocity above
the ground level. Similar to wind loads, the loads produced by moving water include an
impact load on the upstream face of a building, drag forces along the sides and suction on
the downstream face.
For a water velocity less than or equal to 10 ft/sec, ASCE 5.4.3 permits the dynamic
effects of moving water to be converted into an equivalent hydrostatic load. This is
accomplished by increasing the DFE by an equivalent surcharge depth d h calculated by
ASCE/SEI Eq. (5-1) (see Table 7.2). In Eq. (5-1), V is the average velocity of water,
which can be estimated by ASCE/SEI Eqs. (C5-1) and (C5-2), g is the acceleration due to
gravity (32.2 ft/sec2), and a is the drag coefficient (or shape factor) that must be greater
than or equal to 1.25.9 It is assumed in this case that the total force Fdyn on the width w
of a vertical element acts at a point located two-thirds of the stillwater depth below the
surface of the water.
For a water velocity greater than 10 ft/sec, basic concepts of fluid mechanics must be
utilized to determine loads imposed by moving water. The equation in Table 7.2 can be
used to determine the total load Fdyn in such cases. In this equation, ρ = the mass
density of water = γ w / g and A = surface area normal to the water flow = wd s . The
recommended value of the drag coefficient a is 2.0 for square or rectangular piles and is
1.2 for round piles.10 In this case, Fdyn is assumed to act at the stillwater mid-depth
(halfway between the ground surface and the stillwater elevation).
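The two velocity regimes described above can be captured in one function. This is a hedged sketch: the input values (ds = 3 ft and the drag coefficients a = 1.25 and a = 2.0) are illustrative assumptions, not values from the text.

```python
# Hydrodynamic load per ASCE/SEI 5.4.3: equivalent-surcharge approach for
# V <= 10 ft/sec (Eq. 5-1), direct drag for faster flow. Input values are
# illustrative assumptions.

G = 32.2          # acceleration due to gravity, ft/sec^2
GAMMA_W = 62.4    # unit weight of fresh water, pcf

def hydrodynamic_load(V, ds, a, w=1.0, gamma_w=GAMMA_W):
    """Fdyn (lb) on width w (ft) of a vertical element."""
    if V <= 10.0:
        dh = a * V**2 / (2 * G)      # equivalent surcharge depth, Eq. 5-1
        return gamma_w * ds * dh * w
    rho = gamma_w / G                # mass density of water
    return a * rho * V**2 * ds * w / 2

print(hydrodynamic_load(V=8.0, ds=3.0, a=1.25))   # equivalent hydrostatic
print(hydrodynamic_load(V=12.0, ds=3.0, a=2.0))   # direct drag, square pile
```

Note that the two branches also place the resultant differently: two-thirds of the depth below the surface for the equivalent-hydrostatic case, and at the stillwater mid-depth for the direct-drag case.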
Wave Loads
Wave loads result from water waves propagating over the surface of the water and
striking a building or other object. The following loads must be accounted for when
designing buildings and other structures for wave loads:
Waves breaking on any portion of a building or structure
Uplift forces caused by shoaling waves beneath a building or structure (or any
portion thereof)
Wave runup striking any portion of a building or structure
Guidelines on how to determine a are given in ASCE/SEI C5.4.3.
See Table 11.2 in Coastal Construction Manual, Third Edition, FEMA 55, 2000, for values of a that can
be used for larger obstructions.
Wave-induced drag and inertia forces
Wave-induced scour at the base of a building or structure, or at its foundation
The effects of nonbreaking waves and broken waves can be determined using the
procedures in ASCE 5.4.2 and 5.4.3 for hydrostatic and hydrodynamic loads, respectively.
Of the wave loads noted above, the loads from breaking waves are the highest. Thus, this
load is used as the design wave load where applicable.
Wave loads must be included in the design of buildings and other structures located in
both V Zones (wave heights equal to or greater than 3 ft) and A Zones (wave heights less
than 3 ft). Since present NFIP mapping procedures do not designate V Zones in all areas
where wave heights greater than 3 ft can occur during base flood conditions, it is
recommended that historical flood damages be investigated near a site to determine
whether or not wave forces can be significant.
ASCE 5.4.4 permits three methods to determine wave loads: (1) analytical procedures
(ASCE 5.4.4), (2) advanced numerical modeling procedures and (3) laboratory test
procedures. The analytical procedures of ASCE 5.4.4 for breaking wave loads are
discussed next.
Breaking wave loads on vertical pilings and columns. The net force FD resulting from
a breaking wave acting on a rigid vertical pile or column is determined by ASCE/SEI
Eq. (5-4) (see ASCE/SEI 5.4.4.1 and Table 7.2). In this equation,
C D = drag coefficient for breaking waves
= 1.75 for round piles or columns
= 2.25 for square or rectangular piles or columns
D = pile or column diameter for circular sections
= 1.4 times the width of the pile or column for rectangular or square sections
H b = breaking wave height (see Figure 7.1)
= 0.78d s = 0.51( BFE − G )
This load is assumed to act at the stillwater elevation.
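Eq. (5-4) can be sketched directly. The pile size and stillwater depth below are assumptions for illustration; salt water (64.0 pcf) is used.

```python
# Breaking wave load on a rigid vertical pile, ASCE/SEI Eq. (5-4):
# FD = gamma_w * CD * D * Hb^2 / 2, acting at the stillwater elevation.
# The 12-in. round pile and ds = 3 ft are illustrative assumptions.

GAMMA_SALT = 64.0  # unit weight of salt water, pcf

def breaking_wave_on_pile(ds, D, round_pile=True, gamma_w=GAMMA_SALT):
    """Net breaking-wave force FD (lb) on a vertical pile or column."""
    CD = 1.75 if round_pile else 2.25   # drag coefficients from 5.4.4.1
    Hb = 0.78 * ds                      # breaking wave height
    return gamma_w * CD * D * Hb**2 / 2

F_D = breaking_wave_on_pile(ds=3.0, D=1.0)   # 12-in. round pile
print(round(F_D))  # about 307 lb
```

For a square pile the same function uses CD = 2.25 and D equal to 1.4 times the pile width, per the definitions above.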
Breaking wave loads on vertical walls. Two cases are considered in ASCE/SEI 5.4.4.2.
In the first case, a wave breaks against a vertical wall of an enclosed dry space (i.e., the
space behind the vertical wall is dry). Equations (5-5) and (5-6) give the maximum
pressure and net force, respectively, resulting from waves that are normally incident to
the wall (the direction of the wave approach is perpendicular to the face of the wall). The
hydrostatic and dynamic pressure distributions are illustrated in ASCE/SEI Figure 5-1.
In the second case, a wave breaks against a vertical wall where the stillwater level on
both sides of the wall are equal.11 The maximum combined wave pressure is still
computed by Eq. (5-5), and the net breaking wave force Ft is computed by Eq. (5-7).
ASCE/SEI Figure 5-2 illustrates the pressure distributions in this case.
Values of the dynamic pressure coefficient C p are given in Table 5-1 based on the
building category. ASCE/SEI C5.4.4.2 contains information on the probabilities of
exceedance that correspond to the different building categories listed in the table.
Breaking wave loads on nonvertical walls. ASCE/SEI Eq. (5-8) can be used to
determine the horizontal component of a breaking wave load Fnv on a wall that is not
vertical. The angle α is the vertical angle between the nonvertical surface of a wall and
the horizontal.
Breaking wave loads from obliquely incident waves. Maximum breaking wave loads
occur where a wave strikes perpendicular to a surface. Equation (5-9) can be used to
determine the horizontal component of an obliquely incident breaking wave force Foi
where the angle α is the horizontal angle between the direction of the wave approach and
a vertical surface.12
Impact Loads
Impact loads occur where objects carried by moving water strike a building or structure.
Normal impact loads result from isolated impacts of normally encountered objects while
special impact loads result from large objects such as broken up ice floats and
accumulated debris. These two types of impact loads are commonly considered in the
design of buildings and similar structures.
The recommended method for calculating normal impact loads on buildings is given in
ASCE/SEI C5.4.5. Equation (C5-3) can be used to determine the impact force F. The
parameters and coefficients in this equation are discussed in that section.
ASCE/SEI C5.4.5 also contains Eq. (C5-4), which can be used to determine the drag
force due to debris accumulation (i.e., special impact load). Additional methods to predict
such loads can be found in the references provided at the end of that section.
It is assumed that objects are at or near the water surface level when they strike a
building. Thus, the loads determined by Eqs. (C5-3) and (C5-4) are usually assumed to
act at the stillwater flood level; in general, these loads should be applied horizontally at
the most critical location at or below the DFE.
This can occur, for example, where a wave breaks against a wall equipped with openings (such as flood
vents) that allow flood waters to equalize on both sides.
In coastal areas, it is usually assumed that the direction of wave approach is perpendicular to the
shoreline. Therefore, Eq. (5-9) provides a method for reducing breaking wave loads on vertical surfaces
that are not parallel to the shoreline.
The following sections contain examples that illustrate the flood load provisions of
IBC 1612, Chapter 5 in ASCE/SEI 7-05 and ASCE/SEI 24-05.
Example 7.1 – Residential Building Located in a Non-Coastal A Zone
The plan dimensions of a residential building are 40 ft by 50 ft. Determine the design
flood loads on the perimeter reinforced concrete foundation wall depicted in Figure 7.2
given the design data below.
Non-Coastal A Zone
Design stillwater flood depth, d s : 1 ft-6 in.
Base flood elevation (BFE):
3 ft-0 in.
Reinforced concrete
foundation wall
ds = 1′-6″
Figure 7.2 Reinforced Concrete Foundation Wall
In this example, it is assumed that the BFE and the DFE are equal.
This residential building is classified as an Occupancy Category II building in accordance
with IBC Table 1604.5.13 The elevation of the top of the lowest floor must be greater than
or equal to the BFE plus 1 ft, i.e., 3 + 1 = 4 ft, to satisfy the requirements of Section 2.3 of
ASCE/SEI 24 for Category II buildings located in Non-Coastal A Zones (see Table 2-1 of
ASCE/SEI 24).
Table 1-1 of ASCE/SEI 24 is the same as Table 1-1 of ASCE/SEI 7. Where the IBC is the legally
adopted building code, Table 1604.5 should be used to determine the occupancy category of the building.
The applicable flood loads are hydrostatic, hydrodynamic, breaking wave and impact.
Step 1: Determine water velocity V
Since the building is located in a Non-Coastal A Zone, it is appropriate to use the
lower bound average water velocity, which is given by ASCE/SEI Eq. (C5-1):
V = d s / 1 sec = 1.5 ft/sec
Step 2: Determine lateral hydrostatic load Fsta
The equation for the lateral hydrostatic load is given in Table 7.2. Assuming fresh water:
Fsta = γw ds² / 2 = (62.4 × 1.5²) / 2 = 70 lb/linear ft of foundation wall
This load acts at 1 ft-0 in. below the stillwater surface of the water (or, equivalently, 6 in.
above the ground surface).
Step 3: Determine hydrodynamic load Fdyn
According to ASCE/SEI 5.4.3, the dynamic effects of water can be converted into an
equivalent hydrostatic load where the velocity of water is less than 10 ft/sec.
The equivalent surcharge depth dh is determined by ASCE/SEI Eq. (5-1):
dh = aV² / 2g
In lieu of a more detailed analysis, the drag coefficient a is determined from
Table 11.2 of the Coastal Construction Manual, FEMA 55. Since the building will
not be completely immersed in water, a is determined by the ratio of longest plan
dimension of the building to d s : 50 / 1.5 = 33.3 . For a ratio of 33.3, a = 1.5.
Thus, dh = (1.5 × 1.5²) / (2 × 32.2) = 0.05 ft
Using the equation in Table 7.2, the hydrodynamic load is:
Fdyn = γ w d s d h = 62.4 × 1.5 × 0.05 = 5 lb/linear ft of foundation wall
This load acts at 1 ft-0 in. below the stillwater surface of the water (or, equivalently,
6 in. above the ground surface).
Step 4: Determine breaking wave load Ft
Assuming that the dry-floodproofing requirements of Section 6.2 of ASCE/SEI 24 are
not met, flood vents must be installed in the foundation wall (see Section 2.6.1 of
ASCE/SEI 24).14 Thus, the breaking wave load is determined by ASCE/SEI Eq. (5-7),
which is applicable where free water exists behind the wall:
Ft = 1.1 Cp γw ds² + 1.9 γw ds²
Category II buildings are assigned a value of C p corresponding to a 1-percent
probability of exceedance, which is consistent with wave analysis procedures used by
FEMA (see ASCE/SEI C5.4.4.2). For a Category II building, C p = 2.8 from
ASCE/SEI Table 5-1. Thus,
Ft = (1.1 × 2.8 × 62.4 × 1.5²) + (1.9 × 62.4 × 1.5²)
= 432 + 267 = 699 lb/linear ft of foundation wall
This load acts at the stillwater elevation, which is 1 ft-6 in. above the ground surface.
Step 5: Determine impact load F
Both normal and special impact loads are determined.
1. Normal impact loads
ASCE/SEI Eq. (C5-3) is used to determine normal impact loads:
F = πW Vb CI CO CD CB Rmax / (2g Δt)
Guidance on establishing the debris weight W is given in ASCE/SEI C5.4.5. It is
assumed in this example that W = 1000 lb.
It is also assumed that the velocity of the object Vb is equal to the velocity of the
water V. Thus, from Step 1 of this example, Vb = 1.5 ft/sec.
The design of the flood vents must satisfy the requirements of Section 2.6.2 of ASCE/SEI 24.
The importance coefficient C I is obtained from Table C5-1. For an Occupancy
Category II building, C I = 1.0.
The orientation coefficient CO = 0.8. This coefficient accounts for impacts that
are oblique to the structure.
The depth coefficient C D is obtained from Table C5-2 or, equivalently, from
Figure C5-1. For a stillwater depth of 1.5 ft in an A Zone, C D = 0.125.
The blockage coefficient C B is obtained from Table C5-3 or, equivalently, from
Figure C5-2. Assuming that there is no upstream screening and that the flow path
is wider than 30 ft, C B = 1.0.
The maximum response ratio for impulsive load Rmax is determined from
Table C5-4. Assume that the duration of the debris impact load is 0.03 sec (see
ASCE/SEI C5.4.5) and that the natural period of the building is 0.2 sec. The ratio
of the impact duration to the natural period of the building is 0.03/0.2 = 0.15.
From Table C5-4, Rmax = (0.4 + 0.8) / 2 = 0.6.
F = (π × 1000 × 1.5 × 1.0 × 0.8 × 0.125 × 1.0 × 0.6) / (2 × 32.2 × 0.03) = 146 lb
This load acts at the stillwater flood elevation and can be distributed over an
appropriate width of the foundation wall.
2. Special impact loads
ASCE/SEI Eq. (C5-4) is used to determine special impact loads:
F = C_D ρ A V² / 2
Using a drag coefficient C D = 1 and assuming a projected area of debris
accumulation A = 1.5 × 50 = 75 sq ft, the impact force F is
F = [1 × (62.4/32.2) × 75 × 1.5²] / 2 = 164 lb
This load acts at the stillwater flood elevation and is uniformly distributed over
the width of the foundation wall.
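As a cross-check, the three Example 7.1 results above can be reproduced with a short script. This is an illustrative sketch only; the variable names are ours, and it simply evaluates the equations with the coefficients assumed in the example.

```python
import math

# Example 7.1 (A Zone, fresh water) -- illustrative sketch only
gamma_w, g = 62.4, 32.2   # unit weight of fresh water (lb/ft^3), gravity (ft/sec^2)
d_s, V = 1.5, 1.5         # stillwater depth (ft) and water velocity (ft/sec)

# Breaking wave load on a vertical wall, with C_p = 2.8 (Category II)
F_t = 1.1 * 2.8 * gamma_w * d_s**2 + 1.9 * gamma_w * d_s**2
print(round(F_t))         # 699 lb per linear ft of wall

# Normal impact load, Eq. (C5-3): pi*W*Vb*CI*CO*CD*CB*Rmax / (2*g*dt)
F_i = math.pi * 1000 * V * 1.0 * 0.8 * 0.125 * 1.0 * 0.6 / (2 * g * 0.03)
print(round(F_i))         # 146 lb

# Special impact load, Eq. (C5-4): F = C_D * rho * A * V^2 / 2
rho = gamma_w / g         # mass density, slugs/ft^3
F_s = 1.0 * rho * (1.5 * 50) * V**2 / 2
print(round(F_s))         # 164 lb
```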
The flood load effects determined above must be combined with the other applicable load
effects in accordance with IBC Sections 1605.2.2 and 1605.3.1.2, which reference
ASCE/SEI 7 for load combinations involving flood loads. The above flood load effects
are combined with other loads in accordance with the modified strength design load
combinations including flood loads in ASCE/SEI 2.3.3(2) or the modified allowable
stress design load combinations including flood loads in ASCE/SEI 2.4.2(2).
The design and construction of the foundation, including the foundation walls, must
satisfy the requirements of Section 1.5.3 of ASCE/SEI 24.
Example 7.2 – Residential Building Located in a Coastal A Zone
For the residential building described in Example 7.1, determine the design flood loads
on the reinforced concrete columns depicted in Figure 7.3 given the design data below.
Coastal A Zone
Design stillwater elevation, d_s: 3 ft-6 in.
Base flood elevation (BFE): 6 ft-0 in.
Figure 7.3 Partial Elevation of Residential Building (figure shows the lowest horizontal structural member, 12 in. × 12 in. reinforced concrete columns (typ.), and a reinforced concrete mat)
In this example, it is assumed that the BFE and the DFE are equal.
The provided data satisfies the criteria of Coastal A Zones given in Section 4.1.1 of ASCE/SEI 24:
stillwater depth = 3.5 ft > 1.9 ft and wave height = 0.78 d s = 2.7 ft > 1.5 ft.
This residential building is classified as an Occupancy Category II building in accordance
with IBC Table 1604.5. The elevation of the bottom of the lowest supporting horizontal
structural member of the lowest floor relative to the BFE must be greater than or equal to
6 + 1 = 7 ft to satisfy the requirements of Section 4.4 of ASCE/SEI 24 for Occupancy
Category II buildings located in Coastal A Zones (see Table 4-1 of ASCE/SEI 24).
The applicable flood loads are hydrodynamic, breaking wave and impact.
Step 1: Determine water velocity V
Since the building is located in a Coastal A Zone, it is appropriate to use the upper
bound average water velocity, which is given by ASCE/SEI Eq. (C5-2):
V = (g d_s)^0.5 = (32.2 × 3.5)^0.5 = 10.6 ft/sec
Step 2: Determine hydrodynamic load Fdyn
Since the water velocity exceeds 10 ft/sec, it is not permitted to use an equivalent
hydrostatic load to determine the hydrodynamic load (ASCE/SEI 5.4.3).
Use the equation in Table 7.2 to determine the hydrodynamic load Fdyn on one
reinforced concrete column:
F_dyn = a ρ V² d_s w / 2
Based on the recommendations in ASCE/SEI C5.4.3 and FEMA 55, the drag
coefficient a is taken as 2.0.
Assuming salt water, the hydrodynamic load is
F_dyn = [2.0 × (64.0/32.2) × 10.6² × 3.5 × (12/12)] / 2 = 782 lb
This load acts at 1 ft-9 in. below the stillwater surface.
Step 3: Determine breaking wave load FD
The breaking wave load is determined by ASCE/SEI Eq. (5-4), which is applicable
for vertical pilings and columns:
F_D = γ_w C_D D H_b² / 2
According to ASCE/SEI 5.4.4.1, the drag coefficient C_D is equal to 2.25 for square pilings and columns.
The breaking wave height H b is determined by Eq. (5-2):
H b = 0.78d s = 0.78 × 3.5 = 2.7 ft
Therefore, the breaking wave load on one of the columns is
F_D = [64.0 × 2.25 × (1.4 × 12/12) × 2.7²] / 2 = 735 lb
This load acts at the stillwater elevation, which is 3 ft-6 in. above the ground surface.
Step 4: Determine impact load F
Both normal and special impact loads are determined.
1. Normal impact loads
ASCE/SEI Eq. (C5-3) is used to determine normal impact loads:
F = πW V_b C_I C_O C_D C_B R_max / (2g Δt)
Guidance on establishing the debris weight W is given in ASCE/SEI C5.4.5. It is
assumed in this example that W = 1000 lb.
It is also assumed that the velocity of the object Vb is equal to the velocity of the
water V. Thus, from Step 1 of this example, Vb = 10.6 ft/sec.
The importance coefficient C I is obtained from Table C5-1. For an Occupancy
Category II building, C I = 1.0.
The orientation coefficient CO = 0.8. This coefficient accounts for impacts that
are oblique to the structure.
The depth coefficient C D is obtained from Table C5-2 or, equivalently, from
Figure C5-1. For a stillwater depth of 3.5 ft in an A Zone, C D = (0.75 + 0.5) / 2
= 0.63.
The blockage coefficient C B is obtained from Table C5-3 or, equivalently, from
Figure C5-2. Assuming that there is no upstream screening and that the flow path
is wider than 30 ft, C B = 1.0.
The maximum response ratio for impulsive load Rmax is determined from
Table C5-4. Assume that the duration of the debris impact load is 0.03 sec (see
ASCE/SEI C5.4.5) and that the natural period of the building is 0.2 sec. The ratio
of the impact duration to the natural period of the building is 0.03/0.2 = 0.15.
From Table C5-4, Rmax = (0.4 + 0.8) / 2 = 0.6.
F = (π × 1000 × 10.6 × 1.0 × 0.8 × 0.63 × 1.0 × 0.6) / (2 × 32.2 × 0.03) = 5212 lb
This load acts at the stillwater flood elevation.
2. Special impact loads
ASCE/SEI Eq. (C5-4) is used to determine special impact loads:
F = C_D ρ A V² / 2
Using a drag coefficient C D = 1 and assuming a projected area of debris
accumulation A = 1 × 3.5 = 3.5 sq ft, the impact force F on one column is
F = [1 × (64.0/32.2) × 3.5 × 10.6²] / 2 = 391 lb
This load acts at the stillwater flood elevation.
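The four Example 7.2 loads can likewise be cross-checked numerically. The sketch below is ours, not part of the reference; it rounds intermediate values (V and H_b) the same way the text rounds them.

```python
import math

# Example 7.2 (Coastal A Zone, salt water, 12 in. square columns)
gamma_w, g, d_s = 64.0, 32.2, 3.5
rho = gamma_w / g                  # mass density of salt water, slugs/ft^3
w = 12 / 12                        # column width, ft

V = round(math.sqrt(g * d_s), 1)   # Eq. (C5-2) upper bound velocity: 10.6 ft/sec

# Hydrodynamic load on one column, drag coefficient a = 2.0
F_dyn = 2.0 * rho * V**2 * d_s * w / 2
print(round(F_dyn))                # 782 lb

# Breaking wave load, Eq. (5-4): C_D = 2.25 and D = 1.4 x width for square columns
H_b = round(0.78 * d_s, 1)         # breaking wave height, 2.7 ft
F_D = gamma_w * 2.25 * (1.4 * w) * H_b**2 / 2
print(round(F_D))                  # 735 lb

# Normal impact load, Eq. (C5-3)
F_i = math.pi * 1000 * V * 1.0 * 0.8 * 0.63 * 1.0 * 0.6 / (2 * g * 0.03)
print(round(F_i))                  # 5212 lb

# Special impact load, Eq. (C5-4), debris area A = 1 ft x d_s = 3.5 sq ft
F_s = 1.0 * rho * (1 * d_s) * V**2 / 2
print(round(F_s))                  # 391 lb
```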
The flood load effects determined above must be combined with the other applicable load
effects in accordance with IBC Sections 1605.2.2 and 1605.3.1.2, which reference
ASCE/SEI 7 for load combinations involving flood loads. The above flood load effects
are combined with other loads in accordance with the modified strength design load
combinations including flood loads in ASCE/SEI 2.3.3(1) or the modified allowable
stress design load combinations including flood loads in ASCE/SEI 2.4.2(1).
The design and construction of the mat foundation must satisfy the requirements of
Section 4.5 of ASCE/SEI 24. The top of the mat foundation must be located below the
eroded ground elevation and must extend to a depth sufficient to provide the support to
prevent flotation, collapse, or permanent lateral movement under the design load
combinations (Section 1.5.3).
The design and construction of the reinforced concrete columns must satisfy the
requirements of ACI 318-08 (Section 4.5.7.3 of ASCE/SEI 24).
Example 7.3 – Residential Building Located in a V Zone
For the residential building described in Examples 7.1 and 7.2, determine the design
flood loads on 8-in.-diameter reinforced concrete piles given the design data below. The
partial elevation of the building is similar to that shown in Figure 7.3.
V Zone
Design stillwater elevation, d_s: 4 ft-6 in.
Base flood elevation (BFE): 7 ft-8 in.
In this example, it is assumed that the BFE and the DFE are equal.
This residential building is classified as an Occupancy Category II building in accordance
with IBC Table 1604.5. The elevation of the bottom of the lowest supporting horizontal
structural member of the lowest floor relative to the BFE must be greater than or equal to
7.67 + 1 = 8.67 ft to satisfy the requirements of Section 4.4 of ASCE/SEI 24 for Category
II buildings located in V Zones (see Table 4-1 of ASCE/SEI 24).
The applicable flood loads are hydrodynamic, breaking wave, and impact.
Step 1: Determine water velocity V
Since the building is located in a V Zone, it is appropriate to use the upper bound
average water velocity, which is given by ASCE/SEI Eq. (C5-2):
(Footnote: All of the flood loads calculated in this example will not occur on all of the columns at the same time; see Section 11.6.12 of FEMA 55 for guidance on how to apply the flood loads. The provided data satisfies the criteria of V Zones given in Section 4.1.1 of ASCE/SEI 24: stillwater depth = 4.5 ft > 3.8 ft and wave height = 0.78 d_s = 3.5 ft > 3.0 ft.)
V = (g d_s)^0.5 = (32.2 × 4.5)^0.5 = 12.0 ft/sec
Step 2: Determine hydrodynamic load Fdyn
Since the water velocity exceeds 10 ft/sec, it is not permitted to use an equivalent
hydrostatic load to determine the hydrodynamic load (ASCE/SEI 5.4.3).
Use the equation in Table 7.2 to determine the hydrodynamic load Fdyn on one
reinforced concrete pile:
F_dyn = a ρ V² d_s w / 2
Based on the recommendations in ASCE/SEI C5.4.3 and FEMA 55, the drag
coefficient a is taken as 1.2.
Assuming salt water, the hydrodynamic load is
F_dyn = [1.2 × (64.0/32.2) × 12.0² × 4.5 × (8/12)] / 2 = 515 lb
This load acts at 2 ft-3 in. below the stillwater surface.
Step 3: Determine breaking wave load FD
The breaking wave load is determined by ASCE/SEI Eq. (5-4), which is applicable
for vertical pilings and columns:
F_D = γ_w C_D D H_b² / 2
According to ASCE/SEI 5.4.4.1, the drag coefficient C_D is equal to 1.75 for round pilings and columns.
The breaking wave height H b is determined by Eq. (5-2):
H b = 0.78d s = 0.78 × 4.5 = 3.5 ft
Therefore, the breaking wave load on one of the piles is
F_D = [64.0 × 1.75 × (8/12) × 3.5²] / 2 = 457 lb
This load acts at the stillwater elevation, which is 4 ft-6 in. above the ground surface.
Step 4: Determine impact load F
Both normal and special impact loads are determined.
1. Normal impact loads
ASCE/SEI Eq. (C5-3) is used to determine normal impact loads:
F = πW V_b C_I C_O C_D C_B R_max / (2g Δt)
Guidance on establishing the debris weight W is given in ASCE/SEI C5.4.5. It is
assumed in this example that W = 1000 lb.
It is also assumed that the velocity of the object Vb is equal to the velocity of the
water V. Thus, from Step 1 of this example, Vb = 12.0 ft/sec.
The importance coefficient C I is obtained from Table C5-1. For an Occupancy
Category II building, C I = 1.0.
The orientation coefficient CO = 0.8. This coefficient accounts for impacts that
are oblique to the structure.
The depth coefficient C_D is obtained from Table C5-2. For a V Zone, C_D = 1.0.
The blockage coefficient C B is obtained from Table C5-3 or, equivalently, from
Figure C5-2. Assuming that there is no upstream screening and that the flow path
is wider than 30 ft, C B = 1.0.
The maximum response ratio for impulsive load Rmax is determined from
Table C5-4. Assume that the duration of the debris impact load is 0.03 sec (see
ASCE/SEI C5.4.5) and that the natural period of the building is 0.2 sec. The ratio
of the impact duration to the natural period of the building is 0.03/0.2 = 0.15.
From Table C5-4, Rmax = (0.4 + 0.8) / 2 = 0.6.
F = (π × 1000 × 12.0 × 1.0 × 0.8 × 1.0 × 1.0 × 0.6) / (2 × 32.2 × 0.03) = 9366 lb
This load acts at the stillwater flood elevation.
2. Special impact loads
ASCE/SEI Eq. (C5-4) is used to determine special impact loads:
F = C_D ρ A V² / 2
Using a drag coefficient C D = 1 and assuming a projected area of debris
accumulation A = 0.67 × 4.5 = 3.0 sq ft, the impact force F on one pile is
F = [1 × (64.0/32.2) × 3.0 × 12.0²] / 2 = 429 lb
This load acts at the stillwater flood level.
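Example 7.3 can be cross-checked the same way. The sketch below is our own illustration, with intermediate values rounded as in the text.

```python
import math

# Example 7.3 (V Zone, salt water, 8 in. round piles)
gamma_w, g, d_s = 64.0, 32.2, 4.5
rho = gamma_w / g                  # mass density of salt water, slugs/ft^3
w = 8 / 12                         # pile diameter, ft

V = round(math.sqrt(g * d_s), 1)   # Eq. (C5-2): 12.0 ft/sec

# Hydrodynamic load on one pile, a = 1.2 for a round pile
F_dyn = 1.2 * rho * V**2 * d_s * w / 2
print(round(F_dyn))                # 515 lb

# Breaking wave load, Eq. (5-4): C_D = 1.75 for round piles, D = pile diameter
H_b = round(0.78 * d_s, 1)         # breaking wave height, 3.5 ft
F_D = gamma_w * 1.75 * w * H_b**2 / 2
print(round(F_D))                  # 457 lb

# Normal impact load, Eq. (C5-3)
F_i = math.pi * 1000 * V * 1.0 * 0.8 * 1.0 * 1.0 * 0.6 / (2 * g * 0.03)
print(round(F_i))                  # 9366 lb

# Special impact load, Eq. (C5-4), A = 0.67 ft x d_s = 3.0 sq ft
F_s = 1.0 * rho * 3.0 * V**2 / 2
print(round(F_s))                  # 429 lb
```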
The flood load effects determined above must be combined with the other applicable load
effects in accordance with IBC Sections 1605.2.2 and 1605.3.1.2, which reference
ASCE/SEI 7 for load combinations involving flood loads. The above flood load effects
are combined with other loads in accordance with the modified strength design load
combinations including flood loads in ASCE/SEI 2.3.3(1) or the modified allowable
stress design load combinations including flood loads in ASCE/SEI 2.4.2(1).
The design and construction of the foundation must satisfy the requirements of
Section 4.5 of ASCE/SEI 24. Requirements for pile foundations are given in
Section 4.5.5.
The design and construction of the reinforced concrete piles must satisfy the requirements
of ACI 318-09 (Section 4.5.5.8 of ASCE/SEI 24). Additional design provisions are given
in Section 4.5.6.
All of the flood loads calculated in this example will not occur on all of the piles at the same time. See
Section 11.6.12 of FEMA 55 for guidance on how to apply the flood loads. | {"url":"https://studylib.net/doc/27125995/david-a.-fanella---structural-load-determination-under-20...","timestamp":"2024-11-05T18:56:18Z","content_type":"text/html","content_length":"635527","record_id":"<urn:uuid:674b80e1-adca-4544-91de-ca69327dc685>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00207.warc.gz"} |
Database of Original & Non-Theoretical Uses of Topology
Ghrist Barcoded Video Frames. Application in Detecting Persistent Visual Scene Surface Shapes Captured in Videos (2019)
Arjuna P. H. Don, James F. Peters Abstract This article introduces an application of Ghrist barcodes in the study of persistent Betti numbers derived from vortex nerve complexes found in
triangulations of video frames. A Ghrist barcode (also called a persistence barcode) is a topology-of-data pictograph useful in representing the persistence of the features of changing shapes. The
basic approach is to introduce a free Abelian group representation of intersecting filled polygons on the barycenters of the triangles of Alexandroff nerves. An Alexandroff nerve is a maximal
collection of triangles of a common vertex in the triangulation of a finite, bounded planar region. In our case, the planar region is a video frame. A Betti number is a count of the number of generators in a finite Abelian group. The focus here is on the persistent Betti numbers across sequences of triangulated video frames. Each Betti number is mapped to an entry in a Ghrist barcode. Two
main results are given, namely, vortex nerves are Edelsbrunner-Harer nerve complexes and the Betti number of a vortex nerve equals k + 2 for a vortex nerve containing k edges attached between a pair
of vortex cycles in the nerve. | {"url":"https://donut.topology.rocks/?q=tag%3A%22Barcode%22","timestamp":"2024-11-10T11:25:02Z","content_type":"text/html","content_length":"64497","record_id":"<urn:uuid:855944bc-d3bc-4405-8bfe-95fbde3d15b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00610.warc.gz"} |
List the model assumptions for one-way ANOVA and briefly explain how to assess them.
Step 1:
The model assumptions for one-way ANOVA are: (1) Independence: the observations ...
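As a brief illustration of how two of these assumptions might be screened, the sketch below uses invented data; the group values and the 2:1 spread rule of thumb are our own illustration, and in practice one would use formal tests (e.g., Levene's test, Shapiro-Wilk) and residual plots.

```python
from statistics import mean, stdev

# Hypothetical data: one factor with three treatment groups (values invented)
groups = {
    "A": [23.1, 24.5, 22.8, 25.0, 23.9],
    "B": [26.2, 27.0, 25.5, 26.8, 27.3],
    "C": [22.0, 21.5, 23.2, 22.7, 21.9],
}

# Equal-variances screen: a common rule of thumb is that the largest group
# standard deviation should be at most about twice the smallest; a formal
# check would use Levene's or Bartlett's test.
sds = [stdev(v) for v in groups.values()]
ratio = max(sds) / min(sds)
print(ratio < 2)        # True -> equal-variance assumption looks plausible here

# Normality screen: examine the residuals (observation minus its group mean),
# e.g. with a histogram or normal probability plot; here we just pool them.
residuals = [x - mean(v) for v in groups.values() for x in v]
print(len(residuals))   # 15 residuals, centered at zero by construction
```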
| {"url":"https://www.solutioninn.com/study-help/questions/list-the-model-assumptions-for-oneway-anova-and-briefly-explain-1005421","timestamp":"2024-11-02T11:18:14Z","content_type":"text/html","content_length":"103027","record_id":"<urn:uuid:37d76486-33f1-4f63-8ca0-211e9c3e5379>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00868.warc.gz"}
The zoo of two-dimensional conformal models has been supplemented by a series of nonunitary conformal models obtained by cosetting minimal models. Some of them coincide with minimal models; some do not even have a Kac spectrum of conformal dimensions. Comment: LANDAU-93-TMP-6, 7 pages, plain TeX, misprints corrected
We consider a coset construction of minimal models. We define it rigorously and prove that it gives superconformal minimal models. This construction allows one to build all primary fields of superconformal models and to calculate their three-point correlation functions. Comment: 9 pages, LANDAU-92-TMP-
In this article an attempt is made to present very recent conceptual and computational developments in QFT as new manifestations of old and well established physical principles. The vehicle for converting the quantum-algebraic aspects of local quantum physics into more classical geometric structures is the modular theory of Tomita. As the above-named laureate, to whom this article is dedicated, has shown together with his collaborator for the first time in sufficient generality, its use in physics goes through Einstein causality. This line of research recently gained momentum when it was realized that it is not only of structural and conceptual innovative power (see section 4), but also promises to be a new computational road into nonperturbative QFT (section 5) which, picturesquely speaking, enters the subject on the extreme opposite (noncommutative) side. Comment: This is an updated version which has been submitted to Journal of Physics A, tcilatex, 62 pages. Address: Institut fuer Theoretische Physik, FU-Berlin, Arnimallee 14, 14195 Berlin; presently CBPF, Rua Dr. Xavier Sigaud 150, 22290-180 Rio de Janeiro, Brazil
Recent ideas on modular localization in local quantum physics are used to clarify the relation between on- and off-shell quantities in particle physics; in particular the relation between on-shell
crossing symmetry and off-shell Einstein causality. Among the collateral results of this new nonperturbative approach are profound relations between crossing symmetry of particle physics and
Hawking-Unruh like thermal aspects (KMS property, entropy attached to horizons) of quantum matter behind causal horizons, aspects which hitherto were exclusively related with Killing horizons in
curved spacetime rather than with localization aspects in Minkowski space particle physics. The scope of this modular framework is amazingly wide and ranges from providing a conceptual basis for the
d=1+1 bootstrap-formfactor program for factorizable d=1+1 models to a decomposition theory of QFT's in terms of a finite collection of unitarily equivalent chiral conformal theories placed in a specified relative position within a common Hilbert space (in d=1+1 a holographic relation and in higher dimensions more like a scanning). The new framework gives a spacetime interpretation to the Zamolodchikov-Faddeev algebra and explains its thermal aspects. Comment: In this form it will appear in JPA Math Gen, 47 pages, tcilatex
Optimal quantum algorithm for polynomial interpolation
Title: Optimal quantum algorithm for polynomial interpolation
Publication Type: Journal Article
Year of Publication: 2016
Authors: Childs, AM; van Dam, W; Hung, S-H; Shparlinski, IE
Journal: 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016)
Volume: 55
Pages: 16:1-16:13
Date: 2016/03/01
ISSN: 1868-8969
ISBN Number: 978-3-95977-013-2
Abstract: We consider the number of quantum queries required to determine the coefficients of a degree-d polynomial over GF(q). A lower bound shown independently by Kane and Kutin and by Meyer and Pommersheim shows that d/2 + 1/2 quantum queries are needed to solve this problem with bounded error, whereas an algorithm of Boneh and Zhandry shows that d quantum queries are sufficient. We show that the lower bound is achievable: d/2 + 1/2 quantum queries suffice to determine the polynomial with bounded error. Furthermore, we show that d/2 + 1 queries suffice to achieve probability approaching 1 for large q. These upper bounds improve results of Boneh and Zhandry on the insecurity of cryptographic protocols against quantum attacks. We also show that our algorithm's success probability as a function of the number of queries is precisely optimal. Furthermore, the algorithm can be implemented with gate complexity poly(log q) with negligible decrease in the success probability.
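For context, the classical baseline the abstract compares against can be made concrete: classically, recovering a degree-d polynomial over GF(q) takes d+1 evaluation queries, e.g., by Lagrange interpolation. The sketch below is illustrative only (it is not the quantum algorithm from the paper) and interpolates over a prime field.

```python
# Classical baseline: recover a degree-d polynomial over GF(p), p prime,
# from d+1 evaluation queries via Lagrange interpolation.
def interpolate_mod_p(points, p):
    """Return coefficients (lowest degree first) of the unique polynomial of
    degree < len(points) passing through the (x, y) pairs, modulo prime p."""
    n = len(points)
    result = [0] * n
    for i, (xi, yi) in enumerate(points):
        num = [1]      # running product of (x - xj) for j != i, low-degree first
        denom = 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(num) + 1)
            for k, c in enumerate(num):
                new[k] = (new[k] - xj * c) % p       # constant-term contribution
                new[k + 1] = (new[k + 1] + c) % p    # shift-by-x contribution
            num = new
            denom = (denom * (xi - xj)) % p
        scale = (yi * pow(denom, p - 2, p)) % p      # Fermat inverse of denom
        for k, c in enumerate(num):
            result[k] = (result[k] + scale * c) % p
    return result

# Recover f(x) = 3x^2 + x + 4 over GF(7) from d+1 = 3 queries
pts = [(x, (3 * x * x + x + 4) % 7) for x in range(3)]
print(interpolate_mod_p(pts, 7))   # [4, 1, 3]
```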
URL: http://arxiv.org/abs/1509.09271
DOI 10.4230/LIPIcs.ICALP.2016.16 | {"url":"https://www.quics.umd.edu/publications/optimal-quantum-algorithm-polynomial-interpolation","timestamp":"2024-11-06T08:17:26Z","content_type":"text/html","content_length":"22103","record_id":"<urn:uuid:7ea1e9e2-fd95-4b55-8e38-fd008ef1f957>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00763.warc.gz"} |
6.1 Consumption Choices - Principles of Economics 2e | OpenStax
By the end of this section, you will be able to:
• Calculate total utility
• Propose decisions that maximize utility
• Explain marginal utility and the significance of diminishing marginal utility
Information on the consumption choices of Americans is available from the Consumer Expenditure Survey carried out by the U.S. Bureau of Labor Statistics. Table 6.1 shows spending patterns for the
average U.S. household. The first row shows income and, after taxes and personal savings are subtracted, it shows that, in 2015, the average U.S. household spent $48,109 on consumption. The table
then breaks down consumption into various categories. The average U.S. household spent roughly one-third of its consumption on shelter and other housing expenses, another one-third on food and
vehicle expenses, and the rest on a variety of items, as shown. These patterns will vary for specific households by differing levels of family income, by geography, and by preferences.
Average Household Income before Taxes $62,481
Average Annual Expenditures $48,109
Food at home $3,264
Food away from home $2,505
Housing $16,557
Apparel and services $1,700
Transportation $7,677
Healthcare $3,157
Entertainment $2,504
Education $1,074
Personal insurance and pensions $5,357
All else: alcohol, tobacco, reading, personal care, cash contributions, miscellaneous $3,356
Total Utility and Diminishing Marginal Utility
To understand how a household will make its choices, economists look at what consumers can afford, as shown in a budget constraint (or budget line), and the total utility or satisfaction derived from
those choices. In a budget constraint line, the quantity of one good is on the horizontal axis and the quantity of the other good on the vertical axis. The budget constraint line shows the various
combinations of two goods that are affordable given consumer income. Consider José's situation, shown in Figure 6.2. José likes to collect T-shirts and watch movies.
In Figure 6.2 we show the quantity of T-shirts on the horizontal axis while we show the quantity of movies on the vertical axis. If José had unlimited income or goods were free, then he could
consume without limit. However, José, like all of us, faces a budget constraint. José has a total of $56 to spend. The price of T-shirts is $14 and the price of movies is $7. Notice that the
vertical intercept of the budget constraint line is at eight movies and zero T-shirts ($56/$7=8). The horizontal intercept of the budget constraint is four, where José spends of all of his money on
T-shirts and no movies ($56/14=4). The slope of the budget constraint line is rise/run or –8/4=–2. The specific choices along the budget constraint line show the combinations of affordable
T-shirts and movies.
José wishes to choose the combination that will provide him with the greatest utility, which is the term economists use to describe a person's level of satisfaction or happiness with his or her choices.
Let’s begin with an assumption, which we will discuss in more detail later, that José can measure his own utility with something called utils. (It is important to note that you cannot make
comparisons between the utils of individuals. If one person gets 20 utils from a cup of coffee and another gets 10 utils, this does not mean than the first person gets more enjoyment from the coffee
than the other or that they enjoy the coffee twice as much. The reason why is that utils are subjective to an individual. The way one person measures utils is not the same as the way someone else
does.) Table 6.2 shows how José’s utility is connected with his T-shirt or movie consumption. The first column of the table shows the quantity of T-shirts consumed. The second column shows the
total utility, or total amount of satisfaction, that José receives from consuming that number of T-shirts. The most common pattern of total utility, in this example, is that consuming additional
goods leads to greater total utility, but at a decreasing rate. The third column shows marginal utility, which is the additional utility provided by one additional unit of consumption. This equation
for marginal utility is:
MU = change in total utility / change in quantity
Notice that marginal utility diminishes as additional units are consumed, which means that each subsequent unit of a good consumed provides less additional utility. For example, the first T-shirt
José picks is his favorite and it gives him an addition of 22 utils. The fourth T-shirt is just something to wear when all his other clothes are in the wash and yields only 18 additional utils. This
is an example of the law of diminishing marginal utility, which holds that the additional utility decreases with each unit added. Diminishing marginal utility is another example of the more general
law of diminishing returns we learned earlier in the chapter on Choice in a World of Scarcity.
The rest of Table 6.2 shows the quantity of movies that José attends, and his total and marginal utility from seeing each movie. Total utility follows the expected pattern: it increases as the
number of movies that José watches rises. Marginal utility also follows the expected pattern: each additional movie brings a smaller gain in utility than the previous one. The first movie José
attends is the one he wanted to see the most, and thus provides him with the highest level of utility or satisfaction. The fifth movie he attends is just to kill time. Notice that total utility is
also the sum of the marginal utilities. Read the next Work It Out feature for instructions on how to calculate total utility.
T-Shirts (Quantity) Total Utility Marginal Utility Movies (Quantity) Total Utility Marginal Utility
1 22 22 1 16 16
2 43 21 2 31 15
3 63 20 3 45 14
4 81 18 4 58 13
5 97 16 5 70 12
6 111 14 6 81 11
7 123 12 7 91 10
- - - 8 100 9
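The marginal-utility column is mechanical: each entry is the difference between successive total-utility entries. A small sketch of this calculation (the movie figures are taken from the surrounding text and Table 6.5):

```python
# Total utility from watching 1 through 8 movies
movie_totals = [16, 31, 45, 58, 70, 81, 91, 100]

# Marginal utility of the n-th movie = total(n) - total(n-1); the first
# movie's marginal utility equals its total utility.
marginals = [movie_totals[0]] + [b - a for a, b in zip(movie_totals, movie_totals[1:])]
print(marginals)   # [16, 15, 14, 13, 12, 11, 10, 9]

# Diminishing marginal utility: every additional movie adds less than the last
assert all(later < earlier for earlier, later in zip(marginals, marginals[1:]))
```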
Table 6.3 looks at each point on the budget constraint in Figure 6.2, and adds up José’s total utility for five possible combinations of T-shirts and movies.
Point T-Shirts Movies Total Utility
P 4 0 81 + 0 = 81
Q 3 2 63 + 31 = 94
R 2 4 43 + 58 = 101
S 1 6 22 + 81 = 103
T 0 8 0 + 100 = 100
Calculating Total Utility
Let’s look at how José makes his decision in more detail.
Step 1. Observe that, at point Q (for example), José consumes three T-shirts and two movies.
Step 2. Look at Table 6.2. You can see from the fourth row/second column that three T-shirts are worth 63 utils. Similarly, the second row/fifth column shows that two movies are worth 31 utils.
Step 3. From this information, you can calculate that point Q has a total utility of 94 (63 + 31).
Step 4. You can repeat the same calculations for each point on Table 6.3, in which the total utility numbers are shown in the last column.
For José, the highest total utility for all possible combinations of goods occurs at point S, with a total utility of 103 from consuming one T-shirt and six movies.
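The same tally can be scripted. The sketch below is our own illustration; it evaluates total utility at each point on the budget constraint and picks the maximum.

```python
# Total utility by quantity, from Table 6.2 (0 units yields 0 utils)
tshirt_u = {0: 0, 1: 22, 2: 43, 3: 63, 4: 81}
movie_u = {0: 0, 2: 31, 4: 58, 6: 81, 8: 100}

# The five affordable combinations on the budget constraint (Table 6.3)
points = {"P": (4, 0), "Q": (3, 2), "R": (2, 4), "S": (1, 6), "T": (0, 8)}

totals = {name: tshirt_u[t] + movie_u[m] for name, (t, m) in points.items()}
print(totals)   # {'P': 81, 'Q': 94, 'R': 101, 'S': 103, 'T': 100}

best = max(totals, key=totals.get)
print(best)     # S  -- one T-shirt and six movies maximize total utility
```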
Choosing with Marginal Utility
Most people approach their utility-maximizing combination of choices in a step-by-step way. This approach is based on looking at the tradeoffs, measured in terms of marginal utility, of consuming
less of one good and more of another.
For example, say that José starts off thinking about spending all his money on T-shirts and choosing point P, which corresponds to four T-shirts and no movies, as Figure 6.2 illustrates. José
chooses this starting point randomly as he has to start somewhere. Then he considers giving up the last T-shirt, the one that provides him the least marginal utility, and using the money he saves to
buy two movies instead. Table 6.4 tracks the step-by-step series of decisions José needs to make (Key: T-shirts are $14, movies are $7, and income is $56). The following Work It Out feature explains
how marginal utility can affect decision making.
Try Which Has Total Utility Marginal Gain and Loss of Utility, Compared with Previous Choice Conclusion
Choice 1: P 4 T-shirts and 0 movies 81 from 4 T-shirts + 0 from 0 movies = 81 – –
Choice 2: Q 3 T-shirts and 2 movies 63 from 3 T-shirts + 31 from 2 movies = 94 Loss of 18 from 1 less T-shirt, but gain of 31 from 2 more movies, for a net utility gain of 13 Q is preferred over P
Choice 3: R 2 T-shirts and 4 movies 43 from 2 T-shirts + 58 from 4 movies = 101 Loss of 20 from 1 less T-shirt, but gain of 27 from two more movies for a net utility gain of 7 R is preferred over Q
Choice 4: S 1 T-shirt and 6 movies 22 from 1 T-shirt + 81 from 6 movies = 103 Loss of 21 from 1 less T-shirt, but gain of 23 from two more movies, for a net utility gain of 2 S is preferred over R
Choice 5: T 0 T-shirts and 8 movies 0 from 0 T-shirts + 100 from 8 movies = 100 Loss of 22 from 1 less T-shirt, but gain of 19 from two more movies, for a net utility loss of 3 S is preferred over T
Decision Making by Comparing Marginal Utility
José could use the following thought process (if he thought in utils) to make his decision regarding how many T-shirts and movies to purchase:
Step 1. From Table 6.2, José can see that the marginal utility of the fourth T-shirt is 18. If José gives up the fourth T-shirt, then he loses 18 utils.
Step 2. Giving up the fourth T-shirt, however, frees up $14 (the price of a T-shirt), allowing José to buy the first two movies (at $7 each).
Step 3. José knows that the marginal utility of the first movie is 16 and the marginal utility of the second movie is 15. Thus, if José moves from point P to point Q, he gives up 18 utils (from the
T-shirt), but gains 31 utils (from the movies).
Step 4. Gaining 31 utils and losing 18 utils is a net gain of 13. This is just another way of saying that the total utility at Q (94 according to the last column in Table 6.3) is 13 more than the
total utility at P (81).
Step 5. Thus, for José, it makes sense to give up the fourth T-shirt in order to buy two movies.
José clearly prefers point Q to point P. Now repeat this step-by-step process of decision making with marginal utilities. José thinks about giving up the third T-shirt and surrendering a marginal
utility of 20, in exchange for purchasing two more movies that promise a combined marginal utility of 27. José prefers point R to point Q. What if José thinks about going beyond R to point S?
Giving up the second T-shirt means a marginal utility loss of 21, and the marginal utility gain from the fifth and sixth movies would combine to make a marginal utility gain of 23, so José prefers
point S to R.
However, if José seeks to go beyond point S to point T, he finds that the loss of marginal utility from giving up the first T-shirt is 22, while the marginal utility gain from the last two movies is
only a total of 19. If José were to choose point T, his utility would fall to 100. Through these stages of thinking about marginal tradeoffs, José again concludes that S, with one T-shirt and six
movies, is the choice that will provide him with the highest level of total utility. This step-by-step approach will reach the same conclusion regardless of José’s starting point.
We can develop a more systematic way of using this approach by focusing on satisfaction per dollar. If an item costing $5 yields 10 utils, then it’s worth 2 utils per dollar spent. Marginal utility
per dollar is the amount of additional utility José receives divided by the product's price. Table 6.5 shows the marginal utility per dollar for José's T shirts and movies.
$\text{marginal utility per dollar} = \frac{\text{marginal utility}}{\text{price}}$
If José wants to maximize the utility he gets from his limited budget, he will always purchase the item with the greatest marginal utility per dollar of expenditure (assuming he can afford it with
his remaining budget). José starts with no purchases. If he purchases a T-shirt, the marginal utility per dollar spent will be 1.6. If he purchases a movie, the marginal utility per dollar spent
will be 2.3. Therefore, José’s first purchase will be the movie. Why? Because it gives him the highest marginal utility per dollar and is affordable. Next, José will purchase another movie. Why?
Because the marginal utility of the next movie (2.14) is greater than the marginal utility of the next T-shirt (1.6). Note that when José has no T-shirts, the next one is the first one. José will
continue to purchase the next good with the highest marginal utility per dollar until he exhausts his budget. He will continue purchasing movies because they give him a greater "bang for the buck"
until the sixth movie which gives the same marginal utility per dollar as the first T-shirt purchase. José has just enough budget to purchase both. So in total, José will purchase six movies and
one T-shirt.
| Quantity of T-Shirts | Total Utility | Marginal Utility | Marginal Utility per Dollar | Quantity of Movies | Total Utility | Marginal Utility | Marginal Utility per Dollar |
|---|---|---|---|---|---|---|---|
| 1 | 22 | 22 | 22/$14 = 1.6 | 1 | 16 | 16 | 16/$7 = 2.3 |
| 2 | 43 | 21 | 21/$14 = 1.5 | 2 | 31 | 15 | 15/$7 = 2.14 |
| 3 | 63 | 20 | 20/$14 = 1.4 | 3 | 45 | 14 | 14/$7 = 2 |
| 4 | 81 | 18 | 18/$14 = 1.3 | 4 | 58 | 13 | 13/$7 = 1.9 |
| 5 | 97 | 16 | 16/$14 = 1.1 | 5 | 70 | 12 | 12/$7 = 1.7 |
| 6 | 111 | 14 | 14/$14 = 1 | 6 | 81 | 11 | 11/$7 = 1.6 |
| 7 | 123 | 12 | 12/$14 = 0.9 | 7 | 91 | 10 | 10/$7 = 1.4 |
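The greedy purchase process described above can be sketched as a short Python loop. Marginal utilities are taken from the table; the $56 budget is an assumption inferred from the final bundle (six movies and one T-shirt), since the budget is not stated in this passage:

```python
# Greedy utility maximization: always buy the affordable good with the highest
# marginal utility per dollar. Marginal utilities are taken from the table
# above; the $56 budget is inferred from the final bundle (6 movies + 1 T-shirt).
TSHIRT_MU = [22, 21, 20, 18, 16, 14, 12]   # utils of each successive T-shirt
MOVIE_MU = [16, 15, 14, 13, 12, 11, 10]    # utils of each successive movie
P_TSHIRT, P_MOVIE = 14, 7                  # prices in dollars

budget, tshirts, movies = 56, 0, 0
while True:
    options = []
    if tshirts < len(TSHIRT_MU) and budget >= P_TSHIRT:
        options.append((TSHIRT_MU[tshirts] / P_TSHIRT, "tshirt"))
    if movies < len(MOVIE_MU) and budget >= P_MOVIE:
        options.append((MOVIE_MU[movies] / P_MOVIE, "movie"))
    if not options:
        break
    # At the 1.6 utils/$ tie (first T-shirt vs. sixth movie) either order
    # works: the remaining budget covers both purchases.
    _, best = max(options)
    if best == "tshirt":
        tshirts, budget = tshirts + 1, budget - P_TSHIRT
    else:
        movies, budget = movies + 1, budget - P_MOVIE

print(tshirts, movies)   # prints: 1 6
```

The loop reproduces José's purchase sequence: five movies first, then the tie at 1.6 utils per dollar, ending with one T-shirt and six movies and an exhausted budget.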
A Rule for Maximizing Utility
This process of decision making suggests a rule to follow when maximizing utility. Since the price of T-shirts is twice as high as the price of movies, to maximize utility the last T-shirt that José
chose needs to provide exactly twice the marginal utility (MU) of the last movie. If the last T-shirt provides less than twice the marginal utility of the last movie, then the T-shirt is providing
less “bang for the buck” (i.e., marginal utility per dollar spent) than José would receive from spending the same money on movies. If this is so, José should trade the T-shirt for more movies
to increase his total utility.
If the last T-shirt provides more than twice the marginal utility of the last movie, then the T-shirt is providing more “bang for the buck” or marginal utility per dollar, than if the money were
spent on movies. As a result, José should buy more T-shirts. Notice that at José’s optimal choice of point S, the marginal utility from the first T-shirt, of 22 is exactly twice the marginal
utility of the sixth movie, which is 11. At this choice, the marginal utility per dollar is the same for both goods. This is a tell-tale signal that José has found the point with the highest total utility.
We can write this argument as a general rule: If you always choose the item with the greatest marginal utility per dollar spent, when your budget is exhausted, the utility maximizing choice should
occur where the marginal utility per dollar spent is the same for both goods.
A sensible economizer will pay twice as much for something only if, in the marginal comparison, the item confers twice as much utility. Notice that the formula for the table above is: $\frac{MU_1}{P_1} = \frac{MU_2}{P_2}$.
The following Work It Out feature provides step by step guidance for this concept of utility-maximizing choices.
Maximizing Utility
The general rule, $\frac{MU_1}{P_1} = \frac{MU_2}{P_2}$, means that the last dollar spent on each good provides exactly the same marginal utility. This is the case at point S. So:
Step 1. If we traded a dollar more of movies for a dollar more of T-shirts, the marginal utility gained from T-shirts would exactly offset the marginal utility lost from fewer movies. In other words,
the net gain would be zero.
Step 2. Products, however, usually cost more than a dollar, so we cannot trade a dollar’s worth of movies. The best we can do is trade two movies for another T-shirt, since in this example T-shirts
cost twice what a movie does.
Step 3. If we trade two movies for one T-shirt, we would end up at point R (two T-shirts and four movies).
Step 4. Choice 4 in Table 6.4 shows that if we move to point R, we would gain 21 utils from one more T-shirt, but lose 23 utils from two fewer movies, so we would end up with less total utility at
point R.
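The arithmetic in Step 4 can be checked directly (marginal utility values taken from the tables above):

```python
# Step 4 arithmetic: moving from S (1 T-shirt, 6 movies) to R (2 T-shirts,
# 4 movies) trades the 5th and 6th movies for a second T-shirt.
gain_tshirt = 21        # marginal utility of the second T-shirt
loss_movies = 12 + 11   # marginal utilities of the 5th and 6th movies
net_change = gain_tshirt - loss_movies
print(net_change)       # prints: -2 (total utility falls, so S beats R)
```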
In short, the general rule shows us the utility-maximizing choice, which is called the consumer equilibrium.
There is another equivalent way to think about this. We can also express the general rule as the ratio of the prices of the two goods should be equal to the ratio of the marginal utilities. When we
divide the price of good 1 by the price of good 2, at the utility-maximizing point this will equal the marginal utility of good 1 divided by the marginal utility of good 2.
Along the budget constraint, the total price of the two goods remains the same, so the ratio of the prices does not change. However, the marginal utility of the two goods changes with the quantities
consumed. At the optimal choice of one T-shirt and six movies, point S, the ratio of marginal utility to price for T-shirts (22:14) matches the ratio of marginal utility to price for movies (11:7).
Measuring Utility with Numbers
This discussion of utility began with an assumption that it is possible to place numerical values on utility, an assumption that may seem questionable. You can buy a thermometer for measuring
temperature at the hardware store, but what store sells a “utilimometer” for measuring utility? While measuring utility with numbers is a convenient assumption to clarify the explanation, the key
assumption is not that an outside party can measure utility but only that individuals can decide which of two alternatives they prefer.
To understand this point, think back to the step-by-step process of finding the choice with highest total utility by comparing the marginal utility you gain and lose from different choices along the
budget constraint. As José compares each choice along his budget constraint to the previous choice, what matters is not the specific numbers that he places on his utility—or whether he uses any
numbers at all—but only that he personally can identify which choices he prefers.
In this way, the step-by-step process of choosing the highest level of utility resembles rather closely how many people make consumption decisions. We think about what will make us the happiest. We
think about what things cost. We think about buying a little more of one item and giving up a little of something else. We choose what provides us with the greatest level of satisfaction. The
vocabulary of comparing the points along a budget constraint and total and marginal utility is just a set of tools for discussing this everyday process in a clear and specific manner. It is welcome
news that specific utility numbers are not central to the argument, since a good utilimometer is hard to find. Do not worry—while we cannot measure utils, by the end of the next module, we will
have transformed our analysis into something we can measure—demand.
A&A, 690, A145 (2024)
Volume 690, October 2024
Article number: A145
Number of pages: 15
Section: Extragalactic astronomy
DOI: https://doi.org/10.1051/0004-6361/202450777
Published online: 4 October 2024
Investigating the interplay between the coronal properties and the hard X-ray variability of active galactic nuclei with NuSTAR
^1 INAF - Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monte Porzio Catone (Roma), Italy
^2 INAF - Istituto di Astrofisica e Planetologia Spaziali, Via del Fosso del Cavaliere 100, 00133 Roma, Italy
^3 Dipartimento di Fisica, Università degli Studi di Roma “Tor Vergata”, via della Ricerca Scientifica 1, 00133 Roma, Italy
^4 Dipartimento di Matematica e Fisica, Università degli Studi Roma Tre, Via della Vasca Navale 84, 00146 Roma, Italy
^5 Instituto de Estudios Astrofísicos, Facultad de Ingeniería y Ciencias, Universidad Diego Portales, Avenida Ejercito Libertador 441, Santiago, Chile
^6 Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, PR China
^7 INAF - Osservatorio Astrofisico di Arcetri, Largo Enrico Fermi 5, 50125 Firenze, Italy
^8 CNRS, IPAG, Université Grenoble Alpes, 38000 Grenoble, France
^9 Space Science Data Center, Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Roma, Italy
^10 INAF - Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via Gobetti, 93/3, 40129 Bologna, Italy
^11 ASI - Agenzia Spaziale Italiana, Via del Politecnico snc, 00133 Roma, Italy
Received: 18 May 2024
Accepted: 9 July 2024
Active galactic nuclei (AGN) are extremely variable in the X-ray band down to very short timescales. However, the driver behind the X-ray variability is still poorly understood. Previous results
suggest that the hot corona responsible for the primary Comptonized emission observed in AGN is expected to play an important role in driving the X-ray variability. In this work, we investigate the
connection between the X-ray amplitude variability and the coronal physical parameters; namely, the temperature (kT) and optical depth (τ). We present the spectral and timing analysis of 46 NuSTAR
observations corresponding to a sample of 20 AGN. For each source, we derived the coronal temperature and optical depth through X-ray spectroscopy and computed the normalized excess variance for
different energy bands on a timescale of 10 ks. We find a strong inverse correlation between kT and τ, with a correlation coefficient of r<−0.9 and a negligible null probability. No clear dependence
was found between the temperature and physical properties, such as the black hole mass or the Eddington ratio. We also see that the observed X-ray variability is not correlated with either the coronal
temperature or optical depth under the thermal equilibrium assumption, whereas it is anticorrelated with the black hole mass. These results can be interpreted through a scenario where the observed
X-ray variability could primarily be driven by variations in the coronal physical properties on a timescale of less than 10 ks; whereas we assume thermal equilibrium on such timescales in this work,
given the capability of the currently available hard X-ray telescopes. Alternatively, it is also possible that the X-ray variability is mostly driven by the absolute size of the corona, which depends
on the supermassive black hole mass, rather than resulting from any of its physical properties.
Key words: black hole physics / galaxies: active / galaxies: Seyfert / X-rays: galaxies
© The Authors 2024
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited.
This article is published in open access under the Subscribe to Open model. Subscribe to A&A to support open access publication.
1. Introduction
Active galactic nuclei (AGN) are bright extragalactic sources powered by the accretion of matter onto a supermassive black hole (SMBH). In brief, AGN emit light at all wavelengths and they are
characterized by a significantly loud X-ray emission (e.g., Padovani et al. 2017). The X-ray emission of AGN is produced by inverse Compton scattering of UV seed photons, emitted by the accretion
disk, off a hot relativistic electron plasma known as the corona (e.g., Haardt & Maraschi 1991, 1993). The typical shape of the X-ray spectrum of an AGN is that of a power law, characterized by a
photon index Γ, up to a characteristic energy, E[c], known as the cut-off energy where the power law breaks. The relation between the cut-off energy and the temperature can be approximated with E[c]
∼2−3kT (e.g., Petrucci et al. 2001), depending on the geometry of the corona and the optical depth, while the photon index is dependent on both the coronal temperature and the optical depth.
However, more complex relations involving both the temperature and the optical depth are needed when considering broader ranges of these parameters (Middei et al. 2019).
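As a quick illustration of the E[c] ∼ 2−3 kT approximation, a measured cut-off energy can be translated into a rough temperature range (the 100 keV input is illustrative, not a measurement from the sample):

```python
# Rough coronal temperature implied by a measured cut-off energy, using the
# approximation E_c ~ 2-3 kT quoted above. The 100 keV input is illustrative.
def kT_range_keV(E_c):
    """Return (min, max) kT in keV for a cut-off energy E_c in keV."""
    return E_c / 3.0, E_c / 2.0

lo, hi = kT_range_keV(100.0)
print(f"kT between {lo:.0f} and {hi:.0f} keV")   # prints: kT between 33 and 50 keV
```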
Many models have been proposed for the coronal geometry, including slab (e.g., Haardt & Maraschi 1991), spherical (e.g., Frontera et al. 2003) or lamp-post coronal geometry (e.g., Miniutti & Fabian
2004). Details of its shape, location and size are yet largely unknown, though, since spectroscopy alone is not able to distinguish among different geometries, which can be probed with polarimetry
measurements (e.g., Ursini et al. 2022). Indeed, recent results with Imaging X-ray Polarimetry Explorer (e.g., Weisskopf et al. 2016) are starting to unveil the geometrical properties of the AGN
corona. Gianolli et al. (2023) measured the coronal X-ray polarization for the first time in the Seyfert galaxy NGC 4151, strongly suggesting a wedge or a slab above the accretion disk (e.g.,
Poutanen et al. 2018). For IC4329A (Ingram et al. 2023), a marginal detection for the X-ray polarimetry also suggests a wedge coronal geometry. Only upper limits were found for MCG-5-23-16 (Marinucci
et al. 2022; Tagliacozzo et al. 2023), although the results, combined with the inclination measurement obtained with XMM-Newton and NuSTAR (Serafinelli et al. 2023a), tentatively favors a wedge
geometry as well.
Several measurements of the cut-off energy have been undertaken using many hard X-ray instruments, like BeppoSAX (e.g., Dadina 2007), INTEGRAL (e.g., Molina et al. 2009; De Rosa et al. 2012), Swift
-BAT (e.g., Ricci et al. 2018) and NuSTAR (e.g., Fabian et al. 2015, 2017; Tortosa et al. 2018a), including obscured sources (e.g., Baloković et al. 2020; Serafinelli et al. 2023b). This task is far
from trivial, since the hard X-ray spectrum is also characterized by a reflection component, due to X-ray photons interacting with the surrounding environment, such as the accretion disk, the broad
line region or the torus, whose parameters are often degenerate with those of the continuum (see, e.g., the review in Reynolds 2021). The cut-off energy is found in a large energy range, from E[c]∼23
keV (Kammoun et al. 2023) to E[c]∼750 keV (Matt et al. 2015), with average values around ∼100−200 keV (e.g., Ricci et al. 2018; Kamraj et al. 2022). Direct measurements of the coronal parameters
such as the electron temperature kT and the optical depth τ have also been extensively performed on many AGN, finding a tight correlation between the temperature and the optical depth (e.g., Tortosa
et al. 2018a; Kamraj et al. 2022).
The X-ray emission of AGN is well-known to be variable on several timescales, both in amplitude (e.g., Markowitz & Edelson 2004; Ponti et al. 2012; Vagnetti et al. 2016; Middei et al. 2017;
Serafinelli et al. 2020) and spectral shape (e.g., Sobolewska & Papadakis 2009; Serafinelli et al. 2017). Variability is found on very short timescales down to a few hours (e.g., Ponti et al. 2012),
and this suggests that the X-ray emitting region is compact (e.g., Mushotzky et al. 1993; De Marco et al. 2013), with a typical radius of R[c]∼10R[g] (Ursini et al. 2020a), also supported by
microlensing results (e.g., Chartas et al. 2009; Morgan et al. 2012). Moreover, the X-ray emission is variable in a wide range of energies, including very hard X-rays (E>10 keV) on both long (e.g.,
years, Soldi et al. 2014; Akylas et al. 2022; Papadakis & Binas-Valavanis 2024) and shorter timescales (e.g., hours, Rani et al. 2019; Akylas et al. 2022).
The X-ray variability of AGN provides crucial insight on the size of the central source, but its main driver is still poorly understood. We aim here to investigate the possible relation between the
variability of the X-ray emission coming from the corona and the physical properties of the corona itself with NuSTAR, which is able to study both coronal parameters and variability because of its
high sensitivity at hard X-rays (E=3−79 keV). We present a study of a sample of 20 nearby (z<0.2) Seyfert galaxies, for which we derive the coronal parameters using NuSTAR, in order to
investigate possible relations with the X-ray variability. In Sect. 2, we describe our sample, made up of sources with a wide range of coronal temperatures and optical depths, and the data reduction
of the available X-ray data. In Sect. 3 we describe the spectral analysis we performed. In Sect. 4 we investigate the X-ray variability through the computation of the excess variance in different
bands with NuSTAR. Finally, our results are discussed in Sect. 5 and a summary is presented in Sect. 6. Throughout the paper, we adopt a standard ΛCDM cosmology, with H[0]=70 km s^−1 Mpc^−1, Ω[m]=0.3,
and Ω[Λ]=0.7.
2. Sample selection and data reduction
We selected our sample of AGN from the 70-Month Swift-BAT catalogue (Baumgartner et al. 2013). Ricci et al. (2017) computed many X-ray properties of the AGN in the catalog, such as the X-ray flux in
several bands, the photon index, and the cut-off energy. We select all sources where the cut-off has been measured, namely, excluding the ones with only lower limits. Out of 836 AGN of the whole
Swift-BAT sample, 165 satisfy this first condition. Among those AGN, we selected the ones with public NuSTAR observations as of 10th October 2021, for a total of 229 observations of 110 AGN. Not all
NuSTAR observations have sufficient statistics to compute the coronal parameters; therefore, we selected only those with enough NuSTAR counts. To this end, we considered the value of the X-ray flux
in the 20–50 keV, as reported by Ricci et al. (2017), and we converted such a flux to NuSTAR count rate in the same band using WebPimms^1, adopting an unabsorbed power law with Γ=1.8, which is
typical for Seyfert galaxies (e.g., Serafinelli et al. 2017). This count rate was multiplied by the sum of the exposures of each observation of every source and we selected all sources with at least
1500 counts per FPM module. We note that this selection might exclude very variable sources in which the NuSTAR count rate may exceed the expected count rate from the BAT flux, not easily spotted
with this criterion. A total of 34 sources are selected with these criteria. Six of these sources (IC4329A, MCG-5-23-16, MCG+8-11-11, NGC 5506, NGC 6814, and SWIFT J2127.4+5654) were already present
in the sample analyzed by Tortosa et al. (2018a). For completeness, we decided to include two more sources from Tortosa et al. (2018a) that are not included in our selection, Ark 564 (not detected
in BAT) and GRS 1734-292, to consider all non-jetted nearby (z≤0.2) AGN from their sample. This resulted in a total of 36 AGN.
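The count-based cut just described can be expressed as a one-line criterion (the numbers in the example call are illustrative, not values from the sample):

```python
# Count-based selection: expected NuSTAR counts per FPM = (count rate inferred
# from the BAT 20-50 keV flux via WebPIMMS, Gamma = 1.8) x (summed exposure).
# The example numbers below are illustrative, not taken from the sample.
def passes_cut(count_rate_cps, total_exposure_s, min_counts=1500):
    return count_rate_cps * total_exposure_s >= min_counts

print(passes_cut(0.05, 40_000))   # 2000 expected counts -> True
```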
Even though our sample might possibly be biased towards the low tail of the high energy cut-off distributions, this selection provides the best spectral signal-to-noise ratio (S/N) needed to derive
coronal physical properties with high confidence level. Furthermore, excluding the sources with only lower limits for the cut-off energy also excludes AGN with additional spectral complexities, such
as multiple ionized absorption gas layers, which could introduce systematic errors in the measure of kT and τ due to an inaccurate continuum fit. For these sources, a simultaneous spectroscopy with a
low energy (E<3 keV) bandpass is recommended.
Table 2.
Coronal parameters of our sample derived with the models described in Sect. 3.
We reduced the NuSTAR observations with NUPIPELINE, part of the NUSTARDAS software package available through HEASOFT v6.30, using calibration files (CALDB) updated to March 31, 2022.
We extracted the FPMA and FPMB source spectra and light curves from a circular region with 60″ radius, centered on the source, using the NUPRODUCTS command. The background spectra are extracted using
two source-free regions of 40″ each.
3. Spectral analysis
We performed a quick analysis of 36 selected AGN to verify whether the coronal temperature is also constrained with NuSTAR. All fits included Galactic absorption extracted from HI4PI Collaboration
(2016) (see Appendix B for all details). We considered three models. Model A consists of a continuum plus ionized reflection model, using XILLVERCP, which is the combination of the reflection model
XILLVER (García & Kallman 2010; García et al. 2011) and the continuum model NTHCOMP (Zdziarski et al. 1996). The XSPEC equation is:
ztbabs * xillvercp   (model A)
For simplicity, the inclination and iron abundance are fixed to ι=30°, and A[Fe]=1, respectively. The reflection fraction ℛ is left free to vary, as XILLVERCP models both continuum and
reflection. Model B is adopted when the Fe Kα region is not properly modeled by model A because of the presence of a broad emission line, interpreted as due to the presence of a relativistic
reflection component (e.g., Reynolds 2021). In such cases, we also included a second reflector, using RELXILLCP, which is the convolution of RELLINE (Dauser et al. 2014) and XILLVERCP for the
reflection component, and NTHCOMP for the continuum. In this scenario XILLVERCP models reflection from material located farther out from the black hole, while RELXILLCP models reflection from the
innermost parts of the accretion disk (e.g., Serafinelli et al. 2023a). The XSPEC equation is:
ztbabs * (xillvercp + relxillcp)   (model B)
We assumed a single emissivity index of −3 for the accretion disk and we assumed that RELXILLCP only models reflection (ℛ=−1). Unless otherwise specified (see Appendix B), we fixed an inner radius
of R[in]=10 R[g] and outer radius of R[out]=400 R[g]. We assumed a Schwarzschild black hole (a=0), while the XILLVERCP parameters are the same as model A, including a free-to-vary reflection
fraction. Finally, model C was adopted when the reflection component was due to a single neutral reflector, and the model with one or two ionized reflection components does not fit the data
satisfactorily. In this case, the reflection component was modeled with BORUS, which models the reflection from spherical distribution of neutral material with polar cutouts^2 (see Baloković et al.
2018, for details). Thus, we added a NTHCOMP component for the continuum, since BORUS does not have an in-built continuum one and we tied all the reflection parameters in BORUS that describe the
continuum to that of the NTHCOMP. The XSPEC equation is:
ztbabs * (nthcomp + borus12)   (model C)
Since the physical properties of the neutral reflector are not the main goal of this paper, we always assumed a Compton thick reflector (log N[H,refl]/cm^−2=24.5), with a covering factor C[f]=
0.5 and reflector assumed on the line of sight. All three models include a neutral absorption component modeled with ZTBABS whenever required (see Appendix B for details).
We did not find any relevant variability of the coronal parameters kT or Γ, when considering different observation epochs of the same source. This is consistent with the recent results obtained by
Pal & Stalin (2023), where evidence of variations in the coronal parameters was found in less than 5% of their AGN sample. Therefore, during the fit, we kept kT and τ tied between different
observations of the same AGN. We were able to constrain the coronal parameters in 20 sources; however, we excluded the 16 sources for which we only obtained lower limits for either the coronal
temperature or the optical depth. The final sample is shown in Table 1, where black hole masses (M[BH]), bolometric luminosities (L[bol]), and Eddington ratios (λ[Edd]) have been taken from the
catalog presented in Koss et al. (2022); the only exception is Ark 564, which is absent from their list. Koss et al. (2022) selected masses preferably from reverberation mapping (when available)
followed by single-epoch measurements from Hα or Hβ lines and, ultimately, from the host galaxy velocity dispersion. For Ark 564, we considered the reverberation mapping measurement in Peterson et
al. (2004), M[BH]=3.2×10^6 M[⊙], with a bolometric luminosity computed by applying the bolometric correction proposed by Marconi et al. (2004) to the unabsorbed X-ray luminosity in the 2–10 keV
energy band, along with an Eddington ratio computed as λ[Edd]=L[bol]/L[Edd], where L[Edd]=1.26×10^38M/M[⊙] erg s^−1 is the Eddington luminosity. All the values are reported in Table 1.
As we are interested in exploring different coronal geometries and the interplay between temperature and optical depth, we separated the continuum and the reflection by including a Comptonization
model as continuum, COMPTT (Magdziarz & Zdziarski 1995), which is capable of assuming slab-shaped and spherical-shaped coronae. In models A and B, we simply replaced NTHCOMP with COMPTT by setting
the reflection parameter to a fixed value of −1^3; whereas in model C, we explicitly removed the external NTHCOMP component and replaced it with COMPTT. We started with the baseline models used for
the first round of fits. All the parameters of the reflection, with the exception of the normalization, are frozen to the results of the first round of fits, as we are mainly interested in the
coronal parameters. We fix the parameter approx in COMPTT to 0.5 to model a slab geometry, and to 1.5 to model a spherical coronal shape. We checked the consistency between the two continuum models
by comparing the best-fit values of the temperature obtained before the addition of COMPTT and those obtained after, assuming a spherical geometry, the only one allowed by NTHCOMP. We find consistent
kT results at 90% confidence level in all cases. Furthermore, we also simulated spectra in a large range of temperatures and exposures with NTHCOMP. We fit the simulated spectra with COMPTT, assuming
a spherical geometry, such as the one considered in NTHCOMP model, also finding good agreement at least 90% confidence level.
We report the values obtained for kT and τ for both geometries in Table 2, while details on the fits are available in Appendix B.
4. Variability analysis
Fig. 1.
Excess variance in the soft (3–10 keV) versus the excess variance in the full (3–79 keV) NuSTAR bands (left). All excess variances are normalized at 10 ks. The best-fit line is fully consistent
with the bisector at 90% confidence level, with an angular coefficient of 0.96±0.05. The correlation coefficient is r=0.95 with a negligible or null probability. Excess variance in the soft
band vs. excess variance in the hard (10–40 keV) X-ray band (right). The best-fit angular coefficient is 0.7±0.1 with a correlation coefficient r=0.90 and a negligible probability of finding
the correlation by chance.
A straightforward estimator of the X-ray variability is the normalized excess variance (e.g., Vaughan et al. 2003), defined as:
$\sigma_{\rm NXS}^2 = \frac{1}{N\mu^2}\sum_{i=1}^{N}\left[(x_i - \mu)^2 - \sigma_i^2\right],$ (1)
where x[i] are the values of the X-ray amplitude of every time bin, μ is the mean value of the amplitude, N is the number of points, and σ[i] is the photometric error on the X-ray amplitude. The
excess variance of a random process is the integral of the power spectral density (PSD) over all frequencies between 0 and infinity. However, with real data it is limited by f[min]=1/t[max] and f
[max]=1/t[min], where t[min]=2Δt, with Δt being the time bin of the light curves we use (in our case 1 ks), and t[max] is the length of the observation segment (10 ks, see below). We note that
the excess variance is not a good estimator when a large range of redshifts is considered, since same-length light curves in the observer frame represent different rest-frame lengths at different
redshifts (Vagnetti et al. 2016); at the same time, we would be looking at a different energy range. However, since our sample is limited to z<0.2, these biases on the excess variance are avoided.
Table 3.
NuSTAR excess variances in the energy bands 3–79, 3–10, and 10–40 keV.
Since the excess variance is a quadratic sum over the number of points of a light curve, two conditions should be satisfied in order to properly compare these quantities over different sources. First
of all, they must have the same binning, which is ensured in our case by how the light curves were prepared. Indeed, every NuSTAR light curve is binned at 1 ks. The other condition is that the light
curves should be equally long, which is not satisfied in our sample as the exposures are different in the observations of different sources. To avoid this bias, we normalize every excess variance to
the same length. Given that the smallest NuSTAR exposure in our sample is Δt∼17 ks, we decide to study the variability on a timescale lower than that, namely: 10 ks. In order to normalize the
excess variance, we divided the light curve in intervals of 10 ks and discarded the leftover points up to a maximum of 9 ks. We computed σ[NXS]^2 for each interval and we adopted the average value
between all intervals as the excess variance of the whole light curve.
Every excess variance value computed for each interval is associated with an error given by Vaughan et al. (2003):
$\mathrm{err}(\sigma_{\rm NXS}^2) = \sqrt{\left(\sqrt{\frac{2}{N}}\,\frac{\overline{\sigma_{\rm err}^2}}{\mu^2}\right)^2 + \left(\sqrt{\frac{\overline{\sigma_{\rm err}^2}}{N}}\,\frac{2 F_{\rm var}}{\mu}\right)^2},$ (2)
where $\overline{\sigma_{\rm err}^2} = \sum_{i}\sigma_i^2/N$ is the mean square error and $F_{\rm var} = \sqrt{\sigma_{\rm NXS}^2}$ is the fractional variability.
Following Ponti et al. (2012), we considered the following three cases. When only one interval was present for the considered light curve (Δt<20 ks, one observation in our sample), the error
associated with the excess variance is given by Eq. (2), computed in the single interval available. If the number of intervals is between 2 and 9 (i.e. 20≤Δt<100 ks, 33 observations), we
calculated Eq. (2) for each interval and took its average value as the global error on the whole light curve. Finally, when the number of intervals exceeded 10 (i.e., Δt≥100 ks, 12 observations),
the standard deviation of the excess variance is adopted as error. We caution that the distribution of σ[NXS]^2 is Gaussian only for a number of intervals n≳20 (e.g., Allevato et al. 2013).
However, given that this condition is only satisfied for a total of four observations of two sources, we opted for the less conservative approach described above.
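The segmentation and error recipe above can be sketched in a few lines of Python (a minimal illustration, not the authors' pipeline; it assumes a light curve already binned at 1 ks, with count rates and their errors as arrays):

```python
import numpy as np

def excess_variance(rate, err):
    """Normalized excess variance (Eq. 1) of one light-curve segment."""
    mu = rate.mean()
    n = rate.size
    return ((rate - mu) ** 2 - err ** 2).sum() / (n * mu ** 2)

def nxs_10ks(rate, err, bin_ks=1, seg_ks=10):
    """Average sigma_NXS^2 over consecutive 10 ks segments (leftovers dropped)."""
    per_seg = seg_ks // bin_ks
    n_seg = rate.size // per_seg
    vals = [excess_variance(rate[i * per_seg:(i + 1) * per_seg],
                            err[i * per_seg:(i + 1) * per_seg])
            for i in range(n_seg)]
    return np.mean(vals), n_seg

def nxs_error(rate, err):
    """Uncertainty on sigma_NXS^2 for one segment (Eq. 2, Vaughan et al. 2003)."""
    mu, n = rate.mean(), rate.size
    mse = (err ** 2).mean()                              # mean square error
    fvar = np.sqrt(max(excess_variance(rate, err), 0.0))  # fractional variability
    term1 = np.sqrt(2.0 / n) * mse / mu ** 2
    term2 = np.sqrt(mse / n) * 2.0 * fvar / mu
    return np.sqrt(term1 ** 2 + term2 ** 2)
```

Per the text, the quoted uncertainty would be `nxs_error` of the single segment when only one is available, the average of the per-segment values for 2–9 segments, and the standard deviation of the per-segment σ[NXS]^2 for 10 or more segments.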
We computed the excess variance in the full (3–79 keV), soft (3–10 keV) and hard (10–40 keV) NuSTAR bands, which are listed in Table 3. As shown in Fig. 1, as expected, the broadband variability in
the full band is dominated by the soft band variability, as most of the signal is detected in this energy band. Nearly all the points lie on the bisector line. Indeed, fitting the σ[NXS]^2 in
the two bands in logarithmic scale, we obtain a very tight correlation (r=0.95 with negligible or null probability) and a slope of 0.96±0.05, indicating that the variability of the
continuum in the 3–10 keV band is dominant. We also find a tight correlation (r=0.90) between the excess variances in the soft 3–10 keV and the hard 10–40 keV bands, which is, however, not
consistent with the bisector at the 90% confidence level, as we find an angular coefficient of 0.7±0.1 and an intercept of −0.7±0.2.
Since all three bands are so tightly correlated, in the following sections, we only used the excess variance derived from the light curves extracted in the full NuSTAR energy band (E=3−79 keV).
Full band light curves of every observation for each AGN of the sample here presented are shown in Appendix D.
5. Discussion
5.1. Physics of the corona
We considered the best-fit values of the temperature, kT, and the optical depth, τ, for each source, as listed in Table 2. We find a strong correlation between the coronal temperature and the optical
depth, as reported in Fig. 2. Indeed, the τ−kT relations have a correlation coefficient of r=−0.96 for the slab geometry, and r=−0.97 for the sphere geometry, both with a negligible probability
of finding such correlations by chance. We fit the linear relation log kT=alog τ+b and found best-fit parameters of a=−0.76±0.08 and b=1.54±0.05, assuming a slab geometry. For the
spherical corona, we obtained a=−0.90±0.09 and b=1.91±0.07. These results are consistent at 90% confidence level with those obtained by Tortosa et al. (2018a) with a smaller sample.
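The best-fit log-linear relations can be evaluated directly to give the temperature implied by a given optical depth. A small helper using the coefficients quoted above (illustrative only; the quoted uncertainties on a and b are ignored):

```python
import numpy as np

# Coefficients of log kT = a*log(tau) + b from Sect. 5.1
COEFFS = {"slab": (-0.76, 1.54), "sphere": (-0.90, 1.91)}

def kT_from_tau(tau, geometry="slab"):
    """Coronal temperature (keV) implied by the best-fit kT-tau relation."""
    a, b = COEFFS[geometry]
    return 10.0 ** (a * np.log10(tau) + b)

print(f"{kT_from_tau(2.5, 'slab'):.1f} keV")    # ~17 keV for tau = 2.5
print(f"{kT_from_tau(5.7, 'sphere'):.1f} keV")  # ~17 keV for tau = 5.7
```

Individual sources scatter around this relation, so the returned values are trend predictions, not measurements.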
Fig. 2.
Temperature versus optical depth plot. The kT−τ points obtained assuming a slab geometry are shown in blue, with the blue line indicating the best-fit line. The red points and line denote the
points obtained with a sphere geometry and their best-fit line, respectively. Confidence intervals are shown at 90% level.
It is very important to exclude any possible systematic effect when two quantities are found to be tightly correlated. To do that, we built a simple AGN spectrum made of neutral absorption modeled with TBABS, a Comptonizing continuum modeled with the physical model COMPTT, and an ionized reflection component modeled with XILLVER, a mock version of model A. We simulate 1000
NuSTAR spectra assuming random parameters in predetermined ranges. We assume that the column density may take any value between N[H]=10^20 cm^−2 and N[H]=10^23 cm^−2. We allowed for a wide range
of photon indices in the reflection spectrum, from a flat spectrum with Γ=1.45 to a very steep one with Γ=2.5. The iron abundance is taken in the range A[Fe]=[0.55], while the ionization parameter ranges from a nearly neutral (log ξ/(erg cm s^−1) = 0) to a highly ionized reflector (log ξ/(erg cm s^−1) = 4.7). For simplicity, we tie the reflection normalization to that of the continuum,
assuming a reflection parameter in the range ℛ=[−0.1−1], where the negative values have been adopted in order to simulate pure reflection spectra. The adopted ranges of coronal temperature (kT[e]
<500 keV) and optical depth (τ<10) are roughly based on the values of Table 2. Finally, the normalization of the continuum is chosen between 10^−4 counts cm^−2 s^−1 keV^−1 and 5×10^−2 counts cm^
−2 s^−1 keV^−1, which corresponds to fluxes in the range F[X]=5×10^−14−10^−10 erg cm^−2 s^−1. Each simulation was run assuming a different exposure, between a minimum of 10 ks and a maximum of 100 ks, using simulated background spectra and the latest NuSTAR response and effective area. As shown in Fig. 3, over the adopted large range of parameters, the average values of the best-fit parameters kT and τ (right) are not much different from the simulated ones (left); the exception is the last bin (kT>450 keV), where the optical depth tends to be slightly overestimated, due to
the lower sensitivity of NuSTAR to high-temperature coronae. The simulations were designed with uncorrelated kT−τ; therefore, if a spurious correlation was indeed present, it would also be expected
to be present in the right panel of Fig. 3. This suggests that from a statistical point of view, the correlation found in the data is not spurious. It is worth noting that the sample has been
selected not to be strongly contaminated by complex, ionized, and multiple soft X-ray absorbers (see Sect. 2); thus, we do not expect model systematics to affect the continuum best-fit parameters.
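The logic of this null test can be sketched as follows. This is a toy stand-in: where the paper fits full mock NuSTAR spectra, the "fitted" values here simply scatter around the injected ones, which is enough to show that uncorrelated inputs yield an uncorrelated recovered kT−τ plane.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim = 1000

# Draw uncorrelated "true" coronal parameters over ranges similar to Sect. 5
kT_true = rng.uniform(5.0, 500.0, n_sim)    # keV
tau_true = rng.uniform(0.1, 10.0, n_sim)

# Toy stand-in for the spectral fit: recovered values scatter around the
# injected ones (the paper instead fits full simulated NuSTAR spectra)
kT_fit = kT_true * rng.lognormal(0.0, 0.1, n_sim)
tau_fit = tau_true * rng.lognormal(0.0, 0.1, n_sim)

# If the analysis introduced no spurious kT-tau link, r should be ~0
r = np.corrcoef(np.log10(kT_fit), np.log10(tau_fit))[0, 1]
print(f"recovered kT-tau correlation: r = {r:.3f}")
```

A recovered |r| far from zero would flag a systematic coupling between the two fitted parameters.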
Fig. 3.
Temperature and optical depth of the simulated spectra used to validate the correlation found in the NuSTAR data. A thousand NuSTAR spectra were simulated with random values of kT and τ (see
details in Sect. 5). The left panel shows the values used to simulate the data, with average values in bins of temperature (red), while the right panel shows the temperature and optical depth found
when fitting the simulated spectra, with the average values in temperature bins.
As discussed by Tortosa et al. (2018a), the observed kT−τ anticorrelation is not consistent with a global disk-corona configuration in radiative balance, which would imply the same
heating-to-cooling ratio (HCR) for the coronae of all the AGN in the sample. One possibility to explain this anticorrelation is that colder coronae are more compact. In this case, a more compact
corona would imply a larger optical depth and hence a larger number of scatterings. This would lead to more efficient cooling (a smaller HCR); vice versa, a less compact corona would imply a smaller number of scatterings, resulting in a larger HCR. Another possibility is that the disk-corona configuration is the same for all sources, but the sources might show different thermal emission
due to viscous dissipation over the whole disk emission, which would also result in a larger cooling efficiency.
We also investigate a possible dependence of the coronal temperature on the physical properties of the AGN, such as the black hole mass, M[BH], and the Eddington ratio, λ=L[bol]/L[Edd]. As
described in Sect. 2, the masses and bolometric luminosities were retrieved from the catalog published by Koss et al. (2022), with the sole exception of Ark 564, whose mass was taken from Peterson et al. (2004) and whose bolometric luminosity was obtained by applying the bolometric correction of Marconi et al. (2004) to the X-ray luminosity measured from the present data. As shown in Fig. 4,
there is no indication of a possible correlation between kT or τ with either the black hole mass or the Eddington ratio. This is consistent with past studies of coronal parameters (e.g., Tortosa et
al. 2018a; Kamraj et al. 2022). However, we note that Ricci et al. (2018) found a trend between λ[Edd] and the cut-off energy, E[c], when considering a very large sample of sources.
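For reference, the Eddington ratio used here follows from λ = L[bol]/L[Edd] with L[Edd] ≃ 1.26×10^38 (M[BH]/M[⊙]) erg s^−1 (the standard hydrogen-dominated value). A one-line helper; the example numbers are illustrative, not taken from Table 1:

```python
# Eddington ratio from black hole mass and bolometric luminosity.
# L_Edd = 1.26e38 * (M_BH / M_sun) erg/s for hydrogen-dominated gas.
def eddington_ratio(L_bol, M_BH):
    """L_bol in erg/s, M_BH in solar masses."""
    return L_bol / (1.26e38 * M_BH)

# e.g. a 1e8 M_sun black hole radiating 1e45 erg/s
lam = eddington_ratio(1e45, 1e8)
print(f"lambda_Edd = {lam:.3f}")  # 0.079
```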
Fig. 4.
We report here the coronal parameters (kT and τ) vs the black hole mass (left panels) and the Eddington ratio (right panels) for the sources in our sample (see Table 1). The blue dots are the
temperatures obtained assuming a slab corona, while the red dots are those obtained assuming a spherical one.
Fig. 5.
Excess variance in the E=3−79 keV energy band versus the black hole mass of the AGN (left panel). The excess variances shown here are averaged among the different epochs. A clear trend σ[NXS]^2∝M[BH]^−0.6 is found, with a strong anticorrelation (r=−0.7). Given their large uncertainties, masses obtained with the velocity dispersion method (red diamonds) are excluded from the fit. The middle panel shows the average excess variance vs. the unabsorbed X-ray luminosity L[X]; we find a trend σ[NXS]^2∝L[X]^−0.4 with a moderate anticorrelation (r=−0.5). The right panel shows the average excess variance vs. the Eddington ratio; no evident correlation is found. Also in this panel, the sources with mass measurements obtained with the velocity dispersion method are drawn as red diamonds.
5.2. X-ray variability and coronal properties
We investigate a possible correlation between the excess variance in the full NuSTAR band (E=3−79 keV) and the mass of the black hole (Fig. 5, left panel). For this purpose, we exclude the
sources for which Koss et al. (2022) report the mass value from the velocity dispersion of the stars in the host galaxy, as they are affected by large uncertainties. We find that the two quantities
are well correlated, with a correlation coefficient of r=−0.7 and a null probability of p[null]≃10^−5. If we perform a linear fit on the logarithmic quantities, namely, log σ[NXS]^2 = a log(M[BH]/10^5 M[⊙]) + b, we obtain a=−0.6±0.2 and b=−1.2±0.7. An anticorrelation was also found in previous variability analyses with XMM-Newton (e.g., Ponti et al. 2012; Tortosa et al. 2023). We tested
whether the excess variance depends on the X-ray luminosity, L[X], and we found a moderate anticorrelation of r=−0.5 (p[null]≃0.02), with the two quantities related as σ[NXS]^2∝L[X]^a, with a=−0.4±0.3 (Fig. 5, middle panel). Despite not being highly significant, this relation is also found in other analyses with a much larger number of sources (e.g., Vagnetti et al. 2016; Prokhorenko
et al. 2024). We also report an anticorrelation (r∼−0.7 and p[null]∼10^−5) with the bolometric luminosity (σ[NXS]^2∝L[bol]^a with a=−0.6±0.3). As shown in Fig. 6, though, both L[X] and L[bol] are correlated with the black hole mass, scaling as ∼M[BH]^0.6, with correlation coefficients r∼0.5 (p[null]∼0.02) and r∼0.7 (p[null]∼10^−5), respectively. Therefore, it is likely that
the two luminosity relations are degenerate with the black hole mass relation. Alternatively, the variability-luminosity relation has often been attributed to a superposition of small flares (e.g.,
Nandra et al. 1997; Almaini et al. 2000).
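The mass-variability fit, with the velocity-dispersion masses masked out as described above, can be sketched as follows. The arrays are hypothetical placeholders for the values of Tables 1 and 3:

```python
import numpy as np

# Hypothetical sample: masses (M_sun), averaged excess variances, and a
# flag marking masses from stellar velocity dispersion, which are
# excluded from the fit as in Sect. 5.2. Placeholder values only.
m_bh = np.array([3.2e6, 1.8e7, 4.5e7, 1.3e8, 3.8e8, 2.5e8])
sigma2 = np.array([3e-2, 8e-3, 4e-3, 1.5e-3, 6e-4, 9e-4])
from_vdisp = np.array([False, False, True, False, False, False])

keep = ~from_vdisp
x = np.log10(m_bh[keep] / 1e5)   # log(M_BH / 1e5 M_sun), as in the text
y = np.log10(sigma2[keep])
a, b = np.polyfit(x, y, 1)       # log sigma_NXS^2 = a*x + b
r = np.corrcoef(x, y)[0, 1]
print(f"slope a = {a:.2f}, intercept b = {b:.2f}, r = {r:.2f}")
```

The same masking-and-fitting pattern applies to the luminosity relations discussed next.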
Fig. 6.
Bolometric (upper panel) and X-ray (lower panel) luminosities vs. black hole mass of the sources considered in this work. Both luminosities scale with mass as M[BH]^0.6, though the L[bol] relation
shows a larger correlation coefficient (r∼0.7) than the L[X] relation (r∼0.5).
We also test the possible relation between the excess variance and the Eddington ratio derived in Sect. 5.1 (see Fig. 5, right panel), finding a correlation coefficient r=−0.03, with a probability
of finding such a correlation by chance of p[null]=0.3. This suggests a lack of correlation between the Eddington ratio and the X-ray variability. We also tried to remove the previously described dependence on the mass by checking the mass-corrected quantity σ[NXS]^2×M[BH]^0.6, but this quantity is also uncorrelated with the Eddington ratio.
We analyzed the possible correlation between the excess variance and the coronal parameters of the 20 sources analyzed here. We do not find a relevant dependence of the excess variance in the 3–79 keV energy band on the temperature obtained with either geometry, as the correlation coefficient is r=−0.02, with null probability p[null]=0.3, and the angular coefficient is consistent with a
flat line (see Fig. 7, left panel). For completeness, we also show that the excess variance and the optical depth are not correlated (r=0.01, p[null]=0.35, see Fig. 7, right panel). This result
was expected once the σ[NXS]^2−kT relation was found to be absent, given the tight kT−τ correlation found in Sect. 5.1.
Fig. 7.
Excess variance in the E=3−79 keV energy band versus the coronal temperature (left panel) and the optical depth (right panel) of the sample analyzed in this work. Blue circles identify
temperature and optical depth values obtained assuming a slab geometry, while the red triangles indicate the values obtained assuming a spherical geometry.
These results suggest that the X-ray variability on timescales of 10 ks depends on neither the coronal temperature nor the optical depth, raising questions about the origin of the X-ray variability.
Indeed, the corona must introduce variability at timescales shorter than the ones observed for the UV radiation, which provides the seed photons for the Comptonization. Additionally, the corona must
be responsible for the X-ray variability, because the UV light curves lag behind the X-ray ones (e.g., Kammoun et al. 2021).
We note that, so far, we have considered the corona to be in thermal equilibrium, meaning that we have computed average values of the temperature and optical depth over the whole observation, which is indeed longer than the timescale over which we have probed the variability. Therefore, one possibility is that the X-ray variability could be driven by changes in the temperature and optical depth
at timescales that are consistent with the observed flux variations (i.e., below 10 ks). However, measuring time-resolved temperatures is not yet possible with the currently most advanced hard X-ray
telescope, NuSTAR. Moreover, the physical environment could be much more complex than the simplified picture above. For instance, geometry is undoubtedly a parameter that could play a major role in
driving the X-ray variability. In fact, variations in the intrinsic geometry or disk-corona geometry, such as the ones observed in X-ray binaries (e.g., Kara et al. 2019), could also drive the X-ray
variability. It would be expected that, following a geometry variation, the temperature and optical depth would change accordingly, but these variations may also happen at shorter timescales than the
ones probed by the spectral fits, possibly even shorter than 10 ks.
An additional complexity to take into account is a possible spatial (likely radial) distribution of the temperature of the corona, whereas we have assumed a single average value for the whole electron plasma. Averaging the temperature in space and time may smear out the link between the best-fit results and the calculations of the variability.
Another possibility is that the variability is mainly driven by the observed anticorrelation with the black hole mass, which is likely proportional to the absolute size of the corona. If we consider,
for instance, that all coronae of the AGN in the sample have a typical coronal radius of R[c]=10R[g] (e.g., Dovčiak & Done 2016; Ursini et al. 2020a), where R[g]=GM[BH]/c^2 is the gravitational
radius, the coronal size in physical units would be directly proportional to the supermassive black hole mass. However, we note that the size of the coronae in R[g] units could differ from source to
source, as discussed in the previous section; furthermore, the relation between coronal size and black hole mass, although presumably increasing, may be far from trivial to derive. In any case, even considering more complex relations between the coronal size and the black hole mass, more massive black holes correspond to larger coronae, which would imply a larger number of random scatterings of the seed photons in the corona, resulting in a smaller X-ray variability amplitude.
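This scaling argument can be made concrete: for an assumed coronal radius of R[c]=10R[g], the physical size of the corona grows linearly with black hole mass. A sketch in cgs units; the choice of 10R[g] follows the references above and is only illustrative:

```python
# Physical coronal size for an assumed radius R_c = 10 R_g, showing that
# a size fixed in gravitational units scales linearly with mass.
G = 6.674e-8          # gravitational constant, cgs
c = 2.998e10          # speed of light, cm/s
M_sun = 1.989e33      # solar mass, g

def coronal_size_cm(m_bh_solar, r_c_in_rg=10.0):
    """Coronal radius in cm, given M_BH in solar masses."""
    r_g = G * m_bh_solar * M_sun / c**2   # gravitational radius
    return r_c_in_rg * r_g

# a 1e6 vs 1e9 M_sun black hole: three orders of magnitude in size
print(coronal_size_cm(1e6))  # ~1.5e12 cm
print(coronal_size_cm(1e9))  # ~1.5e15 cm
```

A photon random-walking through a larger (same-τ) corona traverses a longer path, which is the qualitative basis for the smoothing of fast variability in more massive systems.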
6. Summary and conclusions
We have presented a spectral and timing study of 20 bright AGN with the best signal-to-noise ratio available based on NuSTAR data. We measured the temperature, kT[e], and optical depth, τ, of the
X-ray emitting corona under two different geometries (sphere and slab) by modeling the spectrum continuum with the Comptonization model COMPTT. Additionally, we have studied the NuSTAR variability in
the time range between 1 and 10 ks by means of the excess variance σ[NXS]^2. We note that the results of this paper are specific to the sample at hand, which comprises the highest quality data available to date. We summarize our results as follows:
• We report that no correlation is found between the X-ray variability and the electron temperature or the optical depth of the corona. This may imply that the X-ray variability depends on kT and τ variations on timescales below 10 ks, which is the timescale probed by the variability analysis in this paper.
• We did find a strong anticorrelation between kT and τ adopting both slab and spherical geometries. In particular, we find that the temperature is related to the optical depth by a relation kT∝τ^a with a∼−0.8 (slab) to −0.9 (sphere) (Fig. 2), depending on the assumed geometry. Therefore, we confirm the trend found by Tortosa et al. (2018a), with our AGN sample size increased by a factor of 2 with respect to the cited work.
• We did not find any dependence of kT[e] and τ on either the black hole mass or the Eddington ratio. This is also consistent with previous results on other samples of AGN (e.g., Tortosa et al. 2018a
; Kamraj et al. 2022).
• There is a strong correlation between the excess variances in the 3–10 keV and 3–79 keV bands, which implies that the variability of the X-ray emission below 10 keV dominates over the rest of the spectrum. We also find a strong correlation between the variability in the 10–40 keV band and that in the 3–10 keV band.
• We found a strong anticorrelation between the X-ray variability and the mass, following σ[NXS]^2∝M[BH]^−0.6. This correlation is consistent with the one found by past variability studies with XMM-Newton (e.g., Ponti et al. 2012; Tortosa et al. 2023). We also report a moderate anticorrelation with the X-ray luminosity, σ[NXS]^2∝L[X]^−0.4, while no correlation is seen with the Eddington ratio.
The results of our study show that the main driver of the X-ray continuum variability produced in the hot corona remains elusive; furthermore, it is not even clear whether there is a main driver of the observed variability at all, as it may instead be the product of the superposition of several effects at work. We have shown here that the variability at 10 ks does not depend on the physical
properties of the corona, namely, electron temperature and optical depth. This then raises the question of what drives the X-ray variability. One possibility is that we might be probing different
timescales, since we are studying relatively fast variability within 10 ks; on the other hand, we have averaged the coronal temperature over days, months, and even years to reach a sufficient
signal-to-noise ratio that would allow for an accurate measurement of the coronal temperature. Variations in the coronal geometry may also play an important role in producing the observed
variability. In the future, detectors sensitive in a broadband energy range with much larger effective area, such as the Large Area Detector (LAD; e.g., Feroci et al. 2022) proposed for the future
enhanced X-ray Timing and Polarimetry mission (eXTP; Zhang et al. 2019) and the Spectroscopic Time-Resolving Observatory for Broadband Energy X-rays (STROBE-X; Ray et al. 2018), will allow us to
measure AGN coronal temperatures with high precision for exposures as short as 10 ks (De Rosa et al. 2019). This would open up the possibility to probe both the coronal parameters and variability on
the same timescale. Moreover, thanks to an extremely broad E=0.2−80 keV band, currently available only through joint NuSTAR observations with other facilities such as XMM-Newton, the future X-ray telescope HEX-P (Kammoun et al. 2024) will be able to measure optical depths and temperatures with much better accuracy than NuSTAR, with much shorter exposures.
Data availability
Appendices C and D are available at https://doi.org/10.5281/zenodo.12807347
We recall that XILLVERCP models both continuum and reflection, fitting the reflection fraction ℛ>0 and assuming NTHCOMP as continuum. When the reflection fraction is frozen to ℛ=−1, XILLVERCP only models a pure reflection spectrum, which means that a second component is then needed to model the continuum.
We thank the referee for their comments, which improved the quality of this paper. The authors thank Iossif Papadakis and Julien Malzac for useful discussions on the results of this paper. RS and ADR
acknowledge support from the agreements ASI-INAF n.2017-14-H.0 “Science case study and scientific simulations for the enhanced X-ray Timing Polarimetry mission, eXTP”, ASI-INAF eXTP Fase
B-2020-3-HH.1-2021, Bando Ricerca Fondamentale INAF 2022 Large Grant “Dual and binary supermassive black holes in the multi-messenger era: from galaxy mergers to gravitational waves” and the
INAF-PRIN grant “A Systematic Study of the largest reservoir of baryons and metals in the Universe: the circumgalactic medium of galaxies” (No. 1.05.01.85.10). AT acknowledges financial support from
the Bando Ricerca Fondamentale INAF 2022 Large Grant “Toward an holistic view of the Titans: multi-band observations of z > 6 QSOs powered by greedy supermassive black holes”. SB acknowledges funding
from PRIN MUR 2022 SEAWIND 2022Y2T94C, supported by European Union - Next Generation EU, from INAF LG 2023 BLOSSOM, and from the EU grant AHEAD-2020 (GA no. 871158). CR acknowledges support from the
Fondecyt Regular grant 1230345 and ANID BASAL project FB210003. POP acknowledges financial support from the French space agency (CNES) and the National High Energy Programme (PNHE) of CNRS. This
research has made use of data and software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and
the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory. This research has made use of the NuSTAR Data Analysis Software (NUSTARDAS) jointly developed by the ASI Space
Science Data Center (SSDC, Italy) and the California Institute of Technology (Caltech, USA).
Appendix A: Data
Appendix B: Spectral fits
This is a Seyfert 1 galaxy at z=0.104, with a black hole mass of M[BH]=3.8×10^8 M[⊙]. We binned the spectra at a minimum of 100 counts per energy bin over the whole 3−79 keV energy range. A
Galactic absorption with column density N[H,Gal]=1.14×10^20 cm^−2 is adopted. The spectra are well fitted by an absorbed continuum plus non-relativistic reflection (model A, ztbabs*
(comptt+xillvercp)). We let the intrinsic absorption column density N[H] vary between observations. We find N[H]=(6±4)×10^21 cm^−2 for the 2015 observation and N[H]=(1.0±0.5)×10^22 cm^−2
for both the 2018 observations. When we assume a slab geometry (parameter approx = 0.5) we obtain kT = 14^{+2}_{−1} keV and an optical depth τ = 2.5^{+0.2}_{−0.3}, while when a sphere geometry (approx = 1.5) is adopted we find the same temperature and an optical depth of τ = 5.7^{+0.4}_{−0.5}. We find a goodness of fit of χ^2/dof=1566/1588. The results are consistent with the broadband
analysis performed by Turner et al. (2018), although we note that in that work the OPTXAGNF model (Done et al. 2012) was used.
4C 50.55
4C 50.55 is a radio-loud Seyfert 1 galaxy at redshift z=0.02, with a black hole mass of 6.4×10^7 M[⊙]. We bin the NuSTAR spectra at a minimum of 100 counts per energy bin and consider the
full E=3−79 keV energy band. We note that Tazaki et al. (2010) ruled out a significant role of the jet in the X-ray spectrum of this source using Suzaku. We consider a Galactic column of N
[H,Gal]=9.45×10^21 cm^−2. The source is well fitted by an absorbed continuum plus a single disk reflector, i.e. ztbabs*(comptt+xillvercp) (model A). The column densities of the two observations
are N[H]=(3.0±0.4)×10^22 cm^−2 and N[H]=(2.0±0.3)×10^22 cm^−2 for the 2014 and 2018 observations, respectively. We find a temperature of kT = 18^{+5}_{−2} keV (kT = 18^{+3}_{−2} keV) and an optical depth of τ = 2.2^{+0.2}_{−0.3} (τ = 5.2^{+0.4}_{−0.5}) assuming a slab (sphere) geometry. The statistic is χ^2/dof=1325/1243. These results are consistent with the analysis of Buisson
et al. (2018).
Ark 564
Ark 564 is a Narrow-Line Seyfert 1, with mass M[BH]=3.2×10^6 M[⊙] (Peterson et al. 2004). The NuSTAR spectra have been binned at a minimum of 100 counts per energy bin and only considered in the
E=3−30 keV energy range. A Galactic column of N[H,Gal]=5×10^20 cm^−2 is assumed. A simple power-law model reveals a single narrow Fe Kα component and a reflection component in the residual spectra. Therefore, the reflection is modeled with model A (compTT+xillvercp), with no absorption needed. Assuming a slab (sphere) geometry we obtain kT=15±2 keV (16±1 keV) and an optical depth
of τ=1.4±0.1 (3.3±0.3). The statistic is χ^2/dof=1267/1127. A detailed broadband spectral analysis, which is consistent with the results presented here, can be found in Kara et al. (2017).
ESO 103-G35
ESO 103-G35 is a Seyfert 2 galaxy at redshift z=0.00914 with a black hole mass of M[BH]=1.3×10^7 M[⊙]. The four FPM spectra were binned at a minimum of 100 counts per energy bin in the E=
3−50 keV energy band. We consider a Galactic absorption of N[H,Gal]=5.8×10^20 cm^−2. The source X-ray spectra appear reflection-dominated due to heavy obscuration; therefore, we adopt a model consisting of a distant reflector plus continuum, with intrinsic absorption on the line of sight, i.e. ztbabs*(comptt+borus12) (model C). The two observations are characterized by nearly the
same Compton-thin absorbing column density, which is consequently kept tied between epochs, of N[H]=(1.65±0.08)×10^23 cm^−2. The temperature and optical depth assuming a slab geometry are kT = 17^{+18}_{−3} keV and τ = 2.2^{+0.3}_{−0.7}, respectively, while the two parameters assuming a sphere geometry are kT = 17^{+7}_{−3} keV and τ=5.2±0.7. The goodness of fit is χ^2/dof=1320/
1302. The results are consistent with the analysis of Buisson et al. (2018), in which a different reflection model was used.
ESO 362-G18
ESO 362-G18 is a Seyfert 1.5 galaxy at redshift z=0.01244 with a black hole mass M[BH]=4.5×10^7 M[⊙]. The spectra were binned at a minimum of 50 photon counts per energy bin in the full E=
3−79 keV NuSTAR energy band. We consider a Galactic absorption of N[H,Gal]=1.35×10^20 cm^−2. The X-ray spectrum has a typical Seyfert 1 shape with no evident intrinsic absorption. Following
Agís-González et al. (2014) we model the spectrum with two reflection components, i.e. comptt+xillvercp+relxillcp (model B). For a slab (sphere) coronal geometry we obtain a temperature of kT = 18^{+14}_{−4} keV (kT = 18^{+15}_{−4} keV) and an optical depth of τ = 2.5^{+0.4}_{−0.9} (τ=5.2±0.7). The goodness of fit is given by χ^2/dof=464/641.
ESO 383-G18
ESO 383-G18 is a Seyfert 2 galaxy with z=0.01241, with M[BH]=3×10^5 M[⊙]. We bin the spectra of OBSID 60061243002 at a minimum of 50 counts per energy bin and the spectra of OBSID 60261002002
at a minimum of 100 counts per energy bin. We consider the E=3−50 keV energy band for both observations. We assume a Galactic absorption of N[H,Gal]=3.8×10^20 cm^−2. The X-ray spectrum
appears severely absorbed by intervening cold material, which suggests that the reflector is best modeled by distant cold material, i.e. ztbabs*(comptt+borus12) (model C). The absorbing column densities of the two observations are consistent within the errors at the 90% confidence level; therefore, we decided to keep them tied and obtain N[H]=(1.2±0.2)×10^23 cm^−2. The temperature is identical assuming either a slab or a spherical corona, i.e. kT = 7.3^{+0.5}_{−0.4} keV, while we find τ=4.7±0.3 for a slab geometry and τ=10.1±0.6 for a spherical one. The goodness
of fit is χ^2/dof=1035/1054.
GRS 1734-292
GRS 1734-292 is a Seyfert 1 galaxy, having mass M[BH]=2.5×10^8 M[⊙]. We binned each FPM module of the two NuSTAR epochs at a minimum of 100 counts per energy bin, and considered the full E=
3−79 keV energy band. We adopt a Galactic absorption of N[H,Gal]=6.5×10^21 cm^−2. The source is also intrinsically moderately absorbed, with N[H] = 4^{+4}_{−3}×10^21 cm^−2 for OBSID 60061279002 and N[H]=(15±4)×10^21 cm^−2 for OBSID 60301010002. An absorbed power law shows only narrow Fe Kα residuals; therefore, we adopt model A plus absorption (ztbabs(compTT+xillvercp)). We obtain a coronal temperature of kT = 13^{+2}_{−1} keV for both slab and sphere geometries. An optical depth of τ=2.9±0.2 (6.5±0.4) is recovered in a slab (sphere) geometry. We obtain a
statistic of χ^2/dof=1103/983. These results are consistent with the broadband analysis presented in Tortosa et al. (2017), in which OBSID 60061279002 was studied.
HE 1143-1810
HE 1143-1810 is a Seyfert 1 galaxy at redshift z=0.0329 with an estimated black hole mass M[BH]=4×10^7 M[⊙]. The 10 FPM spectra were all binned at a minimum of 100 photon counts per energy bin
in the E=3−79 keV energy band. The Galactic absorption is fixed at N[H,Gal]=3×10^20 cm^−2. The spectrum has a typical unabsorbed shape, therefore no neutral absorption is needed. The source
is nicely modeled by a single reflector plus continuum, i.e. comptt+xillvercp (model A). Assuming a slab (spherical) coronal geometry we find a temperature of kT = 36^{+64}_{−19} keV (kT = 22^{+61}_{−5} keV) and an optical depth of τ = 1.2^{+1.1}_{−0.8} (τ = 4.4^{+0.9}_{−2.9}). The goodness of fit is χ^2/dof=2639/2665. These values are largely in agreement with the broadband analysis
presented in Ursini et al. (2020b).
IC 4329A
IC 4329A is a Seyfert 1 galaxy with mass M[BH]=1.3×10^8 M[⊙]. We bin both FPM modules of the analyzed observation (OBSID 60001045002) at a minimum of 100 counts per energy bin. We consider the full
NuSTAR band, i.e. E=3−79 keV. The Galactic absorption is N[H,Gal]=4×10^20 cm^−2. A modest absorption of N[H]=(5±2)×10^21 cm^−2 is included. A single distant reflector is needed,
therefore we adopt model A. The coronal temperature for a slab (sphere) coronal geometry is kT = 44^{+20}_{−10} keV (44^{+17}_{−11} keV) and the optical depth is τ=1.1±0.3 (2.8±0.7). The
goodness of fit is χ^2/dof=1411/1298. A broadband spectral analysis is presented in Brenneman et al. (2014) and Ingram et al. (2023), with results consistent with our NuSTAR fits.
MCG-5-23-16
MCG-5-23-16 is a Seyfert 1.9 galaxy with a mass of M[BH]=1.8×10^7 M[⊙]. All FPMA and FPMB spectra of the five available observations are binned at 100 counts per energy bin and the 3−79 keV
energy range is considered. We assume a Galactic absorption of N[H,Gal]=8×10^20 cm^−2 and we include cold intrinsic absorption with the model ztbabs, finding a column density N[H]=(1.4±0.1)
×10^22 cm^−2. When the source is modeled with a single reflector plus continuum with xillvercp, we find significant residuals in the Fe Kα region, strongly hinting at the presence of a broad component,
likely a reflection component from the accretion disk. Therefore, we adopt the absorbed model B, ztbabs(compTT+xillvercp+relxillcp). Following Serafinelli et al. (2023a), we assume an emissivity
index of −3, an inner radius R[in]=40R[g], inclination 45°, and a non-spinning black hole. Assuming a slab geometry, we find a coronal temperature kT=44±11 keV and an optical depth τ=
0.9±0.2, while kT = 35^{+10}_{−11} keV and τ=2.7±0.3 are found if a spherical coronal geometry is assumed. The goodness of fit is χ^2/dof=6995/5776. These values are consistent at 99%
confidence level with the broadband analyses of Baloković et al. (2015) and Serafinelli et al. (2023a) performed on this source.
MCG+8-11-11
MCG+8-11-11 is a Seyfert 1 galaxy with mass M[BH]=1.6×10^7 M[⊙]. The FPMA and FPMB spectra were binned at 100 counts per energy bin over the whole energy range (E=3−79 keV), and a Galactic
column density of N[H]=1.75×10^21 cm^−2 is assumed. No intrinsic cold absorption is present. Two reflection components are required in the model, as the Fe Kα emission line is not adequately
fitted by a single narrow component. We adopt therefore model B, compTT+xillvercp+relxillcp. We assume a non-spinning black hole and a disk with inner radius R[in]=2R[g], where R[g]=GM/c^2 is the
gravitational radius, and emissivity index −3. We find a coronal temperature of kT = 110^{+140}_{−40} keV and an optical depth of τ=0.3±0.2 when assuming a slab coronal geometry. When a spherical geometry is assumed for the corona, we obtain kT = 90^{+125}_{−45} keV and τ = 1.2^{+0.7}_{−0.4}. The goodness of fit is χ^2/dof=873/851. These values are consistent with the results of Tortosa
et al. (2018b), where these data were analyzed jointly with contemporaneous Swift-XRT data.
Mrk 6
Mrk 6 is a Seyfert 1.5 galaxy at z=0.01951 with mass M[BH]=1.9×10^7 M[⊙]. All four FPM spectra were binned at a minimum of 100 counts per energy bin. We consider the E=3−50 keV energy band.
The X-ray spectrum appears severely absorbed, as in previous analyses of this source (e.g., Molina et al. 2019). Therefore, once the Galactic absorption is modeled with a fixed N[H,Gal]=7.6×10^20 cm^−2, we model the spectra with an absorbed continuum and a distant neutral reflector, i.e. ztbabs*(comptt+borus12) (model C). For OBSIDs 60102044002 and 60102044004 we find intrinsic column densities of N[H]=(1.2±0.1)×10^23 cm^−2 and N[H]=(1.0±0.1)×10^23 cm^−2, respectively. The temperature is kT = 13^{+4}_{−2} keV assuming either a slab or a spherical coronal geometry, while the optical depth is τ = 3.4^{+0.4}_{−0.6} and τ = 7.5^{+0.7}_{−1.1}, respectively. The statistic is χ^2/dof=1283/1335. A few moderate residuals are present below 5 keV, possibly due to the
presence of a warm absorber (Kayanoki et al. 2024), but given the low energy resolution of NuSTAR in the 3−5 keV energy range, we did not model such a component.
Mrk 110
Mrk 110 is a Seyfert 1 galaxy with redshift z=0.03552 and mass M[BH]=1.8×10^7 M[⊙]. The FPM spectra are all binned at a minimum of 100 counts per energy bin in the full E=3−65 keV energy
range. We adopt a Galactic absorption of N[H,Gal]=1.3×10^20 cm^−2, while no intrinsic absorption is present confirming its nature as a "bare" AGN (Reeves et al. 2021; Porquet et al. 2021). We
model the X-ray spectra with two reflectors, i.e. comptt+xillvercp+relxillcp (model B), finding kT = 35^{+15}_{−10} keV (kT = 24^{+17}_{−5} keV) and τ = 1.2^{+0.5}_{−0.4} (τ = 4.0^{+0.6}_{−1.4})
when assuming a slab (sphere) coronal geometry. The goodness of fit is χ^2/dof=2600/2305. The results agree with the broadband analysis presented in Porquet et al. (2021).
Mrk 509
Mrk 509 is a Seyfert 1 galaxy at z=0.01951 with a black hole mass of M[BH]=2×10^8 M[⊙]. The FPMA and FPMB spectra for the two epochs were binned at a minimum of 100 photon counts per energy bin
in E=3−65 keV. The Galactic absorption is fixed at N[H,Gal]=3.9×10^20 cm^−2. We model the X-ray spectra with an unabsorbed continuum with a single ionized reflector, i.e. comptt*xillvercp
(model A). We find a temperature of kT = 17^{+2}_{−1} keV for both slab and sphere geometries, while the optical depth is τ = 2.2±0.1 (τ = 5.2^{+0.2}_{−0.3}) when assuming a slab (sphere)
geometry. The statistic is χ^2/dof=1949/1647.
NGC 3281
NGC 3281 is a Seyfert 2 galaxy with redshift z=0.01073, with mass M[BH]=1.7×10^8 M[⊙]. We bin the FPM spectra with a minimum of 50 counts per energy bin in the energy band E=3−60 keV. We
assume a Galactic absorption of N[H,Gal]=6.6×10^20 cm^−2. The shape of the X-ray spectrum is that of a severely absorbed source, therefore we model the spectra with an absorbed continuum plus
neutral reflection, i.e. ztbabs*(comptt+borus12) (model C). We find that the intrinsic column density is moderately variable, by a factor of ∼4, since we find N[H] = 8^{+6}_{−5}×10^22 cm^−2 for
OBSID 60061201002 and N[H]=(3.1±0.5)×10^23 cm^−2 for OBSID 60662003002. Assuming either a slab or a spherical geometry we find a temperature kT = 11^{+4}_{−2} keV, with optical depth
τ = 3.7^{+0.8}_{−0.9} and τ = 8±2, respectively. The goodness of fit is χ^2/dof = 693/612.
NGC 5506
NGC 5506 is a Seyfert 1 galaxy with mass M=8.8×10^7 M[⊙]. The NuSTAR Focal Plane Modules were both binned at a minimum of 100 counts per energy bin in the full NuSTAR band E=3−79 keV. The
Galactic cold absorption is N[H,Gal]=4.2×10^20 cm^−2, and we recover an intrinsic absorption of N[H]=(2.4±0.3)×10^22 cm^−2. A single reflector is able to model the hard X-ray spectrum of
this AGN, therefore we adopt model A with neutral absorption (ztbabs(compTT+xillvercp)). When a slab geometry is assumed, we obtain kT = 510^{+250}_{−150} keV and τ = 0.02±0.01, while assuming a
sphere geometry the best-fit parameters are kT = 550±250 keV and τ = 0.09^{+0.30}_{−0.05}. The goodness of fit is χ^2/dof = 819/738 and the best-fit values are largely in agreement with the
broadband analysis performed in Matt et al. (2015).
NGC 5728
NGC 5728 is a Seyfert 2 galaxy at z=0.00947 with an estimated black hole mass of M[BH]=3.4×10^7 M[⊙]. We bin all the X-ray spectra at a minimum of 50 counts per energy bin. We adopt
a fixed Galactic absorption column density of N[H,Gal]=7.5×10^20 cm^−2. The spectrum appears as that of a typical absorbed source, therefore we model it with an absorbed continuum plus neutral
reflection, i.e. ztbabs*(comptt+borus12) (model C). The intrinsic absorption is not variable and therefore we keep it tied between the two epochs, finding N[H] = 4.6^{+1.6}_{−1.8}×10^23 cm^−2. We
find a coronal temperature of kT = 13±1 keV for both slab and spherical geometries, and an optical depth of τ = 5^{+2}_{−1} (τ = 10^{+6}_{−3}) for a slab (spherical) coronal geometry. The statistic is
given by χ^2/dof=480/524.
NGC 6814
NGC 6814 is a Seyfert 1 galaxy with mass M=2.7×10^6 M[⊙]. We bin both FPMA and FPMB detectors at a minimum of 100 counts per energy bin in the E=3−60 keV energy band. We assume a Galactic
absorption with column density N[H,Gal]=8×10^20 cm^−2, while no intrinsic cold absorption is found in this source. The spectra need to be modeled with model B (comptt+xillvercp+relxillcp),
having two reflection components, since the Fe Kα emission line is broadened. For simplicity, we assume a non-spinning black hole and a disk with emissivity index −3 and inner radius
R[in]=2R[g]. The coronal temperature for a slab (spherical) coronal geometry is kT = 60^{+24}_{−20} keV (kT = 82^{+80}_{−10} keV) and the optical depth is τ = 0.8^{+0.7}_{−0.3}
(τ = 1.5^{+0.8}_{−0.1}). The goodness of fit is χ^2/dof = 897/846. Though with larger errors due to the sole use of NuSTAR, the values are consistent with those found by Tortosa et al. (2018b) in the broad (E=
0.5−60 keV) band.
SWIFT J2127.4+5654
The Narrow-line Seyfert 1 galaxy SWIFT J2127.4+5654 is characterized by a mass of M=1.5×10^7 M[⊙] at redshift z=0.014. Both FPMA and FPMB spectra are binned at a minimum of 100 counts per energy bin in
the whole NuSTAR band E=3−79 keV. The Galactic absorption column density is N[H,Gal]=7×10^20 cm^−2, and the intrinsic absorption is negligible. The source is well modeled by model A with a single
non-relativistic reflector, i.e. compTT+xillvercp. For a slab (spherical) coronal geometry we obtain a temperature of kT = 33^{+37}_{−15} keV (kT = 24^{+24}_{−7} keV) and an optical depth of
τ = 1.0^{+0.8}_{−0.6} (τ = 3.5^{+1.0}_{−1.6}). The statistic is χ^2/dof = 1751/1775, with values consistent at the 99% confidence level with those of the broadband study by Marinucci et al. (2014).
UGC 6728
UGC 6728 is a Seyfert 1 galaxy with redshift z=0.00652 with a black hole mass of M[BH]=7.1×10^5 M[⊙]. The four FPM spectra were binned at a minimum of 50 photon counts per energy bin. We
consider data in the E=3−40 keV energy band. We assume a Galactic absorption of N[H]=4.5×10^20 cm^−2. No evident intrinsic absorption is present. The spectra are best modeled with continuum
and two reflectors, i.e. comptt+xillvercp+relxillcp (model B). We find a temperature of kT = 28^{+16}_{−18} keV (kT = 28^{+17}_{−15} keV) and an optical depth of τ = 2.0±1.2 (τ = 5.0±2.0) when
assuming a slab (sphere) coronal geometry. The goodness of fit is χ^2/dof=1321/1183.
All Tables
Table 2.
Coronal parameters of our sample derived with the models described in Sect. 3.
Table 3.
NuSTAR excess variances in the energy bands 3–79, 3–10, and 10–40 keV.
All Figures
Fig. 1.
Excess variance in the soft (3–10 keV) versus the excess variance in the full (3–79 keV) NuSTAR bands (left). All excess variances are normalized at 10 ks. The best-fit line is fully consistent
with the bisector at 90% confidence level, with an angular coefficient of 0.96±0.05. The correlation coefficient is r=0.95 with a negligible or null probability. Excess variance in the soft
band vs. excess variance in the hard (10–40 keV) X-ray band (right). The best-fit angular coefficient is 0.7±0.1 with a correlation coefficient r=0.90 and a negligible probability of finding
the correlation by chance.
Fig. 2.
Temperature versus optical depth plot. The kT−τ points obtained assuming a slab geometry are shown in blue, with the blue line indicating the best-fit line. The red points and line denote the
points obtained with a sphere geometry and their best-fit line, respectively. Confidence intervals are shown at 90% level.
Fig. 3.
Temperature and optical depth of the simulated spectra used to validate the correlation found in the NuSTAR data. A thousand NuSTAR spectra were simulated with random values of kT and τ (see
details in Sect. 5). The left panel shows the values used to simulate the data, with average values in bins of temperature (red), while the right panel shows the temperature and optical depth found
when fitting the simulated spectra, with the average values in temperature bins.
Fig. 4.
We report here the coronal parameters (kT and τ) vs the black hole mass (left panels) and the Eddington ratio (right panels) for the sources in our sample (see Table 1). The blue dots are the
temperatures obtained assuming a slab corona, while the red dots are those obtained assuming a spherical one.
Fig. 5.
Excess variance in the E=3−79 keV energy band versus the black hole mass of the AGN (left). The excess variances shown here are averaged among the different epochs. A clear trend
σ[NXS]^2 ∝ M[BH]^−0.6 is found, with a strong anticorrelation (r=−0.7). Given their large uncertainties, masses obtained with the velocity dispersion method (red diamonds) are excluded from the fit.
The middle panel shows the average excess variance vs. the unabsorbed X-ray luminosity L[X]; we find a trend σ[NXS]^2 ∝ L[X]^−0.4 with a moderate anticorrelation (r=−0.5). The right panel shows the
average excess variance vs. the Eddington ratio, and no evident correlation is found. Also in this panel, the sources with mass measurements obtained with the velocity dispersion method are drawn
as red diamonds.
Fig. 6.
Bolometric (upper panel) and X-ray (lower panel) luminosities vs. black hole mass of the sources considered in this work. Both luminosities scale with mass as M[BH]^0.6, though the L[bol] relation
shows a larger correlation coefficient (r∼0.7) than the L[X] relation (r∼0.5).
Fig. 7.
Excess variance in the E=3−79 keV energy band versus the coronal temperature (left panel) and the optical depth (right panel) of the sample analyzed in this work. Blue circles identify
temperature and optical depth values obtained assuming a slab geometry, while the red triangles indicate the values obtained assuming a spherical geometry.
Homework 3 – Machine Learning CS4342 solved
1. Softmax regression (aka multinomial logistic regression) [35 points]: In this problem you will
train a softmax regressor to classify images of hand-written digits from the MNIST dataset. The input
to the machine will be a 28 × 28-pixel image (converted into a 784-dimensional vector); the output
will be a vector of 10 probabilities (one for each digit). Specifically, the machine you create should
implement a function g : R^785 → R^10, where the kth component of g(x̃) (i.e., the probability that input
x̃ belongs to class k) is given by

ŷ_k = exp(x̃^T w̃_k) / Σ_{k′=0}^{9} exp(x̃^T w̃_{k′}),

where x̃ = [x^T, 1]^T.
The weights should be trained to minimize the cross-entropy (CE) loss:

f_CE(w̃_0, . . . , w̃_9) = −(1/n) Σ_{i=1}^{n} Σ_{k=0}^{9} y_k^(i) log ŷ_k^(i),

where n is the number of training examples. Note that each ŷ_k implicitly depends on all the weights
w̃_0, . . . , w̃_9, where each w̃_k = [w_k^T, 1]^T.
To get started, first download the MNIST dataset (including both the training and testing subsets)
from the following web links:
• https://s3.amazonaws.com/jrwprojects/small_mnist_train_images.npy
• https://s3.amazonaws.com/jrwprojects/small_mnist_train_labels.npy
• https://s3.amazonaws.com/jrwprojects/small_mnist_test_images.npy
• https://s3.amazonaws.com/jrwprojects/small_mnist_test_labels.npy
These files can be loaded into numpy using np.load.
Then implement stochastic gradient descent (SGD) as described in the lecture notes. I recommend
setting ñ = 100 (the mini-batch size) for this project.
Note that, since there are 785 inputs (including the constant 1 term) and 10 outputs, there will be 10
separate weight vectors, each with 785 components. Alternatively, you can conceptualize the weights
as a 10 × 785 matrix.
After optimizing the weights on the training set, compute both (1) the loss and (2) the accuracy
(percent correctly classified images) on the test set. Include both the cross-entropy loss values
and the “percent-correct” accuracy in the screenshot that you submit.
Finally, create an image to visualize each of the trained weight vectors w0, . . . , w9 (similar to what
you did in homework 2).
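To illustrate the probability model and the SGD update, here is a toy sketch in pure Python with made-up sizes. It is not the required solution: the assignment should be vectorized with numpy over 785-dimensional inputs and 10 classes, and trained on mini-batches rather than single examples.

```python
import math

def softmax_probs(x_tilde, W):
    # W is a list of K weight vectors, each the same length as x_tilde.
    # Returns the K class probabilities exp(x.w_k) / sum_k' exp(x.w_k').
    scores = [sum(xi * wi for xi, wi in zip(x_tilde, w)) for w in W]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sgd_step(W, x_tilde, y_onehot, lr=0.1):
    # Gradient of the cross-entropy loss w.r.t. w_k is (yhat_k - y_k) * x_tilde.
    yhat = softmax_probs(x_tilde, W)
    for k, w in enumerate(W):
        g = yhat[k] - y_onehot[k]
        for j in range(len(w)):
            w[j] -= lr * g * x_tilde[j]
    return W
```

Repeated calls to `sgd_step` on an example push the probability of its true class toward 1, which is the behaviour the full mini-batch version should reproduce at scale.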
2. Data augmentation [25 points]: It is often useful to enlarge your training set by synthesizing new
examples from ones you already have. The simplest way to do this is to apply label-preserving
transformations, i.e., create new “copies” of some original training examples by altering them in
subtle ways such that the label of the copy is always the same as the original. For images, this can be
achieved through operations such as rotation, scaling, translating, as well as adding random noise to
the value of each image pixel (e.g., from a Gaussian or Laplacian distribution). (For symmetric classes
(e.g., 8, 0), you could use mirroring/flipping, though this is not required for this assignment.)
You are required to implement all of the following transformations: translation, rotation, scaling,
random noise. (For rotation, feel free to use the skimage.transform.rotate method in the skimage
package.) From each example i of the original training set (x^(i), y^(i)), randomly pick one of the
augmentation methods above, and generate a new image whose label is also y^(i). Put all n of these
new examples into a new Python array called Xaug and their associated labels into a new array yaug. We
will manually inspect your code to verify that you completed this correctly. Then, show 1 example (in
the PDF file) of an original and augmented training example for each of these transformations (i.e., 4
original and 4 augmented images in total).
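Two of the required label-preserving transformations can be sketched in pure Python on a tiny nested-list "image", purely for illustration. In the assignment itself you would operate on numpy arrays and use skimage.transform.rotate for rotation; the helper names here are made up.

```python
import random

def translate(img, dx, dy, fill=0):
    # Shift a 2-D image (list of rows) by (dx, dy), padding with `fill`.
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

def add_noise(img, sigma=0.1, rng=None):
    # Add Gaussian pixel noise; the label of the copy stays the same.
    rng = rng or random.Random(0)
    return [[p + rng.gauss(0, sigma) for p in row] for row in img]
```

Both operations return a new image of the same shape, so the augmented copy can be stored in Xaug with the original label in yaug.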
Note: augmenting the training set in this assignment will likely not help much to improve generalization accuracy because of how the softmax regression classifier works (it is a generalized linear model).
However, it can make a substantial improvement for other models we will explore later in this course,
e.g., non-linear support vector machines and neural networks.
In addition to submitting your Python code in a file called homework3 WPIUSERNAME1.py
(or homework3 WPIUSERNAME1 WPIUSERNAME2.py for teams), please submit a PDF file containing a screenshot
of (1) the last 20 iterations of your gradient descent on the training data. Name the file homework3 WPIUSERNAME1.pdf
(or homework3 WPIUSERNAME1 WPIUSERNAME2.pdf for teams).
How is future value calculated
7 Dec 2018 To calculate present value in this example, you're dividing the future value of a financial asset instead of multiplying the present value of that asset. This free calculator also has links
explaining the compound interest formula.
10 Feb 2015 Future value calculation is very handy in getting the maturity amount of your FD, RD and Annuity. The calculation is very easy in Excel. The value of an asset or cash at a specified date
in the future that is equivalent in value to a specified sum today. Free calculator to find the future value and display a
growth chart of a present amount with periodic deposits, with the option to choose payments made at either the beginning or the end of each compounding period. Also explore hundreds of other
calculators addressing finance, math, fitness, health, and many more. Future value (FV) is the value of a current asset at a future date based on an assumed rate of growth. The future value (FV) is
important to investors and financial planners as they use it to The future value formula helps you calculate the future value of an investment (FV) for a series of regular deposits at a set interest
rate (r) for a number of years (t). Using the formula requires that the regular payments are of the same amount each time, with the resulting value incorporating interest compounded over the term.
Future Value (FV) is a formula used in finance to calculate the value of a cash flow at a later date than originally received. This idea that an amount today is worth a different amount than at a
future time is based on the time value of money.
Future Value Calculator - The value of an asset or cash at a specified date in the future that is equivalent in value to a specified sum today. Future Value (FV) is a formula used in finance to
calculate the value of a cash flow at a later date than originally received. This idea that an amount today is worth A central concept in business and finance is the time value of money. We will use
easy to follow examples and calculate the present and future The time value of money is a basic financial concept that holds that money in the present is worth more than the same sum of money to be
received in the future. 9 Sep 2019 Want to know how much a specific asset or investment will be worth in the future? Here's how to calculate future value (FV) based on its rate of Example 4 -
Calculating the interest rate; How to use the future value calculator? How to double your money? – the rule of 72; Other important financial calculators .
23 Feb 2018 How to calculate the future value of your financial goals?
Mutual fund houses and advisors are busy promoting goal-based investing. However 1 Apr 2016 Future Value (FV) can be calculated in two ways. For an asset with simple annual interest: FV = Sum Deposited × (1 + (interest rate × number of periods)).
The formula for calculating future value is FV = PV × (1 + r)^n. Example: Calculate the future value (FV) of an investment of $500 for a period of 3 years that pays an interest rate
Future Value Calculator - calculates how much your money or assets will be worth in a number of years. The FV calculator is based on compound interest and Future Value = Present Value x (1 + Rate
of Return)^Number of Years. The InvestOnline future value calculator takes into account the sum of your investment 15 Nov 2019 The present value calculator estimates what future money is worth now.
Use the PV formula and calculator to evaluate things from investments 23 Feb 2018 If you are not familiar with excel, you may write the following formula on a paper and calculate. Future Value (FV)=
Present Value (PV) × (1 + r/100)^n.
Free financial calculator to find the present value of a future amount, or a stream of annuity payments, with the option to choose payments made at the beginning or the end of each compounding
period. Also explore hundreds of other calculators addressing topics such as finance, math, fitness, health, and many more.
Difference Between Present Value vs Future Value. Present and future values are the terms which are used in the financial world to calculate the future and Future Value Calculator. Use this
calculator to determine the future value of an investment, which can include an initial deposit and a stream of periodic deposits. How much will today's savings be worth? 14 Apr 2019 Calculate the value of the investment on Dec 31, 20X3. Compounding is done on a quarterly basis.
Solution. We have, Present Value PV = $10,000
Selected works of Jeffrey O. Shallit
What this country needs is an 18-cent piece, Math. Intelligencer 25 (2) (2003), 20-23. pdf
This paper generated a lot of media and blogger attention. Not everybody realized that the 18-cent suggestion was tongue-in-cheek. See I recently learned that Tom Young at Spring Lake Senior High
School in Spring Lake Park, Minnesota, made a similar suggestion in 1995. See here for the article.
Several people wanted more details about the optimal systems for Canada. Here's what I've computed. For amounts up to $5.00, Canadians use the following denominations: 1 cent, 5 cents, 10 cents, 25
cents, 100 cents, 200 cents. The expected number of coins per transaction is 5.9. The optimal 4-coin system is (1,7,57,80), with an expected 6.804 coins per transaction. The optimal 5-coin system is
(1,6,20,85,121) with an expected 5.44 coins per transaction. The optimal 6-coin systems are (1,6,14,62,99,140) and (1,8,13,69,110,160), each of which give an expected 4.67 coins per transaction.
For the greedy algorithm, I obtained the following results. The optimal 4-coin systems are (1,5,23,109) and (1,5,23,110), with an expected 7.176 coins per transaction (as compared to 5.9 with the
current 6-coin system). The optimal 5-coin systems are (1,4,13,44,147), (1,4,13,44,150), (1,4,14,47,160), and (1,4,14,48,160) with an expected cost of 5.816 coins per transaction. The optimal 6-coin
systems are (1,3,8,26,64,{202 or 203 or 204}) and (1,3,10,25,79,{195 or 196 or 197}) with an expected cost of 5.036 coins per transaction.
Note added October 23, 2003: Both Europe and China use a system of denominations based on the recurring pattern
1,2,5, 10,20,50, 100,200,500, ...
This may seem natural, but a small change to
1,3,4, 10,30,40, 100,300,400, ...
would significantly decrease the average number of coins per transaction. This new system has the following advantages:
• Change can still be made on a digit-by-digit basis. For example, to make change for 348, first do the hundreds digit (getting 300), then the tens (getting 40), and then the ones (getting 4+4).
• The greedy algorithm can be used in all cases but one. The exception is that 6 = 3+3 and not 4+1+1. (Similarly, 60 = 30+30, etc.)
• Assuming the uniform distribution of change denominations, on all scales (10, 100, 1000, etc.) the new system is about 6% better.
• If one assumes change denominations are distributed by Benford's law, the new system is about 7% better up to 10, about 6% better up to 100, and about 6% better up to 1000.
SYSTEM 40 BRACKET COVER
BRACKET COVER SYSTEM 40 WHITE
How to order
Please order the number of items you require:
If batch quantity is 100:
State quantity as 100 for a full box of 100
State quantity as 50 if you require a cut length of 50 units
State quantity as 120 if you require a full box of 100 units + a cut length of 20 units
If batch quantity is 25 sets:
State quantity as 25 sets for a full box of 25 sets
State quantity as 5 if you require a cut length of 5 sets
State quantity as 60 if you require 2 x full boxes of 25 sets + a cut length of 10 sets
If batch quantity is 50m supplied in 5m lengths:
State quantity as 50m for a full box of 50m
State quantity as 10 if you require a cut length of 10m
State quantity as 65 if you require a full box of 50m + a cut length of 15m
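The ordering rule above is just integer division with remainder. A hypothetical helper, purely to illustrate the arithmetic:

```python
def order_breakdown(quantity, batch):
    # Split an ordered quantity into full boxes plus a cut length (remainder).
    boxes, cut = divmod(quantity, batch)
    return boxes, cut

# e.g. order_breakdown(120, 100) -> (1, 20): one full box plus a cut of 20
```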
Not available as a cut length
Invited Speakers
Ahmed Bouajjani, Paris Cité University, France
Talk title: On verifying concurrent programs under weakly consistent memory models.
Abstract: Developing correct and performant concurrent systems is a major challenge. When programming an application over a memory/storage system, a natural expectation would be that each memory
update is immediately visible to all concurrent threads, which means that the views of the different threads are (strongly) consistent. However, for performance reasons, only weaker guarantees can be
ensured by memory systems, defined by what sets of updates can be made visible to each thread at any moment, and by the order in which these updates are made visible. The conditions on the visibility
order guaranteed by a memory system corresponds to its consistency memory model. Weak consistency models admit complex and unintuitive behaviors where memory access operations (reads and writes) may
be reordered in various ways w.r.t. the order in which they appear in programs. This makes the task of application programmers extremely hard. It is therefore important to determine an adequate level
of consistency for each given application, i.e., a level that is weak enough to ensure high performance, but also strong enough to ensure correctness of the application w.r.t its specification. This
leads to the consideration of several important verification problems:
- the correctness of an application program running over a weak consistency model;
- the robustness of an application program w.r.t. consistency weakening;
- the fact that an implementation of a system (memory, storage system) guarantees a given (weak) consistency model.
The talk gives a broad presentation of these issues and some results in this research area. The talk is based on several joint works with students and colleagues during the last few years.
Talk title: How to Use Polyhedral Reduction for the Verification of Petri nets.
Abstract: I will describe a new concept, called polyhedral reduction, that takes advantage of structural reductions to accelerate the verification of reachability properties on Petri nets. This
approach relies on a state space abstraction which involves sets of linear arithmetic constraints between the marking of places.
We have been using polyhedral reductions to solve several problems. I will consider three of them. First, how to use reductions to efficiently compute the number of reachable markings of a net. Then
how to use polyhedral reduction in combination with a SMT-based model checker. Finally, I will define a new data-structure, called Token Flow Graph (TFG), that captures the particular structure of
constraints that we generate with our approach. I will show how we can leverage TFGs to efficiently compute the concurrency relation of a net, that is all pairs of places that can be marked together
in some reachable marking.
Antonin Kucera, Masaryk University, Czech Republic
Talk title: Asymptotic Analysis of Probabilistic VASS Programs.
Abstract: Vector Addition Systems with States (VASS) are a natural abstraction for programs operating over unbounded integer variables. We overview recently discovered algorithms and techniques for
analyzing the asymptotic complexity of non-deterministic VASS. Furthermore, we examine possible extensions of these results to probabilistic VASS programs. We show that the standard concepts used for
finite-state probabilistic programs (such as the expected termination time) are not entirely appropriate and propose some alternatives. | {"url":"http://vecos-world.org/2023/speakers/","timestamp":"2024-11-10T22:17:43Z","content_type":"text/html","content_length":"50711","record_id":"<urn:uuid:1843e5b6-01d4-4e4a-a215-5119fe15d981>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00628.warc.gz"} |
The R programming questions and answers in this lesson will help you understand the types of R questions you can expect in data science interviews.
What are the limitations of R?
R has many limitations, but some directly affect data analysis. These limitations are as follows:
• It needs to load all data into memory (RAM), so it is not appropriate for big data analysis.
• Processing in R is slower than in many other programming tools.
• If a package's maintainer no longer sustains it, some R scripts stop working with newer versions of R.
What is the difference between Inf and NaN?
The Inf keyword represents the infinity value. For example, if we divide 1 by 0 in R, we get Inf. If we add infinity to infinity, we still get infinity. But if we subtract infinity from infinity, we don't get infinity: there is no value defined for this subtraction. Whenever the result of a computation cannot be represented as a number, R reports it as NaN (not a number).
Note: When you practice R, you will see NaN whenever your code performs an operation that does not make mathematical sense.
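These rules come from IEEE 754 floating-point arithmetic, so the same behaviour can be illustrated outside R. Here is the arithmetic in Python (a cross-language sketch, not R code; in R itself, 1/0 gives Inf and Inf - Inf gives NaN):

```python
import math

inf = math.inf                  # plays the same role as R's Inf
print(inf + inf)                # inf: infinity plus infinity is still infinity
print(math.isinf(1e308 * 10))   # True: overflowing past the largest float gives inf
diff = inf - inf                # no value is defined for this subtraction
print(math.isnan(diff))         # True: the result is NaN ("not a number")
```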
What is the use of the with and by functions in R?
The with function evaluates an expression in the context of a dataset. For example, suppose we have a data frame called newdata with one grouping variable group and one dependent variable y; then the with function can be used to apply a one-way analysis of variance to the data frame, e.g. with(newdata, summary(aov(y ~ group))). The by function is similar, but applies a function to each subset of a data frame defined by a grouping factor.
After School Programs | Ckmathletes | United States
Fun and Games Description
In Mathletes Fun and Games, the central theme for all of our activities is simple: Math is Fun. Students will participate in a wide range of activities that are visually engaging and mathematically
motivated. Examples include:
• Students will engage in many activities that involve strategy games. These challenges include finding the winning algorithm, and analyzing the end-game to make the winning move.
• Students will tackle a range of puzzles, including algebra puzzles, number puzzles, logic puzzles, spatial reasoning challenges, and folding puzzles.
• Students will practice math skills required for school in game formats.
• Students will be exposed to visual puzzles that are, in reality, problems from different branches of mathematics that they often don't see in school, including graph theory, game theory, and more.
The world of mathematics is much broader than what is typically found in school. Math is fun and engaging. Come and see for yourself !
Register for in person classes here
Register for online classes here
Sample Problems
Sample problems can be solved as printouts or digitally.
Grades 1st-3rd: SAMPLE PROBLEMS and ANSWER FORM.
Grades 4th-6th: SAMPLE PROBLEMS and ANSWER FORM.
Interactive Logic Puzzles
Problem solving strategies for these logic puzzles will be taught in the Fun and Games class.
Here are three example puzzles: Nurikabe, Spiral Galaxies, and Masyu.
To make the puzzles smaller or larger, click on the magnifying glass in the upper left hand corner.
Nurikabe: Blacken some empty cells to form a single connected group, the "ocean", with no 2x2 group of cells entirely shaded black. Two black cells are connected if they share an edge. The remaining
cells form white areas called "islands". If two white cells share an edge, they are part of the same island. Each island should contain exactly one numbered cell that describes its area (in cells).
Two islands may only touch diagonally.
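As an aside, the "no 2x2 all-black" constraint above is easy to check mechanically. Here is a minimal Python sketch, where the 0/1 grid encoding (1 = black, 0 = white) is an assumption made purely for illustration:

```python
# Check one Nurikabe constraint: no 2x2 block of cells is entirely black.
def has_black_2x2(grid):
    rows, cols = len(grid), len(grid[0])
    return any(
        grid[r][c] and grid[r][c + 1] and grid[r + 1][c] and grid[r + 1][c + 1]
        for r in range(rows - 1)
        for c in range(cols - 1)
    )

grid = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
]
print(has_black_2x2(grid))   # False
```

The same sliding-window pattern extends to checking the other constraints, such as ocean connectivity.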
Easy 5x5:
Easy 7x7:
Spiral Galaxies: Divide the grid along the indicated lines into connected regions ("galaxies") with rotational symmetry. Each cell must belong to one galaxy, and each galaxy must have exactly one circle at its center of rotational symmetry.
You can color in boxes by going to settings and clicking on the colored box.
Normal 5x5:
Normal 7x7:
Masyu: Draw a single closed loop passing through all circles in the grid. The loop must turn at every black circle and must travel straight through the next cell on both sides of it. The loop must go straight through every white circle and must turn in the cell immediately before and/or after it.
Easy 6x6:
Easy 8x8:
Product of Roots of a Quadratic equation
$\alpha .\beta \,=\, \dfrac{c}{a}$
The value obtained by multiplying the two roots of a quadratic equation is called the product of the roots of the quadratic equation.
In mathematics, a quadratic equation is written in general form as $ax^2+bx+c \,=\, 0$. So, it has two solutions, and let's denote the two roots by the Greek letters $\alpha$ and $\beta$.
$\alpha$ $\,=\,$ $\dfrac{-b+\sqrt{b^2\,-\,4ac}}{2a}$
$\beta$ $\,=\,$ $\dfrac{-b\,-\sqrt{b^2\,-\,4ac}}{2a}$
Now, let us multiply them to find the product of the zeros of a quadratic equation.
$\implies$ $\alpha$ $\times$ $\beta$ $\,=\,$ $\Bigg(\dfrac{-b+\sqrt{b^2\,-\,4ac}}{2a}\Bigg)$ $\times$ $\Bigg(\dfrac{-b-\sqrt{b^2\,-\,4ac}}{2a}\Bigg)$
There are two fractions on the right hand side of the equation and they are involved in the multiplication. So, let us focus on finding the product by multiplying them.
$\,\,=\,\,$ $\dfrac{\big(-b+\sqrt{b^2\,-\,4ac}\big) \times \big(-b-\sqrt{b^2\,-\,4ac}\big)}{2a \times 2a}$
Look at the denominator firstly. A factor $2a$ is multiplied by itself and let’s write their product.
$\,\,=\,\,$ $\dfrac{\big((-b)+\sqrt{b^2\,-\,4ac}\big) \times \big((-b)-\sqrt{b^2\,-\,4ac}\big)}{4a^2}$
Now, look at the factors in the numerator. Each factor is a binomial; the two factors have the same terms but with opposite signs on the second term. So, their product represents the expansion of a difference of squares. Now, let's simplify the product using the difference of squares rule.
$\,\,=\,\,$ $\dfrac{\big(-b\big)^2-\big(\sqrt{b^2\,-\,4ac}\big)^2}{4a^2}$
It is time to simplify the expression in the numerator to find its value.
$\,\,=\,\,$ $\dfrac{b^2-\big(b^2\,-\,4ac\big)}{4a^2}$
$\,\,=\,\,$ $\dfrac{b^2-b^2+4ac}{4a^2}$
$\,\,=\,\,$ $\dfrac{\cancel{b^2}-\cancel{b^2}+4ac}{4a^2}$
$\,\,=\,\,$ $\dfrac{4ac}{4a^2}$
$\,\,=\,\,$ $\dfrac{4a \times c}{4a \times a}$
$\,\,=\,\,$ $\dfrac{\cancel{4a} \times c}{\cancel{4a} \times a}$
$\,\,\,\therefore\,\,\,\,\,\,$ $\alpha .\beta \,=\, \dfrac{c}{a}$
Example: $3x^2-4x+7 \,=\, 0$
Let's learn how to find the product of the roots of this quadratic equation. First, compare it with the standard form of a quadratic equation. This gives the values $a = 3$, $b = -4$, and $c = 7$.
$\,\,\,\therefore\,\,\,\,\,\,$ $\dfrac{c}{a} \,=\, \dfrac{7}{3}$
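As a quick numerical check (not part of the original derivation), the following Python sketch computes both roots of $3x^2-4x+7 \,=\, 0$ with the quadratic formula and confirms that their product equals $\dfrac{c}{a} \,=\, \dfrac{7}{3}$. The roots here are complex, so the cmath module is used:

```python
# Verify the product-of-roots identity alpha * beta = c / a
# for 3x^2 - 4x + 7 = 0 (the discriminant is negative, so use cmath).
import cmath

a, b, c = 3, -4, 7
disc = cmath.sqrt(b**2 - 4 * a * c)
alpha = (-b + disc) / (2 * a)
beta = (-b - disc) / (2 * a)

product = alpha * beta
print(product)   # approximately 7/3, with zero imaginary part
```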
Python set intersection_update() Method - Python Helper
What is Python set intersection_update() Method?
Python set intersection_update() method is used to update a set with the intersection of itself and another set or multiple sets. It modifies the original set in place, discarding the elements that
are not common to both sets. The intersection_update() method is particularly useful when you want to find the shared elements among sets and update the original set accordingly.
Let’s learn Python set intersection_update() method and its functionality. If you’ve been working with sets in Python, you might have come across scenarios where you need to update a set with the
common elements from another set or multiple sets. That’s where the intersection_update() method comes in handy. By using this method, you can update a set by keeping only the elements that are
common to both the original set and another set or sets.
What is the Purpose of Python set intersection_update()?
The main purpose of the intersection_update() method is to efficiently update a set by removing elements that are not common to both sets. It allows you to keep only the elements that exist in both
the original set and another set or sets. This method provides a convenient way to perform set operations and update sets based on their intersection.
Python set intersection_update() Syntax and Parameters
The syntax for the intersection_update() method is as follows:
set.intersection_update(other_set1, other_set2, ...)
Here, set is the original set that you want to update, and other_set1, other_set2, and so on, are the other sets whose intersection with the original set will be used for the update. The method takes
multiple sets as arguments, separated by commas.
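One detail worth noting alongside the syntax: the method form accepts any iterable as an argument, whereas the equivalent in-place operator &= requires the operand to be another set. A minimal sketch:

```python
# The method form accepts any iterable (here, a list)...
s = {1, 2, 3, 4}
s.intersection_update([2, 3, 99])
print(s)   # {2, 3}

# ...while the in-place operator form requires a set operand.
t = {1, 2, 3, 4}
t &= {2, 3, 99}
print(t)   # {2, 3}
```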
Now, let’s delve into some examples to better understand the Python set intersection_update() method. These examples will showcase the practical usage of the method and provide clear demonstrations
of its functionality. By following along with the code snippets, you’ll be able to see the output and comprehend how the intersection_update() method updates sets based on their intersection.
I. Updating Original Set with intersection_update()
Python set intersection_update() method updates the original set in place, discarding the elements that are not common to both sets. This means that the original set is modified, and the result is
stored in the original set itself. Let’s see an example:
Example Code
set1 = {1, 2, 3}
set2 = {2, 3, 4}
set3 = {3, 4, 5}
set1.intersection_update(set2, set3)
print("Updated set1:", set1)
In this example, we have three sets, set1, set2, and set3. By calling set1.intersection_update(set2, set3), we update set1 with the intersection of set1, set2, and set3. The resulting set contains only the element that is common to all three sets:
Updated set1: {3}
II. Updating a Set with the Intersection of Two Sets
To update a set with the intersection of two sets, you can use the intersection_update() method. This method calculates the intersection of two sets and updates the original set with the common
elements. Here’s an example:
Example Code
set1 = {1, 2, 3, 4}
set2 = {3, 4, 5, 6}
set1.intersection_update(set2)
print("Updated set1:", set1)
In this example, we have two sets, set1 and set2. By calling set1.intersection_update(set2), we update set1 with the common elements between set1 and set2. Finally, we print the updated set1:
Updated set1: {3, 4}
III. Updating a Set with the Intersection of Multiple Sets
Python set intersection_update() method can also be used to update a set with the intersection of multiple sets. You can provide multiple sets as arguments to the method, and it will update the
original set with the common elements. Here’s an example:
Example Code
set1 = {1, 2, 3}
set2 = {2, 3, 4}
set3 = {3, 4, 5}
set1.intersection_update(set2, set3)
print("Updated set1:", set1)
In this example, we have three sets, set1, set2, and set3. By calling set1.intersection_update(set2, set3), we find the common elements among all three sets and update set1 with them.
IV. Handling Sets with Different Data Types
When working with sets in Python, it’s important to consider that sets can contain elements of different data types. The intersection_update() method in Python sets is versatile and can handle sets
with different data types seamlessly.
Let’s explore an example to see how the intersection_update() method handles sets with different data types:
Example Code
set1 = {1, 2, 3, 'apple', 'banana'}
set2 = {2, 'banana', 'orange', True}
set1.intersection_update(set2)
print("Updated set1:", set1)
In this example, we have two sets: set1 and set2. set1 contains integers (1, 2, 3) as well as strings ('apple', 'banana'), while set2 contains an integer (2), strings ('banana', 'orange'), and a boolean value (True).
Python set intersection_update() method handles sets with different data types by considering only the common elements between the sets. Keep in mind that in Python True == 1, so the element 1 in set1 also counts as common with the True in set2. The common elements are therefore 1, 2, and 'banana', and the output of the above code will be (element ordering in a set is arbitrary):
Updated set1: {1, 2, 'banana'}
As you can see, the intersection_update() method effectively finds the intersection of sets with different data types, updating the original set with the common elements while disregarding the
elements that are not present in both sets.
V. Performing Intersection Update with FrozenSets
Python set intersection_update() method is not limited to working with regular sets; it can also handle frozenset objects. By using the intersection_update() method with a frozenset, you can find the
common elements between a regular set and an immutable set.
Let’s consider an example:
Example Code
set1 = {1, 2, 3}
frozen_set = frozenset({2, 3, 4, 5})
set1.intersection_update(frozen_set)
print("Updated set1:", set1)
In this example, we have a regular set set1 containing elements 1, 2, and 3, and a frozenset frozen_set containing elements 2, 3, 4, and 5. By using the intersection_update() method on set1 with
frozen_set, we can find the common elements between them. The output of the above code will be:
Updated set1: {2, 3}
As shown, the intersection_update() method successfully finds the intersection of a regular set and a frozenset, updating the original set with the common elements between the two.
Common Mistakes and Pitfalls to Avoid
When working with Python set intersection_update() method, it’s important to be aware of some common mistakes and pitfalls that you should avoid. By understanding these potential issues, you can
write more reliable and error-free code.
Let’s take a look at some common mistakes and how to avoid them:
I. Incorrect Method Usage
One common mistake is mistakenly using the wrong method or misspelling the intersection_update() method. Ensure that you use the correct method name, intersection_update(), with the appropriate
syntax and parameters.
II. Not Converting Data Types
Another mistake is failing to convert data types when necessary. The intersection_update() method itself accepts any iterable as an argument, but the object you call it on must be a set. If you are starting from another data type, such as a list or tuple, convert it to a set using the set() function before performing the intersection update.
III. Misunderstanding Return Type
Python set intersection_update() method updates the original set in place and returns None; it does not return a new set. Some developers mistakenly assume that it creates a new set as the result. Remember that the intersection_update() method modifies the original set itself.
IV. Incorrect Handling of Nested Sets
When dealing with sets that contain nested frozensets (regular sets are unhashable and cannot be elements of another set), be careful about how you handle the intersection update. The intersection_update() method keeps a nested frozenset only if that frozenset appears, as a whole element, in all of the sets. Make sure you understand the structure of your sets and how the intersection update is being computed.
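Since regular sets are unhashable and cannot themselves be set elements, a "nested set" must in practice be a frozenset, and it is matched as a whole element. A minimal sketch with made-up values:

```python
# A frozenset element survives the intersection update only if the
# same frozenset appears in every set involved.
inner = frozenset({1, 2})
s1 = {inner, 3, 4}
s2 = {inner, 4, 5}
s1.intersection_update(s2)
print(s1)   # contains frozenset({1, 2}) and 4
```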
V. Neglecting Empty Set Handling
It’s important to consider how Python set intersection_update() method handles empty sets. If one or more of the sets you’re performing the intersection update with is empty, the result will always
be an empty set. Take this into account when working with potentially empty sets to avoid unexpected behavior in your code.
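A minimal sketch of that behavior:

```python
# Intersecting with an empty set always empties the original set.
s = {1, 2, 3}
s.intersection_update(set())
print(s)   # set()
```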
By being mindful of these common mistakes and pitfalls, you can avoid errors and ensure accurate results when using the intersection_update() method. Always double-check your code, handle data types
correctly, and understand the behavior of the method to make the most of this powerful set operation.
Joyce Ann, Author at Musician Authority
Are pianists good at math? A good pianist understands the concept of timing, rhythm, and progressions—which, in essence, entails a good grasp of mathematical interpolation. But a piano prodigy
doesn’t necessarily make a math maven. However, someone with knowledge of analytic geometry might also possess the intellect to become a great pianist.
Try making a vivid image of these two persons inside your mind: Albert Einstein and Wolfgang Amadeus Mozart.
Can you name any common denominator?
You’re probably thinking that the two of them are both geniuses – but different kinds of geniuses. Perhaps, masters of separate turfs.
To an extent, this may not be necessarily true and the two share more similarities than you think.
The physicist and mathematician Albert Einstein played the violin and would even perform solo recitals for his family and friends – he was even a big Mozart fan! The equally brilliant Mozart, on the
other hand, would jot down mathematical equations on the border of his compositions.
Now, you might be wondering: Is there any link between your skills in music (piano in this case) and in math?
Before we can answer this, we need to get a grasp on “the Mozart Effect.”
The Mozart Effect
It has been suggested that immersing in classical compositions improves spatial skills, listening capabilities, and perhaps even the totality of intellectual capacity. However, the truth is murkier
when it comes to today’s research-derived statistics. Is there such a thing as “The Mozart Effect”?
You’ve almost certainly heard of this concept in the past.
The idea that listening to Mozart's music will help you ace an IQ test (or at least specific parts of it) is referred to as the Mozart Effect. In layman's terms, it was claimed that spending time listening to your favorite sonata before taking a test can have short-term positive effects on certain testing abilities (talk about a life-saver).
In 1993, a study conducted by researchers from the University of California in Irvine tried to back this notion up.
The initial Mozart effect research was somewhat contentious. Individuals were given 10 minutes to listen to Mozart's sonata for two pianos. A control group listened to either silence or relaxing audio during this time.
The results revealed an improvement in the performance of specific mental activities that lasted no more than 15 minutes. Solving mazes and folding puzzles are two of these abilities (basically your
spatial skills). To display supporting numbers, after listening to the said music, the average IQ scores (for the spatial segment) became 8 to 9 points higher.
However, these figures were erroneously sensationalized by media outlets.
Following the “success” of these results, a huge number of parents began playing Mozart to their kids. These assertions sparked a commercial craze, with Mozart records being marketed to parents and
even money being set aside by the US government to provide classical music CDs in schools.
More advanced research throughout time found that the Mozart Effect is most likely merely an artifact of elevated stimulation and emotion. There is currently minimal proof that listening to classical
music improves spatial cognition.
Despite this, a lot of doors remain open.
Such a notion raises the prospect that learning to play an instrument (such as the piano) might help you improve your mathematical skills marginally.
Is this accurate?
Piano, Math, and Rhythm: The Correlation
Math and music are much more interlinked than you think.
According to a study performed by Scientific American, musicians are more likely to have an above-average understanding of math. Why is this so? Well, music is just mathematical. Concepts such as
sequencing, patterning, and spatial reasoning are well incorporated in elements such as rhythm, melody, and tempo.
You’d be surprised at how trigonometry and differential calculus can be applied in musical composition. In fact, abstract algebra is a staple in musical theory (you might be interested in apps that
can help you better understand this). Mathematical models, to a degree, can also help us understand why instruments sound as such – and how we can enhance them!
A better avenue to understand the link between math and music is the piano.
Since grade school, math teachers have probably delved a bit into patterns in simple ways, such as predicting the next shape in a sequence. If you've ever tried writing songs or composing melodies on the piano, something might already be clicking.
A great understanding of arithmetic concepts can drastically ease the process of identifying time signatures and rhythm. How is this best applied on the piano? The keys of the instrument map naturally onto mathematical patterns.
If you "number" your piano keys, the distance between C (labeled 1st) and G (labeled 5th) is called a "fifth" both in arithmetic sequencing and in music theory. Numerical labeling can also be a great help to kids trying to learn math: a higher note is associated with a bigger number, and vice versa.
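The numbering idea above can be sketched in a few lines of Python; the key labels and interval names are standard, but the code itself is purely illustrative:

```python
# Number the white keys of one octave (C = 1st ... B = 7th) so that
# interval names line up with simple inclusive counting.
white_keys = ["C", "D", "E", "F", "G", "A", "B"]

def interval_name(a, b):
    # intervals are counted inclusively, as in music theory
    size = abs(white_keys.index(b) - white_keys.index(a)) + 1
    names = {1: "unison", 2: "second", 3: "third", 4: "fourth",
             5: "fifth", 6: "sixth", 7: "seventh"}
    return names[size]

print(interval_name("C", "G"))   # fifth
```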
The concept of interpolation can also be honed on the piano: minor keys can put kids in a situation where they have to make more inferences based on the information available to them.
Indeed, learning to play the piano can be a great math exercise. But still, we need to understand that correlation is not necessarily causation.
It is true that several studies observed that if a person is good in math, chances are, they can play the piano as well. But we cannot necessarily say that being a math wizard causes you to become a
piano prodigy or vice versa.
Maybe, it just so happens that people with brains that can handle analytic geometry would also have the intellect to understand how the piano works.
Also, one cannot set aside possible confounders. Maybe people who have the commitment, dedication, and focus to play an instrument are also able to apply those same qualities to solving math problems.
Still, we cannot deny the fact that the piano can be a great gateway to introducing basic mathematical concepts (especially for kids). And conversely, a good understanding of numbers can make musical
patterns in pianos much easier to execute.
Here’s a great video illustrating this topic:
In Conclusion
Math and music – it’s quite peculiar how two seemingly different worlds collide.
From going over the facts and data (with a slight hover over the Mozart effect), we’ve come into terms that extended listening to music may help slightly boost cognition. But, playing music offers
greater positive outcomes.
Performing (particularly with a piano) helps you appreciate math and arithmetic more by understanding timing, rhythm, and progressions.
But most importantly, learning to play any instrument teaches you commitment, discipline, and focus – which are much needed when solving complex math problems.
However, we should keep in mind that learning an instrument does not necessarily cause improvements in math. There are a lot of factors at play, but still, music is an amazing gateway.
In the end, there is so much to it that makes an Einstein. Trying new things to help you get on top of that podium won’t hurt.
Ever wonder how musicians give life to the soul of heavy genres that typically stimulate our desire to rave, such as hardcore, death metal, and drum and bass? Do you find lower registers appealing to
the ears? This technique may do the magic for you, guitar players! If you ever think Drop D tuning is already low, then you aren’t prepared for Drop G.
What is Drop G tuning? Drop G tuning refers to an alternative tuning where the strings are tuned to G-D-G-C-E-A (or, to be more specific, G1-D2-G2-C3-E3-A3).
Understanding Drop G Tuning
Drop G tuning is best demonstrated on a 6-string guitar; it sits a fifth lower than Drop D, which has the tuning D-A-D-G-B-E. If you tune Drop D down by a fifth, you get G-D-G-C-E-A. The change alters the pitch of all six strings on your guitar and makes it easier to play power chords in the key of G major.
The Drop G technique is largely used on baritone instruments, such as baritone acoustic or electric guitars, which make it easier to play pieces at lower notes. Note that baritone guitars are characterized by longer scale lengths! The usual baritone standard tuning follows the pattern B-E-A-D-F#-B, a musical fourth lower than the tuning of regular acoustic guitars.
Baritone players tend to tune their instruments down a further two semitones, which makes it advisable to fit heavy-gauge strings.
The reason why you need heavier strings? Dropping them down several steps can cause the strings to slack, lose tension, and produce undesired sounds. After the adjustment, the 6th string, and sometimes also the 1st string, is tuned to G, yielding the tuning patterns G-D-G-C-E-G or G-D-G-C-E-A.
Drop tunings in general set the bottom two strings a perfect fifth apart, which allows you to play a power chord with one finger only. Open chords on guitars also tend to have greater resonance in comparison to barre chords. Drop G tuning helps you utilize open chords to play sounds that are as bottom-heavy as it gets, especially for guitarists who want a bass-like tone from their instruments.
Here’s our very own teaching what is drop g tuning:
Drop G: Song and Musicians
The use of Drop G has been prominent in rock music, particularly heavy metal, which rose to prominence during the late 1960s in the West. They were largely known for developing thick sounds that are
heavily distorted, emphatic beats, and emphasis on guitar solos.
As music has evolved into the recent years, however, the tuning was also used for stylistic experimentation of artists beyond the genre. A notable musical field that uses the technique is metalcore,
combining extreme metal and hardcore punk. They are defined by having good instrumental qualities for breakdowns, heavy riffs, stop-start playing, and blast beats.
While Drop G tunings have been rare in popular music, they are notable for their usage by different bands and artists! American metal band Darkest Hour used the tuning in their songs "Wasteland", "Attack Attack!" and "Baroness" early in their career. South Korean rock band FTISLAND, known to experiment with the tunings of their songs, applied Drop D to their song "Shadows".
Metalcore and heavy metal bands such as Dead by April and In Flames used the technique as staple parts of their discography and concerts. Pantera’s song “The Underground in America”, from The Great
Southern Trendkill, made use of Drop G with a D standard variation.
How to Tune Your Guitar to Drop G
To achieve the Drop G tuning on a 6-string guitar, you must do the following in this order:
• The low E string should be tuned down to G by four and a half steps.
• Afterward, the A string should be tuned down to D by three and a half steps.
• Then, strings D, G, B, and high E must be tuned to strings G, C, E, and A, respectively, through tuning them down by also three and a half steps.
• Lastly, try plucking the strings like how you normally play the guitar in order to see if the desired notes are achieved.
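The step counts in the list above can be double-checked with a short Python sketch, using the G1-D2-G2-C3-E3-A3 spelling mentioned earlier (the MIDI-style numbering is just a convenient way to count semitones):

```python
# Count how many semitones (and whole steps) each string drops
# when going from E standard to Drop G.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi(note, octave):
    # MIDI-style numbering: C4 = 60
    return (octave + 1) * 12 + NOTES.index(note)

standard = [("E", 2), ("A", 2), ("D", 3), ("G", 3), ("B", 3), ("E", 4)]
drop_g   = [("G", 1), ("D", 2), ("G", 2), ("C", 3), ("E", 3), ("A", 3)]

for (n1, o1), (n2, o2) in zip(standard, drop_g):
    semis = midi(n1, o1) - midi(n2, o2)
    print(f"{n1}{o1} -> {n2}{o2}: down {semis} semitones ({semis / 2} steps)")
```

Running it reproduces the figures above: 9 semitones (4.5 steps) for the low E string and 7 semitones (3.5 steps) for each of the other five.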
While Drop G tuning centers most on 6-string guitars, the technique can also be applied in seven-string guitars! The 7-string guitar follows a standard tuning of B-E-A-D-G-B-E. It shares a similar
pattern with the standard tuning of the 6-string guitar, with the difference being the first B string added to be the new lowest tone.
Drop G tuning is achieved with the tuning pattern G-D-G-C-F-A-D. This is a whole step lower than Drop A on a 7-string guitar (A-E-A-D-G-B-E), so it requires some extra work. Drop G tuning on a 7-string guitar is similar to D standard combined with a low G string!
To assist you with the tuning, here is a guide:
The Beauty of Alternate Tuning (Final Thoughts)
The purpose of alternative tunings, after all, is to make playing music more diverse and more fitting to one’s vision or experiment. Not every song or piece can be played pleasantly in any key with
standard guitar tuning.
The biggest benefit might be the ease of access to any chord, riff, or key you desire to play. Specifically, under drop tuning, you may be able to play both octave and power chords with more comfort
and ease, now that they are closer together on the E and A strings.
Bands prefer tuning down during performances to create a darker, more menacing ambiance that suits the direction they are aiming for. Sometimes, they use alternative tunings to suit the range of the vocalist without compromising the quality of the instrumentation. Alternative tuning also pushes them out of their comfort zone, away from the same familiar patterns and toward independence in their fingering.
Aside from improving the beauty of performances, alternative tuning also makes the standard impossible, possible. Players of riff-based music may be able to increase the complexity of their pieces.
Chord inversions and wider combinations of open and fretted strings become more available due to the change in the overall tonality of the guitar.
Still, as much as you enjoy the expression of playing your guitar in a drop tuning, don't forget to also learn songs the way they were intended to be played!
With a multitude of guitar colors to choose from, certain hues win the mark of merit for many guitar players. These guitar colors are black, white, yellow, red, and the timeless sunburst with
different intensities, saturation, and even color combinations.
Jimi Hendrix, the most influential electric guitarist of all time, is wildly known for his white Fender Stratocaster. But did you know, from the year 1967 to 1969, Hendrix played this beautifully
customized Gibson Flying V?
The "Love Drops" Flying V started out as just a standard V-shaped black-and-white guitar, until Hendrix himself decided to use an array of colors (nail polish) to paint psychedelic patterns on it.
And the final output is a masterpiece.
With this said, did you ever wonder about adding a bit more personality to your guitar? Or simply buying a new one, but wondered on what color to get?
Well, we have all the right goods for you.
How Color Matters in Guitar
On the surface level, guitar color doesn’t necessarily feel like an important point to spend time pondering on. With so much to consider (guitar type, strings, genre, price, and pickups), this may
just end up at the bottom of your lists.
But to an extent, the color of your guitar matters.
If you’ve been in the guitar-playing game for a while, you’ll definitely agree that your guitar is more than just an instrument. For many exceptional guitarists, it’s an extension of oneself – much
like how we like to dress up ourselves (and we don’t like wearing clothes that don’t fit).
A well-thought-of guitar can effectively exude your personality.
Before everything, let’s have a short trip down memory lane with your guitar types. Why? If you’re considering color, guitar types can greatly expand or limit your choices.
Let’s say wood-based guitars (this would be your acoustic and classic ones!). Many of these guitars retain their natural wooden texture in earthly tones of varying intensity. Conversely, choices are
vast for electric guitars.
With these said, what are the (arguably) best colors for your guitar?
The Can’t-Go-Wrong Black Guitar
Let’s get the big one out of the way: you can never really go wrong with black.
Black is a hue that absorbs all visible light across the spectrum. But what is it about this color that enables it to complement anything? Black is associated with either strength or mystery. It's a hue that inspires introspection and neutrality.
A gloss finish may accentuate this color, resulting in a cleaner, more sophisticated, and sexier guitar.
However, it’s never all sunshine-and-rainbows for this guitar color. Manufacturing black guitars can be significantly tricky. When not done right, missed spots, bleaching, damage, and scratches can
be annoyingly obvious.
Not to mention, black guitars do not necessarily pop out on stage (especially when the stage is dimly lit). Unless that black guitar is a fancy Squier Black, you might end up blending into the background.
Some prominent black guitar owners include Jimi Hendrix, Chuck Berry, and B.B. King.
The Winsome White Guitar
Sure, we can consider black as a neutral color (as it goes well with anything). But nothing can be as neutral as white.
Though color association may not be as universal as we hope, white can be astonishingly transcending. It is often associated with purity and cleanliness (as cliché as this sounds). But on a visual
level, white adds more space and dimension to things. In fact, designers would gravitate towards this color if they want something to appear bigger.
It can also evoke feelings of focus and concentration – pretty useful during practice sessions! White guitars are not difficult to spot as well. And unlike black, white would definitely stand out on
a dark stage. But, it may not be as memorable.
Jimi Hendrix is also known for having a white Stratocaster.
The Warm and Cheerful Yellow
Take a quick glance at this butterscotch blonde Squier.
After looking at such a marvel, there is no doubt that nothing screams joy and warmth like yellow.
People will remember you if you play a brightly colored yellow guitar at any gig – talk about making a lasting impression! Yellow is a vibrant and vivid hue, which may explain why it evokes such
powerful emotions.
Yellow has the ability to attract attention instantly. Excessive usage of this color, however, can be harsh and cause feelings of fatigue.
A yellow guitar will most likely suit any genre you play, which is a significant bonus. Yellow also looks great on any electric guitar (if you happen to play one). It makes little difference if the
yellow is faded, brilliant, or even burned.
The Fiery Red/Maroon/Cherry Guitar
If a color can be on par with yellow in terms of evoking strong feelings and making big-time impressions (as well as standing out in live sessions), it would be red – and all of its shades.
One of the greatest things about this is that you would never run out of options for this color! Take for example the Fender Eric Clapton Stratocaster in Torino red.
The color red is connected with strong emotions like love and rage, as well as enthusiasm. It draws the greatest attention, is engaging and exciting, and has a significant relationship to heightened
cravings and sensuality.
However, just a little of this color goes a long way. Hence, too much of it can also induce visual strain. This is something that is almost always linked with bright and warm colors.
Red guitars have a distinct edge over yellow guitars in terms of masculine perception (if that’s important to you!). Finally, as previously said, red exudes energy and dominance – Pete Townshend’s #5 Gibson Les Paul in wine red is a testimony to these.
The Classic Sunburst
Probably, the most reliable option out there (especially for those fond of acoustics) – the everlasting sunburst.
To say that this classic is “boring” or “plain” would simply be wrong. Sunburst isn’t a single hue. If you have any experience with guitars, you’ll know that there are thousands of sunburst finishes – ranging across different intensities, saturations, and even color combinations (the typical shades used for this style are also the same colors listed here!).
The smoky Epiphone Songmaker DR-100, Dreadnought Acoustic Guitar would surely bring in the luxury in sunburst guitars.
Despite being extremely reliable, a major drawback is that there are probably multitudes of guitarists with the same guitar color. Hence, you’ll probably need to do more to make a lasting impression!
In case you’re curious, here is how they do a two-tone sunburst:
Final Words: A Guitar Color That Says YOU!
There’s a lot to consider when choosing a guitar color. Maybe you’re even thinking of painting an old reliable one just to add a speck of personality. This article can help you figure
out if that’ll affect how your guitar sounds.
In this article, we delved deep into the importance of choosing a guitar color that matches you and the music that you play. Much like acoustics, visuals are a universal language that sends out a message to your audience.
We also listed down some of the best guitar colors (in our humble opinion) to help you out on deciding.
But do note that in personalizing your guitar, there’s a canvas way beyond colors. An important first step is introspection – know what you want and how you want your listeners to see and hear you.
What makes a good guitarist? The best attributes that make a good guitarist include a good grasp of music theory, musical aptitude, dexterity, discipline, dedication, commitment, curiosity,
creativity, and of course, substantial knowledge of the instrument and how it works.
Trying to define what makes a guitarist good can be tricky because it is subject to individual opinions.
Perhaps many would say that a good guitarist possesses all the technical abilities to play flawlessly. Maybe, for others, a good guitarist is someone who can play any part and improvise. Others may
hear a really wicked solo and think, “Hey, that’s one hell of a good guitarist right there!”
Yes, the answer to the question “What makes a good guitarist?” is contingent on subjective assessment.
But, really, to be considered a good guitarist, one does not necessarily need to know how to play like Jimi Hendrix (because he is not a good guitarist—he’s the greatest!). Certain traits separate a
good guitarist from the average one, and these are:
A Good Guitarist Knows His Guitar Really Well
First of all, a good musician knows his weapon. A good singer is aware of her vocal range and knows how to use it. A good pianist understands the piano and knows how to blend different notes together
to create a moving piece. A good drummer recognizes each and every component of his percussion set and knows how to use them to construct expressive beats.
A good guitarist knows the guitar—from the instrument’s hollow body up to its neck and headstock. He knows how the individual parts work, and he knows when any of those parts don’t work! He knows it
like the back of his hand; he has a deep understanding of the instrument.
Has Good Finger Dexterity
Playing the guitar is going to be harsh on your hands, especially on your fingers. You will be doing finger gymnastics that will absolutely leave you with hand cramps, calluses, and a whole feeling
of awkwardness realizing how your fingers can do those things while you sometimes fumble with your front door keys.
A good guitarist is someone who has developed good finger, hand, and wrist coordination. This means that he can play with good accuracy and rhythm, executing every note clean and clear.
Has a Good Level of Musical Aptitude
Musical aptitude can be defined simply as having a fine ear for music, which means the skill of being able to recognize pitch, melody, rhythm, harmony, and other elements of music. When a guitarist
knows the interplay between all these elements, trust that he can always come up with something good!
This attribute also brings us to the next section; when a guitarist has the makings of a musical genius, then he most likely…
Understands Music Theory
Yes, we know this one has been a subject of debate for the longest time now. And this section is going to be a little lengthier than the rest, but it’s a point worth exploring.
Do you really have to have a good understanding of music theory to be called a good guitarist—or a musician, for that matter?
We can almost hear the choruses of The Beatles, Jimi Hendrix, Elton John, Elvis Presley, Louis Armstrong, David Bowie, and many other outstanding musicians of the modern world who didn’t know (or
study) music theory.
These musical geniuses do know how music works, and that, in theory, is music theory! As one former Redditor beautifully put it, “Good musicians who don’t “know theory” or were never trained,
actually do know theory, they just don’t know the universally standard words to describe what it is they know.”
Also, these talented musicians had been exposed to musical experiences in which applied music theory is intrinsic. Paul McCartney, for instance, had years of experience singing in the choir, and this must have been the secret to The Beatles’ delicate vocal harmonies.
These artists have, within them, this innate sense of musicality that’s characteristic of what music theory demonstrates. They just didn’t have a reference point to know the theory labels, but they
know. When you observe them carefully, you will see what we mean.
In addition, these artists may also have eventually picked up some pointers from other musicians and people in the industry they have worked with—pointers that are the crux of music theory.
And now, going back to the main discussion, a good guitarist is someone who has a good grasp of music theory. Because, by understanding music theory, by understanding why some chords sound good
together, how pitch and rhythm work together to create melody, why augmented chords work so well for specific pieces, how syncopation adds another dimension to the music, and many others, the
guitarist is able to:
1. Enrich his musical development;
2. Achieve mastery of the instrument through a proper understanding of how music works;
3. Communicate and work with other musicians seamlessly;
4. Improve his skills in improvisation, critical listening, song arrangement, and composition;
5. Develop a deeper appreciation of music.
These are only some of the many benefits of understanding music theory. If you want to know more about this, here is a practical guide:
Has a Strong Sense of Discipline
Having a strong sense of discipline entails the patience to keep going when the going gets tough (you know you could be having fun with the boys at the bar, but here you are with your guitar,
polishing those last bits of your first solo).
Having a strong sense of discipline also means having the will to learn something new each day. You can’t go stagnant; you have to do something, learn and relearn—a riff, a chord pattern you first
thought was boring, a new finger exercise routine, whatever it is.
A good guitarist sticks to a practice and playing routine, and tracks his progress as well so he knows where he falls short and where he excels. With a strong sense of discipline, the guitarist
becomes committed to growth.
Curious, Committed, and Creative
Curiosity is a natural precursor to being committed. Being committed is a natural precursor to creativity.
When he is curious, then he will more likely listen, observe, and take note of how high-caliber guitarists play. He has a natural desire to learn and to improve. This enthusiasm leads to him being
committed—or dedicated—to this journey.
Being committed to this art means devoting some of his time to honing his skills. He takes time to evaluate his strengths and weaknesses. He wants to be a good guitarist, so he has got to earn it.
There are no shortcuts.
Soon, he finds himself doing improvisations. He becomes creative. He gets excited at what other things he can execute on his guitar.
And that’s how a good guitarist is born.
Here’s a video that is a great complement on this topic:
The Makings of a Good Guitarist: Final Thoughts
As a guitarist, you probably wonder where you fall on the guitar-playing goodness scale. But if you find that you are lacking in, let’s say, the discipline department, or maybe, finger agility, don’t get
discouraged. More often than not, what stops us from becoming good at something is just all in the head.
Knowing what your weaknesses are gets you one step closer to becoming a good guitarist! However, you should not stop there; take the next few steps as you work on your finger dexterity, or figure
out the best times for playing when you know you will be at your most motivated self!
Anyone can become a good guitarist if they will put their heart and mind into perfecting the art! While some artists are born, others are made. You can do it!
How hard do you press on guitar strings? Some guitar newbies make the mistake of pressing too hard on the string, which can result in a wayward tone and worn-out fretboard. How much pressure to put
on the strings can vary on the guitar and the note, but in most cases, you don’t need to apply bone-crushing pressure!
Playing the guitar requires a lot of practice and skill that even something very specific such as the pressure you apply to the strings matters. Beginners rarely consider this, and just press the way
they believe is right.
When you press too lightly, you may not be able to produce a successful tone. Meanwhile, when you press harder than necessary, you might wear out both your guitar and your hands.
So, what is the right amount of pressure to apply on the strings?
The answer is a sweet spot that lies somewhere in between the two.
Pressing Too Hard: The Beginners’ Common Mistake
Beginners know that pressure should be applied to the strings to produce a tune. But what some of them eventually find out is that it’s just not any pressure.
When they press the strings very lightly, the tone sounds incomplete. To solve this, they keep adding more and more pressure to the strings until it begins to sound much better. Most newbies tend to press on the strings harder than necessary, and they naively believe that putting that much pressure is the right way.
This incorrect practice would lead to problems over time—blisters and callouses may form on their fingers, their hands and wrists might get crampy, and their guitars might wear out faster than they should.
Apart from getting your hand hurt when you apply too much pressure, your performance becomes shaky and harsh rather than fluid and smooth. Gripping the fretboard too tightly will also slow your pace when moving between chords.
Guitar players who start with acoustic guitars are the ones who tend to press too hard and experience these distresses. The strings of acoustic guitars are thick, which leads new users to assume that they must press harder than usual to counter the high tension these strings carry. Those who start with electric guitars differ: they are accustomed to pressing lightly because electric strings are thinner and carry less tension.
How Hard is Hard?
So, if the “will turn my fingers red” hard is not the appropriate hard, how hard is hard, then? In reality, there is no well-defined standard of “hard” since it varies from one person to another.
Yet, some exercises are created to find the right amount of hardness that suits you.
To help you discover the right pressure that suits you, follow these steps:
• First, on your fretting hand, press any note at the fifth fret on any string that you are comfortable with. Ensure that your finger is right behind the fifth fret and not in the middle of the space between the fourth and fifth frets.
• With your picking hand, pluck the note. At this point, no solid tone should be produced since the string is not positioned against the fret yet. Increase the pressure you apply on the string in
very small amounts. As you add pressure, continuously pluck the string. There will come a point when you actually play the correct tone. Remember to take note of how hard you pressed to produce
this sound.
• Now, if you add more pressure beyond this point, you will notice that it will not change how the tone sounds, which proves that putting extra pressure does not come with any benefit but only with extra strain on your fingers.
• Repeat this process with your middle finger, ring finger, and pinky finger on any of the strings on the sixth, seventh, and eighth fret, respectively. Once completed, repeat this process, but now
with chords.
• Start by playing the chords lightly, then build up pressure until you are able to create a clean tone across all the guitar strings.
Now you know the feel of how much pressure you should exert on your strings. The next objective is to be familiar with the amount of pressure until it becomes muscle memory. You can achieve this by
doing the exercise earlier as a warmup before every session. Before you know it, your fingers are recalibrated to press that way.
Here is a helpful video on this topic:
When the Guitar is the Problem
You’ve done all the necessary steps, but still, you can’t seem to find a comfortable way to press your strings.
It could be that the problem is not with the user, but with the instrument he is using. One way to figure out if this is the case is to check your guitar for possible defects, and there are two
common areas to this:
The Strings. Check if the strings are old or rusted. Rust on your strings, caused by the oil and salts from your fingers, will compromise the sound of your guitar, resulting in dull or muffled tones.
If this is so, you will have to replace your strings immediately. There are many inexpensive but good-quality guitar strings available in the market. Brands such as Ernie Ball and D’Addario sell
great affordable strings that would make your guitar sound a hundred times better.
Strings vary in thickness; some carry stronger tension than others. If you feel like you need to press too hard just to produce a full note, you should consider buying light-gauge strings such as .009s or .010s. With these, pressing is a lot easier.
Don’t know how to attach your strings? Worry not, setting up these strings on your own is not a problem since there are loads of tutorial videos, such as this:
The Nut. Apart from the strings, the nut can also be an issue. The guitar’s action is highly reliant on the height of the nut. If it is cut too high, you need loads of effort to produce the right sound; if it is cut too low, the guitar will produce a buzzing sound almost all the time. Having the nut adjusted by a guitar technician is very affordable and highly recommended.
If problems still persist, you can have your guitar checked by a guitar expert. He will then adjust the neck, action, bridge, or any other parts necessary to make your guitar play the way it was intended. However, a guitar setup is quite expensive. If you are using a low-end guitar, it may be more practical to replace it with something better instead.
Final Thoughts
In the quest of finding the right pressure, don’t put a lot of pressure on yourself!
Don’t be too hard (no pun intended) on yourself. Finding the right amount of pressure is one thing and practicing it on every single song is another. With dedicated practice, and a number of
callouses along the way, playing the guitar wouldn’t be that hard anymore.
What is the difference between a Spanish guitar and an acoustic guitar? The Spanish guitar and acoustic guitar both have their own charms. Although they may seem similar at first glance, these guitars
differ when it comes to string type, neck and fretboard size, bridge and tuning pegs structure, and sound quality.
American luthier, singer-songwriter, and folk-country star Guy Clark once said, “All gut strings. That’s just the first kind of guitar I played, it was a nylon string guitar. And to me, it’s the
purest form of guitar making, and I just enjoy doing it.”
The Spanish guitar brings up the imagery of those balmy summer nights when people gather together in the garden, sipping wine and basking in the relaxing music of the strings as the cool wind blows.
And of course, you are probably well-aware of what acoustic guitars are. The memories of those camping trips or beach outings almost always include a jamming session with an acoustic guitar as the
main source of music.
Still, it is undeniable that a lot of people (even mildly experienced guitarists) confuse the two – and maybe you’re one of them.
Do not worry, because you just got yourself a crash course on the stark contrast of Spanish and acoustic guitars.
Spanish Guitar Profile
A Spanish guitar is much like a “cousin” of the acoustic guitar that has nylon strings instead of steel; it is also known as a classical guitar. Classical guitars are descended from the fifteenth- and sixteenth-century Spanish vihuela and lute, both of which were strung with gut strings (an interesting fact!).
It has a similar standard tuning, six strings, numerous frets, and a soundhole, just like other guitars. The timbre of the Spanish guitar is mellow and pleasant. It’s a very emotive instrument, and
the musician has a lot of control over the tone he or she produces.
The strings of a classical guitar are a sure-win method to distinguish it from other guitar kinds. As previously stated, classical guitars require nylon strings, as opposed to acoustic and electric
guitars, which employ metal strings. A Spanish guitar is usually played with the fingers rather than a pick. To produce a bolder and more precise tone, most Spanish guitarists grow their right-hand
fingernails and use them to strum.
Compared to other guitar types, a classical guitar’s neck tends to be shorter but its fretboard is wider to make it easier to play complicated chords. Because of their shorter and wider necks, neck
tension is much greater.
Acoustic Guitar Profile
You’re probably most well-versed with acoustic guitars, but let’s hover over them for good measure.
An acoustic guitar is frequently regarded as a beginner’s mainstay since it is priced on the less costly spectrum and is available in a wide variety of sizes, rendering it suitable for almost all
guitarists. Steel strings are used on modern acoustic guitars, as previously stated. To make clean and crisp sounds, you’ll need to apply a bit more pressure to this string type.
The actual edge of acoustic guitars lies in their convenience. They are easy to carry around and won’t require any power. Their vibrations carry through the air so no particular pickup or amp is
needed to work. All it needs is you, your determination, and a little bit of time.
Stringed instruments have been manufactured and played by humans for thousands of years. Versions of the guitar, in fact, date back over four millennia. The guitar that we are accustomed to now is based on a design created by Antonio Torres Jurado, a nineteenth-century Spanish luthier.
The blues introduced the guitar to the world of mainstream music. Pioneer blues guitarists like Son House and Robert Johnson were among the first to record Mississippi Delta blues and bring it to a
wider audience.
Spanish and Acoustic Guitars: A Showdown
Many guitarists are perplexed when they hear the terms “Spanish guitar” and “acoustic guitar” when making comparisons. We mean they do sort of resemble one another. Nonetheless, they are unique in
specific ways.
Strings. Let’s get the obvious one out of the way. Under observation, the strings on the Spanish guitar would appear to be more translucent, which is owing to the nylon material employed.
Carbon-based polymers are sometimes used instead of nylon. The strings of an acoustic guitar, on the other hand, are made of steel (and at times, are even coated with a tougher mineral).
Compared to steel, nylon delivers less tensile stress and is gentler on the fingers – in other words, it is softer to the touch. Beginners who begin with acoustic guitars are more likely to
experience finger discomfort. As a result, many acoustic guitarists prefer to use picks.
The Neck and Fretboard. Because of their narrower necks, acoustic guitars feature slimmer fretboards. Moving through chords becomes considerably simpler as a result of this, especially for novices
(or people who just generally have smaller hands).
With such a benefit comes the risk of accidentally muting your strings on a regular basis. Wider fretboards, such as those found on Spanish guitars, nonetheless offer an advantage since they make it harder to accidentally mute your strings.
The Bridge and Tuning Pegs. The bridges on classical and acoustic guitars differ significantly. A traditional wrap-around bridge is seen on a regular Spanish guitar: to keep the strings in place, they are tied in a knot around the bridge. The bridge on a standard acoustic guitar features pegs that hold the strings in place. Furthermore, the tuning pegs on a Spanish guitar are constructed of metal and plastic, whereas the tuning pegs on a normal acoustic guitar are made entirely of steel.
The Sound. Compared with Spanish guitar, an acoustic guitar’s sound is softer but brighter and more sustained, with a metallic tone. The Spanish guitar’s sound is relatively louder and fuller, yet
mellow and with more depth. Perhaps we can say that a Spanish guitar is emotive, while an acoustic guitar is expressive in terms of sound quality.
Here’s a good comparison between the two guitars:
Spanish and Acoustic Guitar FAQs
We’ve been through a road trip of information. But probably, you still have a handful of questions in mind. Well, we’re a few steps ahead. We’ve compiled below some of the most frequently asked
questions on Spanish and acoustic guitars:
Which of the two is more expensive?
It depends! Acoustic guitars and Spanish guitars are available in a variety of sizes and primary materials.
Which one is more beginner-friendly?
Spanish guitars have softer strings that offer way less tension, and a wider fretboard that prevents frequent muting. If you’re starting out, this may be a good choice!
Do they sound different?
Spanish guitars tend to sound more expressive, sweet, and mellow. By comparison, acoustic guitars sound warmer, bolder, and crisper.
Are classical and Spanish guitars the same?
Yes! The two are both nylon-stringed, wooden guitars!
Which guitar is better for which genre?
Acoustic guitars are well-suited to contemporary tunes or ensembles. Spanish guitars are widely used in Latin or folk music.
Which one should I get?
Again, it depends! Consider your experience, playing style, genre, and resources.
Final Words
Discussing the basics of two seemingly similar guitars can be challenging – but alas, we’ve made it to the end.
In this article, we breezed through the profiles and histories of both Spanish and acoustic guitars (revealing that if anything, they’re far from identical). We’ve also made objective, point-by-point
comparisons to break down their edges and dents. Lastly, we’ve looked into some possible questions that you had in mind.
As we always say, there is never a correct answer in choosing a guitar that suits you the most – there are just good choices and better ones. It largely depends on more factors outside the guitar
itself that you should consider.
Should I get a guitar? The answer will depend largely on how committed you are to owning and learning the instrument. But to take you out of the guesswork: when you find yourself absorbed by this
question, then it most likely means it’s time to consider getting your very own guitar!
It is very likely that you are reading this article because you are stuck in an unending loop of deciding whether you should or shouldn’t get a guitar of your own. If you have money to spare, then
with no further questions asked, you should buy one.
With a guitar, you can play a song that you like anywhere and anytime you want. You don’t know the full prowess of a guitar until you own one.
During sad rainy days, you can just grab your guitar and play your favorite tune, and boom! Mood boosted. In scenarios where boredom lingers all over the room, you can cheer everyone up by
aggressively strumming your guitar to a hit track; sooner or later, you will see everyone vibing with you.
But then, the fun stops when you begin to realize that you don’t want to play the guitar. So, the real answer on whether you should get a guitar depends on how passionate and determined you are to
learn and play one.
First Things First
Okay, let’s say you’re all pumped up and finally committed to entering the guitar game. Now, what’s the next step? Should you learn how to play the guitar first before buying one or should you get a
guitar first, then learn? The honest answer is, it really depends.
Some guitar professionals recommend that if you’re still hesitant about whether playing the guitar suits you, it is often better to borrow a guitar first. Ask a friend if he has a spare guitar that you can adopt for some time. And if he says yes, go test it out.
As you go through the process of learning how to play the guitar, your interest in it will reveal itself. If you eagerly wake up every day to practice, then the guitar is the instrument for you. On the other
hand, when dust forms on your guitar for being untouched for a week or so, it is best to stop for a while, and then try it out again once the burst of enthusiasm resurfaces.
By testing your friend’s guitar, you could save loads of money compared to buying an instrument that will end up sitting in one corner of your room as a display piece.
Another benefit of trying out others’ guitars first is knowing the hits and misses on their guitar that you want improved or retained once you choose your own personal guitar.
Now, if you don’t have the luxury of a friend who has a guitar that you can try out first, then you have no choice but to take the route of buying your own guitar. Even an inexpensive one will do.
When you have the power to choose what guitar to buy, you’ll choose the one which ticks the most checkboxes on your mind.
However, the biggest risk of buying a guitar straightaway is losing interest along the way. So before buying your own, think about it carefully; if you really like it and are truly passionate about
owning and learning the guitar, then go get one.
Getting Your Very First Guitar
Now, you are about to buy your own guitar (Yay!). With a wide array of options in the market, choosing which one to buy can be overwhelming. Guitars are built differently depending on how they would
sound, so you should first decide what type of music you want to play.
Two great options for a first guitar are either an acoustic or an electric guitar. If you’re leaning toward rock, metal, or jazz, an electric guitar is the one for you. Electrics are among the easiest guitars for any beginner to play; however, they require additional equipment such as an amplifier and a connector cable to reach their full potential. More equipment equals more money, so keep this in mind if you’re planning to buy an electric.
Acoustics, on the other hand, don’t require any additional equipment to produce rich, quality sound. Go for acoustics if you plan on playing loads of folk, country, and R&B, but its wider neck and
thicker strings make it a more challenging guitar choice for beginners compared to electrics.
After deciding what type of guitar to purchase, there are still loads of makes and models to choose from. The first aspect to look at is playability. Find the guitar size that fits you. A guitar that
is too big or too small for you can affect your learning flow and performance. Usually for oldies like you—assuming you’re a fully-grown adult—a full-size guitar (40”) is oftentimes the best option!
The color, finish, and other personal preferences are left for you to decide.
Here’s a helpful video on this topic:
Don’t Make These Mistakes!
Buying a guitar is not as simple as it seems. Guitar newbies make common mistakes that cost them more money than they should spend. To save you from experiencing this frustration, here are some things that you should watch out for.
1. Inspect the guitar you want for physical deformities or factory defects. Check if the neck of the guitar is straight; a bent guitar neck would make playing more difficult. If your guitar has knobs, buttons, or plugs, make sure that all of them are working. Also, check that the machine heads are well made. Due to the way some of these are built, many budget guitars have problems staying in tune, so let the store owner tune the guitar for you and play a few chords to check.
2. Most beginners get blinded by big brands and neglect products from small companies. However, big companies sacrifice a lot of quality just to get that low price tag attached to their guitars. In reality, big and small companies are on the same playing field in the budget-guitar competition, so it is highly recommended to test the guitars you have your eye on regardless of their brand.
3. Resist the urge to buy pro-level guitars. As a beginner with limited music knowledge, you do not need most of the features they offer, and so you would not get most of the value when buying one.
And often, these guitars can overwhelm novice players. Start simple, and then transition when you are ready!
Have a Happy Musical Journey: Final Words
Purchasing a guitar is not an easy-peasy decision; you need to consider what type of guitar you want, your preferences, and of course your budget.
Musical instruments are investments, and so, you might see the need to spend a generous amount to get yourself the best guitar you can possibly afford. This certainly calls for careful
decision-making as you don’t want to end up regretting your purchase.
It’s time to evaluate if you are truly ready to commit to owning a guitar that will be with you on your musical journey. If you do figure out the answer, the next time you enter that guitar outlet,
you’ll be leaving with a guitar bag on your shoulder.
Can a lefty play a right-handed guitar? It’s possible for a lefty to play a right-handed guitar, but it will take some effort to get accustomed to the instrument built for the right-handed players.
One workaround is to rearrange the strings in reverse. Or better, just go get a left-handed guitar!
Okay, let’s play a short game. Swiftly think of these five iconic guitarists:
Jimi Hendrix, Tony Iommi, Paul McCartney, Kurt Cobain, and Courtney Barnett.
What’s common among them? Well, aside from being part of the most talented guitarists of all time, all of these legends are left-handed.
As seen, despite the adversities in traversing a career designed for right-handers, left-handed musicians were still able to make a name for themselves.
In this article, we’ll be hovering over the day-to-day struggles of being a left-handed guitarist, their chances in playing right-handed guitar, and the conveniences of a left-handed guitar.
To the Left, To the Left
On a scale of 1 to 100, lefties assessed themselves as more artistically adept in a 2019 study of more than 20,000 participants. This might be attributed to the fact that left-handed people have
to acclimatize to a right-handed milieu on a regular basis.
We can’t dispute, though, that left-handed artists have it tough. When a lefty has no other choice but to flip the strings on a conventional, run-of-the-mill guitar and play it that way, you (right-handers) would soon realize that you had it better!
“Why do lefties feel the need to adjust in the first place, given that left-handed instruments exist?” a typical right-handed musician would wonder.
Well, it all boils down to some key reasons: left-handed guitars are rare and expensive, there are fewer teachers who are experienced in playing left-handed instruments, and, of course, personal
preference and aesthetics.
So what are some common day-to-day struggles for lefty guitarists?
Ergonomics. Let’s pretend our left-handed musician is forced to play a right-handed guitar. This option has several faults.
Because of the manner in which the left arm rubs across the knobs, they’ll notice the tone and volume start to shift as soon as they begin playing. Furthermore, their left elbow would clash
with the output jack cable, straining the soldering.
Some string intonation issues may also arise due to the counter-conventionally slanted bridge. Finally, because all six tuning pegs are on the opposing side of the headstock, they would be difficult
to access.
Chord Charts. Because what you see on a paper chord chart is essentially what you see when you glance at your guitar’s fretboard and your chord hand, the ordinary right-handed individual has
minimal difficulty reading chord charts.
For lefties, things become a lot worse. Consider the low E string: they see the low E string on the chart where the high E string is based on their fretboard’s layout. Are there any left-handed chord
charts? Yes, but they’re exceedingly difficult to come by. Later on, left-handed artists would develop a second nature of consciously swapping up strings to adjust.
Playing Live. If a musician plays a left-handed guitar and is arranging a live show, they might want to consider standing on the stage right (with the headstock facing center stage).
If they fail to do this, their guitar’s headstock is more likely to clash with the headstock of another player’s guitar. Mind you, headstocks are exceedingly delicate, and breaking one would be a
pain in your wallet.
Furthermore, few stage technicians are left-handed or have experience working with left-handed instruments. Unless you have someone who is well-versed with how lefties prefer to play, the most
typical practice is to learn how to set it up on your own.
The Lefty with a Right-Handed Guitar
As mentioned earlier (and as stressed in the ergonomics section), playing a right-handed guitar can be messy for novice left-handed guitarists.
But is it possible?
Of course, with some extra elbow grease and patience.
To reiterate from earlier, there’s actual logic behind why many lefties opt to play a right-handed guitar. Left-handed guitars are simply difficult to come by and cost some extra bucks. Not to
mention, you’ll never know when an opportunity demands that you play someone else’s guitar (which will most probably be right-handed).
The guitar is a delicate instrument that demands finesse and accuracy. In order to play the guitar properly in the first place, you will be required to practice with both of your hands. Individuals
who have spent their entire life doing things with one hand may find it difficult to do so with the other.
Some left-handed musicians play right-handed, but with the neck turned to the right so that the lowest string is closest to the ground. They are taught the chord forms backward.
Here’s a tip from an actual left-handed guitarist:
“To avoid spending money on a lefty guitar, you could try him out on a right-handed guitar restrung with the strings the other way around.”
If it proves to be more than a one-time trial, you’ll want to have your right-handed guitar established for that arrangement. However, when it becomes an annoyance, it’s time to get a left-handed
guitar. It sure is a lot easier than trying to play the guitar upside down like Jimi Hendrix did!
Left-Handed Guitar: The Lefty’s Best Bet
Okay, perhaps our trick did not work as you hoped. What now? It may be time for you to resort to the more logical (and more expensive) option: get a left-handed guitar.
So, what’s so special about them?
As the name suggests, left-handed guitars are designed specifically for left-handed players.
Basically, the thickest string on this guitar (your Low E in standard tuning) is the one farthest to the right. Conversely, that Low E (again, if you’re in standard tuning) will be the initial string
on the left on a conventional, right-handed guitar. Moreover, the left-handed guitar is constructed so that a lefty would be able to utilize their right hand to grasp the instrument’s neck and their
left one as a strumming hand.
Similarly, components such as tone knobs, switches, vibrato bars, and volume knobs are reversed on a left-handed guitar, giving lefties the opportunity to add that extra kick during their jamming sessions.
Check out this video of a guitar teacher discussing the challenges of being a lefty guitarist, her tips for the lefties, and why getting a left-handed guitar is worth it:
A Lefty in a Right-Handed Dominated World: Final Words
According to a study by experts, only 10 percent of the world’s seven billion (plus) population is left-handed. It must always be a hustle to adjust for the overwhelming majority of right-handers.
It’s also no secret that the world of music is not necessarily designed for left-handers. But this won’t stop you, will it?
In this article, we gave you a brief rundown on the typical struggles of lefty guitarists, from the often-overlooked ergonomics to the additional stress during live sessions. We’ve also talked about some
tips for playing a run-of-the-mill right-handed guitar as a lefty.
Lastly, we delved into understanding what makes a left-handed guitar a lefty’s best bet.
As we always say, the choice would always be up to you—your playing style, your time, and of course, your resources.
Who are some Martin D-35 famous players? Despite its reputation as a singer-songwriter guitar, the reliable Martin D-35 has been in the hands of music icons like Elvis Presley, Johnny Cash, Seth
Avett, and Jim Croce.
It’s the 1970s, and the stock market in the United States is in shambles. It has dropped about half in the last 20 months, and for the first time in nearly a decade, few people wanted to invest in stocks.
As a result, you’ve undoubtedly heard that most guitar-producing corporations had a rough decade in the 70s.
Having a bad reputation throughout those years placed Martin under a lot of stress, and their valuations are perhaps as weak as they’ll ever be. This isn’t to say that 1970s Martins should be
shunned. Indeed, they may be some of the finest guitars available, and they can be purchased at affordable costs.
Fast-forward to today, C.F. Martin & Company‘s lengthy tradition of collaborative artist partnerships has resulted in a number of legendary trademark guitars, including collaborations with icons like
John Mayer, Eric Clapton, and Stephen Stills.
So, what is it about the Martin D-35 that makes it so unique? Who are some of its most well-known users?
Martin D-35 Profile and History
“Well, it’s definitely a brave new world! That said, we recognize that this is the reality we’re in today and this is what we have to do and how we have to do it.” These were the words of the vice
president of product of Martin, Fred Greene, when asked about the challenges brought about by the new climate amid the pandemic.
Indeed, just like all success stories, Martin has been through a rollercoaster ride.
C.F. Martin & Co., a family-owned firm based in Nazareth, Pennsylvania, has been among the world’s finest acoustic guitar manufacturers for roughly two hundred years. Martin guitars have retained a dominant
presence on stages all around the world since the business was started in 1833 by German immigrant Christian Friedrich Martin. All this was made possible by how Martin’s guitars have persisted
by adjusting to changing situations along the journey.
Continuing, Martin guitars have had several moments of increased popularity when they have gotten greater attention. Despite this, Martin has stayed faithful to its roots, never really pursuing the
electric guitar industry or abandoning its goal of creating the perfect acoustic guitar (that many of us dream of having today).
The 1966 Martin D-35 we know today was born from this mentality.
Since the early 1930s, the Martin D-35 was the first new Dreadnought model to be introduced. The D-35 was introduced in late 1965 and was designed with a three-piece back to enable C.F. Martin
Guitars to employ rosewood sets that are just too undersized to be utilized in the manufacture of a Dreadnought. In addition, by the mid-1960s, Brazilian rosewood had been in low supply, and it would
have been a pity to waste such fine and beautiful wood due to it having an incorrect size—talk about sustainability!
Grover Rotomatic tuners, multi-layered side purflings, a bonded ebony fingerboard, and a volute-less neck were all included in the revised D-35 model. The top’s bracing was drastically decreased under
the hood, making the D-35 the most bass-end sensitive Dreadnought, considering that scalloped bracing had been phased out back in 1944.
Here’s a sound test video of Martin D-35:
Notable Martin D-35 Players
As mentioned earlier, the long history of C.F. Martin & Company’s creative artist relationships has resulted in a number of iconic signature guitars. As an outcome, seeing prominent names beside the
Martin emblem was never unusual.
Now that we’ve delved into the profile and history of Martin D-35, let’s set our eyes on some of the most famous names that yielded the masterpiece.
● Elvis Presley
Does the king of rock and roll really need any introduction?
He became one of the most known artists of the twentieth century due to his energizing renditions of songs and overtly sexual performance delivery, as well as a strong blend of inspirations spanning
color boundaries.
His fame is undeniable, boasting more than 20 number-one albums on the Billboard charts, more than 30 chart-topping songs, and an astounding 600 million records sold worldwide (Guinness World Records
recognizes him as the best-selling solo music artist of all time).
Presley’s D-35 may be one of his most well-known instruments, despite being acquired late in his life, due to appearing in numerous promotional images from his farewell tour. However, this guitar’s
moment in the limelight was brief, as it was flung across the stage by an enraged Presley on Valentine’s Day 1977.
● Johnny Cash
Of course, how can we forget the Man in Black?
Cash’s songs often featured themes of sadness, spiritual difficulty, and salvation, although this was not the case for his rise to fame. Over the course of his career, Cash recorded more than 50 solo
albums, navigating the tough shift from a typical country performer to a world-renowned icon.
He also boasts one of the most well-known live albums in existence, and he is among the music artists with the most records sold of all time, with an estimated 90 million albums sold globally.
Cash has always been a devotee of Martin instruments and has played a wide range of them during his nearly 50-year tenure. Martin helped him modify his black D-35, which has a polished black finish
on the neck and its body. Undoubtedly, Cash’s number one guitar for the rest of his career would be this black D-35.
● Seth Avett
He was noted for fusing classic Bluegrass and Country stylings with tougher Punk and Rock influences as one of the main singers and founding members of the American folk-rock band The Avett Brothers.
For his entire public career, Seth Avett has played a D-35, beginning with an off-the-shelf regular D-35 and subsequently cooperating with Martin on his own unique D-35 Seth Avett.
Instead of Sitka spruce, he uses a High Altitude Swiss top and Adirondack bracing on his trademark model, which deviates from the typical D-35 formula. The guitar has a sharper attack than the normal
model, which suits Avett’s aggressive strumming and plucking technique nicely.
● Jim Croce
He released five solo albums and various songs from 1966 to 1973. Croce worked a variety of odd jobs to pay his expenses throughout this time, but he continued to compose, record, and play concerts –
eventually gaining his first chart-topper Bad, Bad Leroy Brown.
Croce began creating songs on 12-string guitars, but in 1970 he switched to a Martin D-35, which a buddy had built with a thinner neck – helping to shape the clear, delicate fingerpicking sound he
would become known for.
In Conclusion
Everyone is unique. People have idiosyncrasies, and when it comes to playing a musical instrument, there will undoubtedly be some oddities. Despite this diversity, the Martin D-35 is incredibly versatile.
From your average Joe to even the King of Rock and Roll, himself, this guitar had certainly found its place. This can be greatly attributed to Martin’s roster of partnerships and collaborations.
In the end, it is inspiring to think that such an achievable masterpiece was once used by the people you adore.
Looked into one FFT implementation
While procrastinating, I had a look at the FFT implementation in Surge. I doubt anyone will use this implementation:
• It’s doing a real-to-complex Fourier transform but uses the complex-to-complex implementation vDSP_fft_zip, and to use this function it builds a complex array with zero imaginary parts. It just wastes the
memory allocated in line 27 for the array imaginary. There’s a real-to-complex implementation of 1D FFT, vDSP_fft_zrip, which could be used to avoid allocating an array of zeros for the imaginary part. TODO:
check if the complex implementation takes more time for running on the same buffer length.
• Usually the FFT is run multiple times for several chunks of data of the same length. For that usage, the FFT in vDSP (and all other efficient libraries for the same purpose, e.g. fftw) is split into two parts:
a setup of the FFT, which has to be done once, and consecutive runs of the FFT for chunks of data of the same length. But here the setup is calculated inside the function and isn’t reused.
• vDSP isn’t able to run an FFT on a buffer of arbitrary length, as it uses an FFT algorithm operating only on buffers whose length is a power of 2. Here the length is calculated as let length =
vDSP_Length(floor(log2(Float(input.count)))) (line 30), which means the FFT will run on the whole buffer only if its length is a power of 2; otherwise the FFT will be run on only part of the buffer. Let’s say the FFT
function was called with an array of 1000 floats. The FFT will then be run on only 512 of them, leaving the remaining 488 items as they are.
• Line 40 calculates squares of magnitudes, but runs on all input.count items of splitComplex. In our example, 512 items contain the calculated FFT and the rest the raw signal. After this calculation,
512 items will contain squares of magnitudes of the FFT and 488 will contain squares of the raw signal.
• Line 46 first calculates the square root, going from squares of magnitudes to plain magnitudes, then multiplies the magnitudes by 2.0 divided by the length of the input signal. Whatever the latter means, again, the result of the calculation is mixed with the input signal.
Unveiling Python's One-Line Magic: An Effortless Guide to Squaring Numbers
Squaring in Python involves raising a number to the power of 2. It offers various methods, including the built-in pow() function for general exponentiation, the ** operator as a concise shorthand, the
math.pow() function for floating-point exponentiation, and the numpy.square() function for element-wise squaring of arrays. Understanding exponentiation concepts is crucial, as it involves multiplying a
base by itself a specified number of times. Python also provides approaches such as loops and recursion for handling repeated squaring or more advanced mathematical calculations.
Squaring in Python: A Story of Exponents and Algorithms
When it comes to wielding the power of numbers in Python, mastering the art of squaring is a crucial skill that unlocks a world of computational possibilities. But what exactly is squaring, and why
is it so indispensable in this programming realm? Simply put, squaring involves raising a number to the power of two, a mathematical operation that results in multiplying the number by itself.
In the realm of Python, a versatile programming language, several methods await you to embark on this squaring adventure. Let’s dive into the depths of each approach, exploring their strengths and
nuances, so you can choose the most suitable tool for your programming quests.
Unlocking the Power of Squaring in Python: Exploring the pow() Function
In the realm of programming, squaring numbers is a fundamental operation encountered in various scenarios. Python, a versatile and widely adopted language, provides several methods for performing
this mathematical task. Among these methods, the pow() function stands out as the go-to choice for squaring in Python.
The built-in pow() function is designed to raise a number to a given power. Its syntax is straightforward: pow(base, exponent). The base represents the
number being squared, while the exponent indicates the power to which the base is raised.
To illustrate its usage, let’s consider a simple example. Suppose we want to square the number 5. We can use the pow() function as follows:
>>> pow(5, 2)
25
In this example, 5 is the base, and 2 is the exponent. The output, 25, is the result of 5 raised to the power of 2.
The pow() function can also handle more complex scenarios involving negative exponents or floating-point numbers. For instance, the following code snippet squares the number 2.5 and raises -3 to the
power of 2:
>>> pow(2.5, 2)
6.25
>>> pow(-3, 2)
9
As evident from these examples, the pow() function provides a convenient and efficient way to perform squaring operations in Python. Its flexibility in handling different types of inputs makes it a
versatile tool for mathematical calculations.
Squaring in Python: Unleashing the Power of the ** Operator
In the realm of Python programming, squaring numbers is a fundamental operation that plays a crucial role in various mathematical and computational tasks. Whether you’re working with integer
exponents or dealing with complex mathematical equations, Python offers a versatile arsenal of methods to help you square numbers with ease.
Among these methods, the ** operator stands out as a concise and efficient way to raise any number to a power. The ** operator is essentially a shortcut for the pow() function, making it an
indispensable tool for squaring. It has a straightforward syntax: base ** exponent.
For instance, if you wish to square the number 5, you can simply type 5 ** 2, which will yield the result 25. This handy operator not only streamlines your code but also enhances its readability,
allowing you to express mathematical operations in a clear and succinct manner.
number = 7
squared_number = number ** 2
print(squared_number) # Output: 49
In this example, we assign the value 7 to the variable number and then square it using the ** operator. The result, stored in the squared_number variable, is printed to the console, displaying the
squared value of 7, which is 49.
By harnessing the power of the ** operator, you can effortlessly square integers in Python, saving time and effort while maintaining the clarity of your code. So, the next time you need to square a
number in Python, reach for the ** operator—a true gem for simplifying mathematical calculations and elevating your coding prowess.
Squaring in Python: Understanding the math.pow() Function
When working with numbers in Python, performing calculations like squaring is essential. The math.pow() function plays a crucial role in this context, especially when dealing with floating-point
numbers and complex exponents.
The Scope of math.pow()
math.pow(base, exponent)
The math.pow() function takes two arguments: base and exponent. It calculates and returns the result of raising base to the power of exponent. Unlike the built-in pow(), math.pow() converts both
arguments to float and always returns a float; it also lacks the optional third modulus argument that the built-in pow() accepts.
Syntax and Examples
The syntax of math.pow() is straightforward:
math.pow(2.5, 3) # Returns 15.625 (2.5 cubed)
math.pow(10, -2) # Returns 0.01 (10 raised to the power of -2)
Floating-Point Numbers
One advantage of math.pow() is its ability to handle floating-point numbers. For example:
math.pow(1.2, 2.5) # Returns approximately 1.5774
Complex Exponents
One thing math.pow() does not support is complex numbers: passing a complex argument raises a TypeError, because the function converts its inputs to float. To raise a number to a complex exponent of
the form a + bi, use the built-in ** operator (or the built-in pow()), which returns a complex number:
2 ** (1 + 2j) # Returns approximately (0.3669+1.9661j)
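A quick check of the behavior just described; the numeric value is approximate:

```python
import math

# math.pow() converts its arguments to float, so complex input fails:
try:
    math.pow(2, 1 + 2j)
    raised = False
except TypeError:
    raised = True
assert raised

# The built-in ** operator does accept complex exponents:
z = 2 ** (1 + 2j)
assert isinstance(z, complex)
assert abs(z - (0.36691 + 1.96606j)) < 1e-3
```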
The math.pow() function finds applications in various areas, including:
• Scientific calculations: Squaring numbers is a fundamental operation in many scientific computations, such as physics and engineering.
• Geometric transformations: Scaling and rotating objects in 2D or 3D space often involve squaring coordinates.
• Mathematical modeling: Squaring can be used to model exponential growth or decay in real-world phenomena.
The math.pow() function in Python is a versatile tool for performing calculations involving squaring, both with integer and floating-point numbers, as well as complex exponents. Its intuitive syntax
and wide range of applications make it a must-know for any Python programmer working with numerical data.
Elevate Your Python Squaring Skills with the Numpy.square() Function
In the realm of Python’s numerical computations, squaring numbers holds immense importance. Beyond the basic approaches using the built-in pow() function and the ** operator, NumPy offers a dedicated
function, square(), specifically tailored for squaring operations on multi-dimensional arrays.
Understanding Numpy.square()
Imagine an array brimming with numbers, and you desire to square each element effortlessly. NumPy’s square() function emerges as the perfect ally, offering an element-wise squaring operation that
transforms every value within the array. Its syntax is a testament to simplicity:
np.square(array)
Unleashing the Power of Square()
To witness the true prowess of square(), let’s embark on a few examples:
import numpy as np
# Squaring a 1D array
array1 = np.array([1, 2, 3, 4, 5])
squared_array1 = np.square(array1)
print(squared_array1) # Output: [ 1 4 9 16 25]
# Squaring a 2D array
array2 = np.array([[1, 2], [3, 4]])
squared_array2 = np.square(array2)
print(squared_array2) # Output: [[ 1 4]
# [ 9 16]]
Embracing the Versatility of Square()
NumPy’s square() function transcends the boundaries of basic squaring operations, extending its capabilities to cater to advanced scenarios.
• Element-wise Squaring: As its name suggests, square() performs element-by-element squaring, ensuring that each value in an array is squared independently.
• Optimized for Arrays: Unlike its counterparts, square() is tailor-made to efficiently handle arrays, effortlessly squaring vast datasets without breaking a sweat.
• Multi-Dimensional Support: Its prowess extends beyond 1D arrays, encompassing multi-dimensional arrays with equal dexterity, making it a versatile tool for complex mathematical operations.
Understanding Exponentiation Concepts: The Essence of Squaring in Mathematics
At the heart of squaring in Python lies the fundamental concept of exponentiation. This mathematical operation involves raising a base number to a specified exponent, known as the power.
Conceptualizing Exponentiation
Imagine a base number x and an exponent y. The result of x raised to the power of y, denoted as x^y, is simply x multiplied by itself y times. For instance, 2^3 translates to 2 multiplied by
itself three times, resulting in the value 8.
The Base, Exponent, and Squaring Connection
In the context of squaring, the base number is the same value that is being squared. The exponent, in this case, is always 2. Thus, squaring can be viewed as a special case of exponentiation where
the exponent is fixed at 2.
Examples to Reinforce Understanding
• 5^2 = 5 × 5 = 25
• (−3)^2 = (−3) × (−3) = 9
Implications for Squaring in Python
This understanding of exponentiation provides a solid foundation for leveraging Python’s various methods for squaring. Whether using the pow() function, the ** operator, the math.pow() function, or
the numpy.square() function, it’s essential to recognize the core mathematical concept of exponentiation that underpins all these approaches.
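To tie this back to the methods covered earlier, here is a short sanity check (assuming NumPy is installed) showing that all four approaches agree on the value, even though their return types differ:

```python
import math
import numpy as np

n = 7
assert n ** 2 == 49          # ** operator (returns int)
assert pow(n, 2) == 49       # built-in pow() (returns int)
assert math.pow(n, 2) == 49  # math.pow() (returns float 49.0)
assert np.square(n) == 49    # numpy.square() (returns a NumPy integer)
```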
Squaring in Python: A Comprehensive Guide
Understanding Squaring
In Python, squaring refers to raising a number to the power of 2. This operation is essential in various mathematical and programming applications. From calculating areas and volumes to implementing
algorithms, squaring plays a crucial role.
Methods for Squaring in Python
Python offers several ways to square numbers, each with its own advantages and use cases. Let’s explore these methods:
1. Using the pow() Function:
The pow() function is the most straightforward way to square a number. Its syntax is pow(base, exponent), where base is the number to be squared, and exponent is the power to which the base is raised
(2 in the case of squaring).
result = pow(5, 2) # result will be 25
2. Using the ** Operator:
The ** operator provides a concise alternative to pow(). It has the same syntax as pow() but serves as a shorthand notation.
result = 5 ** 2 # result will be 25
3. Using the math.pow() Function:
The math.pow() function from the math module is a versatile option that supports both integer and floating-point exponents. This is especially useful when dealing with non-integer powers.
import math
result = math.pow(2.5, 2.2) # result will be approximately 7.507
4. Using the numpy.square() Function (for NumPy Arrays):
NumPy’s square() function performs element-wise squaring on NumPy arrays. This is particularly useful for vectorized operations and array computations.
import numpy as np
array = np.array([1, 3, 5])
squared_array = np.square(array) # [1, 9, 25]
Exponentiation Concepts
To fully grasp squaring, it’s essential to understand the underlying concept of exponentiation. Exponentiation is the repeated multiplication of a number by itself a specified number of times. In the
case of squaring, the exponent is 2, indicating that the base is multiplied by itself twice.
The mathematical formula for exponentiation is:
x^y = x * x * ... * x (y times)
In the example of squaring 5, the formula becomes:
5^2 = 5 * 5 = 25
Alternative Methods for Looping and Recursion
While the methods discussed above are commonly used for squaring, alternative approaches like for loops, while loops, and recursion can also be employed. These methods offer flexibility in scenarios
involving repeated squaring or more complex mathematical calculations.
For example, a for loop can be used to implement exponentiation iteratively:
def square_using_loop(base):
    # Multiply base into the result twice (an exponent of 2)
    result = 1
    for _ in range(2):
        result *= base
    return result
Recursion provides another way to calculate powers, although it’s less efficient for large exponents. A correct recursive version must shrink the exponent on each call, with an exponent of 0 as the
base case:
def power_using_recursion(base, exponent):
    if exponent == 0:
        return 1
    return base * power_using_recursion(base, exponent - 1)
Squaring is then simply the exponent-2 case: power_using_recursion(5, 2) returns 25.
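The “repeated squaring” mentioned above can also be taken literally: exponentiation by squaring is a standard algorithm that computes base**exponent in O(log exponent) multiplications. A minimal sketch for non-negative integer exponents (power_by_squaring is an illustrative name, not from the article):

```python
def power_by_squaring(base, exponent):
    # Invariant: result * base**exponent equals the original power.
    result = 1
    while exponent > 0:
        if exponent & 1:   # odd exponent: fold one factor into result
            result *= base
        base *= base       # square the base
        exponent >>= 1     # halve the exponent
    return result

assert power_by_squaring(5, 2) == 25
assert power_by_squaring(3, 10) == 3 ** 10  # 59049
```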
Squaring in Python is a versatile operation with multiple methods available. The choice of method depends on the specific requirements of the task. Whether for mathematical calculations, algorithm
implementations, or working with NumPy arrays, understanding the concepts of exponentiation and the various squaring methods empowers developers to handle these operations effectively.
Predict the utility costs if 750 chairs are produced. The activity is considered the independent variable since it is the cause of the variation in costs. Predict the overtime wages if 6,500 invoices
are processed. Describe and Identify the Three Major Components of Product Costs under Job Order Costing, 20. Predict the utility costs if 600 tables are produced. • lsqnonlin allows limits on the
parameters, while nlinfit does not. Compute the Cost of a Job Using Job Order Costing, 23. 2. 12,500. Monthly Maintenance Cost and Activity Detail for Regent Airlines. What would
you estimate packaging costs to be if Gadell Farms shipped 10 tons in a single month? Distinguish between Merchandising, Manufacturing, and Service Organizations, 9. Direct Labor per Hour = $11.00. A
Generalized Classical Method of Linear Estimation of Coefficients in a Structural Equation. In other words, costs rise in direct proportion to activity. (attribution: Copyright Rice University,
OpenStax, under CC BY-NC-SA 4.0 license), Scatter Graph of Snow Removal Costs for Regent Airlines. Other than regression, it … Draw a straight line through the scatter graph. One of the simplest ways
to analyze costs is to use the high-low method, a technique for separating the fixed and variable cost components of mixed costs. Describe the Balanced Scorecard and Explain How It Is Used, 74. Curve
Estimation Models. Again, J&L must be careful to try not to predict costs outside of the relevant range without adjusting the corresponding total cost components. Then we can substitute the value in
the above equation. Explain and Compute Equivalent Units and Total Cost of Production in a Subsequent Processing Stage, 31. When put into practice, the managers at Regent Airlines can now predict
their total costs at any level of activity, as shown in (Figure). What is the variable cost per copy if Markson uses the high-low method to analyze costs? Using a scatter graph to determine if this
linear relationship exists is an essential first step in cost behavior analysis. Estimation and inference in some common linear models: Panel Data Models. Using Excel, create a scatter graph of the
cost data and explain the relationship between the number of invoices processed and overtime wage expense. Describe the Types of Responsibility Centers, 54. Draw a straight line through the scatter
graph. estimation equation is obtained from minimizing -divergence (density power divergence), U-divergence, -divergence and so on [3]–[6]. Although managers frequently use this method, it is not the
most accurate approach to predicting future costs because it is based on only two pieces of cost data: the highest and the lowest levels of activity. Distinguish between Job Order Costing and Process
Costing, 19. Using the high-low method, express the company's overtime wages as an equation where. They have collected this shipping cost data: Principles of Accounting, Volume 2: Managerial
Accounting by OSCRiceUniversity is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted. No one person's line and cost
estimates would necessarily be right or wrong compared to another; they would just be different. (Figure)Able Transport operates a tour bus that they lease with terms that involve a fixed fee each
month plus a charge for each mile driven. • I prefer nlinfit because the statistics on the parameter and the predicted value are obtained more directly. Linear. Waymaker Furniture has collected cost
information from its production process and now wants to predict costs for various levels of activity. Now that we have isolated both the fixed and the variable components, we can express Regent
Airlines' cost of maintenance using the total cost equation, Y = total fixed cost + (variable cost per unit × x), where Y is total cost and x is flight hours. Understanding the various labels used for costs is the first step toward using costs to
evaluate business decisions. Simultaneous equations models are a type of statistical model in which the dependent variables are functions of other dependent variables, rather than just independent
variables. Least-squares regression minimizes the errors of trying to fit a line between the data points and thus fits the line more closely to all the data points. What, if any, are the limitations
of this approach to cost estimation? If Able Transport uses the high-low method to analyze costs, how much would Able Transport pay in December, if they drove 6,000 miles? The least-squares
regression method was used and the analysis resulted in this cost equation: What would you estimate shipping costs to be if Carolina Yachts shipped 10 yachts in a single month? Prepare Journal
Entries for a Job Order Cost System, 25. We always choose the highest and lowest activity and the costs that correspond with those levels of activity, even if they are not the highest and lowest
costs. For example, if they must hire a second supervisor in order to produce 12,000 units, they must go back and adjust the total fixed costs used in the equation. Estimate a Variable and Fixed Cost
Equation and Predict Future Costs, 12. They have gathered the information in (Figure). For simple linear regression, the least squares estimates of the model parameters β 0 and β 1 are denoted b0 and
b1. Information gathered from March is presented in (Figure). Y = $3,200 + ($100 × 93) = $12,500. Registration is necessary to enjoy
the services we supply to members only (including online full content of Econometrica from 1933 to date, e-mail alert service, access to the Members' Directory) . This means some of the explanatory
variables are jointly determined with the dependent variable, which in economics usually is the consequence of some underlying equilibrium mechanism. Cost equations can use past data to determine
patterns of past costs that can then project future costs, or they can use estimated or expected future data to estimate future costs. These equations have the same structure as the classical Riccati
equation. Regent's scatter graph shows a positive relationship between flight hours and maintenance costs because, as flight hours increase, maintenance costs also increase. … = $8,750; Y = $… This is referred to as a positive linear relationship or a linear cost behavior. (Figure)The cost data for BC Billing Solutions for the year 2020 is as follows: (Figure)Able Transport operates a
tour bus that they lease with terms that involve a fixed fee each month plus a charge for each mile driven. Compare and Contrast Variable and Absorption Costing, 39. Will all cost and activity
relationships be linear? Describe the Role of the Institute of Management Accountants and the Use of Ethical Standards, 6. It is pointed out that the new equations can be solved via the solution
algorithms for the classical Riccati equation using oth… For instance, one point will represent 21,000 hours and $84,000 in costs. Make a donation for the advancement of economics: https://doi.org/
0012-9682(195701)25:1<77:AGCMOL>2.0.CO;2-8 p. (Figure)A scatter graph is used to test the assumption that the relationship between cost and activity level is ________. For a given value of c the
linear regression between y (= (R/S) c) and x (=S) allows the estimation of the parameters α and k. One can try several values of c to verify which one will have a better adjustment with the line y
against x; for example, values of c between -1 and 1. Building Blocks of Managerial Accounting, 8. ESTIMATION OF LINEAR EQUATION SYSTEMS WITH … Let’s take a look at an example. We are now able to
estimate the variable costs by dividing the difference between the costs of the high and the low periods by the change in activity using this formula: Having determined that the variable cost per
flight-hour is $1.96, we can now determine the amount of fixed costs. Procedure: 1. Using the high-low method, create the cost formula for Carolina Yachts' shipping costs. (Figure)Explain how a
scatter graph is used to identify and measure cost behavior. Once complete, these yachts must be shipped to the dealership. Suppose if we want to know the approximate y value for the variable x = 64.
Ordinary Least Squares (OLS) Estimation of the Simple CLRM. Let's take a more in-depth look at the cost equation by examining the costs incurred by Eagle Electronics in the manufacture of home
security systems, as shown in (Figure). What factors other than number of yachts shipped do you think could affect Carolina Yachts' shipping expense? Special syntaxes after multiple-equation
estimation Constrained coefficients Multiple testing Introductory examples test performs F or χ² tests of linear restrictions applied to the most recently fit model (for example, regress or svy:
regress in the linear regression case; logit, stcox, svy: logit, ::: Activity-Based, Variable, and Absorption Costing, 33. The Nature of the Estimation Problem. Compare and Contrast Traditional and
Activity-Based Costing Systems, 37. regions, and the need for drought estimation studies to help minimize damage is increasing. A diagnostic tool that is used to verify this assumption is a scatter
graph. Identify and Apply Basic Cost Behavior Patterns, 10. Power Rule Constant Rule A Constant Times a Function Differentiating a Sum Product Rule Quotient Rule Chain Rule The following sections
provide examples of the application of each rule. of the formula for the Linear Least Square Regression Line is a classic optimization problem. Use the cost formula you obtained in part B. Here we
will demonstrate the scatter graph and the high-low methods (you will learn the regression analysis technique in advanced managerial accounting courses). Able Transport drove the tour bus 4,000 miles
and paid a total of $1,250 in March. In order to make business decisions, managers can utilize past cost data to predict future costs employing three methods: scatter graphs, the high-low method, and
least-squares regression analysis. This is the case for the managers at the Beach Inn, a small hotel on the coast of South Carolina. Using these estimates, an estimated regression equation is
constructed: ŷ … Apply the simple linear regression model for the data set faithful, and estimate the next eruption duration if the waiting time since the last eruption has been 80 minutes. When
creating the scatter graph, each point will represent a pair of activity and cost values. Comment on how accurately this is reflected by the scatter graph you constructed. Because we confirmed that
the relationship between cost and activity at Regent exhibits linear cost behavior on the scatter graph, this equation allows managers at Regent Airlines to conclude that for every one unit increase
in activity, there will be a corresponding rise in variable cost of $1.96. In this section, we propose a system of estimating equations based on the martingale representation to simultaneously
estimate H, β, and f.The main motivation is to generalize the estimating equation method of Chen et al. Scatter graphs are used as a diagnostic tool to determine if the relationship between activity
and cost is a linear relationship. Using the high-low method, express the factory utility expenses as an equation where. parameter estimation • The main functions for parameter estimation are
nlinfit, lsqnonlin, and cftool (Graphic User Interface). 77-83. The three equations are computationally equivalent. Three estimation techniques that can be used include the scatter graph, the
high-low method, and regression analysis. By applying the cost equation, Eagle Electronics can predict its costs at any level of activity (x) as follows: Using this equation, Eagle Electronics can
now predict its total costs (Y) for any given level of activity (x), as in (Figure): When using this approach, Eagle Electronics must be certain that it is only predicting costs for its relevant
range. Derivative Rules. Explain. The series values are modeled as a linear function of time. $11,750. Monthly Total Cost Detail for Beach Inn. Evaluate and Determine Whether to
Make or Buy a Component, 59. Because the trend line is somewhat subjective, the scatter graph is often used as a preliminary tool to explore the possibility that the relationship between cost and
activity is generally a linear relationship. Maximum likelihood estimation or otherwise noted as MLE is a popular mechanism which is used to estimate the model parameters of a regression model.
School University of British Columbia; Course Title ECON MISC; Uploaded By teamaster2020. (attribution: Copyright Rice University, OpenStax, under CC BY-NC-SA 4.0 license), Y = $3,000 + ($… (Figure)Carolina Yachts builds custom yachts in its production factory in South Carolina. Richard Williams University of Notre Dame Department of Sociology ... while in
sociology the same problems have been dealt with using maximum likelihood estimation and structural equation modeling. Describe the Effects of Various Decisions on Performance Evaluation of
Responsibility Centers, 56. Predict the maintenance costs if 81,000 gallons of paint are produced. Discuss Examples of Major Sustainability Initiatives. In this article, we introduce a simple
two-stage estimation technique for estimation of non-linear associations between latent variables. No ANN based equation for estimation of VAT has been reported yet. Recall from the scatter graph
that costs are the dependent Y variable and activity is the independent X variable. Prepare Journal Entries for a Process Costing System, VI. ANN based estimation equations have been developed for
other medical interests [14,15], and can yield higher accuracy than the linear regression equation when applied to estimate maximal oxygen uptake in adolescents . Total Cost Estimation for Various
Production Levels. Suppose the snow removal costs are as listed in (Figure). Predict the utility costs if 900 chairs are produced. Total Fixed Cost = $12,000 + $15,000 = $27,000. What if, instead,
the cost of snow removal for the runways is plotted against flight hours? Actual costs can vary significantly from these estimates, especially when the high or low activity levels are not
representative of the usual level of activity within the business.
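The high-low procedure described above — divide the cost difference between the highest- and lowest-activity periods by the activity difference to get the variable cost per unit, then back out fixed cost — can be sketched as follows. The (21,000 hours, $84,000) point and the $1.96-per-flight-hour result appear in the text; the second data point is hypothetical, chosen only to reproduce that slope.

```python
# Minimal sketch of the high-low method for separating mixed costs.
def high_low(points):
    """points: list of (activity, total_cost). Returns (variable, fixed)."""
    low = min(points, key=lambda p: p[0])    # lowest ACTIVITY, not lowest cost
    high = max(points, key=lambda p: p[0])   # highest ACTIVITY
    variable = (high[1] - low[1]) / (high[0] - low[0])
    fixed = high[1] - variable * high[0]     # same result at either point
    return variable, fixed

v, f = high_low([(21_000, 84_000), (23_000, 87_920)])
# Total cost equation: Y = f + v * x, valid only within the relevant range.
print(f"Y = {f:,.0f} + {v:.2f}x")
```

As the passage notes, the resulting equation should only be used to predict costs within the relevant range of activity.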
Activity 3: Angles of elevation and depression
Understand the angle of elevation and the angle of depression, and what they are used for.
Estimated Time
40 minutes
Prerequisites/Instructions, prior preparations, if any
Know about the hypotenuse, short leg, long leg, right angle, and vertices of a right triangle, and how to measure sides and angles using a ruler and protractor.
Materials/ Resources needed
Digital : Click here to open the file.
Non-digital : Tape measure.
Process (How to do the activity)
Download this geogebra file from this link.
Activity : Look Up and Look Down
1. When you see an object above you (looks up), there is an angle of elevation between the horizontal and your line of sight to the object.
2. When you see an object below you (looks down), there is an angle of depression between the horizontal and your line of sight to the object.
3. Identify the segment that represents the line of sight.
4. Identify the angle that represents the angle of elevation.
5. Identify the angle that represents the angle of depression.
Evaluation at the end of the activity
1.In the given diagram of a ladder leaning against a wall, which angle represents the ladder’s angle of elevation?
Application level
In real-world situations, we often discuss angles of elevation and depression; they appear frequently in word problems. The opposite side in this case is usually the height of the observer or the height of the object; the adjacent side is usually the horizontal distance between the object and the observer.
Combining your skills with similar triangles, trigonometry and the Pythagorean Theorem, you are ready to tackle problems that are connected to more real world scenarios. The situations you will be
examining will be specifically related to right triangles, and you will be Solving for Sides or Solving for Angles by using trigonometric functions.
For example, with angles of elevation or depression, if two of the sides of the right triangle are known, then the angle is given as below:
tan θ = Opposite Side / Adjacent Side
θ = tan⁻¹(Opposite Side / Adjacent Side)
Problem 1: The angle of elevation of the top of a tower from a point on the ground, which is 30 m away from the foot of the tower, is 30°. Find the height of the tower.
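Using the tangent relation above (opposite = tower height, adjacent = 30 m ground distance), the problem can be checked with a short computation:

```python
import math

# Angle of elevation 30°, horizontal distance 30 m.
# tan(30°) = height / 30  =>  height = 30 * tan(30°) = 30/√3 = 10√3 ≈ 17.32 m
distance_m = 30.0
angle_deg = 30.0
height_m = distance_m * math.tan(math.radians(angle_deg))
print(f"Height of the tower ≈ {height_m:.2f} m")  # ≈ 17.32 m
```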
Honours/MPT/311/PhD projects
Energy, momentum, and angular momentum transfer in electron systems
This project aims to derive an exact formula for the transmission of angular momentum in topological materials (such as the Haldane model). The students are expected to have a good knowledge of
quantum mechanics, statistical mechanics, and electrodynamics. Earlier, a formula has been derived in a paper by Y.-M. Zhang and J.-S. Wang, "Far-field heat and angular momentum radiation of the
Haldane model," J. Phys.: Condens. Matter 33, 055301 (2021). However, the result presented there is an approximate one. This project aims to correct that result using the
nonequilibrium Green’s function method. The exact results for energy and force from a 2D surface have already been worked out by this supervisor. This is a good starting point, and the students are
expected to repeat this derivation; after this, we hope a new result for the angular momentum transfer to the far field can be worked out. The technique needed for this work can be obtained by
studying our recent review from J.-S. Wang, J. Peng, Z.-Q. Zhang, Y.-M. Zhang, and T. Zhu, "Transport in electron-photon systems," Frontiers of Physics, {18}, 43602 (2023). If possible, we also
perform some numerical calculations. I hope this is an interesting but also challenging project for the students. A related topic is the computation of Casimir forces in equilibrium and
non-equilibrium situations, which can be tailored for Honours/MPT students.
For more information, contact Prof. Wang Jian-Sheng, phywjs@nus.edu.sg.
How do I calculate the face value of a bond in Excel?
Select the cell you will place the calculated price at, type the formula =PV(B20/2,B22,B19*B23/2,B19), and press the Enter key. Note: In above formula, B20 is the annual interest rate, B22 is the
number of actual periods, B19*B23/2 gets the coupon, B19 is the face value, and you can change them as you need.
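For readers outside Excel, the same semi-annual pricing that `=PV(rate/2, nper, coupon, face)` performs can be sketched in Python. Note that Excel's PV returns a negative number under its cash-flow sign convention; the sketch below returns the positive price, and the bond numbers are hypothetical.

```python
# Price = PV of the semi-annual coupon annuity + PV of the face value,
# mirroring =PV(rate/2, periods, face*coupon_rate/2, face) in magnitude.
def bond_price(face, annual_yield, annual_coupon_rate, periods):
    r = annual_yield / 2                    # semi-annual discount rate
    pmt = face * annual_coupon_rate / 2     # semi-annual coupon payment
    annuity = pmt * (1 - (1 + r) ** -periods) / r
    return annuity + face * (1 + r) ** -periods

# Hypothetical bond: $1,000 face, 5% coupon, 6% market yield,
# 10 semi-annual periods -> priced at a discount, below $1,000.
price = bond_price(1_000, 0.06, 0.05, 10)
print(f"{price:.2f}")
```

When the coupon rate equals the market yield, this prices the bond exactly at face value, which is a handy sanity check.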
What is PR in Excel yield formula?
Pr (required argument) – The price of the security per $100 face value. Redemption (required argument) – This is the redemption value per $100 face value. Frequency (required argument) – The number
of coupon payments per year.
How do you calculate the yield to maturity of a bond?
Yield to Maturity = [Annual Interest + {(FV-Price)/Maturity}] / [(FV+Price)/2]
1. Annual Interest = Annual Interest Payout by the Bond.
2. FV = Face Value of the Bond.
3. Price = Current Market Price of the Bond.
4. Maturity = Time to Maturity i.e. number of years till Maturity of the Bond.
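The approximation above translates directly into code. The numbers below are hypothetical: a $1,000 face value bond priced at $950 with a $60 annual coupon and 5 years to maturity.

```python
# Approximate YTM = [C + (FV - P)/n] / [(FV + P)/2], as stated above.
def approx_ytm(annual_interest, face_value, price, years_to_maturity):
    numerator = annual_interest + (face_value - price) / years_to_maturity
    average_investment = (face_value + price) / 2
    return numerator / average_investment

ytm = approx_ytm(annual_interest=60, face_value=1_000, price=950,
                 years_to_maturity=5)
print(f"{ytm:.4%}")  # roughly 7.18%
```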
What is #num in Excel?
The #NUM! error occurs in Excel formulas when a calculation can’t be performed. For example, if you try to calculate the square root of a negative number, you’ll see the #NUM! error.
How do I calculate bond duration in Excel?
The formula used to calculate the percentage change in the price of the bond is the change in yield to maturity multiplied by the negative value of the modified duration multiplied by 100%.
Therefore, if interest rates increase by 1%, the price of the bond is expected to drop 7.59% = [0.01 * (-7.59) * 100%].
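That modified-duration approximation is a one-liner; using the 7.59 figure from the example above:

```python
# Percentage price change ≈ -modified_duration * change_in_yield * 100%,
# the relation used in the example above.
def pct_price_change(modified_duration, yield_change):
    return -modified_duration * yield_change * 100  # result in percent

change = pct_price_change(modified_duration=7.59, yield_change=0.01)
print(f"{change:+.2f}%")  # a 1% rate rise -> about a 7.59% price drop
```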
How do I use Nper in excel?
NPER is the number of payment periods for a loan. It is a financial term, and Excel has a built-in financial function to calculate the NPER value for any loan. This formula takes the rate, payment made, present value, and future value as input from the user, and it can be accessed from the formula …
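A sketch of what NPER computes, derived from the annuity identity pv·(1+r)^n + pmt·((1+r)^n − 1)/r + fv = 0 under Excel's sign convention (cash received positive, cash paid out negative) with end-of-period payments. The loan numbers are hypothetical.

```python
import math

# Solve pv*(1+r)^n + pmt*((1+r)^n - 1)/r + fv = 0 for n, the same
# quantity Excel's NPER(rate, pmt, pv, [fv]) returns when rate != 0.
def nper(rate, pmt, pv, fv=0.0):
    x = (pmt / rate - fv) / (pv + pmt / rate)
    return math.log(x) / math.log(1 + rate)

# Hypothetical loan: borrow 10,000 at 1% per month, repay 500 per month.
n = nper(rate=0.01, pmt=-500, pv=10_000)
print(f"{n:.1f} periods")  # a bit over 22 monthly payments
```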
How do you use Coupdays in Excel?
The COUPDAYS function syntax has the following arguments:
1. Settlement Required. The security’s settlement date.
2. Maturity Required. The security’s maturity date.
3. Frequency Required. The number of coupon payments per year.
4. Basis Optional. The type of day count basis to use.
Is par value the same as face value?
The entity that issues a financial instrument like a bond or stock assigns a par value to it. Par value refers to the “face value” of a security, and the terms are interchangeable. Par value and face
value are most important with bonds, as they represent how much a bond will be worth at the time of the bond’s maturity.
What is face value of a bond?
For stocks, the face value is the original cost of the stock, as listed on the certificate. For bonds, it is the amount paid to the holder at maturity, typically in $1,000 denominations. The face
value for bonds is often referred to as “par value” or simply “par.”
Encourage Students to Solve Real Math Problems in the Home
by Cynthia Warger | Jul 28, 2016 | Solve It!
Try Role Reversals to actively engage students in the math problem-solving process. Role reversals reinforce learning by having students assume the role of teacher and then demonstrate solving the
problem for their classmates. Different students take turns modeling while the teacher may take the role of a student.
Here’s an example of how Mr. Reynolds incorporated role reversals into his math word problem-solving lesson.
After they finish discussing the first problem, Mr. Reynolds provides opportunities for role reversals: one student models the problem-solving routine with a new word problem while her peers follow
along. Mr. Reynolds plays the role of student, asking clarification questions to elicit the modeling student’s rationale (e.g., “I don’t understand why you put your question mark in that particular
place in your diagram!” “How can you tell that it’s a two-step problem?”). After the student completes her demonstration, Mr. Reynolds assigns students into groups of three, allowing them to practice
with peers as they attempt the next word problem together. He strategically assigns students to heterogeneous groups, where more adept students extend their understanding by explaining complicated
parts of the problem-solving process to their peers while providing these struggling students access to alternative explanations.
Read more at https://www.exinn.net/solve-it/solve-it-grades-5-6/
seminars - The monodromy map and Darboux coordinates on the SL(2,C)-character variety
The monodromy map sends a projective structure to a representation of the fundamental group of the Riemann surface into PSL(2,C), defined up to overall conjugation. In other words, we have a map from T^*M_g to the SL(2,C)-character variety which depends on the origin section chosen for the moduli space of projective structures. We will discuss previous work regarding this monodromy map and present
joint work with Bertola and Korotkin which proves the map is a symplectomorphism with base Bergman, Schottky, or Wirtinger projective connection when the character variety is equipped with the
Goldman bracket. Comparing our results with Kawai 96 (and more recently Loustau 15, Takhtajan 17) we propose a generating function (describing the change of Darboux coordinates) for the equivalence
between the symplectic structures induced from the base Bergman projective connection versus the base Bers projective connection. We hope to discuss some open questions resulting from this work. | {"url":"https://www.math.snu.ac.kr/board/index.php?mid=seminars&sort_index=Time&order_type=desc&page=79&l=ko&document_srl=784049","timestamp":"2024-11-14T14:47:00Z","content_type":"text/html","content_length":"47651","record_id":"<urn:uuid:97083e23-51bc-4521-b59b-f43c61e8b0ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00483.warc.gz"} |
Lesson 15
Shapes on the Coordinate Plane
15.1: Figuring Out The Coordinate Plane (5 minutes)
The purpose of this warm-up is for students to review properties of figures and polygons within the context of graphing points in the coordinate plane.
As the students work, monitor and select students with different figures, some that are polygons and some that are not, to share during the whole-class discussion. The focus of the whole-class
discussion should be on the properties of a polygon.
Arrange students in groups of 2. Give students 2 minutes of quiet work time. After the 2 minutes, tell students to share their figure with their partner to check if it has at least three of the
listed properties. Follow with a whole group discussion.
Student Facing
1. Draw a figure in the coordinate plane with at least three of the following properties:
□ 6 vertices
□ 1 pair of parallel sides
□ At least 1 right angle
□ 2 sides the same length
2. Is your figure a polygon? Explain how you know.
Arrange students in groups of 2. Give students 2 minutes of quiet work time. After the 2 minutes, tell students to share their figure with their partner to check if it has at least three of the
listed properties. Follow with a whole group discussion.
Student Facing
1. Draw a figure in the coordinate plane with at least three of the following properties:
□ 6 vertices
□ Exactly 1 pair of parallel sides
□ At least 1 right angle
□ 2 sides with the same length
2. Is your figure a polygon? Explain how you know.
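A teacher preparing this warm-up can check a submitted figure against the four listed properties mechanically from its vertex list. A minimal Python sketch (the function names and the example figures are illustrative; the parallel-sides property is treated as "at least one pair"):

```python
def side_vectors(vertices):
    """Vectors along each side of a closed figure, in listed order."""
    n = len(vertices)
    return [(vertices[(i + 1) % n][0] - vertices[i][0],
             vertices[(i + 1) % n][1] - vertices[i][1]) for i in range(n)]

def count_properties(vertices):
    """Count how many of the warm-up's four properties the figure has."""
    sides = side_vectors(vertices)
    n = len(vertices)
    has_six = (n == 6)                      # 6 vertices
    # at least one pair of parallel sides: zero cross product of side vectors
    parallel = any(a[0] * b[1] - a[1] * b[0] == 0
                   for i, a in enumerate(sides) for b in sides[i + 1:])
    # at least one right angle: adjacent sides with zero dot product
    right = any(sides[i][0] * sides[(i + 1) % n][0] +
                sides[i][1] * sides[(i + 1) % n][1] == 0 for i in range(n))
    # two sides the same length (compare squared lengths to stay exact)
    sq = [v[0] ** 2 + v[1] ** 2 for v in sides]
    equal = len(sq) != len(set(sq))
    return sum([has_six, parallel, right, equal])

# A 3-by-2 rectangle: parallel sides, right angles, and equal sides,
# but only 4 vertices, so it satisfies 3 of the 4 properties.
print(count_properties([(0, 0), (3, 0), (3, 2), (0, 2)]))  # → 3
```

A student figure with 6 vertices that also keeps one of the other properties would score higher, which mirrors the discussion the warm-up is meant to prompt.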
Activity Synthesis
Ask selected students to share their figure and its properties. Display these figures for all to see. After each student shares, ask the class if it is a polygon and how they know.
Defining characteristics of a polygon that should be emphasized during the discussion are:
• It is composed of line segments.
• Each line segment meets one and only one other line segment at each end.
• The line segments never intersect each other except at their endpoints.
• It lies flat on the coordinate plane.
15.2: Plotting Polygons (15 minutes)
The purpose of this task is for students to practice plotting points in the coordinate plane to make polygons.
Arrange students in groups of 2. Give students 8 minutes quiet work time, 4 minutes for partner discussion, followed by whole-class discussion.
Students using digital materials will plot points and create polygons with a digital applet.
Engagement: Provide Access by Recruiting Interest. Leverage choice around perceived challenge. Invite students to select 2–3 of the polygons to plot on the coordinate plane. Chunking this task into
more manageable parts may also help students who benefit from additional processing time.
Supports accessibility for: Organization; Attention; Social-emotional skills
Student Facing
Here are the coordinates for four polygons. Move the slider to choose the polygon you want to plot. Move the points, in order, to their locations on the coordinate plane. Sketch each one before
changing the slider.
1. Polygon 1: \((\text-7, 4), (\text-8, 5), (\text-8, 6), (\text-7, 7), (\text-5, 7), (\text-5,5), (\text-7, 4)\)
2. Polygon 2: \((4, 3), (3, 3), (2, 2), (2, 1), (3, 0), (4, 0), (5, 1), (5, 2), (4, 3)\)
3. Polygon 3: \((\text-8, \text-5), (\text-8, \text-8), (\text-5, \text-8), (\text-5, \text-5), (\text-8, \text-5)\)
4. Polygon 4: \((\text-5, 1), (\text-3, \text-3), (\text-1, \text-2), (0, 3), (\text-3, 3), (\text-5, 1)\)
Arrange students in groups of 2. Give students 8 minutes quiet work time, 4 minutes for partner discussion, followed by whole-class discussion.
Students using digital materials will plot points and create polygons with a digital applet.
Engagement: Provide Access by Recruiting Interest. Leverage choice around perceived challenge. Invite students to select 2–3 of the polygons to plot on the coordinate plane. Chunking this task into
more manageable parts may also help students who benefit from additional processing time.
Supports accessibility for: Organization; Attention; Social-emotional skills
Student Facing
Here are the coordinates for four polygons. Plot them on the coordinate plane, connect the points in the order that they are listed, and label each polygon with its letter name.
1. Polygon A: \((\text-7, 4), (\text-8, 5), (\text-8, 6), (\text-7, 7), (\text-5, 7), (\text-5,5), (\text-7, 4)\)
2. Polygon B: \((4, 3), (3, 3), (2, 2), (2, 1), (3, 0), (4, 0), (5, 1), (5, 2), (4, 3)\)
3. Polygon C: \((\text-8, \text-5), (\text-8, \text-8), (\text-5, \text-8), (\text-5, \text-5), (\text-8, \text-5)\)
4. Polygon D: \((\text-5, 1), (\text-3, \text-3), (\text-1, \text-2), (0, 3), (\text-3, 3), (\text-5, 1)\)
Student Facing
Are you ready for more?
Find the area of Polygon D in this activity.
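One way for a teacher to verify an answer to this extension is the shoelace formula. A Python sketch using Polygon D's vertices from the activity (with the repeated closing vertex dropped):

```python
def shoelace_area(vertices):
    """Area of a simple polygon from its vertices via the shoelace formula."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1] -
            vertices[(i + 1) % n][0] * vertices[i][1] for i in range(n))
    return abs(s) / 2

polygon_d = [(-5, 1), (-3, -3), (-1, -2), (0, 3), (-3, 3)]
print(shoelace_area(polygon_d))  # → 19.5
```

On these vertices the formula gives 19.5 square units, which students could also reach by decomposing the polygon into rectangles and triangles.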
Activity Synthesis
The purpose of the discussion is to emphasize the connection between numbers, the coordinate plane, and geometry. To highlight these connections, ask:
• “How is the coordinate plane related to the number line?” (The coordinate plane has two axes that are both number lines.)
• “How are we able to make polygons in the coordinate plane?” (The vertices of a polygon are plotted as points in the coordinate plane.)
Complete the connection by explaining to students that the coordinate plane allows us to describe shapes and geometry in terms of numbers. This is how computers are able to create 2- and 3-dimensional images even though they can only interpret numbers.
Speaking: MLR8 Discussion Supports. Use this routine to amplify mathematical uses of language to communicate about the relationship between numbers, the coordinate plane, and geometry. As students share the connections they notice, revoice their statements using appropriate mathematical language, such as “points in the coordinate plane” or “the two axes of the coordinate plane.”
Design Principle(s): Cultivate conversation
15.3: Four Quadrants of A-Maze-ing (15 minutes)
The purpose of this task is for students to practice plotting coordinates in all four quadrants and find horizontal and vertical distances between coordinates in a puzzle. In past activities,
students have been told the scale for the distance between grid lines, but here they must determine that each grid square has length 2 from the information given.
Arrange students in groups of 2. Tell students that they should not assume that each grid box is 1 unit. Give students 10 minutes quiet work time and 2 minutes for partner discussion. Follow with
whole-class discussion.
Students using digital materials will be able to create a path through the maze by plotting points with an applet.
Representation: Internalize Comprehension. Check in with students after the first 2–3 minutes of work time. Check to make sure students have selected appropriate coordinates for the first points of
Andre’s route through the maze.
Supports accessibility for: Conceptual processing; Organization
Conversing: MLR5 Co-craft Questions. Display only the picture of the maze and ask pairs of students to write possible mathematical questions about the situation. This is an opportunity for students
to think about and relate to previous questions from previous lessons. Then, invite pairs to share their questions with the class. This helps students produce the language of mathematical questions
and talk about the coordinate grid and the points on the maze.
Design Principle(s): Optimize output (for explanation); Maximize meta-awareness
Student Facing
1. The following diagram shows Andre’s route through a maze. He started from the lower right entrance.
1. What are the coordinates of the first two and the last two points of his route?
2. How far did he walk from his starting point to his ending point? Show how you know.
2. Jada went into the maze and stopped at \((\text-7, 2)\).
1. Plot that point and other points that would lead her out of the maze (through the exit on the upper left side).
2. How far from \((\text-7, 2)\) must she walk to exit the maze? Show how you know.
Arrange students in groups of 2. Tell students that they should not assume that each grid box is 1 unit. Give students 10 minutes quiet work time and 2 minutes for partner discussion. Follow with
whole-class discussion.
Students using digital materials will be able to create a path through the maze by plotting points with an applet.
Representation: Internalize Comprehension. Check in with students after the first 2–3 minutes of work time. Check to make sure students have selected appropriate coordinates for the first points of
Andre’s route through the maze.
Supports accessibility for: Conceptual processing; Organization
Conversing: MLR5 Co-craft Questions. Display only the picture of the maze and ask pairs of students to write possible mathematical questions about the situation. This is an opportunity for students
to think about and relate to previous questions from previous lessons. Then, invite pairs to share their questions with the class. This helps students produce the language of mathematical questions
and talk about the coordinate grid and the points on the maze.
Design Principle(s): Optimize output (for explanation); Maximize meta-awareness
Student Facing
1. The following diagram shows Andre’s route through a maze. He started from the lower right entrance.
1. What are the coordinates of the first two and the last two points of his route?
2. How far did he walk from his starting point to his ending point? Show how you know.
2. Jada went into the maze and stopped at \((\text-7, 2)\).
1. Plot that point and other points that would lead her out of the maze (through the exit on the upper left side).
2. How far from \((\text-7, 2)\) must she walk to exit the maze? Show how you know.
Activity Synthesis
The key idea is that it is possible to find distances and describe situations involving movement using the coordinate plane. This abstraction is important to appreciate because it means we can use
numbers (in this case, pairs of numbers in the coordinate plane) to model situations that involve distance or movement. This will play a key role in later studies. To highlight these ideas, consider asking:
• How were you or your partner able to find the coordinates in the maze? Did you come up with any strategies or shortcuts?
• How did you find the distances that Andre and Jada traveled?
• What other situations involving movement could be represented with a coordinate plane?
Students may come up with examples like board games, maps, and perhaps even 3-dimensional examples.
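Because movement in the maze is restricted to horizontal and vertical segments, any walked distance is just a sum of coordinate differences along the route. A Python sketch (the sample route is illustrative, not Andre's actual path):

```python
def path_length(points):
    """Total distance along a path of horizontal/vertical moves."""
    return sum(abs(x2 - x1) + abs(y2 - y1)
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Illustrative route: right 4 units, up 6 units, right 2 units.
route = [(0, 0), (4, 0), (4, 6), (6, 6)]
print(path_length(route))  # → 12
```

Note that the activity's maze uses grid squares of length 2, so students must scale coordinate differences accordingly; the helper works directly on the coordinates themselves.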
Lesson Synthesis
Challenge students to create a drawing with a perimeter of 30 units using a continuous path of horizontal and vertical line segments. Ask students to identify the coordinates of vertices and justify
that the perimeter is the given length. If time allows, arrange students in groups of 2 and ask them to draw their partner's figure in a coordinate plane with only verbal information and no
coordinates. Students can check their drawing by asking for the exact coordinates. Ask students to explain why coordinates are useful for communicating information about flat space. Consider
displaying student work for all to see throughout the rest of the unit. It may be interesting for students to see the variety of figures that all have a perimeter of 30 units.
15.4: Cool-down - Perimeter of A Polygon (5 minutes)
Student Facing
We can use coordinates to find lengths of segments in the coordinate plane.
For example, we can find the perimeter of this polygon by finding the sum of its side lengths. Starting from \((\text-2, 2)\) and moving clockwise, we can see that the lengths of the segments are 6,
3, 3, 3, 3, and 6 units. The perimeter is therefore 24 units.
In general:
• If two points have the same \(x\)-coordinate, they will be on the same vertical line, and we can find the distance between them.
• If two points have the same \(y\)-coordinate, they will be on the same horizontal line, and we can find the distance between them. | {"url":"https://curriculum.illustrativemathematics.org/MS/teachers/1/7/15/index.html","timestamp":"2024-11-07T23:41:31Z","content_type":"text/html","content_length":"198142","record_id":"<urn:uuid:882be99a-8de1-4e46-b0cc-6d00eedc97d0>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00053.warc.gz"} |
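The two observations in the cool-down can be captured in a small helper that finds distances along shared horizontal or vertical lines, and from there a perimeter. A Python sketch (the rectangle used as a check is illustrative):

```python
def axis_distance(p, q):
    """Distance between two points that share an x- or y-coordinate."""
    if p[0] == q[0]:          # same vertical line
        return abs(p[1] - q[1])
    if p[1] == q[1]:          # same horizontal line
        return abs(p[0] - q[0])
    raise ValueError("points are not on a shared horizontal or vertical line")

def perimeter(vertices):
    """Perimeter of a polygon whose sides are all horizontal or vertical."""
    n = len(vertices)
    return sum(axis_distance(vertices[i], vertices[(i + 1) % n])
               for i in range(n))

# An axis-aligned 6-by-3 rectangle, as a check: 6 + 3 + 6 + 3 = 18.
print(perimeter([(-2, 2), (4, 2), (4, -1), (-2, -1)]))  # → 18
```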
Network Service Header Timestamping
Network Working Group R. Browne
Internet Draft A. Chilikin
Intended status: Standards Track B. Ryan
Expires: December 2016 Intel
T. Mizrahi
Y. Moses
June 1, 2016
Network Service Header Timestamping
This draft describes a method of timestamping Network Service Header
(NSH) encapsulated packets or frames on service chains in order to
measure accurately hop-by-hop performance delays of application flows
carried within the chain. This method may be used to monitor
performance and highlight problems with virtual links (vlinks),
Virtual Network Functions (VNFs) or Physical Network Functions (PNFs)
on the Rendered Service Path (RSP).
Status of this Memo
This Internet-Draft is submitted to IETF in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that
other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
The list of Internet-Draft Shadow Directories can be accessed at
This Internet-Draft will expire on December 1, 2016.
Browne, et al. Expires December 1, 2016 [Page 1]
Internet-Draft NSH Timestamping June 2016
Copyright Notice
Copyright (c) 2016 IETF Trust and the persons identified as the
document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Introduction...................................................2
2. Terminology....................................................3
2.1. Requirement Language......................................3
2.2. Definition of Terms.......................................3
2.3. Abbreviations.............................................5
3. NSH Timestamping...............................................6
3.1. Prerequisites.............................................7
3.2. Operation.................................................8
3.3. Performance Considerations................................9
4. NSH Timestamping Encapsulation................................10
5. Hybrid Models.................................................14
5.1. Targeted VNF Timestamp...................................16
6. Fragmentation Considerations..................................16
7. Security Considerations.......................................16
8. Open Items for WG Discussion..................................17
9. IANA Considerations...........................................17
10. Acknowledgments..............................................17
11. References...................................................18
11.1. Normative References....................................18
11.2. Informative References..................................18
1. Introduction
Network Service Header (NSH), as defined by [NSH], defines a method
to insert a service-aware header in between payload and transport
headers. This allows a great deal of flexibility and programmability
in the forwarding plane allowing user flows to be programmed on-the-
fly for the appropriate Service Functions (SFs).
Whilst NSH promises a compelling vista of operational agility for
Service Providers, many service providers are concerned about losing
service visibility in the transition from physical appliance SFs to
virtualized SFs running in the Network Function Virtualization (NFV)
domain. This concern increases when we consider that many service
providers wish to run their networks seamlessly in 'hybrid' mode,
whereby they wish to mix physical and virtual SFs and run services
seamlessly between the two domains.
This draft describes a generic method to monitor and debug service
chains and application performance of the flows within a service
chain. This method is compliant with hybrid architectures in which
VNFs and PNFs are freely mixed in the service chain. This method is
also flexible enough to monitor the performance of an entire chain or
any part thereof, as desired. Please refer to [NSH] as background architecture
for the method described in this document.
The method described in this draft is not an OAM protocol like
[Y.1731] or [Y.1564] for example. As such it does not define new OAM
packet types or operation. Rather it monitors the service chain
performance for subscriber payloads and indicates subscriber QoE
rather than out-of-band infrastructure metrics.
2. Terminology
2.1. Requirement Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].
2.2. Definition of Terms
Classification: Locally instantiated policy and
customer/network/service profile matching of traffic flows for
identification of appropriate outbound forwarding actions.
First TS Node (FTSN): Must mark the packet correctly. Must understand
5-tuple information in order to match the TS Controller flow table.
Last TS Node (LTSN): Must read all MD and export it to a system
performance statistics agent or repository. Should also send the NSH
header; the Service Index (SI) will indicate if a PNF(s) was at the
end of the chain. The LTSN changes the SPI so that the underlay
routes the metadata back directly to the TSDB.
Network Node/Element: Device that forwards packets or frames based
on outer header information. In most cases is not aware of the
presence of NSH.
Network Overlay: Logical network built on top of existing network
(the underlay). Packets are encapsulated or tunneled to create the
overlay network topology.
Network Service Header: Data plane header added to frames/packets.
The header contains information required for service chaining, as
well as metadata added and consumed by network nodes and service
functions.

NSH Proxy: Acts as a gateway: removes and inserts the NSH on behalf of a
service function that is not NSH aware.
Service Classifier: Function that performs classification and
imposes an NSH. Creates a service path. Non-initial (i.e.
subsequent) classification can occur as needed and can alter, or
create a new service path.
Service Function (SF): A function that is responsible for specific
treatment of received packets. A service function can act at the
network layer or other OSI layers. A service function can be virtual
instance or be embedded in a physical network element. One of
multiple service functions can be embedded in the same network
element. Multiple instances of the service function can be enabled in
the same administrative domain.
Service Function Chain (SFC): A service function chain defines an
ordered set of service functions that must be applied to packets
and/or frames selected as a result of classification. The implied
order may not be a linear progression as the architecture allows for
nodes that copy to more than one branch. The term service chain is
often used as shorthand for service function chain.
Service Function Path (SFP): The instantiation of a SFC in the
network. Packets follow a service function path from a classifier
through the requisite service functions.
TS Controller: The TS Controller may be part of the service chaining
application, SDN controller, NFVO or any MANO entity. For clarity we
define the TS Controller separately here as the central logic that
decides what packets to timestamp and how. The TS Controller
instructs the classifier on how to mark the NSH header.
Timestamp Control Plane (TSCP): the control plane between the FTSN
and the TS Controller.
Timestamp Database (TSDB): external storage of Metadata for
reporting, trend analysis etc.
2.3. Abbreviations
FTSN First Timestamp Node
LTSN Last Timestamp Node
MD Metadata
NFV Network Function Virtualization
NFVI-PoP NFV Infrastructure Point of Presence
NIC Network Interface Card
NSH Network Service Header
OAM Operations, Administration, and Maintenance
PNF Physical Network Function
PNFN Physical Network Function Node
QoE Quality of Experience
RSP Rendered Service Path
SCL Service Classifier
SI Service Index
SF Service Function
SFC Service Function Chain
SFN Service Function Node
SFP Service Function Path
TS Timestamp
TSCP Timestamp Control Plane
TSDB Timestamp Database
TSSI Timestamp Service Index
VNF Virtual Network Function
vSwitch Virtual Switch
3. NSH Timestamping
As a generic architecture, please refer to Figure 1 below.
        +---------------+                             TSDB
        | TS Controller |                               ^
        +---------------+                               |
           | TSCP Interface                             |
           v                                            |
       ,---.          ,---.          ,---.          ,---.
      /     \        /     \        /     \        /     \
     ( SCL   )----->( SF1   )----->( SF2   )----->( SFN   )
      \ FTSN/        \     /        \     /        \ LTSN/
       `---'          `---'          `---'          `---'
Figure 1 Logical roles in NSH Timestamping
The TS Controller will most probably be part of the SFC controller
but is explained separately in this document for clarity. The TS
Controller is responsible for initiating start/stop timestamp
requests to the SCL or FTSN, and also for distributing timestamp NSH
policy into the service chain via the Timestamping Control Plane
(TSCP) interface.
The First Timestamp Node (FTSN) will typically be part of the SCL but
again is called out as separate logical entity for clarity. The FTSN
is responsible for marking NSH MD Type 0x2 fields for the correct
flow with the appropriate NSH fields. This tells all upstream nodes
how to behave in terms of timestamping at VNF ingress, egress or
both, or ignoring the timestamp NSH metadata completely. The FTSN
also writes the Reference Time value, a (possibly inaccurate)
estimate of the current time-of-day, into the header, allowing the
{chain,flow} performance to be compared to previous samples for
offline analysis. The FTSN should return an error to the TS
Controller if not synchronized to the current time-of-day and forward
the packet along the service-chain unchanged.
SF1, SF2 timestamp the packets as dictated by the FTSN and process
the payload as per normal.
Note 1: The exact location of the timestamp creation may not be in
the VNF itself, as referenced in Section 3.3.
Note 2: Special cases exist where some of the SFs (PNFs or VNFs) are
NSH-unaware. This is covered in Section 5.
The Last Timestamp Node (LTSN) should strip the entire header and
forward the packet to the IP next hop. The LTSN also exports NSH
timestamp information to the Timestamp Database (TSDB) for offline
analysis; the LTSN may either export the timestamping information of
all packets, or a subset based on packet sampling. In fully
virtualized environments the LTSN will be co-located with the VNF
that decrements the NSH Service Index to zero. Corner cases exist
whereby this is not the case and is covered in section 5.
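Once the LTSN exports the per-SF timestamp metadata to the TSDB, reducing it to hop-by-hop figures is a simple subtraction. A minimal Python sketch of that reduction (the record layout and the microsecond values are illustrative, not defined by this draft):

```python
def hop_delays(records):
    """Per-SF processing delay and per-link transit delay from exported
    (service_index, ingress_ts, egress_ts) tuples, FTSN first.
    NSH Service Index values decrement along the chain."""
    processing = {si: egress - ingress for si, ingress, egress in records}
    links = {}
    for (si_a, _, egress_a), (si_b, ingress_b, _) in zip(records, records[1:]):
        links[(si_a, si_b)] = ingress_b - egress_a
    return processing, links

# Timestamps in microseconds relative to the flow's Reference Time.
records = [(255, 0, 40), (254, 65, 130), (253, 150, 190)]
proc, links = hop_delays(records)
print(proc[254])          # → 65 (processing delay inside the second SF)
print(links[(255, 254)])  # → 25 (transit delay on the first vlink)
```

A sustained jump in one of the link values is exactly the kind of signal Figure 2 illustrates for a chain moved between NFVI-PoPs.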
3.1. Prerequisites
In order to guarantee metadata accuracy, all servers hosting VNFs
should be synchronized from a centralized stable clock. As PNFs do
not timestamp there is no need for them to synchronize. There are two
possible levels of synchronization:
Level A: Low accuracy time-of-day synchronization, based on
NTP [RFC5905].
Level B: High accuracy synchronization (typically on the order of
microseconds), based on [IEEE1588].
Each platform SHOULD have a level A synchronization, and MAY have a
level B synchronization.
Level A requires each platform (including the TS Controller) to
synchronize its system real-time-clock to an NTP server. This is used
to mark the metadata in the chain, using the <Reference Time> field
in the NSH timestamp header (Section 4.). This timestamp is written
to the NSH header by the first SF in the chain. NTP accuracy can vary
by several milliseconds between locations. This is not an issue as
the Reference Time is merely being used as a reference inserted into
the TSDB for performance monitoring.
Level B synchronization requires each platform to be synchronized to
a Primary Reference Clock (PRC) using the Precision Time Protocol
[IEEE1588]. A platform MAY also use Synchronous Ethernet ([G.8261],
[G.8262], [G.8264]), allowing more accurate frequency synchronization.

If a SF is not synchronized at the moment of timestamping, it should
indicate synch status in the NSH header. This is described in more
detail in section 4.
By synchronizing the network in this way, the timestamping operation
is independent of the current RSP, whether the entire chain is served
by one NFVI-PoP or by multiple. Indeed the timestamp MD can indicate
where a chain has been moved due to a resource starvation event as
indicated in Figure 2 below, between VNF 3 and VNF 4 at time B.
   Latency
      |                                                   v
      |                                       v
      |                                                   x
      |                                       x      x = reference time A
      |                           xv                 v = reference time B
      |               xv
      |   xv
      +-------------------------------------------------------
         VNF1        VNF2        VNF3        VNF4        VNF5
Figure 2 Flow performance in a service chain
3.2. Operation
Section 3.5 of [NSH] defines NSH metadata type 2 encapsulation as per
the figure below. Please refer to the draft for a detailed
explanation. Timestamped flows will use this format.
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |Ver|O|C|R|R|R|R|R|R|   Length  |  MD-type=0x2  | Next Protocol |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                Service Path ID                | Service Index |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           TLV Class           |      Type     |R|R|R|   Len   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                       Variable Metadata                       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 3 NSH MD type 2 Encapsulation
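To make the layout in Figure 3 concrete, the base header and MD type 2 TLV header can be packed as big-endian 32-bit words. A Python sketch only (the bit widths follow the figure; the example field values such as the SPI and Next Protocol are illustrative):

```python
import struct

def pack_nsh_md2(ver, o, c, length, md_type, next_proto,
                 spi, si, tlv_class, tlv_type, tlv_len):
    """Pack an NSH base header plus one MD type 2 TLV header (no metadata).
    Bit layout per Figure 3: Ver(2) O C R(6) Length(6) MD-type(8) Proto(8);
    SPI(24) SI(8); TLV Class(16) Type(8) R(3) Len(5)."""
    word1 = (ver << 30) | (o << 29) | (c << 28) | (length << 16) \
            | (md_type << 8) | next_proto
    word2 = (spi << 8) | si
    word3 = (tlv_class << 16) | (tlv_type << 8) | (tlv_len & 0x1F)
    return struct.pack("!III", word1, word2, word3)

# C bit set, MD type 0x2, TLV Class 0x10 and Type 0x01 as in this draft.
hdr = pack_nsh_md2(ver=0, o=0, c=1, length=3, md_type=0x2, next_proto=0x1,
                   spi=100, si=255, tlv_class=0x10, tlv_type=0x01, tlv_len=4)
print(hdr.hex())  # → 10030201000064ff00100104
```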
Flow Selection
The TS Controller should maintain a list of flows within each service
chain to be monitored. This flow table should be in the format
SPI:5-tuple ID. The TS Controller should map these pairs to unique Flow IDs
per service chain within the extended NSH header specified in this
draft. The TS Controller should instruct the FTSN to initiate
timestamping on flow table match. The TS Controller may also tell the
classifier the duration of the timestamping operation, either by a
number of packets in the flow or by a time duration.
In this way the system can monitor the performance of all en-route
traffic, or an individual subscriber in a chain, or just a
specific application the subscriber is running.
The TS Controller should write the list of monitored flows into the
TSDB for correlation of performance data. Thus, when the TSDB
receives data from the LTSN it understands to which flow the data
The association of source IP to subscriber identity is outside the
scope of this draft and will vary by network application. For
example, the method of association of a source IP to IMSI in mobile
cores will be different to how a CPE with NAT function may be chained
in an enterprise NFV application.
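The SPI:5-tuple flow table described above can be modelled as a simple lookup keyed on the pair. A Python sketch (the table contents and tuple layout are illustrative):

```python
# Flow table keyed by (SPI, 5-tuple); values are per-chain Flow IDs
# assigned by the TS Controller.
flow_table = {
    (42, ("10.0.0.1", "192.0.2.9", 6, 51000, 443)): 7,
}

def match_flow(spi, five_tuple):
    """Return the Flow ID for a (SPI, 5-tuple) pair, or None if unmatched."""
    return flow_table.get((spi, five_tuple))

pkt = ("10.0.0.1", "192.0.2.9", 6, 51000, 443)
print(match_flow(42, pkt))  # → 7 (FTSN starts timestamping this flow)
print(match_flow(42, ("10.0.0.2", "192.0.2.9", 6, 51000, 443)))  # → None
```

On a match the FTSN would write the returned Flow ID into the timestamp metadata header so the TSDB can correlate exported records.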
TSCP Interface
A new timestamp control plane (TSCP) interface is required between
the TS Controller and the FTSN or classifier. This interface:
o Communicates which chains and flows to timestamp. This can be a
specific {chain,flow} combination or include wildcards for
monitoring subscribers across multiple chains or multiple flows
within one chain.
o How the timestamp should be applied (ingress, egress, both or
o When to stop timestamping.
Exact specification of TSCP is for further study.
3.3. Performance Considerations
This draft does not mandate a specific timestamping implementation
method, and thus NSH timestamping can either be performed by hardware
mechanisms, or by software. If software-based timestamping is used,
applying and operating on the timestamps themselves incur an
additional small delay in the service chain. However, it can be
assumed that these additional delays are all relative for the flow in
question. Thus, whilst the absolute timestamps may not be fully
accurate for normal non-timestamped traffic, they can be assumed to be
accurate relative to one another.
It is assumed that the monitoring method described in this document
would only operate on a small percentage of user flows. The service
provider may choose a flexible policy in the TS Controller to
timestamp a selection of user-plane every minute for example to
highlight any performance issues. Alternatively, the LTSN may
selectively export a subset of the timestamps it receives, based on a
predefined sampling method. Of course the TS Controller can stress
test an individual flow or chain should a deeper analysis be
required. We can expect that this type of deep analysis has an impact
on the performance of the chain itself whilst under investigation.
The impact will be dependent on vendor implementation and outside the
scope of this document.
The timestamp may be applied at various parts of the NFV
architecture. The VNF, hypervisor (assuming no SRIOV pass-through),
vSwitch or NIC are all potential locations that can append the packet
with the requested timestamp. Whilst it is desirable to timestamp as
close as possible to the VNF for performance accuracy, the exact
location of the timestamp application is outside the scope of this
document, but should be consistent across the individual TS
Controller domain.
4. NSH Timestamping Encapsulation
The NSH timestamping encapsulation is shown below in figure 4:
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |Ver|O|C|R|R|R|R|R|R|   Length  |  MD-type=0x2  | NextProto=0x0 |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                Service Path ID                | Service Index |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |         TLV Class=0x10        |C|  Type=0x01  |R|R|R|   Len   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |I|E|T|R|R|R|TSI|TS Service Indx|            Flow ID            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                 Reference Time (T bit is set)                 |
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |I|E|R|R|R| Syn | Service Index |            Reserved           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |             Ingress Timestamp (I bit is set)(LTSN)            |
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |              Egress Timestamp (E bit is set)(LTSN)            |
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   .                                                               .
   .                                                               .
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |I|E|R|R|R| Syn | Service Index |            Reserved           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |               Ingress Timestamp (I bit is set)                |
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                Egress Timestamp (E bit is set)                |
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |I|E|R|R|R| Syn | Service Index |            Reserved           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |            Ingress Timestamp (I bit is set) (FTSN)            |
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |             Egress Timestamp (E bit is set) (FTSN)            |
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 4 NSH Timestamp Encapsulation
Relevant fields in the header that the FTSN must implement:
o The O bit should not be set, as we are operating on subscriber traffic.
o The C bit should be set indicating critical metadata exists
o The MD type must be set to 0x2
o The TLV Class must be set to 0x10 (General KPI Monitoring) as
requested in Section 9. The timestamp type is defined to be 0x01:
o Type = 0x00 Reserved.
o Type = 0x01 Timestamp.
o The MSB of the Type field must be set to zero. Thus, if a receiver
along the path does not understand the timestamping protocol, it
will pass the packet transparently and not drop it. This scheme
allows for extensibility to the mechanism described in this
document to other KPI collections and operations.
The FTSN timestamp metadata starts with the Stamping Configuration
Header. This header contains the Timestamp Service Index (TSI) field,
which must be set to one of the following values:
o 0x0 Timestamp mode, no Service index specified in the TS Service
Index field.
o 0x1 Timestamp Hybrid mode is selected, Timestamp Service Index
contains LTSN Service index. This is used when PNFs or NSH-unaware
SFs are used at the tail of the chain. If TSI=0x1, then the value
in the type field informs the chain which SF should act as the
LTSN.
o 0x2 Timestamp Specific mode is selected, Timestamp Service Index
contains the targeted Service Index. In this case the Timestamp
Service Index field indicates which SF is to be timestamped. Both
ingress and egress timestamps are performed when the SI=TSSI on
the chain. In this mode the FTSN will also apply the Reference
Time and Ingress Timestamp. This will indicate the delay along the
entire service chain to the targeted SF. This method may also be
used as a light implementation to monitor end-to-end service chain
performance whereby the targeted SF is the LTSN.
The Flow ID is a unique 16-bit identifier written into the header by
the classifier. This allows 65,536 flows to be concurrently timestamped
on any given NSH service chain (SPI). Flow IDs are not written by
subsequent SFs in the chain. The FTSN exports monitored flow IDs to
the TSDB for correlation.
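The Stamping Configuration Header described above can be sketched as a 32-bit pack/unpack. The bit widths below (I/E/T one bit each, three reserved bits, a 2-bit TSI, an 8-bit TS Service Index, and a 16-bit Flow ID) are inferred from Figure 4 and are illustrative, not normative:

```python
# Illustrative pack/unpack of the 32-bit Stamping Configuration Header:
#   |I|E|T|R|R|R|TSI|TS Service Indx|            Flow ID            |
# Bit widths are inferred from Figure 4 of the draft.

def pack_config_header(i, e, t, tsi, ts_si, flow_id):
    assert tsi in (0x0, 0x1, 0x2), "TSI must be 0x0, 0x1 or 0x2"
    assert 0 <= flow_id < 0x10000, "Flow ID is a 16-bit field"
    word = (i & 1) << 31
    word |= (e & 1) << 30
    word |= (t & 1) << 29
    # bits 28..26 are reserved and left as zero
    word |= (tsi & 0x3) << 24
    word |= (ts_si & 0xFF) << 16
    word |= flow_id & 0xFFFF
    return word

def unpack_config_header(word):
    return {
        "I": (word >> 31) & 1,
        "E": (word >> 30) & 1,
        "T": (word >> 29) & 1,
        "TSI": (word >> 24) & 0x3,
        "TS_Service_Index": (word >> 16) & 0xFF,
        "Flow_ID": word & 0xFFFF,
    }

# Hybrid mode (TSI=0x1) with the LTSN service index carried in TS Service Index:
hdr = pack_config_header(i=1, e=1, t=1, tsi=0x1, ts_si=3, flow_id=0xBEEF)
fields = unpack_config_header(hdr)
```

The round trip recovers each field, which is a cheap way to sanity-check the bit offsets.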
The E bit should be set if Egress timestamp is requested.
The I bit should be set if Ingress timestamp is requested.
The T bit should be set if the Reference Time follows the Stamping
Configuration Header.
Reference Time is the wall clock of the FTSN, and may be used for
historical comparison of SC performance. If the FTSN is not Level A
synchronized (see Section 3.1.) it should inform the TS controller
over the TSCP interface. The Reference Time is represented in 64-bit
NTP format [RFC5905].
Each Timestamping Node adds timestamping metadata, which consists of a
Stamping Reporting Header and timestamps.
The E bit should be set if Egress timestamp is reported.
The I bit should be set if Ingress timestamp is reported.
The Syn bits are an indication of the synchronization status of the
node performing the timestamp and must be set to one of the following
values:
o In Synch: 0x00
o In holdover: 0x01
o In free run: 0x02
o Out of Synch: 0x03
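The four synchronization states can be modeled as a small enum. The helper encodes the rule stated below (a node in free run or out of synch applies no timestamp); the sketch is illustrative and not part of the draft:

```python
from enum import IntEnum

class SyncStatus(IntEnum):
    """Synchronization status carried in the Syn bits of the
    Stamping Reporting Header (values from the draft)."""
    IN_SYNCH = 0x00
    IN_HOLDOVER = 0x01
    IN_FREE_RUN = 0x02
    OUT_OF_SYNCH = 0x03

def may_apply_timestamp(status):
    # Per the draft, a node in free run or out of synch applies no
    # timestamp (although the rest of the timestamp MD is still added).
    return status in (SyncStatus.IN_SYNCH, SyncStatus.IN_HOLDOVER)
```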
If the network node is out of synch or in free run, no timestamp is
applied by the node (but other timestamp MD is applied) and the
packet is processed normally.
If the FTSN is out of synch or in free run, the timestamp request is
rejected and not propagated through the chain. The FTSN should inform
the TS controller of such an event over the TSCP interface.
The outer service index value is copied into the timestamp metadata
to help cater for hybrid chains that are a mix of VNFs and PNFs, or
that pass through SFs that do not understand NSH. Thus, if a flow
transits a PNF or an NSH-unaware node, the delta in the inner service
index between timestamps will indicate this.
The Ingress Timestamp and Egress Timestamp are represented in 64-bit
NTP format [RFC5905]. The corresponding bits (I and E) are reported in
the Stamping Reporting Header of the node's metadata.
The 64-bit timestamp format [RFC5905] is presented below:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                            Seconds                            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                           Fraction                            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 5 NTP [RFC5905] 64-bit Timestamp Format
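A Unix time can be converted to this 64-bit NTP representation (32-bit seconds since the 1900-01-01 NTP epoch plus a 32-bit binary fraction of a second); a short illustrative sketch:

```python
# Convert between Unix time (seconds since 1970-01-01) and the 64-bit
# NTP timestamp of RFC 5905: 32-bit seconds since 1900-01-01 in the
# high word, 32-bit fraction of a second in the low word.
NTP_UNIX_OFFSET = 2208988800  # seconds from 1900-01-01 to 1970-01-01

def unix_to_ntp64(unix_time):
    ntp_time = unix_time + NTP_UNIX_OFFSET
    seconds = int(ntp_time)
    fraction = int((ntp_time - seconds) * (1 << 32))
    return (seconds << 32) | fraction

def ntp64_to_unix(ntp64):
    seconds = ntp64 >> 32
    fraction = ntp64 & 0xFFFFFFFF
    return seconds - NTP_UNIX_OFFSET + fraction / (1 << 32)

ts = unix_to_ntp64(0.5)  # half a second into the Unix epoch
```

The round trip is exact to well under a microsecond for values in this range, which is finer than the resolution any of the stamping points in Section 3 are likely to achieve.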
5. Hybrid Models
A hybrid chain may be defined as a chain whereby there is a mix of
NSH-aware and NSH-unaware SFs. This may be the case if some PNFs are
used in the chain or if VNFs are used that do not support NSH.
Example 1. PNF in the middle
| TSDB
| TSCP Interface |
,---. ,---. ,---. ,---.
/ \ / \ / \ / \
( SCL )-------->( SF1 )--------->( SF2 )--------->( SFN )
\ FTSN/ \ / \ PNF1/ \ LTSN/
`---' `---' `---' `---'
Figure 6 Hybrid chain with PNF in middle
In this example the FTSN begins operation and sets the SI to 3, SF1
decrements this to 2 and passes the flow to an SFC proxy (not shown).
The proxy strips the NSH header and passes to the PNF. On receipt
back from the PNF the Proxy decrements the SI and passes the packet
onto the LTSN with a SI=1.
After the LTSN processes the traffic it knows it is the last node on
the chain from the SI value and exports the entire NSH header and all
metadata to the TSDB. The payload is forwarded to the next hop on the
underlay minus the NSH header. The TS information packet is given a
new SPI which acts as a homing tag to transport the timestamp data
back to the TSDB.
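The service-index bookkeeping in this example can be traced with a small sketch. The chain layout and decrement rules follow the prose above; the names are illustrative:

```python
# Trace the Service Index (SI) through the hybrid chain of Figure 6.
# The FTSN sets SI=3; SF1 decrements it; the SFC proxy decrements it on
# behalf of the NSH-unaware PNF1; the LTSN then receives SI=1 and,
# knowing it is last, exports the header and metadata to the TSDB.

def trace_chain(initial_si, hops):
    """hops: list of (name, decrements_si) tuples after the FTSN."""
    si = initial_si
    trace = [("FTSN/SCL", si)]
    for name, decrements in hops:
        if decrements:
            si -= 1
        trace.append((name, si))
    return trace

chain = [("SF1", True), ("proxy/PNF1", True), ("LTSN", False)]
trace = trace_chain(3, chain)
# trace -> [('FTSN/SCL', 3), ('SF1', 2), ('proxy/PNF1', 1), ('LTSN', 1)]
```

The LTSN seeing SI=1 is exactly the signal described above: it is the last node on the chain, so it exports the NSH header and metadata rather than forwarding them.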
Example 2. PNF at the end
| TSDB
| TSCP Interface |
,---. ,---. ,---. ,---.
/ \ / \ / \ / \
( SCL )-------->( SF1 )--------->( SF2 )--------->( PNFN )
\ FTSN/ \ / \ LTSN/ \ /
`---' `---' `---' `---'
Figure 7 Hybrid Chain with PNF at end
In this example the FTSN begins operation and sets the SI to 3, the
TSI field set to 0x1, and the type to 1. Thus when SF2 receives the
packet with SI=1, it understands that it is expected to take on the
role of the LTSN as it is the last NSH-aware node in the chain.
5.1. Targeted VNF Timestamp
For the majority of flows within the service chain, timestamps
(ingress, egress or both) will be carried out at each hop until the
SI decrements to zero and the NSH header and TS MD is exported to the
TSDB. There may exist however the need to just test a particular VNF
(perhaps after a scale out operation or software upgrade for
example). In this case the FTSN should mark the NSH header as
follows: the TSI field is set to 0x2, and the Type is set to the
expected SI at the SF in
question. When outer SI is equal to the TSSI, timestamps are applied
at SF ingress and egress, and the NSH header and MD are exported to
the TSDB.
6. Fragmentation Considerations
The method described in this draft does not support fragmentation.
The TS Controller should return an error should a timestamping
request from an external system exceed MTU limits and require
fragmentation. The added overhead depends on the length of the
payload, the type of timestamp requested, and the chain length, and
will therefore vary for each packet.
In most service provider architectures we would expect a SI << 10,
and that may include some PNFs in the chain which do not add
overhead. Thus, for typical IMIX packet sizes, we expect to be able
to perform timestamping on the vast majority of flows without
fragmentation.
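The per-packet overhead can be estimated from Figure 4. The byte counts below (4-byte 32-bit words, 8-byte NTP timestamps) are inferred from the figure and are illustrative only:

```python
# Rough NSH timestamping overhead estimate, with byte counts inferred
# from Figure 4 (illustrative, not normative):
#   NSH base + service path header : 8 bytes
#   TLV header                     : 4 bytes
#   Stamping Configuration Header  : 4 bytes
#   Reference Time (if T bit set)  : 8 bytes
#   Per timestamping node          : 4-byte reporting header plus
#                                    8 bytes per requested timestamp

def nsh_ts_overhead(nodes, ingress=True, egress=True, reference_time=True):
    per_node = 4 + 8 * (int(ingress) + int(egress))
    return 8 + 4 + 4 + (8 if reference_time else 0) + nodes * per_node

# A 5-node chain with both timestamps at every hop:
overhead = nsh_ts_overhead(5)          # 24 + 5 * 20 = 124 bytes
fits_in_mtu = 1400 + overhead <= 1500  # example 1400-byte payload
```

Under these assumptions a 1400-byte payload would exceed a 1500-byte MTU on a fully timestamped 5-node chain, which is exactly the situation the TS Controller is expected to reject.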
7. Security Considerations
The security considerations of NSH in general are discussed in [NSH].
The use of in-band timestamping, as defined in this document, can be
used as a means for network reconnaissance. By passively
eavesdropping on timestamped traffic, an attacker can gather
information about network delays and performance bottlenecks.
The NSH timestamp is intended to be used by various applications to
monitor the network performance and to detect anomalies. Thus, a man-
in-the-middle attacker can maliciously modify timestamps in order to
attack applications that use the timestamp values. For example, an
attacker could manipulate the SFC classifier operation, such that it
forwards traffic through 'better' behaving chains. Furthermore, if
timestamping is performed on a fraction of the traffic, an attacker
can selectively induce synthetic delay only to timestamped packets,
causing systematic error in the measurements.
An attacker that gains access to the TSCP can enable timestamping for
all subscriber flows, thereby causing performance bottlenecks,
fragmentation, or outages.
As discussed in previous sections, NSH timestamping relies on an
underlying time synchronization protocol. Thus, by attacking the time
protocol an attacker can potentially compromise the integrity of the
NSH timestamp. A detailed discussion about the threats against time
protocols and how to mitigate them is presented in [RFC7384].
8. Open Items for WG Discussion
o Specification and operation of TSCP
o AOB
9. IANA Considerations
TLV Class Allocation
TLV classes are defined in [NSH].
IANA is requested to allocate a new TLV class value:
0x10 KPI General Monitoring and timestamping type.
NSH Timestamping TLV Type
IANA is requested to set up a registry of "NSH Timestamping TLV
Types". These are 7-bit values. Registry entries are assigned by
using the "IETF Review" policy defined in [RFC5226].
IANA is requested to allocate two new types as follows:
o Type = 0x00 Reserved.
o Type = 0x01 Timestamp.
10. Acknowledgments
The authors would like to thank Ron Parker of Affirmed Networks and
Seungik Lee of ETRI for their reviews of this draft.
This document was prepared using 2-Word-v2.0.template.dot.
11. References
11.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
[NSH] Quinn, P., Elzur, U., "Network Service Header", draft-
ietf-sfc-nsh-05 (work in progress), May 2016.
11.2. Informative References
[IEEE1588] IEEE TC 9 Instrumentation and Measurement Society,
"1588 IEEE Standard for a Precision Clock
Synchronization Protocol for Networked Measurement and
Control Systems Version 2", IEEE Standard, 2008.
[RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing
an IANA Considerations Section in RFCs", BCP 26, RFC
5226, May 2008.
[RFC5905] Mills, D., Martin, J., Burbank, J., Kasch, W.,
"Network Time Protocol Version 4: Protocol and
Algorithms Specification", RFC 5905, June 2010.
[RFC7384] Mizrahi, T., "Security Requirements of Time Protocols
in Packet Switched Networks", RFC 7384, October 2014.
[Y.1731] ITU-T Recommendation G.8013/Y.1731, "OAM Functions and
Mechanisms for Ethernet-based Networks", August 2015.
[Y.1564] ITU-T Recommendation Y.1564, "Ethernet service
activation test methodology", March 2011.
[G.8261] ITU-T Recommendation G.8261/Y.1361, "Timing and
synchronization aspects in packet networks", August
[G.8262] ITU-T Recommendation G.8262/Y.1362, "Timing
characteristics of a synchronous Ethernet equipment
slave clock", January 2015.
[G.8264] ITU-T Recommendation G.8264/Y.1364, "Distribution of
timing information through packet networks", May 2014.
Authors' Addresses
Rory Browne
Dromore House
Email: rory.browne@intel.com
Andrey Chilikin
Dromore House
Email: andrey.chilikin@intel.com
Brendan Ryan
Dromore House
Email: brendan.ryan@intel.com
Tal Mizrahi
6 Hamada St.
Yokneam, 20692 Israel
Email: talmi@marvell.com
Yoram Moses
Department of Electrical Engineering
Technion - Israel Institute of Technology
Technion City, Haifa, 32000, Israel
Email: moses@ee.technion.ac.il
Browne, et al. Expires December 1, 2016 [Page 20] | {"url":"https://datatracker.ietf.org/doc/html/draft-browne-sfc-nsh-timestamp","timestamp":"2024-11-09T04:39:41Z","content_type":"text/html","content_length":"88355","record_id":"<urn:uuid:84e38600-2a5b-42f5-adef-afe06c182802>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00211.warc.gz"} |
The confinement of high-temperature plasmas peculiar of nuclear fusion devices requires magnetic fields of several Tesla. In several fusion experiments (e.g., Tore Supra, KSTAR, EAST) as well as in
future devices (e.g., ITER, W7-X, JT-60SA, EU DEMO) and most likely in a fusion reactor, these high fields are produced by superconducting (SC) coils, wound relying on the cable-in-conduit conductor
(CICC) concept and carrying currents of several tens of kA each.
Due to the complexity of the magnet system and the large variety of transients that are expected to take place during a fusion reactor lifetime, together with the strict requirements for a safe and
cheap operation of the cryoplant supplying the supercritical He (SHe) for the cooling of the SC coils, thermal-hydraulics (TH) has become a key issue in a fusion reactor design.
Since 2008, the Cryogenic Circuit Conductor and Coil (4C) code has been developed at the Energy Department of the Politecnico di Torino to allow the TH modeling of transients in the whole magnet
system of fusion devices.
The 4C code is currently the state-of-the-art tool for this kind of modelling: it is flexible and easy-to-use in terms of geometry definition and model implementation and it has a modular structure.
Each module, suitably coupled to the others, describes a sub-section of the magnet system:
- Coil winding. This is an updated version of the Multi-conductor Mithrandir (M&M) code and analyzes the SC winding with its cooling paths. Each hydraulic channel is addressed as a 1D SHe flow in the
conductor axial direction and is discretized with the finite element method. Mass, momentum and energy conservation equations are solved in each He region, coupled with transient conduction equations in
the conductor and (separately) in the jacket.
- Coil structures cooling channels. When the coil is encapsulated in bulky envelopes, additional casing cooling channels (CCC) are required. Each hydraulic channel is addressed as a 1D SHe flow in
the CCC axial direction, is discretized with the finite element method, and the solution of the mass, momentum and energy conservation equations for the cooling He results in the computation of He speed,
temperature and pressure. The pipe wall can also be included in the model, solving a 1D heat conduction equation, coupled with the three He equations.
- Coil structures. The thermal analysis of the bulky structures of the coil is performed by computing the temperature map on a selected set of 2D azimuthal (poloidal) cross sections, approximating the
real 3D heat conduction problem with finite elements.
- Cryogenic circuit. The external cryogenic circuit(s) for the SHe is modeled using the object-oriented, equation based modeling language Modelica. The models of all the main cryogenic circuit
components (pipes, valves, volumes, circulators, LHe bath, controllers, heat exchangers) are contained in the newly developed Cryogenics library, which is a suitable extension of the ThermoPower
open-source library to the cryogenic operating conditions.
After the development of its core structure, the 4C code entered a long, detailed, successful (and never-ending) validation and benchmark campaign, exploiting experimental data from a wide range of
transients and possible CICC configurations. The validation includes both the interpretative and predictive analysis of experimental data, so the tool is now ready for application to project design.
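As a flavor of the 1D finite-element discretizations mentioned above, the sketch below solves steady 1D heat conduction with linear elements and fixed end temperatures. It is a generic textbook illustration, not 4C code, and the boundary values are invented:

```python
# Minimal 1D steady heat conduction -k*T'' = 0 on [0, L] with fixed end
# temperatures, discretized with linear finite elements. With no source
# term the exact solution is a straight line between the two boundary
# values, which the assembled system reproduces exactly.

def solve_1d_conduction(n_elem, length, t_left, t_right):
    n = n_elem + 1                      # number of nodes
    h = length / n_elem                 # uniform element size
    # Assemble the global stiffness matrix from the element matrix
    # (1/h) * [[1, -1], [-1, 1]]; conductivity cancels with zero source.
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elem):
        K[e][e] += 1.0 / h
        K[e][e + 1] -= 1.0 / h
        K[e + 1][e] -= 1.0 / h
        K[e + 1][e + 1] += 1.0 / h
    rhs = [0.0] * n
    # Dirichlet boundary conditions imposed by row replacement
    K[0] = [1.0] + [0.0] * (n - 1); rhs[0] = t_left
    K[-1] = [0.0] * (n - 1) + [1.0]; rhs[-1] = t_right
    # Naive Gaussian elimination (fine for a sketch this size)
    for i in range(n):
        piv = K[i][i]
        for j in range(i + 1, n):
            f = K[j][i] / piv
            for c in range(i, n):
                K[j][c] -= f * K[i][c]
            rhs[j] -= f * rhs[i]
    T = [0.0] * n
    for i in reversed(range(n)):
        T[i] = (rhs[i] - sum(K[i][c] * T[c] for c in range(i + 1, n))) / K[i][i]
    return T

T = solve_1d_conduction(n_elem=4, length=1.0, t_left=4.5, t_right=5.0)
# Linear profile between the boundary values: [4.5, 4.625, 4.75, 4.875, 5.0]
```

The real 4C modules solve coupled mass, momentum and energy equations for supercritical He rather than this single conduction equation, but the discretize-assemble-solve pattern is the same.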
Research topics
1. Analysis of the cooldown of the ITER central solenoid model coil and insert coil
Bonifetto, Roberto; Brighenti, Alberto; Isono, Takaaki; Martovetsky, Nicolai; Kawano, Katsumi; Savoldi, Laura; Zanino, Roberto
SUPERCONDUCTOR SCIENCE & TECHNOLOGY
Institute of Physics
Vol.30 pp.17 ISSN:0953-2048 DOI:10.1088/0953-2048/30/1/015015
1. Development of a Thermal-Hydraulic Model for the European DEMO TF Coil
Zanino, Roberto; Bonifetto, Roberto; Dicuonzo, Ortensia; Muzzi, Luigi; Nallo, GIUSEPPE FRANCESCO; Savoldi, Laura; Turtù, Simonetta
The IEEE Council on Superconductivity
Vol.26 pp.6 ISSN:1051-8223 DOI:10.1109/TASC.2016.2523241
1. Verification of the Predictive Capabilities of the 4C Code Cryogenic Circuit Model
Zanino, Roberto; Bonifetto, Roberto; Hoa, C.; Savoldi, Laura
In: Advances in Cryogenic Engineering
Cryogenic Engineering Conference (Anchorage (AK)) June 17-21, 2013
Vol.1573 pp.8 (pp.1586-1593) ISSN:0094-243X ISBN:9780735412033 DOI:10.1063/1.4860896
2. Analysis of the Effects of the Nuclear Heat Load on the ITER TF Magnets Temperature Margin
Savoldi, Laura; Bonifetto, Roberto; Bottero, U.; Foussat, A.; Mitchell, N.; Seo, K.; Zanino, Roberto
The IEEE Council on Superconductivity
Vol.24 pp.4 ISSN:1051-8223 DOI:10.1109/TASC.2013.2280720
1. Validation of the 4C Thermal-Hydraulic Code against 25 kA Safety Discharge in the ITERToroidal Field Model Coil (TFMC)
Zanino, Roberto; Bonifetto, Roberto; Heller, R.; Savoldi, Laura
Vol.21 pp.5 (pp.1948-1952) ISSN:1051-8223 DOI:10.1109/TASC.2010.2089771
1. The 4C Code for the Cryogenic Circuit Conductor and Coil modeling in ITER
Savoldi, Laura; Casella, F; Fiori, B; Zanino, Roberto
Vol.50 (pp.167-176) ISSN:0011-2275
1. M&M: Multi-Conductor Mithrandir Code for the Simulation of Thermal-Hydraulic Transients in Superconducting Magnets
Savoldi, Laura; Zanino, Roberto
Vol.40 (pp.179-189) ISSN:0011-2275
Total: 7 | {"url":"http://www.nemo.polito.it/research/in_house_codes/4c","timestamp":"2024-11-11T08:05:34Z","content_type":"application/xhtml+xml","content_length":"21701","record_id":"<urn:uuid:8a1eabec-a898-4e53-affa-d861973dc593>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00895.warc.gz"} |
High School: Algebra » Seeing Structure in Expressions
Standards in this domain:
Interpret the structure of expressions.
Interpret complicated expressions by viewing one or more of their parts as a single entity.
For example, interpret P(1+r)^n as the product of P and a factor not depending on P
Use the structure of an expression to identify ways to rewrite it.
For example, see x^4 - y^4 as (x^2)^2 - (y^2)^2, thus recognizing it as a difference of squares that can be factored as (x^2 - y^2)(x^2 + y^2)
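The factorization can be checked numerically without any algebra system; a quick sanity sketch:

```python
# Numerically check x^4 - y^4 == (x^2 - y^2)(x^2 + y^2)
#                             == (x - y)(x + y)(x^2 + y^2)
def check(x, y):
    lhs = x**4 - y**4
    mid = (x**2 - y**2) * (x**2 + y**2)
    full = (x - y) * (x + y) * (x**2 + y**2)
    return lhs == mid == full

all_ok = all(check(x, y) for x in range(-5, 6) for y in range(-5, 6))
# all_ok -> True
```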
Write expressions in equivalent forms to solve problems.
Choose and produce an equivalent form of an expression to reveal and explain properties of the quantity represented by the expression.
Use the properties of exponents to transform expressions for exponential functions.
For example, the expression 1.15^t can be rewritten as (1.15^(1/12))^(12t) ≈ 1.012^(12t) to reveal the approximate equivalent monthly interest rate if the annual rate is 15%.
Derive the formula for the sum of a finite geometric series (when the common ratio is not 1), and use the formula to solve problems.
For example, calculate mortgage payments.^* | {"url":"https://www.thecorestandards.org/Math/Content/HSA/SSE/","timestamp":"2024-11-01T23:03:52Z","content_type":"text/html","content_length":"39224","record_id":"<urn:uuid:eb0cec33-96f9-4806-9eef-ca3acc8d31fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00500.warc.gz"} |
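Both rewrites above can be verified directly. The first check confirms that the monthly growth factor 1.15^(1/12) is approximately 1.012; the second sums a finite geometric series via S = a(1 - r^n)/(1 - r) and applies it to a fixed-rate mortgage payment with illustrative loan figures:

```python
# (1) Exponential rewrite: 1.15^t == (1.15^(1/12))^(12t); the monthly
#     growth factor 1.15^(1/12) is approximately 1.012.
monthly_factor = 1.15 ** (1 / 12)
t = 3  # years
assert abs(1.15 ** t - monthly_factor ** (12 * t)) < 1e-9

# (2) Finite geometric series a + ar + ... + ar^(n-1), r != 1:
#     S = a*(1 - r**n)/(1 - r), derived from S - r*S = a - a*r**n.
def geometric_sum(a, r, n):
    return a * (1 - r**n) / (1 - r)

# Application: fixed-rate mortgage payment. The loan equals the payment
# discounted over n months -- a geometric series in 1/(1+i):
#     P = L * i / (1 - (1 + i)**-n)
def monthly_payment(loan, annual_rate, years):
    i = annual_rate / 12
    n = years * 12
    return loan * i / (1 - (1 + i) ** -n)

s = geometric_sum(1, 2, 10)             # 1 + 2 + ... + 512 = 1023
p = monthly_payment(200_000, 0.06, 30)  # illustrative loan figures
```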
Stochastic least-squares Petrov–Galerkin method for parameterized linear systems | Kevin T. Carlberg
We consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore
spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of
the solution error [A. Mugler and H.-J. Starkloff, ESAIM Math. Model. Numer. Anal., 47 (2013), pp. 1237–1263]. As a remedy for this, we propose a novel stochastic least-squares Petrov–Galerkin (LSPG)
method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted l2-norm of the residual over all solutions in a given finite-dimensional subspace.
Moreover, the method can be adapted to minimize the solution error in different weighted $l^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a
goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for
the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
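A minimal numerical illustration of the weighted least-squares idea: minimizing ||W(Ax - b)||_2 amounts to solving the weighted normal equations. The tiny system below is fabricated for illustration; the paper's actual formulation operates on parameterized stochastic systems and subspaces:

```python
# Weighted least squares for a tiny overdetermined system: minimize
# ||W (A x - b)||_2 via the weighted normal equations
#     (A^T W^2 A) x = A^T W^2 b       (W diagonal),
# solved by hand for 2 unknowns. A, b and the weights are illustrative.

def weighted_lstsq_2(A, b, w):
    m = len(A)
    g = [[0.0, 0.0], [0.0, 0.0]]   # A^T W^2 A (2x2)
    rhs = [0.0, 0.0]               # A^T W^2 b
    for k in range(m):
        wk2 = w[k] ** 2
        for i in range(2):
            rhs[i] += wk2 * A[k][i] * b[k]
            for j in range(2):
                g[i][j] += wk2 * A[k][i] * A[k][j]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    x0 = (rhs[0] * g[1][1] - rhs[1] * g[0][1]) / det
    x1 = (g[0][0] * rhs[1] - g[1][0] * rhs[0]) / det
    return [x0, x1]

# Consistent system with exact solution x = [1, 2]: any positive
# weighting must recover it, since the residual can be driven to zero.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x = weighted_lstsq_2(A, b, w=[1.0, 2.0, 0.5])
# x ≈ [1.0, 2.0]
```

For inconsistent systems the weights change which residual components the solution sacrifices, which is the mechanism the paper exploits to target different error norms.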
SIAM/ASA Journal on Uncertainty Quantification, Vol. 6, No. 1, p.374–396 (2018) | {"url":"https://kevintcarlberg.net/publication/stochastic_lspg/","timestamp":"2024-11-08T05:19:53Z","content_type":"text/html","content_length":"20995","record_id":"<urn:uuid:06537f06-23cd-4fe1-8ba4-bf704ec4649f>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00815.warc.gz"} |
An Illustration and Mensuration of Solid Geometry (QA457 .C65 1787)
Permanent URI for this collection
written by Marilyn Williamson
John Lodge Cowley (1719-1787?) was a fellow of the Royal Society and Professor of Mathematics at the Royal Military Academy at Woolwich, near London, for a period of several years between 1761 and
1773. Known as "The Shop," the Royal Military Academy was one of the new and practical national academies that began to spring up around 1750 in England and on the Continent. These academies
exemplified the shift from private science to public science, and their curricula emphasized applied sciences. Indeed the Royal Military Academy was established to improve the quality of the English
military, and mathematics was an important part of the curriculum. Here is a description of the duties of the Professor of Mathematics during the time Cowley taught at "The Shop":
The Professor of Mathematics shall teach the principles of arithmetic, algebra, the elements of geometry, the mensuration of superficies and solids, plane trigonometry, the elements of conic
sections, and the theory of perspective, as also geography and the use of the globes.
It is not surprising to find that Cowley was interested in applied mathematics, especially solid geometry, and that his textbooks had practical applications and were widely used, surely at "The
Shop," and elsewhere. The titles of his books closely reflect the subjects he taught.
Perhaps the best known of Cowley's textbooks is the one featured in this web site: An Illustration and Mensuration of Solid Geometry in Seven Books. This is the third edition, although the previous
two editions bore different titles. The third edition is probably the best known because it was revised, corrected, and augmented by William Jones (1762-1831), a prominent contemporary maker of
mathematical instruments. This association of Cowley the mathematics professor, Jones the instrument maker, and the Royal Military Academy ("The Shop") typified the interest in applied sciences which
predominated in eighteenth century England.
Euclidean geometry was consistently held in high esteem in England. Robert Simson (1687-1768) published the first edition of his English translation of the Elements of Euclid in 1756, and this
translation was highly regarded for its precision and accuracy. Although not the first English translation, Simson's work served as the foundation for most subsequent geometry texts, and it was
probably well known to both Cowley and Jones. In his preface to the third edition of Cowley's book, William Jones alludes to "that dark and groveling method, the rule only." He is referring to the
longstanding belief that all geometric constructions had to be effected by a ruler and compasses only, and that all other methods were mechanical and not geometric. Leonardo da Vinci had made
advances in construction using other means than the rule and compass, but it was the artist Albrecht Durer who showed that it was possible to construct regular and semi-regular solids out of paper by
drawing the bounding polygons all in one piece and then folding the figures along the connected edges. This method is precisely what Cowley demonstrates and Jones augments in An Illustration and
Mensuration of Solid Geometry.
There was an immediate application of these constructions to builders and designers of buildings and monuments, to cabinetmakers---in short, to technology. Writing of the second edition of Cowley's
book, a nineteenth century bibliographer said:
This work is so constructed, that by means of the schemes traced on paste boards, and all cut out except the part that forms the base of the figure, the form of any of the solids may be at once
exhibited by raising up the different parts upon their respective bases. By this contrivance any cabinet-maker can form out of wood, on any given scale, the figures of the 5 regular solids, and
most of the compound mathematical bodies.
These are practical applications indeed. Given the practical nature of Cowley's work, it is surprising to find a copy extant in such excellent condition as the Georgia Tech copy. Even in Jones's time
the earlier editions of the book were no longer in print, and by 1806 copies of any edition could scarcely be found. The third edition was expensive for its time at 18 shillings, and this high price
was probably due to the exacting nature of the cutting of the figures.
Cowley's textbooks on geometry are typical of the English view of Euclidean geometry. According to historian of mathematics Florian Cajori, "England has been the home of conservatism in geometric
teaching." He mentions that the first English translation of the Elements was brought out by the Englishman Billingsley in 1570. However, the study of geometry fell into decline during the seventeenth
century, despite a strong attempt by Sir Henry Savile (1549-1622) to bring it into the curriculum at Oxford. Savile had presented a series of lectures on Greek geometry, and he endowed a
professorship of geometry, remarking that "geometry was almost totally abandoned and unknown in England" at that time. Possibly Savile was more influential than he may have thought, since many
English editions of Euclid appeared during the 1700's, culminating in Simson's important edition.
Cowley's work reflects the English allegiance to Euclidean geometry, but stresses the more practical aspects of it, very much in keeping with the rise of applied mathematics.
An Illustration and Mensuration of Solid Geometry in seven books
John Lodge Cowley (1719-97); revised, corrected, and augmented by William Jones.
Call number: QA457 .C65 1787
Containing forty-two moveable copper-plate schemes for forming the various kinds of solids and their sections
Edition: 3rd ed.
Publisher: London, Printed by S. Cosnell...for the editor, 1787
Description: 32 p., 42 leaves of plates ; 29 cm.
Search within this collection. | {"url":"https://repository.gatech.edu/collections/4bfcf37b-2bdd-40b4-8a55-1802ed4f7174","timestamp":"2024-11-05T00:36:46Z","content_type":"text/html","content_length":"261801","record_id":"<urn:uuid:2a0d17ac-c0a9-485a-9d36-4729f4fe7add>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00610.warc.gz"} |
Help on math problem
Worksheets - Full List - SuperTeacherWorksheets Solve the math problems and use the answers to complete the crossword puzzles. Math Riddles. Solve the math problems to decode the answer to funny
riddles. Includes a wide variety of math skills, including addition, subtraction, multiplication, division, place value, rounding, and more. Mean (Averages) Worksheets
Eureka Math Resources Eureka Math is probably very different than your math classes in school or the way you learned math. It is not that one way is better than the other, the key is the expectations
that are being put on our students today. Today's world has many challenges and problems that need to be solved. Math Word Problems | Math Playground Math Playground has hundreds of interactive math
word problems for kids in grades 1-6. Solve problems with Thinking Blocks, Jake and Astro, IQ and more. Model your word problems, draw a picture, and organize information! Customized Math Problems
Assistance from the Processionals We offer assistance with various areas in math, including differential equations, geometry, algebra, number theory, mathematical physics, calculus, and other
disciplines. Besides, our service will provide you with help at a reasonable fee. If you need help with an internet math problem, then you've come to the right place.
QuickMath allows students to get instant solutions to all kinds of math problems, from algebra and equation solving right through to calculus and matrices.
Math Problem Solver - Help With Hardest Math Problems Solving We help students to know how to use mathway math problem solver. It is the best since you know how to handle different problems
appropriately. With it, you not only get the answers but also the right steps that help you work on related sums, especially with math problem solver with steps. Misunderstood Minds . Writing
Difficulties | PBS UP CLOSE: Output Dr. Mel Levine explains how Nathan Suggs' ideas outpace his ability to get them on paper. Nathan's output problem focuses a lot on writing, which is the most
common and demanding ... DosageHelp.com - Helping Nursing Students Learn Dosage ...
The reader and recorder's job is to read a word problem aloud and to allow his fellow "math coaches" to advise him on which mathematical operation to follow in solving the problem. Advise the math
coaches in the class to listen to the word problem closely, to advise the reader and recorder to underline any key words in the problem that they ...
Mathematics becoming nightmarishly hard? Crush those fears and doubts over, “Who can solve my math problem?” Right here for amazing deals. Rush now. Visit this link. Solve My Math – An Online Math
Problem Solver - EduBirdie.com
Whether you are a mathlete or math challenged, Photomath will help you interpret problems with comprehensive math content from arithmetic to calculus to drive ...
Math problem writing help at PapersOwl. We offer 24/7 Support, Full Confidentiality, Money Back Guarantee and On Time Delivery for our clients. You pay only after approving. 1000+ math experts for
hire with high qualification and years of… Math Problem Examples, Free Samples and Essay Topics
Math Problem of the Week | Math Goodies
This Free App Will Solve Math Problems For You | HuffPost 22 Oct 2014 ... Need more help with math problems than a calculator can provide? There's now an app for that. PhotoMath promises to help
solve simple ... We Offer All Types of Math Homework Help To Students of All College ... Our math homework help cuts across all areas of mathematics. This is to say that no matter the type of math
homework problem you have, we can offer you ...
Well, I think what you're saying is that there's a square with a semicircle in it. And you want to find the center to a corner. You can use the Pythagorean theorem which is A^2 + B^2 = C^2. Why I
Teach Students Multiple Strategies to Solve Math ... I have given students entry points into the problem and allowed them to approach it at their level, tackling it with the foundational skills that
they know and understand. I have empowered students to do math because I have taught them a variety of strategies to add to their toolbox. I teach multiple strategies to solve math problems because | {"url":"https://iwritealwh.netlify.app/kemery68942rade/help-on-math-problem-482","timestamp":"2024-11-03T16:19:03Z","content_type":"text/html","content_length":"20823","record_id":"<urn:uuid:1b3f6417-380d-4ebb-b632-b6c362f595d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00311.warc.gz"} |
11-Steps to Understanding Fractions - This page includes math tutorial videos!
Please read on!
The 4th Lesson in this series will begin after this brief message.
Book 4
Adding Fractions with Different Denominators
Adding fractions, like 2/3 + 3/4, can be very confusing. Unless children are taught to illustrate the addition of the fractions, they often never fully understand what is happening with the
fractions. This leaves children with a limited understanding, which can be frustrating and confusing as they enter higher levels of mathematics.
This 4th video-post continues my cyclical learning approach to the next level of understanding:
4-Steps to Adding Fractions with Unlike Denominators!
Step 1 – Illustrate the two fractions (2/3 and 3/4). Then use your LCM (Least Common Multiple) to convert your fractions, so that they have common denominators.
Step 2 – Count up the total number of fraction parts, in this case you would get 17/12. If you move the parts around you can change this improper fraction into the mixed number of 1 & 5/12.
Step 3 – Now move to your algorithm. Multiply both 2/3 and 3/4 by the Giant-1, as shown above. This will convert your fractions, so that they have common denominators. Now you can add the fractions.
Step 4 – Make sure that your algorithm agrees with your mathematical model. In this case, you have 17/12. If you change 17/12 into a mixed number you have 1 & 5/12, which agrees with your mathematical model.
Mathematical Model
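If you like to check your work on a computer, the four steps above can be sketched in a few lines of Python. The function name and layout here are my own illustration, not part of the books; Python's exact `Fraction` type serves as the answer key.

```python
from fractions import Fraction
from math import lcm  # Python 3.9+

def add_with_giant_one(a_num, a_den, b_num, b_den):
    """Add two fractions the way the book does: convert both to the
    LCM denominator (the Giant-1 step), then count up the parts."""
    common = lcm(a_den, b_den)            # Step 1: LCM of 3 and 4 is 12
    a_parts = a_num * (common // a_den)   # 2/3 -> 8/12
    b_parts = b_num * (common // b_den)   # 3/4 -> 9/12
    total = a_parts + b_parts             # Step 2: 17 parts -> 17/12
    whole, rest = divmod(total, common)   # Step 4: rewrite as a mixed number
    return total, common, whole, rest

total, common, whole, rest = add_with_giant_one(2, 3, 3, 4)
print(f"{total}/{common} = {whole} & {rest}/{common}")   # 17/12 = 1 & 5/12

# Cross-check against Python's exact fraction arithmetic:
assert Fraction(total, common) == Fraction(2, 3) + Fraction(3, 4)
```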
Book 4 – Problem Number 1
Little Pat’s Race
You are teaching little kids how to ride mini-motorcycles. Your best rider is Little Pat and he is only 6-years old. Little Pat has entered a race. He has ridden his mini-motorcycle 2/3 of a
mile. He only needs to stay in front of the pack for another 3/4 of a mile to win.
What is the total length of “Little Pat’s Race”?
Now – Press PLAY and watch the math tutorial video below. Then copy these strategies into your notes.
Challenge 2 – Work with Me
Pull-Up Pete
You are in gym class attempting to do a pull-up. Your best friend Pete is doing great. On his first attempt he does 3/4 of a pull-up. On his second attempt he does 5/6 of a pull-up.
How many pull-ups has “Pull-Up Pete” done in all?
Now – Gather your materials and press PLAY. We’ll solve this problem together while you watch the math tutorial video below.
Challenge 3 – On Your Own
Pumping Iron Ant
It’s Saturday. You go outside for a day of relaxation. You look on the ground and see an ant pumping iron. Well, he is actually pumping branches. You are amazed. The first branch he pumps is 5/7
of a pound. He throws that one away and grabs another branch. This one is 2/3 of a pound.
How many pounds has your “Pumping Iron Ant” lifted?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Each of my books includes 10 word problems like the three above. The books also include another 16 Drill & Kill problems like the 4 in challenge 11.
Drill & Kill
This is where we Drill until we Kill all our mistakes!
I call this section of the book, Drill & Kill, because we will drill this concept until we are perfect, and we kill any mistakes!
The following problems can all be solved with the same strategies we used to solve the first ten problems.
1. Solve all four problems on each page.
2. Watch the math tutorial video & correct your work.
3. Review your work with your parent or teacher.
If you get all 4 problems correct, your parent or teacher may tell you that you’re ready to move to the next book/article within this series.
Good Luck!
Drill & Kill
Challenge – 11
Problem 1: 4/5 + 1/4 = ?
Problem 2: 7/8 + 2/3 = ?
Problem 3: 4/7 + 11/14 = ?
Problem 4: 8/9 + 1/2 = ?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Please read on!
The 5th Lesson in this series will begin after this brief message.
Book 5
Subtracting Fractions with Different Denominators
Subtracting fractions, like 2/3 – 1/4, can be very confusing. Unless children are taught to illustrate the subtraction of the fractions, they often never fully understand what is happening with the
fractions. This leaves children with a limited understanding, which can be frustrating and confusing as they enter higher levels of mathematics.
This 5th video-post continues my cyclical learning approach to the next level of understanding:
4-Steps to Subtracting Fractions with Unlike Denominators!
Step 1 – Illustrate the two fractions (2/3 and 1/4). Then use your LCM (Least Common Multiple) to convert your fractions, so that they have common denominators. If you’re new to illustrating
fractions, you might want to review my first video-post in this series, An Easy Way to Understanding Fractions.
Step 2 – Once you have converted your fraction boxes according to your LCM, you’re ready to erase some of your fraction parts. In this case you would erase 3-parts, which leaves you with an answer
of 5/12.
Step 3 – Now move to your algorithm. Multiply both 2/3 and 1/4 by the Giant-1, as shown above. This will convert your fractions, so that they have common denominators. Now you can subtract the fractions.
Step 4 – Make sure that your algorithm agrees with your mathematical model. In this case, you have 5/12, which agrees with your mathematical model.
Mathematical Model
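As with addition, the subtraction steps above can be sketched in Python (again, the function name is my own invention, used only for illustration):

```python
from fractions import Fraction
from math import lcm  # Python 3.9+

def subtract_with_giant_one(a_num, a_den, b_num, b_den):
    """Subtract two fractions: convert both to the LCM denominator,
    then 'erase' the second fraction's parts from the first's."""
    common = lcm(a_den, b_den)            # 3 and 4 give the LCM 12
    a_parts = a_num * (common // a_den)   # 2/3 -> 8/12
    b_parts = b_num * (common // b_den)   # 1/4 -> 3/12
    return a_parts - b_parts, common      # erase 3 parts: 5/12

num, den = subtract_with_giant_one(2, 3, 1, 4)
print(f"{num}/{den}")                     # 5/12
assert Fraction(num, den) == Fraction(2, 3) - Fraction(1, 4)
```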
Book 5 – Problem Number 1
Geraldine the Day Dreaming Giraffe
Geraldine the Giraffe is a day dreamer. She day dreams most of the day away, while other giraffes are busy stretching their necks to the high branches for food. Geraldine walks around day dreaming
about her boyfriend Gary. By the end of the day, she will be very hungry and will have to scramble to get enough food.
There is only 2/3 of the day remaining. If Geraldine day dreams for another 1/4 of a day, how much of the day will be left for her to eat?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Challenge 2 – Work with Me
Larry the Llama
Larry the Llama may look like a relaxed fella, but he is actually a top ranked runner in the llama community. He races around his yard faster than any other llama in his herd. Today is the annual
llama race. The course is 5/6 of a mile long. Larry races as fast as he can for 3/4 of a mile.
How much further does Larry have to run and stay ahead of the herd in order to win the race?
Now – Gather your materials and press PLAY. We’ll solve this problem together while you watch the math tutorial video below.
Challenge 3 – On Your Own
Banana Gobbling Gorilla
You are on safari in Africa. As the jeep drives past a group of gorillas, you notice that one of them is gobbling a bunch of bananas. He rips one from the stalk, but part of the banana stays on the
stalk and part is in the gorilla’s hand. The gorilla is holding 6/7 of the banana in his hand. He bends his head forward and gobbles 2/3 of the banana.
How much of the banana does this Gobbling Gorilla have left to eat?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Each of my books includes 10 word problems like the three above. The books also include another 16 Drill & Kill problems like the 4 in challenge 11.
Drill & Kill
This is where we Drill until we Kill all our mistakes!
I call this section of the book, Drill & Kill, because we will drill this concept until we are perfect, and we kill any mistakes!
The following problems can all be solved with the same strategies we used to solve the first ten problems.
1. Solve all four problems on each page.
2. Watch the math tutorial video & correct your work.
3. Review your work with your parent or teacher.
If you get all 4 problems correct, your parent or teacher may tell you that you’re ready to move to the next book/article within this series.
Good Luck!
Drill & Kill
Challenge – 11
Problem 1: 2/3 – 5/8 = ?
Problem 2: 3/4 – 2/7 = ?
Problem 3: 4/5 – 3/6 = ?
Problem 4: 5/6 – 4/9 = ?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Please read on!
The 6th Lesson in this series will begin after this brief message.
Book 6
Adding Mixed Numbers with Unlike Denominators
Adding mixed numbers, like 2 & 3/4 + 3 & 5/6, can be very confusing. Unless children are taught to illustrate the addition of the fractions, they often never fully understand what is happening with
the fractions. This leaves children with a limited understanding, which can be frustrating and confusing as they enter higher levels of mathematics.
This 6th video-post continues my cyclical learning approach to the next level of understanding:
4-Steps to Fully Understand Adding Mixed Numbers with Unlike Denominators!
Step 1 – Illustrate the two fractions (2 & 3/4 and 3 & 5/6) and use your LCM to change the fractions so that they have common denominators.
Step 2 – Add the fractions boxes to solve the problem.
Step 3 – Now, move to your algorithm. Use the Giant-1 to convert your fractions, so they have common denominators.
Step 4 – Add your whole numbers and your fractions to solve the problem. Make sure that your algorithm agrees with your illustration.
Mathematical Model
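For readers who want to verify a mixed-number sum on a computer, here is a short Python sketch of the steps above (the helper name `add_mixed` is my own); Python's `Fraction` type keeps the arithmetic exact and the carry into the wholes is one integer division:

```python
from fractions import Fraction

def add_mixed(whole1, frac1, whole2, frac2):
    """Add two mixed numbers: wholes with wholes, fractions with
    fractions, then carry any improper part into the wholes."""
    total = whole1 + frac1 + whole2 + frac2   # e.g. 2 & 3/4 + 3 & 5/6
    whole = total.numerator // total.denominator
    return whole, total - whole               # (wholes, leftover fraction)

whole, rest = add_mixed(2, Fraction(3, 4), 3, Fraction(5, 6))
print(f"{whole} & {rest}")                    # 6 & 7/12
```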
Book 6 – Problem Number 1
Red Tailed Hawk
You are on a hike when you see a Red Tailed Hawk soaring across the sky. It is absolutely beautiful! The hawk is circling round and round in two circles that form an 8. The upper portion of the eight is 2 & 3/4 miles around. The lower portion of the eight is 3 & 5/6 miles around.
How far does the Red Tailed Hawk fly each time he completes his figure-8 shape in the sky?
Now – Press PLAY and watch the math tutorial video below. Then copy these strategies into your notes.
Challenge 2 – Work with Me
Hungry Chipmunk
You are camping in Yellowstone National Park. You are sitting on the picnic table at your campsite eating chips when a cute, little chipmunk scampers up to you. You feed him 2 & 2/3 bags of chips.
The next day you are eating more chips when the friendly chipmunk reappears. This time you feed him 2 & 3/5 bags.
How many bags of chips did you feed your Hungry Chipmunk?
Now – Gather your materials and press PLAY. We’ll solve this problem together while you watch the math tutorial video below.
Challenge 3 – On Your Own
Mural Project
Your class was asked to create two murals for your school. The murals will be painted on a number of walls near your school’s gymnasium. Your teacher has chosen two art projects for the murals.
One is a drawing of Native Americans. The other is a drawing of the people from your community. You get to use 1 & 4/9 walls for one mural and 1 & 2/3 walls for the other drawing.
How many walls will be covered by these two drawings?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Each of my books includes 10 word problems like the three above. The books also include another 16 Drill & Kill problems like the 4 in challenge 11.
Drill & Kill
This is where we Drill until we Kill all our mistakes!
I call this section of the book, Drill & Kill, because we will drill this concept until we are perfect, and we kill any mistakes!
The following problems can all be solved with the same strategies we used to solve the first ten problems.
1. Solve all four problems on each page.
2. Watch the math tutorial video & correct your work.
3. Review your work with your parent or teacher.
If you get all 4 problems correct, your parent or teacher may tell you that you’re ready to move to the next book/article within this series.
Good Luck!
Drill & Kill
Challenge – 11
Problem 1: 1 & 2/3 + 2 & 5/8 = ?
Problem 2: 3 & 3/4 + 4 & 2/7 = ?
Problem 3: 2 & 4/5 + 4 & 3/6 = ?
Problem 4: 5 & 1/6 + 1 & 1/9 = ?
Once you complete the problem – Hit PLAY on the math tutorial video below. This challenge is broken into 2 videos. The first one covers the first two problems and the second one covers the 3rd & 4th problems. Good Luck!
Please read on!
The 7th Lesson in this series will begin after this brief message.
Book 7
Subtracting Mixed Numbers with Unlike Denominators
Subtracting mixed numbers, like 3 & 3/4 – 2 & 5/6, can be very confusing. Unless children are taught to illustrate the subtraction of the fractions, they often never fully understand what is happening
with the fractions. This leaves children with a limited understanding, which can be frustrating and confusing as they enter higher levels of mathematics.
This 7th video-post continues my cyclical learning approach to the next level of understanding:
4-Steps to Fully Understand Subtracting Mixed Numbers with Unlike Denominators!
Step 1 – Illustrate the two fractions (3 & 3/4 – 2 & 5/6 ) and use your LCM to change the fractions so that they have common denominators.
Step 2 – Once you have converted your fraction boxes according to your LCM, you’re ready to erase some of your fraction parts. In this case you need to convert one of your whole-fraction boxes into
12/12 as shown above. Then erase 10-boxes, which leaves you with an answer of 11/12.
Step 3 – Now, move to your algorithm. Use the Giant-1 to convert your fractions, so they have common denominators.
Step 4 – Subtract your whole numbers and your fractions to solve the problem. Make sure that your algorithm agrees with your illustration.
Mathematical Model
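The borrowing step above can be checked with a short Python sketch (the helper name is my own). Working with improper fractions makes the "borrow a whole, 12/12" step automatic:

```python
from fractions import Fraction

def subtract_mixed(whole1, frac1, whole2, frac2):
    """Subtract mixed numbers by converting to improper fractions,
    so the borrowing happens implicitly."""
    a = whole1 + frac1                    # 3 & 3/4 -> 15/4
    b = whole2 + frac2                    # 2 & 5/6 -> 17/6
    diff = a - b                          # 45/12 - 34/12 = 11/12
    whole = diff.numerator // diff.denominator
    return whole, diff - whole

whole, rest = subtract_mixed(3, Fraction(3, 4), 2, Fraction(5, 6))
print(f"{whole} & {rest}" if whole else f"{rest}")   # 11/12
```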
Book 7 – Problem Number 1
You are on vacation in the rain forest of South America. You discover a slow sloth clawing across the ground. He is headed for a large tree 3 & 3/4 yards away. You watch his slow progress as he
moves one foot in front of the other. After a full minute the slow sloth has only traveled 2 & 5/6 yards.
How much further does your slow moving sloth have to travel before he reaches his tree?
Now – Press PLAY and watch the math tutorial video below. Then copy these strategies into your notes.
Challenge 2 – Work with Me
Hungry Octopus
You are part of the Junior Marine Biology Club of America. Your Club is scuba diving off the coast of Monterey California. You observe an octopus eating clams. There are 2 & 2/3 clams hidden in
the sand. The octopus finds and eats 1 & 4/5 of the clams.
How many clams does the Hungry Octopus have left to eat?
Now – Gather your materials and press PLAY. We’ll solve this problem together while you watch the math tutorial video below.
Challenge 3 – On Your Own
Carrot Chomping Horse
You have wanted a horse of your own for as long as you can remember. You have saved and saved. Finally, you have enough money for your horse. She is a beautiful mare and she loves chomping on
carrots. You buy 2 & 7/9 carrots for her. She chomps away 1 & 2/3 of the carrots.
How many carrots are left?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Each of my books includes 10 word problems like the three above. The books also include another 16 Drill & Kill problems like the 4 in challenge 11.
Drill & Kill
This is where we Drill until we Kill all our mistakes!
I call this section of the book, Drill & Kill, because we will drill this concept until we are perfect, and we kill any mistakes!
The following problems can all be solved with the same strategies we used to solve the first ten problems.
1. Solve all four problems on each page.
2. Watch the math tutorial video & correct your work.
3. Review your work with your parent or teacher.
If you get all 4 problems correct, your parent or teacher may tell you that you’re ready to move to the next book/article within this series.
Good Luck!
Drill & Kill
Challenge – 11
Problem 1: 2 & 2/3 – 1 & 5/8 = ?
Problem 2: 4 & 3/4 – 1 & 2/7 = ?
Problem 3: 4 & 4/5 – 3 & 5/6 = ?
Problem 4: 5 & 1/6 – 1 & 1/9 = ?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Please read on!
The 8th Lesson in this series will begin after this brief message.
Book 8
Multiplying a Whole Number by a Fraction
Multiplying a whole number by a fraction, like 2/3 x 5, can be very confusing.
If you are multiplying, why is the answer smaller than 5?
Unless children are taught to illustrate the multiplication of the fractions, they often never fully understand what is happening with the fractions. This leaves children with a limited understanding,
which can be frustrating and confusing as they enter higher levels of mathematics.
This 8th video-post continues my cyclical learning approach to the next level of understanding:
3-Steps to Fully Understand Multiplying a Whole Number by a Fraction!
Step 1 – Create your fraction boxes. In this case you would create 5 whole boxes. Then cut each box into 3-parts.
Step 2 – Count your fraction parts. In this case you would have 10/3. If you move the shaded parts, you can make 3-wholes and have an extra 1/3. Therefore your answer is 10/3 or 3 & 1/3.
Step 3 – Now, move to your algorithm. Convert your whole number into a fraction, and then multiply your numerators and denominators. In this case you would get 10/3 or 3 & 1/3, which agrees with your mathematical model.
Mathematical Model
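Step 3 above (rewrite the whole number as a fraction, then multiply straight across) can be verified in a couple of lines of Python:

```python
from fractions import Fraction

# Write 5 as the fraction 5/1, then multiply numerators and
# denominators: (2 x 5) / (3 x 1) = 10/3.
product = Fraction(2, 3) * 5
whole, rest = divmod(product.numerator, product.denominator)
print(f"{product} = {whole} & {rest}/{product.denominator}")   # 10/3 = 3 & 1/3
```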
Book 8 – Problem Number 1
Evian Drinking Kangaroo
You are a scientist studying the strange occurrence of a kangaroo that drinks nothing but Evian water. You are tasked with the job of making sure that the kangaroo drinks the correct amount of water.
You have 5 bottles of Evian water. You must feed the kangaroo exactly 2/3 of the Evian water. How much water will you give the kangaroo?
Now – Press PLAY and watch the math tutorial video below. Then copy these strategies into your notes.
Challenge 2 – Work with Me
Long Eared Carrot Eaters
Ever since you were tiny-tiny you loved bunnies. For your last birthday, your parents bought you two baby bunnies. You love petting their long ears as they nibble away at their food. Last night
you gave them 6-carrots. They ate 4/7 of the carrots.
How much did your Long Eared Carrot Eaters eat?
Now – Gather your materials and press PLAY. We’ll solve this problem together while you watch the math tutorial video below.
Challenge 3 – On Your Own
Pink Flamingo Research
You are a biologist researching pink flamingos and their diet. You are tasked with the job of discovering how much they eat. You watch as a pink flamingo eats 11 shrimp. However, you notice that
the flamingo only eats 3/4 of each shrimp. The flamingo spits the rest of the shrimp into the water, where tiny fish nibble at the remains.
How much shrimp have you observed the pink flamingoes eat?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Each of my books includes 10 word problems like the three above. The books also include another 16 Drill & Kill problems like the 4 in challenge 11.
Drill & Kill
This is where we Drill until we Kill all our mistakes!
I call this section of the book, Drill & Kill, because we will drill this concept until we are perfect, and we kill any mistakes!
The following problems can all be solved with the same strategies we used to solve the first ten problems.
1. Solve all four problems on each page.
2. Watch the math tutorial video & correct your work.
3. Review your work with your parent or teacher.
If you get all 4 problems correct, your parent or teacher may tell you that you’re ready to move to the next book/article within this series.
Good Luck!
Drill & Kill
Challenge – 11
Problem 1: 2/3 x 5 = ?
Problem 2: 4 x 2/7 = ?
Problem 3: 3 x 4/6 = ?
Problem 4: 1/6 x 8 = ?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Please read on!
The 9th Lesson in this series will begin after this brief message.
Book 9
Dividing a Whole Number by a Fraction
Mathematical Model
Book 9 – Problem Number 1
Lazy Lion Pride
Meet the laziest pride of lions on the entire continent of Africa. This pride of cats has found 6 trees that overlap with thick branches intertwining from one tree to the next. The Lazy Lions divide the trees into portions that are each 3/4 of a tree. Each lion gets 3/4 of a tree for napping. How many of these Lazy Lions are sleeping in the trees?
* Hint: How many 3/4’s are in 6?
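If you want to check the hint above with a computer, Python's exact `Fraction` type answers "how many 3/4's are in 6?" directly:

```python
from fractions import Fraction

# "How many 3/4's are in 6?"  Division answers exactly that question:
lions = Fraction(6) / Fraction(3, 4)   # same as 6 x 4/3
print(lions)                            # 8
```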
Now – Press PLAY and watch the math tutorial video below. Then copy these strategies into your notes.
Challenge 2 – Work with Me
Mathematical Meerkats
These Meerkats are confused. They just found three tasty scorpions, which they love to eat. Now, they are leaning back and trying to do the math.
Each meerkat wants to eat 2/3 of a scorpion, because that is the perfect sized meal for a growing meerkat.
Are there enough scorpions for each of these four meerkats to have 2/3’s of a scorpion?
How many 2/3’s are in 3?
Now – Gather your materials and press PLAY. We’ll solve this problem together while you watch the math tutorial video below.
Challenge 3 – On Your Own
Wet Willy
Wet Willy is an adorable pup who went for a swim in his owner’s pool. He jumped out, ran into the living room, and shook himself dry. Water flew all over the place. It splashed onto 3 paintings, one
of which was an irreplaceable painting of President Washington. Wet Willy knew that he did something wrong. He started licking all the water off the paintings, but he can only lick 3/5 of a painting
at a time.
How many 3/5 portions of the paintings will Wet Willy have to lick before he’s done?
Hint: How many 3/5’s are in 3?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Each of my books includes 10 word problems like the three above. The books also include another 16 Drill & Kill problems like the 4 in challenge 11.
Drill & Kill
This is where we Drill until we Kill all our mistakes!
I call this section of the book, Drill & Kill, because we will drill this concept until we are perfect, and we kill any mistakes!
The following problems can all be solved with the same strategies we used to solve the first ten problems.
1. Solve all four problems on each page.
2. Watch the math tutorial video & correct your work.
3. Review your work with your parent or teacher.
If you get all 4 problems correct, your parent or teacher may tell you that you’re ready to move to the next book/article within this series.
Good Luck!
Drill & Kill
Challenge – 11
Problem 1: 2/3 ÷ 5 = ?
Problem 2: 4 ÷ 2/7 = ?
Problem 3: 3 ÷ 4/6 = ?
Problem 4: 1/6 ÷ 8 = ?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Please read on!
The 10th Lesson in this series will begin after this brief message.
Book 10
Multiplying Fractions
Mathematical Model
Book 10 – Problem Number 1
Ferlon the High-Fiving Tortoise
Ferlon may be slow, but he is the friendliest critter this side of the Mississippi River. He is constantly high-fiving all his friends as he walks through his neighborhood. Today, he walked 5/7 of a kilometer. If Ferlon high-fives his friends for 3/8 of the time he is walking, for what portion of his walk is Ferlon the High-Fiving Tortoise exhibiting his friendly nature to all his friends?
Now – Press PLAY and watch the math tutorial video below. Then copy these strategies into your notes.
Challenge 2 – Work with Me
Pinky Tuskadaro the Calculating Orangutan
Meet Pinky Tuskadaro. He’s a mathematical genius! Pinky has to use his fingers and toes, but he can add, subtract, multiply, and divide. As a matter of fact, Pinky spends 6/7 of his waking hours
calculating mathematical facts.
If Pinky is awake 3/5 of the day, how much of the day does Pinky Tuskadaro the Calculating Orangutan spend calculating mathematical problems?
Now – Gather your materials and press PLAY. We’ll solve this problem together while you watch the math tutorial video below.
Challenge 3 – On Your Own
Wrong Way John
Wrong Way John is a wandering giraffe. He is constantly turning the wrong way, getting stuck in the brambles, and losing his way. Last week he traveled 3/4 of a mile.
If he gets lost 3/8 of the time he is traveling, what portion of the miles is Wrong Way John lost?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Each of my books includes 10 word problems like the three above. The books also include another 16 Drill & Kill problems like the 4 in challenge 11.
Drill & Kill
This is where we Drill until we Kill all our mistakes!
I call this section of the book, Drill & Kill, because we will drill this concept until we are perfect, and we kill any mistakes!
The following problems can all be solved with the same strategies we used to solve the first ten problems.
1. Solve all four problems on each page.
2. Watch the math tutorial video & correct your work.
3. Review your work with your parent or teacher.
If you get all 4 problems correct, your parent or teacher may tell you that you’re ready to move to the next book/article within this series.
Good Luck!
Drill & Kill
Challenge – 11
Problem 1: 2/3 x 5/8 = ?
Problem 2: 3/4 x 2/7 = ?
Problem 3: 4/5 x 3/6 = ?
Problem 4: 5/6 x 4/9 = ?
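After trying the four drill problems above on paper, you can check your answers with Python's exact `Fraction` arithmetic. This checker is my own sketch, not part of the books:

```python
from fractions import Fraction

# The four drill problems, written as (numerator, denominator) pairs:
problems = [((2, 3), (5, 8)), ((3, 4), (2, 7)),
            ((4, 5), (3, 6)), ((5, 6), (4, 9))]

answers = []
for (an, ad), (bn, bd) in problems:
    # Fraction multiplies straight across and reduces the result.
    answers.append(Fraction(an, ad) * Fraction(bn, bd))
    print(f"{an}/{ad} x {bn}/{bd} = {answers[-1]}")
```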
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Please read on!
The 11th Lesson in this series will begin after this brief message.
Book 11
Dividing Fractions
Mathematical Model
Book 11 – Problem Number 1
Terrance the Ticklish Mule
Terrance loves his alfalfa, but his owner keeps tickling his nose. His owner loves Terrance, but he can’t help tickling his favorite mule. Each day, Terrance eats 2/3 of a pound of alfalfa. If his owner divides the alfalfa into portions that are 2/7 of a pound, how many portions will Terrance get to eat?
Hint: How many 2/7’s are in 2/3?
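The hint above can be checked the same way as the earlier division problems; Python's `Fraction` type gives the exact quotient, matching the invert-and-multiply rule:

```python
from fractions import Fraction

# "How many 2/7's are in 2/3?"  Dividing the fractions answers that:
portions = Fraction(2, 3) / Fraction(2, 7)   # same as 2/3 x 7/2
whole, rest = divmod(portions.numerator, portions.denominator)
print(f"{portions} = {whole} & {rest}/{portions.denominator}")   # 7/3 = 2 & 1/3
```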
Now – Press PLAY and watch the math tutorial video below. Then copy these strategies into your notes.
Challenge 2 – Work with Me
Singing Camels
Kami & Connie Camel love to sing. They walk around their paddock and sing to all the onlookers. Kami & Connie’s paddock is 8/9 of a mile long.
If Kami & Connie break the paddock into portions that are 3/4 of a mile, how many portions will our “Singing Camels” walk and sing?
Hint: How many 3/4’s are in 8/9?
Now – Gather your materials and press PLAY. We’ll solve this problem together while you watch the math tutorial video below.
Challenge 3 – On Your Own
Honovi the Honey Lov’n Grizzly Bear
Honovi is always on the lookout for bees. She follows them back to the hive and collects as much honey as possible. Yesterday, Honovi found a hive. While the bees crawled under her fur and stung again and again, Honovi stole 4/5 of a pound of honey. If Honovi divides the honey into portions that are each 1/8 of a pound, how many portions can she make?
Hint: How many 1/8’s are in 4/5?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Each of my books includes 10 word problems like the three above. The books also include another 16 Drill & Kill problems like the 4 in challenge 11.
Drill & Kill
This is where we Drill until we Kill all our mistakes!
I call this section of the book, Drill & Kill, because we will drill this concept until we are perfect, and we kill any mistakes!
The following problems can all be solved with the same strategies we used to solve the first ten problems.
1. Solve all four problems on each page.
2. Watch the math tutorial video & correct your work.
3. Review your work with your parent or teacher.
If you get all 4 problems correct, your parent or teacher may tell you that you’re ready to move to the next book/article within this series.
Good Luck!
Drill & Kill
Challenge – 11
Problem 1: 2/3 ÷ 5/8 = ?
Problem 2: 3/4 ÷ 2/7 = ?
Problem 3: 4/5 ÷ 3/6 = ?
Problem 4: 5/6 ÷ 4/9 = ?
Once you complete the problem – Hit PLAY on the math tutorial video below. Good Luck!
Please read on!
Need Help with Division?
I have a series on division that is specifically designed for children who do not have all of their multiplication facts memorized. The series is designed in a similar manner to this series on fractions, where each and every problem is linked to a video tutorial. Here is a link to my series on Division.
Want More Tutorials?
TeachersDungeon is an Educational Fantasy Game. It is 100% FREE! The game is set to the Common Core Educational Standards, and is web-based, so it can be played on any device. Many of the
questions are accompanied by tutorials like the ones you saw here.
One Last Thing
If you like this post and found it helpful, please leave a brief comment. As a teacher, perhaps the greatest reward I receive is from parents, children, and fellow teachers who use my strategies of
education and succeed.
My mission in life and as an educator is to make people feel empowered, self-assured, and happy about who they are in this world! We all have gifts to bestow upon our world. Go forth and do so, and
know that you are awesome!
Have a fantastic day – Brian McCoy
4 Comments
1. Is there any updates coming soon from this site?
1. Hi Selo –
I am sorry for the delay. At the end of the school year (last year), I was asked to create a series on my method of teaching division. We have a number of students at my school who struggle
with long division, and I have created an alternative called area division. I created the series, but it took me all summer to complete it.
Now school is back in session, and I am teaching full time, but I will do my best to complete this series on fractions.
Thanks for the reminder that I need to get back to work on fractions.
In the meantime – if you need help with division, check out Area Division
Have a great day!
2. Hi Brian, your “11-Steps to FULLY Understand Fractions” is so brilliant. You have a very profound knowledge of fractions. You have presented a lot of ideas on how to deal with fractions. This is very helpful to all kinds of students and also to all teachers. Regarding adding and subtracting fractions, let me also share my idea on how to deal with fractions.
Adding and subtracting fractions may be difficult at first, but keeping on practicing will make it easier in the long run. The key to adding and subtracting fractions successfully is to make the rules stick in your memory. So I have to mention the rules again here.
Rules are:
Same denominator:
Add both numerators then reduce. The result would be the final answer.
Different denominator (4 steps):
1. Multiply the numerator of the first fraction by the denominator of the second fraction. The result is the new numerator of the first fraction.
2. Multiply the numerator of the second fraction by the denominator of the first fraction. The result is the new numerator of the second fraction.
3. Multiply both denominators. The result is the common denominator for the two fractions.
4. Add the two new numerators. The result is the answer.
To make it stick to your memory:
Rules for subtraction:
Same denominator:
Subtract second numerator from first then reduce. The result would be the final answer.
Different denominator (4 steps):
1. Multiply the numerator of the first fraction by the denominator of the second fraction. The result is the new numerator of the first fraction.
2. Multiply the numerator of the second fraction by the denominator of the first fraction. The result is the new numerator of the second fraction.
3. Multiply both denominators. The result is the common denominator for the two fractions.
4. Subtract the new second numerator from the new first numerator. The result is the answer.
To make it stick to your memory:
Same denominator:
Add two fractions 50 times.
Subtract two fractions 50 times.
Different denominator:
Add two fractions 100 times.
Subtract two fractions 100 times.
To check if your answer is right and your step by step solution is correct:
Use fraction calculator with button from http://www.fractioncalc.com to be sure that your fraction solution is correct.
The key here is to make the rules implanted into the minds of the students so that they will never forget.
1. Hello Anne –
Thank you for your kind words and great advice.
I really like your suggestion to practice working with fractions until it becomes easy. I also like your suggestion to solve problems and then check your work with the fraction calculator.
Have a fantastic day – Brian
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://teachersdungeon.com/blog/11-steps-to-understanding-fractions/","timestamp":"2024-11-12T00:38:38Z","content_type":"text/html","content_length":"175530","record_id":"<urn:uuid:4987ac75-2e35-4f85-921f-8833eb6a7fe0>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00360.warc.gz"} |
Limits of (randomly) growing Schröder trees and exchangeability
We consider finite rooted ordered trees in which every internal node has at least two children, sometimes called Schröder trees; the size |t| of such a tree t is the number of its leaves. An
important concept for trees is that of inducing subtrees. Given a tree t of size k and a larger tree t’ of size n\geq k we define 0 \leq \theta(t,t’)\leq 1 to be the probability of obtaining t as a
randomly induced subtree of size k in t’. One can think of \theta(t,t’) as the density of the pattern t in t’. In this talk we consider two closely related questions concerning the nature of \theta.
1. A sequence of trees (t_n)_n with |t_n|\rightarrow\infty is called \theta-convergent, if \theta(t,t_n) converges for every fixed tree t. The limit of (t_n)_n is the function t\mapsto \lim_n\theta
(t,t_n). What limits exist?
2. A Markov chain (X_n)_n with X_n being a random tree of size n is called a \theta-chain if P(X_k=t|X_n=t’)=\theta(t,t’) for all k \leq n. What \theta-chains exist?
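For small trees, the pattern density \theta(t,t’) defined above can be estimated by Monte Carlo sampling. The following Python sketch is illustrative only, and not from the talk: the nested-tuple tree encoding and all names are assumptions. It samples k leaves of t’ uniformly, restricts t’ to them, suppresses the unary nodes that arise, and compares the resulting shape with t.

```python
import random

LEAF = "leaf"

def leaves(t):
    """Number of leaves of a tree given as nested tuples (LEAF at the leaves)."""
    return 1 if t == LEAF else sum(leaves(c) for c in t)

def induce(t, picks):
    """Restrict t to a set of leaf indices (left-to-right order), contracting
    internal nodes left with a single child; returns None if no leaf is picked."""
    def go(t, offset):
        if t == LEAF:
            return (LEAF if offset in picks else None), offset + 1
        kids = []
        for c in t:
            sub, offset = go(c, offset)
            if sub is not None:
                kids.append(sub)
        if not kids:
            return None, offset
        if len(kids) == 1:          # suppress the unary node
            return kids[0], offset
        return tuple(kids), offset
    return go(t, 0)[0]

def theta(t, t_big, trials=2000, rng=random.Random(1)):
    """Monte Carlo estimate of the pattern density theta(t, t')."""
    k, idx = leaves(t), list(range(leaves(t_big)))
    hits = 0
    for _ in range(trials):
        picks = set(rng.sample(idx, k))
        hits += induce(t_big, picks) == t
    return hits / trials
```

For example, any two leaves of any tree induce the unique two-leaf tree, so theta((LEAF, LEAF), t_big) is always 1; for t_big a pair of cherries, the two left-comb and right-comb three-leaf patterns each have density 1/2.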
Similar questions have been treated for many different types of discrete structures (words, permutations, graphs \dots); binary Schröder trees (Catalan trees) are considered in [1]. We present a De
Finetti-type representation for \theta-chains and a homeomorphic description of limits of \theta-convergent sequences involving certain tree-like compact subsets of the square [0,1]^2. Questions and
results are closely linked to the study of exchangeable hierarchies, see [2].
[1] Evans, Grübel and Wakolbinger. “Doob-Martin boundary of Rémy’s tree growth chain”. The Annals of Probability, 2017.
[2] Forman, Haulk and Pitman. “A representation of exchangeable hierarchies by sampling from random real trees”. Prob.Theory and related Fields, 2017.
[3] Gerstenberg. “Exchangeable interval hypergraphs and limits of ordered discrete structures”. arXiv, 2018. | {"url":"https://talks.ox.ac.uk/talks/id/16131e1b-de82-4227-ad4a-ab46c536a9a0/","timestamp":"2024-11-01T23:53:40Z","content_type":"text/html","content_length":"16507","record_id":"<urn:uuid:122bc5b8-3a3f-4aa6-a2a3-38d6d039a9a5>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00791.warc.gz"} |
Trigonometric Functions
This is a trigonometric functions calculator that accepts real and complex numbers.
Input `pi` for π and `sqrt(2)` for the square root of 2, for example.
This online tool calculates trigonometric functions : sine, cosine, tangent, cotangent, secant, and cosecant for a given angle in degrees or radians. Example: cos (2*pi/3).
Trigonometric functions (also called circular functions or angle functions) can be defined with different approaches, explained below:
- definitions using a right triangle
- definitions using the trigonometric circle
- definitions implying complex numbers
Right triangle and trigonometric functions
Opposite side of angle Â: a
Adjacent side of angle Â: b
Hypotenuse of the angle Â: c
sin  = opposite side/hypotenuse = a/c
cos  = adjacent side/hypotenuse = b/c
tan  = opposite side/adjacent side = a/b
cot Â = adjacent side/opposite side = b/a
sec Â = hypotenuse/adjacent side = c/b
Note that:
sec Â = 1/cos Â
csc Â = hypotenuse/opposite side = c/a
Note that:
csc Â = 1/sin Â
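The right-triangle ratios above can be checked numerically. Here is a small Python sketch; the 3-4-5 triangle is just an example choice, not from the page:

```python
import math

# A 3-4-5 right triangle: opposite side a, adjacent side b, hypotenuse c for angle A.
a, b, c = 3.0, 4.0, 5.0
A = math.atan2(a, b)                  # angle A in radians (about 36.87 degrees)

print(math.sin(A), a / c)             # sin A = opposite/hypotenuse ~ 0.6
print(math.cos(A), b / c)             # cos A = adjacent/hypotenuse ~ 0.8
print(math.tan(A), a / b)             # tan A = opposite/adjacent   ~ 0.75
print(1 / math.cos(A), c / b)         # sec A = hypotenuse/adjacent ~ 1.25
print(1 / math.sin(A), c / a)         # csc A = hypotenuse/opposite ~ 1.667
```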
Trigonometric Circle
We assume that an orthogonal coordinate system is defined by Ox and Oy axes.
The trigonometric circle is centered on point O and has radius 1. It is oriented counterclockwise i.e. from Ox to Oy.
Let M be a point of the circle and `alpha` a measurement in radians of the angle (Ox, OM); then,
- the cosine of alpha, noted cos(`alpha`), is the horizontal coordinate of M.
- the sine of alpha, noted sin(`alpha`), is the vertical coordinate of M.
Trigonometric functions applied to complex numbers
Commonly used trigonometric functions can be extended to complex numbers as follows.
We suppose that a and b are real numbers.
`sin(a + b*i) = sin(a)*cosh(b) + i cos(a)*sinh(b)`
`cos(a + b*i) = cos(a)*cosh(b) - i sin(a)*sinh(b)`
`tan(a + b*i) = (sin(2*a)+ i*sinh(2*b))/(cos(2*a)+cosh(2*b))`
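The three identities can be verified numerically against Python's `cmath` module. This is a spot check at one arbitrarily chosen point (a = 0.7, b = -1.3 is my own example, not from the page):

```python
import cmath, math

a, b = 0.7, -1.3
z = complex(a, b)

# Right-hand sides of the three identities above.
sin_id = complex(math.sin(a) * math.cosh(b), math.cos(a) * math.sinh(b))
cos_id = complex(math.cos(a) * math.cosh(b), -math.sin(a) * math.sinh(b))
tan_id = complex(math.sin(2 * a), math.sinh(2 * b)) / (math.cos(2 * a) + math.cosh(2 * b))

# Each difference should be zero up to floating-point rounding.
print(abs(cmath.sin(z) - sin_id))
print(abs(cmath.cos(z) - cos_id))
print(abs(cmath.tan(z) - tan_id))
```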
See also
Conversion of angle measurement
Inverse trigonometric functions | {"url":"https://www.123calculus.com/en/trigonometric-functions-page-1-12-100.html","timestamp":"2024-11-02T02:01:32Z","content_type":"text/html","content_length":"18914","record_id":"<urn:uuid:a53aae85-1bf7-4d49-9377-00b9ca723ccc>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00792.warc.gz"} |
9.5: Simple analysis of 2x2 repeated measures design
Normally in a chapter about factorial designs we would introduce you to Factorial ANOVAs, which are totally a thing. We will introduce you to them soon. But, before we do that, we are going to show
you how to analyze a 2x2 repeated measures ANOVA design with paired-samples t-tests. This is probably something you won’t do very often. However, it turns out the answers you get from this method are
the same ones you would get from an ANOVA.
Admittedly, if you found the explanation of ANOVA complicated, it will just appear even more complicated for factorial designs. So, our purpose here is to delay the complication, and show you with
t-tests what it is that the Factorial ANOVA is doing. More important, when you do the analysis with t-tests, you have to be very careful to make all of the comparisons in the right way. As a result,
you will get some experience learning how to know what it is you want to know from factorial designs. Once you know what you want to know, you can use the ANOVA to find out the answers, and then you
will also know what answers to look for after you run the ANOVA. Isn’t new knowledge fun!
The first thing we need to do is define main effects and interactions. Whenever you conduct a Factorial design, you will also have the opportunity to analyze main effects and interactions. However,
the number of main effects and interactions you get to analyse depends on the number of IVs in the design.
Main effects
Formally, main effects are the mean differences for a single Independent variable. There is always one main effect for each IV. A 2x2 design has 2 IVs, so there are two main effects. In our example,
there is one main effect for distraction, and one main effect for reward. We will often ask if the main effect of some IV is significant. This refers to a statistical question: Were the differences
between the means for that IV likely or unlikely to be caused by chance (sampling error)?
If you had a 2x2x2 design, you would measure three main effects, one for each IV. If you had a 3x3x3 design, you would still only have 3 IVs, so you would have three main effects.
We find that the interaction concept is one of the most confusing concepts for factorial designs. Formally, we might say an interaction occurs whenever the effect of one IV has an influence on the
size of the effect for another IV. That’s probably not very helpful. In more concrete terms, using our example, we found that the reward IV had an effect on the size of the distraction effect. The
distraction effect was larger when there was no-reward, and it was smaller when there was a reward. So, there was an interaction.
We might also say an interaction occurs when the difference between the differences are different! Yikes. Let’s explain. There was a difference in spot-the-difference performance between the
distraction and no-distraction condition, this is called the distraction effect (it is a difference measure). The reward manipulation changed the size of the distraction effect, that means there was
difference in the size of the distraction effect. The distraction effect is itself a measure of differences. So, we did find that the difference (in the distraction effect) between the differences
(the two measures of the distraction effect between the reward conditions) were different. When you start to write down explanations of what interactions are, you find out why they come across as
complicated. We’ll leave our definition of interaction like this for now. Don’t worry, we’ll go through lots of examples to help firm up this concept for you.
The number of interactions in the design also depend on the number of IVs. For a 2x2 design there is only 1 interaction. The interaction between IV1 and IV2. This occurs when the effect of say IV2
(whether there is a difference between the levels of IV2) changes across the levels of IV1. We could write this in reverse, and ask if the effect of IV1 (whether there is a difference between the
levels of IV1) changes across the levels of IV2. However, just because we can write this two ways does not mean there are two interactions. We'll see in a bit that no matter how we do the calculation
to see if the difference scores (a measure of effect for one IV) change across the levels of the other IV, we always get the same answer. That is why there is only one interaction for a 2x2. Similarly,
there is only one interaction for a 3x3, because there again we only have two IVs (each with three levels). Only when we get up to designs with more than 2 IVs, do we find more possible interactions.
A design with three IVs has four interactions. If the IVs are labelled A, B, and C, then we have three 2-way interactions (AB, AC, and BC), and one three-way interaction (ABC). We hold off on this
stuff for much later.
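The counting rule above (one interaction for two IVs, four for three) is just the number of subsets of two or more IVs. A quick sketch (the function name is ours, not from the chapter; Python is used here for illustration):

```python
from itertools import combinations

def n_interactions(ivs):
    """Count all 2-way and higher interactions among a set of IVs."""
    return sum(1 for r in range(2, len(ivs) + 1)
                 for _ in combinations(ivs, r))

print(n_interactions(["A", "B"]))       # 1  (just AB)
print(n_interactions(["A", "B", "C"]))  # 4  (AB, AC, BC, ABC)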
Looking at the data
It is most helpful to see some data in order to understand how we will analyze it. Let’s imagine we ran our fake attention study. We will have five people in the study, and they will participate in
all conditions, so it will be a fully repeated-measures design. The data could look like this:
                  No Reward                            Reward
subject   No Distraction (A)  Distraction (B)  No Distraction (C)  Distraction (D)
1                 10                 5                 12                  9
2                  8                 4                 13                  8
3                 11                 3                 14                 10
4                  9                 4                 11                 11
5                 10                 2                 13                 12

Note: Number of differences spotted for each subject in each condition.
Main effect of Distraction
The main effect of distraction compares the overall means for all scores in the no-distraction and distraction conditions, collapsing over the reward conditions.
The yellow columns show the no-distraction scores for each subject. The blue columns show the distraction scores for each subject.
The overall means for each subject, for the two distraction conditions, are shown to the right. For example, subject 1 had a 10 and a 12 in the no-distraction condition, so their mean is 11.
We are interested in the main effect of distraction. This is the difference between the AC column (average of subject scores in the no-distraction condition) and the BD column (average of the subject
scores in the distraction condition). These differences for each subject are shown in the last green column. The overall means, averaging over subjects, are in the bottom green row.
Just looking at the means, we can see there was a main effect of Distraction, the mean for the no-distraction condition was 11.1, and the mean for the distraction condition was 6.8. The size of the
main effect was 4.3 (the difference between 11.1 and 6.8).
Now, what if we wanted to know if this main effect of distraction (the difference of 4.3) could have been caused by chance, or sampling error. You could do two things. You could run a paired samples
\(t\)-test between the mean no-distraction scores for each subject (column AC) and the mean distraction scores for each subject (column BD). Or, you could run a one-sample \(t\)-test on the
difference scores column, testing against a mean difference of 0. Either way you will get the same answer.
Here’s the paired samples version:
A <- c(10,8,11,9,10) #nD_nR
B <- c(5,4,3,4,2) #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12) #D_R
AC<- (A+C)/2
BD<- (B+D)/2
t.test(AC,BD, paired=TRUE,var.equal=TRUE)
Paired t-test
data: AC and BD
t = 7.6615, df = 4, p-value = 0.00156
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
2.741724 5.858276
sample estimates:
mean of the differences
                    4.3
Here’s the one sample version:
A <- c(10,8,11,9,10) #nD_nR
B <- c(5,4,3,4,2) #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12) #D_R
AC<- (A+C)/2
BD<- (B+D)/2
t.test(AC-BD, mu=0)
One Sample t-test
data: AC - BD
t = 7.6615, df = 4, p-value = 0.00156
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
2.741724 5.858276
sample estimates:
mean of x
      4.3
If we were to write-up our results for the main effect of distraction we could say something like this:
The main effect of distraction was significant, \(t\)(4) = 7.66, \(p\) = 0.001. The mean number of differences spotted was higher in the no-distraction condition (M = 11.1) than the distraction
condition (M = 6.8).
Main effect of Reward
The main effect of reward compares the overall means for all scores in the no-reward and reward conditions, collapsing over the distraction conditions.
The yellow columns show the no-reward scores for each subject. The blue columns show the reward scores for each subject.
The overall means for each subject, for the two reward conditions, are shown to the right. For example, subject 1 had a 10 and a 5 in the no-reward condition, so their mean is 7.5.
We are interested in the main effect of reward. This is the difference between the AB column (average of subject scores in the no-reward condition) and the CD column (average of the subject scores in
the reward condition). These differences for each subject are shown in the last green column. The overall means, averaging over subjects, are in the bottom green row.
Just looking at the means, we can see there was a main effect of reward. The mean number of differences spotted was 11.3 in the reward condition, and 6.6 in the no-reward condition. So, the size of
the main effect of reward was 4.7.
Is a difference of this size likely or unlikely due to chance? We could conduct a paired-samples \(t\)-test on the AB vs. CD means, or a one-sample \(t\)-test on the difference scores. They both give
the same answer:
Here’s the paired samples version:
A <- c(10,8,11,9,10) #nD_nR
B <- c(5,4,3,4,2) #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12) #D_R
AB<- (A+B)/2
CD<- (C+D)/2
t.test(CD,AB, paired=TRUE,var.equal=TRUE)
Paired t-test
data: CD and AB
t = 8.3742, df = 4, p-value = 0.001112
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
3.141724 6.258276
sample estimates:
mean of the differences
                    4.7
Here’s the one sample version:
A <- c(10,8,11,9,10) #nD_nR
B <- c(5,4,3,4,2) #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12) #D_R
AB<- (A+B)/2
CD<- (C+D)/2
t.test(CD-AB, mu=0)
One Sample t-test
data: CD - AB
t = 8.3742, df = 4, p-value = 0.001112
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
3.141724 6.258276
sample estimates:
mean of x
      4.7
If we were to write-up our results for the main effect of reward we could say something like this:
The main effect of reward was significant, t(4) = 8.37, p = 0.001. The mean number of differences spotted was higher in the reward condition (M = 11.3) than the no-reward condition (M = 6.6).
Interaction between Distraction and Reward
Now we are ready to look at the interaction. Remember, the whole point of this fake study was what? Can you remember?
Here’s a reminder. We wanted to know if giving rewards versus not would change the size of the distraction effect.
Notice that neither the main effect of distraction nor the main effect of reward, which we just went through the process of computing, answers this question.
In order to answer the question we need to do two things. First, compute distraction effect for each subject when they were in the no-reward condition. Second, compute the distraction effect for each
subject when they were in the reward condition.
Then, we can compare the two distraction effects and see if they are different. The comparison between the two distraction effects is what we call the interaction effect. Remember, this is a
difference between two difference scores. We first get the difference scores for the distraction effects in the no-reward and reward conditions. Then we find the difference scores between the two
distraction effects. This difference of differences is the interaction effect (the green column in the table).
The mean distraction effects in the no-reward (6) and reward (2.6) conditions were different. This difference is the interaction effect. The size of the interaction effect was 3.4.
How can we test whether the interaction effect was likely or unlikely due to chance? We could run another paired-sample \(t\)-test between the two distraction effect measures for each subject, or a
one sample \(t\)-test on the green column (representing the difference between the differences). Both of these \(t\)-tests will give the same results:
Here’s the paired samples version:
A <- c(10,8,11,9,10) #nD_nR
B <- c(5,4,3,4,2) #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12) #D_R
A_B <- A-B
C_D <- C-D
t.test(A_B,C_D, paired=TRUE,var.equal=TRUE)
Paired t-test
data: A_B and C_D
t = 2.493, df = 4, p-value = 0.06727
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.3865663 7.1865663
sample estimates:
mean of the differences
                    3.4
Here’s the one sample version:
A <- c(10,8,11,9,10) #nD_nR
B <- c(5,4,3,4,2) #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12) #D_R
A_B <- A-B
C_D <- C-D
t.test(A_B-C_D, mu=0)
One Sample t-test
data: A_B - C_D
t = 2.493, df = 4, p-value = 0.06727
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
-0.3865663 7.1865663
sample estimates:
mean of x
      3.4
Oh look, the interaction was not significant. At least, if we had set our alpha criterion to 0.05, it would not have met that criterion. We could write up the results like this. The two-way
interaction between distraction and reward was not significant, \(t\)(4) = 2.493, \(p\) = 0.067.
Oftentimes when a result is “not significant” according to the alpha criterion, the pattern among the means is not described further. One reason for this practice is that the researcher is treating
the means as if they are not different (because there was an above-alpha probability that the observed differences were due to chance). If they are not different, then there is no pattern to report.
There are differences in opinion among reasonable and expert statisticians on what should or should not be reported. Let’s say we wanted to report the observed mean differences, we would write
something like this:
The two-way interaction between distraction and reward was not significant, t(4) = 2.493, p = 0.067. The mean distraction effect in the no-reward condition was 6 and the mean distraction
effect in the reward condition was 2.6.
Writing it all up
We have completed an analysis of a 2x2 repeated measures design using paired-samples \(t\)-tests. Here is what a full write-up of the results could look like.
The main effect of distraction was significant, \(t\)(4) = 7.66, \(p\) = 0.001. The mean number of differences spotted was higher in the no-distraction condition (M = 11.1) than the distraction
condition (M = 6.8).
The main effect of reward was significant, \(t\)(4) = 8.37, \(p\) = 0.001. The mean number of differences spotted was higher in the reward condition (M = 11.3) than the no-reward condition (M = 6.6).
The two-way interaction between distraction and reward was not significant, \(t\)(4) = 2.493, \(p\) = 0.067. The mean distraction effect in the no-reward condition was 6 and the mean
distraction effect in the reward condition was 2.6.
Interim Summary. We went through this exercise to show you how to break up the data into individual comparisons of interest. Generally speaking, a 2x2 repeated measures design would not be analyzed
with three paired-samples \(t\)-tests. This is because it is more convenient to use the repeated measures ANOVA for this task. We will do this in a moment to show you that they give the same results.
And, by the same results, what we will show is that the \(p\)-values for each main effect, and the interaction, are the same. The ANOVA will give us \(F\)-values rather than \(t\) values. It turns
out that in this situation, the \(F\)-values are related to the \(t\) values. In fact, \(t^2 = F\).
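The \(t^2 = F\) relationship can be checked directly from the data. The chapter's code is in R; here is an equivalent sketch in Python (the helper name is ours). It recomputes the three one-sample \(t\) statistics from the per-subject difference scores and squares them:

```python
import math

def one_sample_t(diffs):
    """One-sample t statistic against mu = 0, with n-1 degrees of freedom."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

A = [10, 8, 11, 9, 10]    # no distraction, no reward
B = [5, 4, 3, 4, 2]       # distraction, no reward
C = [12, 13, 14, 11, 13]  # no distraction, reward
D = [9, 8, 10, 11, 12]    # distraction, reward

# Per-subject difference scores for each effect.
distraction = [(a + c) / 2 - (b + d) / 2 for a, b, c, d in zip(A, B, C, D)]
reward      = [(c + d) / 2 - (a + b) / 2 for a, b, c, d in zip(A, B, C, D)]
interaction = [(a - b) - (c - d)         for a, b, c, d in zip(A, B, C, D)]

for name, d in [("distraction", distraction), ("reward", reward),
                ("interaction", interaction)]:
    t = one_sample_t(d)
    print(name, round(t, 4), round(t * t, 3))   # t, and F = t^2
```

The three squared \(t\) values match the \(F\) values reported by the ANOVA below (58.70, 70.13, and 6.22).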
2x2 Repeated Measures ANOVA
We just showed how a 2x2 repeated measures design can be analyzed using paired-sampled \(t\)-tests. We broke up the analysis into three parts. The main effect for distraction, the main effect for
reward, and the 2-way interaction between distraction and reward. We claimed the results of the paired-samples \(t\)-test analysis would mirror what we would find if we conducted the analysis using
an ANOVA. Let’s show that the results are the same. Here are the results from the 2x2 repeated-measures ANOVA, using the aov function in R.
A <- c(10,8,11,9,10) #nD_nR
B <- c(5,4,3,4,2) #D_nR
C <- c(12,13,14,11,13) #nD_R
D <- c(9,8,10,11,12) #D_R
Number_spotted <- c(A, B, C, D)
Distraction <- rep(rep(c("No Distraction", "Distraction"), each=5),2)
Reward <- rep(c("No Reward","Reward"),each=10)
Subjects <- rep(1:5,4)
Distraction <- as.factor(Distraction)
Reward <- as.factor(Reward)
Subjects <- as.factor(Subjects)
rm_df <- data.frame(Subjects, Distraction, Reward, Number_spotted)
aov_summary <- summary(aov(Number_spotted~Distraction*Reward +
                           Error(Subjects/(Distraction*Reward)), data=rm_df))
aov_summary
                   Df Sum Sq Mean Sq   F value    Pr(>F)
Residuals           4   3.70   0.925        NA        NA
Distraction         1  92.45  92.450 58.698413 0.0015600
Residuals           4   6.30   1.575        NA        NA
Reward              1 110.45 110.450 70.126984 0.0011122
Residuals           4   6.30   1.575        NA        NA
Distraction:Reward  1  14.45  14.450  6.215054 0.0672681
Residuals           4   9.30   2.325        NA        NA
Let’s compare these results with the paired-samples \(t\)-tests.
Main effect of Distraction: Using the paired samples \(t\)-test, we found \(t\)(4) =7.6615, \(p\)=0.00156. Using the ANOVA we found, \(F\)(1,4) = 58.69, \(p\)=0.00156. See, the \(p\)-values are the
same, and \(t^2 = 7.6615^2 = 58.69 = F\).
Main effect of Reward: Using the paired samples \(t\)-test, we found \(t\)(4) =8.3742, \(p\)=0.001112. Using the ANOVA we found, \(F\)(1,4) = 70.126, \(p\)=0.001112. See, the \(p\)-values are the
same, and \(t^2 = 8.3742^2 = 70.12 = F\).
Interaction effect: Using the paired samples \(t\)-test, we found \(t\)(4) =2.493, \(p\)=0.06727. Using the ANOVA we found, \(F\)(1,4) = 6.215, \(p\)=0.06727. See, the \(p\)-values are the same, and
\(t^2 = 2.493^2 = 6.215 = F\).
There you have it. The results from a 2x2 repeated measures ANOVA are the same as you would get if you used paired-samples \(t\)-tests for the main effects and interactions. | {"url":"https://stats.libretexts.org/Bookshelves/Applied_Statistics/Answering_Questions_with_Data_-__Introductory_Statistics_for_Psychology_Students_(Crump)/09%3A_Factorial_ANOVA/9.05%3A_Simple_analysis_of_2x2_repeated_measures_design","timestamp":"2024-11-09T22:22:25Z","content_type":"text/html","content_length":"160503","record_id":"<urn:uuid:d4fa6cb6-08c6-4c27-9b2c-0f98162ddab9>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00272.warc.gz"} |
Fast Fading Channel Estimation using Kalman Filter for Turbo Receivers
Volume 03, Issue 04 (April 2014)
DOI : 10.17577/IJERTV3IS041372
L. D. Angel, R. Eveline Pregitha, 2014, Fast Fading Channel Estimation using Kalman Filter for Turbo Receivers, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT) Volume 03, Issue 04
(April 2014),
• Open Access
• Total Downloads : 251
• Authors : L. D. Angel, R. Eveline Pregitha
• Paper ID : IJERTV3IS041372
• Volume & Issue : Volume 03, Issue 04 (April 2014)
• Published (First Online): 23-04-2014
• ISSN (Online) : 2278-0181
• Publisher Name : IJERT
• License: This work is licensed under a Creative Commons Attribution 4.0 International License
Fast Fading Channel Estimation using Kalman Filter for Turbo Receivers
L. D. Angel, *R. Eveline Pregitha
PG Student, Department of Electronics and Communication Engineering, James college of Engineering and Technology, Navalcaud. Tamilnadu, India
*Department of Electronics and Communication Engineering, James college of Engineering and Technology, Navalcaud, Tamilnadu, India
Abstract – An efficient, low-complexity, and near-optimal approach to pilot-assisted fast-fading channel estimation is presented for single-carrier modulation with a turbo receiver. The method is
applicable to higher-order modulation schemes, where the detector is sensitive to the estimation error. A doubly selective fast-fading channel is estimated using a fixed-lag Kalman filter. The Kalman
filtering is followed by a zero-phase low-pass filter which acts as a smoother. A method for designing the smoother is introduced. Block processing is used to reduce the transient effects of the
zero-phase filter at the edges of the symbol blocks. A first-order autoregressive (AR) model is fitted to the channel variations. For 64 Quadrature Amplitude Modulation (QAM), a complex exponential
basis expansion model (CE-BEM) is exploited to capture the varying gains. The long memory of the smoother reduces the error-rate floor associated with low-order channel estimation models. The
applicability of the method to the 64-QAM scheme is shown through simulation experiments.
Key words- Basis expansion models (BEMs), channel estimation, fast-fading channels, Kalman filters (KFs), turbo equalization.
1. INTRODUCTION
Channel estimation techniques are widely used in turbo receivers to estimate the parameters of time- and frequency-selective fading channels in wireless systems. In turbo receivers, soft
information is exchanged iteratively between the equalizer and the decoder in order to improve performance [1]. Blind and semiblind techniques for channel estimation are employed for slow-fading
channels, but their performance degrades when applied to higher-order modulation schemes, where the channel estimation needs to be more accurate.
An efficient, low-complexity channel estimator with near-optimal error performance is presented. It can be used in high-speed, high-data-rate vehicular communication systems. Other applications
include control signaling for remotely operated aerial vehicles, high-reliability and rapid communications for emergency vehicles, video conferencing on high-speed trains, etc., where communication
must be performed over rapidly varying channels.
The channel estimation and equalization are performed separately using two cascaded low-order Kalman filters. A zero-phase filter (ZPF) is used as a smoother in order to suppress the out-of-band
estimation errors from the channel-estimator KF's output. The combination of the ZPF and the KF reduces the estimation error. The smoothing function is applied to each channel path independently;
hence this method maps efficiently onto multicore implementations.
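As a rough illustration of Kalman tracking of a fading channel (reduced here to a plain scalar Kalman filter, without the fixed lag or the zero-phase smoother the paper describes), the Python sketch below tracks a single real-valued AR(1) channel tap observed through unit pilot symbols. All parameter values are illustrative assumptions, not from the paper:

```python
import random, math

random.seed(0)
a = 0.99                  # AR(1) coefficient of the tap process (assumed)
q = 1 - a * a             # process-noise variance, giving unit tap power
r = 0.1                   # measurement-noise variance (assumed)
N = 2000

# Simulate one real-valued fading tap g(n) = a*g(n-1) + w(n), observed through
# all-ones pilot symbols: y(n) = g(n) + v(n).
g, y, state = [], [], 0.0
for _ in range(N):
    state = a * state + random.gauss(0.0, math.sqrt(q))
    g.append(state)
    y.append(state + random.gauss(0.0, math.sqrt(r)))

# Scalar Kalman filter for the AR(1) state.
ghat, P, est = 0.0, 1.0, []
for n in range(N):
    ghat, P = a * ghat, a * a * P + q          # predict
    K = P / (P + r)                            # Kalman gain
    ghat = ghat + K * (y[n] - ghat)            # update with the new observation
    P = (1.0 - K) * P
    est.append(ghat)

mse_kf = sum((e, t) and (e - t) ** 2 for e, t in zip(est, g)) / N
mse_raw = sum((o - t) ** 2 for o, t in zip(y, g)) / N
print(mse_kf, mse_raw)   # the filtered estimate is much closer to the true tap
```

In the paper's setting the taps are complex, several taps are tracked jointly, and a fixed-lag formulation plus a zero-phase low-pass smoother further reduce the estimation error.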
The outline of this paper is as follows. In Section II, the proposed model and its properties are discussed. In Section III, the simulation results of the proposed model are shown. In Section IV,
some concluding remarks are given.
2. PROPOSED SYSTEM
In this section, the formulation of the proposed model is stated. The transmitter structure, channel model, receiver, zero-phase filter design, and iterative equalizer and decoder are described.
1. Transmitter
A bit-interleaved coded modulation system transmitting over a time-varying fading channel is considered, as shown in Fig. 1. The notation for single-carrier signaling and samples follows [6]. A block of independent data bits {b(k), k = 1, 2, ..., N_d} is encoded by a convolutional encoder with code rate R. The encoded sequence c(k) is given to a bitwise random interleaver of length N_i, generating the interleaved coded sequence {c(k), k = 1, 2, ..., N_i}. The interleaved data are modulated according to some constellation X, mapping every N_mod bits into a constellation point. Following the time-multiplexed training scheme in [15], a sequence of l_p pilot symbols is periodically inserted per l_s data symbols to form the transmit sequence {s(n), n = 1, 2, ..., N}. The symbol sequence s(n) is assumed to be zero mean with unit power.
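As an illustration of the time-multiplexed training scheme described above, the sketch below inserts a pilot block before every l_s data symbols. The pilot values and block sizes here are made-up placeholders for illustration, not the paper's actual training sequence.

```python
# Hypothetical pilot-insertion helper; 'p' marks a pilot symbol.
def insert_pilots(data, pilots, ls):
    """Insert the pilot block before every ls data symbols."""
    out = []
    for i in range(0, len(data), ls):
        out.extend(pilots)
        out.extend(data[i:i + ls])
    return out

frame = insert_pilots(list(range(8)), ['p', 'p'], ls=4)
print(frame)  # ['p', 'p', 0, 1, 2, 3, 'p', 'p', 4, 5, 6, 7]
```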
2. Channel Model
Fig. 1. Transmitter structure
A doubly selective multipath channel can be modeled as a linear time-varying FIR filter with L+1 taps. The discrete-time signal at the receiver input can be expressed as

y(n) = Σ_{l=0}^{L} g(n; l) s(n − l) + v(n) = g^T(n) s(n) + v(n) (1)

for n = 1, 2, ..., N, where v(n) denotes zero-mean complex white Gaussian noise with variance σ_v^2, and

g(n) := [g(n; 0) g(n; 1) ... g(n; L)]^T (2)

Here, a complex exponential basis expansion model (CE-BEM) with Q = 1 basis function is used for 4-QAM and 64-QAM. The channel impulse response g(n; l) is expressed in terms of the CE-BEM coefficients h_q(n; l) as

g(n; l) = Σ_q h_q(n; l) e^{jω_q n} (3)

Using (1) and (3), the received signal is given as

y(n) = Σ_{l=0}^{L} Σ_q h_q(n; l) e^{jω_q n} s(n − l) + v(n) (4)

3. Receiver
The turbo receiver consists of channel estimator, equalizer, and convolutional decoder modules, each exchanging soft information with the others. Channel estimation is done with pilots. The pilots contain zero information for the decoder and are removed from the equalizer output before being forwarded to the decoder.
Data symbols are detected with the equalizer. The output of the equalizer, including the data symbol estimates ŝ(n) and their estimated variance σ̂^2(n), is used to generate the extrinsic LLRs for the coded bits L^M_e{c(k)}. The extrinsic information on a given bit is obtained by subtracting the input LLR from the output LLR, to block the positive feedback that would prevent convergence. For each iteration, the extrinsic information from the detector/channel-estimation block is fed to the decoder, whereas the information from the decoder generated in the previous iteration is fed to the channel estimation block.
In the decoder, L^M_e{c(k)} is deinterleaved to provide the soft input to the SISO convolutional decoder. The SISO decoder produces LLR information on the coded bits, denoted L^D_a{c(k)}. This LLR information is then used to generate updated symbol estimates ŝ(n) and their variances σ̂^2(n), which are used by the channel estimator and the equalizer. At the same time, the extrinsic information L^D_e{c(k)} is extracted and fed back to the SISO demapper to further improve the next-round decisions on the data symbols.
4. Zero Phase Filter
The IIR filter of the ZPF is designed to match the characteristics of the fading process. The IIR filter parameters to be determined are the passband edge frequency f_p, passband ripple R_p, stopband attenuation R_s, and cutoff edge frequency f_a. The channel gain variations are band limited to the Doppler frequency f_D; thus, one sets f_p = f_D. The Doppler frequency is taken at its maximum value; the impact of this overestimation on the performance is demonstrated through simulation.
Consider a single channel path. The input to the ZPF is the estimated channel gain ĝ(n; l) from the KF, which is given by

ĝ(n; l) = g(n; l) + e(n; l) (5)

where e(n; l) denotes the estimation error of the KF for path l. The estimation error e(n; l) is assumed to be uncorrelated with g(n; l) and to have a constant PSD, i.e., S_ee(f; l) = σ_v^2, uniformly distributed over the normalized frequency range [−1/2, 1/2]. At each iteration, the estimated channel gains from the KF are passed through the ZPF with forward (and backward) magnitude response A(f). The PSD of the output of the ZPF for propagation path l, denoted S_g̃g̃(f; l), can be written as

S_g̃g̃(f; l) = A^4(f) [S_gg(f; l) + S_ee(f; l)] (6)

where S_gg(f; l) is the PSD of g(n; l), band limited to [−f_D, f_D].
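To make the channel model concrete, the following sketch synthesizes a received sequence in the spirit of equations (1), (3), and (4) with Q = 1. All sizes, coefficient values, and basis frequencies here are arbitrary choices for illustration (and the BEM coefficients are held time-invariant for simplicity); they are not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, Q = 64, 2, 1
omega = 2 * np.pi * np.arange(-Q, Q + 1) / N       # assumed basis frequencies
h = 0.5 * rng.standard_normal((2 * Q + 1, L + 1))  # assumed BEM coefficients

def gain(n, l):
    """g(n; l) as in eq. (3), with time-invariant coefficients h_q(l)."""
    return sum(h[q, l] * np.exp(1j * omega[q] * n) for q in range(2 * Q + 1))

# 4-QAM symbols with unit power, plus white complex Gaussian noise (eq. (1))
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) / np.sqrt(2)
v = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Eq. (4): y(n) = sum_l sum_q h_q(n; l) e^{j w_q n} s(n - l) + v(n)
y = np.array([sum(gain(n, l) * s[n - l] for l in range(L + 1) if n - l >= 0)
              for n in range(N)]) + v
print(y.shape)  # (64,)
```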
Fig. 2. Receiver structure
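The zero-phase smoothing step of eqs. (5)-(6) can be imitated numerically with forward-backward IIR filtering: a noisy channel-gain estimate is band-limited, so low-pass filtering it in both directions removes out-of-band error without phase distortion. All parameter values below (Doppler, noise level, filter order, ripple) are assumed for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.signal import ellip, filtfilt

rng = np.random.default_rng(0)
n = np.arange(4000)
f_D = 0.01                                   # assumed normalized Doppler
g = np.exp(2j * np.pi * 0.5 * f_D * n)       # band-limited channel gain
e = 0.3 * (rng.standard_normal(n.size) + 1j * rng.standard_normal(n.size))
g_hat = g + e                                # noisy KF output, as in eq. (5)

# Fifth-order elliptic IIR component; filtfilt runs it forward and backward,
# so the effective magnitude response is squared (PSD scaled by A^4(f)).
b, a = ellip(5, 0.1, 40, Wn=2 * f_D, btype='low', fs=1.0)
g_smooth = filtfilt(b, a, g_hat.real) + 1j * filtfilt(b, a, g_hat.imag)

mse_before = np.mean(np.abs(g_hat - g) ** 2)
mse_after = np.mean(np.abs(g_smooth - g) ** 2)
print(mse_before, mse_after)   # the smoothed estimate has much lower error
```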
3. SIMULATION RESULTS
The performance of this method is demonstrated by evaluating the BER versus SNR for 64-QAM constellations.
Fig. 3. BER versus Eb/N0 for a 64-QAM receiver
In Fig. 3, the case of a 64-QAM scheme is shown, where the performances of the Q=1 and Q=3 models are compared with the perfect-CSI receiver. The EKF method did not reliably converge for the 64-QAM scheme; therefore, the corresponding results are not included in this figure. It can be seen that the Q=1 model performs almost as well as the CE-BEM for BER < 10^-6 but starts to deteriorate afterward. With the proposed method, all the BER curves are within 0.3 dB of the perfect-channel receiver. In addition, the system convergence is fast: the average number of iterations it takes for the system to converge was approximately three per trial for the case of the 64-QAM receiver.
The 64-QAM setup was also used to contrast the performance of the ZPF with that of FIR and IIR filters in Fig. 4, to justify the use of a ZPF. The IIR filter is designed using the same
specifications as the IIR component of the ZPF, except for the passband ripple and stopband
attenuation in decibels being doubled (since the magnitude response of a ZPF is equivalent to two cascaded component IIR filters). The FIR filter was designed using the least squares method,
where the
parameters were selected to be the same as the ZPF designed previously. It can be seen that it requires 2000 taps for an FIR filter to achieve the same performance as a ZPF with only a
fifth-order component IIR filter, which is an obvious cost savings. Moreover, the IIR filter introduces a phase distortion that significantly degrades the performance of the receiver.
Fig. 4. BER versus Eb/N0 for a 64-QAM scheme, comparing ZPF, ordinary IIR, and FIR filters.
As shown, the proposed method provides excellent performance at the SNR values required for low error reception.
4. CONCLUSION
A low-cost ZPF is applied to the output of a channel-estimator KF to accurately estimate a fast-fading channel in a turbo receiver. The long memory of the smoother can reduce the estimation error to within 2 dB of the Wiener bound without using high-order KFs. Simulation results for a 64-QAM receiver are shown.
REFERENCES
1. M. Speth, S. Fechtel, G. Fock, and H. Meyr, Optimum receiver design for wireless broad-band systems using OFDM. I, IEEE Trans. Commun., vol. 47, no. 11, pp. 1668-1677, Nov. 1999.
2. X. Huang and H. C. Wu, Robust and efficient intercarrier interference mitigation for OFDM systems in time-varying fading channels, IEEE Trans. Veh. Technol., vol. 56, no. 5, pp. 2517-2528, Sep.
3. J. K. Tugnait, S. He, and H. Kim, Doubly selective channel estimation using exponential basis models and subblock tracking, IEEE Trans. Signal Process., vol. 58, no. 3, pp. 1275-1289, Mar. 2010.
4. X. Li and T. F. Wong, Turbo equalization with nonlinear Kalman filtering for time-varying frequency-selective fading channels, IEEE Trans. Wireless Commun., vol. 6, no. 2, pp. 691-700, Feb. 2007.
5. H. Kim and J. K. Tugnait, Turbo equalization for doubly-selective fading channels using nonlinear Kalman filtering and basis expansion models, IEEE Trans. Wireless Commun., vol. 9, no. 6, pp. 2076-2087, Jun. 2010.
6. P. Wan, M. McGuire, and X. Dong, Near-optimal channel estimation for OFDM in fast-fading channels, IEEE Trans. Veh. Technol., vol. 60, no. 8, pp. 3780-3791, Oct. 2011.
7. M. Russell and G. Stuber, Interchannel interference analysis of OFDM in a mobile environment, in Proc. IEEE 45th Veh. Technol. Conf., Jul. 1995, vol. 2, pp. 820-824.
8. Q. Guo, L. Ping, and D. Huang, A low-complexity iterative channel estimation and detection technique for doubly selective channels, IEEE Trans. Wireless Commun., vol. 8, no. 8, pp. 4340-4349, Aug. 2009.
9. M. K. Tsatsanis and Z. Xu, Pilot symbol assisted modulation in frequency selective fading wireless channels, IEEE Trans. Signal Process., vol. 48, no. 8, pp. 2353-2365, Aug. 2000.
10. V. Kafedziski and D. Morrell, Optimal adaptive equalization of multipath fading channels, in Conf. Rec. 28th Asilomar Conf. Signals, Syst., Comput., 1994, vol. 2, pp. 1443-1447.
11. K. E. Baddour and N. C. Beaulieu, Autoregressive modeling for fading channel simulation, IEEE Trans. Wireless Commun., vol. 4, no. 4, pp. 1650-1662, Jul. 2005.
12. G. Giannakis and C. Tepedelenlioglu, Basis expansion models and diversity techniques for blind identification and equalization of time-varying channels, Proc. IEEE, vol. 86, no. 10, pp.
1969-1986, Oct. 1998.
13. C. Komninakis, C. Fragouli, A. H. Sayed, and R. D. Wesel, Multi-input multi-output fading channel tracking and equalization using Kalman estimation, IEEE Trans. Signal Process., vol. 50, no. 5,
pp. 1065-1076, May 2002.
14. X. Ma, G. B. Giannakis, and S. Ohno, Optimal training for block transmissions over doubly selective wireless fading channels, IEEE Trans. Signal Process., vol. 51, no. 5, pp. 1351-1366, May 2003.
15. J. Hua, L. Meng, X. Xu, D. Wang, and X. You, Novel scheme for joint estimation of SNR, Doppler, and carrier frequency offset in double selective wireless channels, IEEE Trans. Veh. Technol., vol.
58, no. 3, pp. 1204-1217, Mar. 2009.
Top 20 SAT Mathematics Tutors Near Me in Oxford
Top SAT Mathematics Tutors serving Oxford
Sean: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...are so many aspects to taking these exams ranging from test taking strategies, the material itself, as well as preparation right before the tests themselves. I believe that every student requires
a different teaching strategy, and I believe in treating every student as an individual rather than a collective. In terms of me personally, I...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Math
• SAT Writing and Language
• SAT Reading
• +28 subjects
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Math
• SAT Writing and Language
• SAT Reading
• +99 subjects
Ben: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...quickly feel like it is too late to catch up. My goal is to work together to pick up the pieces that were lost or are close, but not quite there: learning the way you want to learn and the way you
learn best rather than having topics taught at you. For standardized testing: I...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Reading
• SAT Writing and Language
• SAT Math
• +50 subjects
Joy: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...the student in times when she faltered that she can think through a math problem. As long as she felt grounded enough to access her innate understanding of the world around us, she progressed.
Later on, my extensive experience in the classroom as a Jumpstart Corps Member taught me to adapt my teaching style to...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Reading
• SAT Math
• SAT Writing and Language
• +99 subjects
Taariq: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...at Duke. I have also been a middle school and high school tutor throughout my time at Duke through the Program in Education. One of my crowning achievements at Duke was winning the DT Stallings
Award, which is awarded to a Duke senior who has shown sustained and dedicated service to tutoring local school children....
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Reading
• SAT Math
• SAT Writing and Language
• +27 subjects
Mark: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...am a recent Yale graduate with a B.S in chemical engineering. I have over 5 years of experience tutoring a wide range of subjects, and I am very passionate about math and science. My favorite part
of tutoring is instilling confidence into students and making them feel that they can understand and enjoy a subject.
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Reading
• SAT Math
• Physical Science
• +39 subjects
Parisa: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...and I also teach debate in local schools in Charlottesville. I enjoy working with students of all ages and have experience teaching students from the elementary school level to the college level.
I've worked one-on-one both with friends and with my students on various subjects, including writing, economics, math, and standardized testing. Some qualifications: I...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Reading
• SAT Writing and Language
• SAT Math
• +26 subjects
Cameron: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...college I tutored two kids in 6th through 8th grade math. I really enjoyed helping them with their homework assignments, preparing them for exams, and watching them improve in their courses.
During this time, I learned how to be a very patient instructor and an effective communicator. As my dad was a high school math...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Writing and Language
• SAT Math
• Pharmacology
• +24 subjects
Diptesh: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...mathematics and science tutoring as well as standardized exam preparation. In the past, I have helped students achieve significant improvements and success on the SAT, SHSAT, Regents and other
exams! I truly believe education is a mutual learning and teaching opportunity and enjoy helping my students achieve a thorough understanding (those "Aha!" moments) and appreciation...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Reading
• SAT Writing and Language
• SAT Math
• +88 subjects
Linda: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...goal is a matter of hard work and dedication rather than intelligence. I work with all students of varying levels. For those that do not do well under the pressure of standardized tests, I strive
to make test-taking less anxiety-inducing and more approachable for my students. For students looking to gain an extra boost in...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Writing and Language
• SAT Math
• SAT Reading
• +13 subjects
Rachel: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...main duty is to work with a student as an individual and to adjust my own tutoring to fit the way they learn best. I believe that all students have what it takes to succeed in school; it's just a matter of finding the style of learning that is most effective for them. What works...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Writing and Language
• SAT Reading
• SAT Math
• +89 subjects
Spenser: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...I work with to perform to the best of their abilities on the standardized test that they are preparing to take. I hope that I can play a part in helping my students to attend the higher education
institution of their dreams, and I will work as hard as I need to in order to...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Writing and Language
• SAT Reading
• SAT Math
• +17 subjects
Emerson: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...equations might help somebody in the short term, but understanding why that word means what it means, or what an equation is actually depicting is worth the extra effort. Besides test-prep I think
I can be a great help in writing and editing, as I have experience in everything from short, timed essays such as...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Math
• SAT Reading
• SAT Writing and Language
• +66 subjects
Daniel: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...away. He's never sat down and broken it down. I never was that guy and I will never be that guy. I will always be that guy that stays after class to ask questions because I didn't get it right away. I will always be that guy that is in office hours from start to...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Math
• Pre-Algebra
• Test Prep
• +31 subjects
Jesica: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...and I love every minute of it. I like tackling new mathematics and helping students view their work at different angles that might help them come to an understanding of the subject. Being able to
do math, not just arithmetic, is an ability that every student has and I believe that together we can figure...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Math
• Technology and Coding
• Calculus 3
• +34 subjects
Mark: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...tools and skills. I work well with others, and I can connect with almost anybody. I am a natural leader. I'm a great motivator, and I have given several motivational speeches throughout my career.
My tutoring goals are simple: to equip students with skills to achieve their own goals, to brighten their day, and to...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Math
• Algebra 3/4
• Algebra 2
• +29 subjects
David: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...I like to present material in an engaging and relevant manner so that we can all learn and achieve success. One-on-one tutoring is an effective technique, particularly for explaining and modeling
math, and I enjoy seeing students' attitudes toward school brighten when they see immediate progress in their work. As a writing tutor, I like...
Education & Certification
• Drake University - Bachelor of Science, Education, History
• State Certified Teacher
Subject Expertise
• SAT Mathematics
• SAT Writing and Language
• SAT Math
• SAT Reading
• +37 subjects
Nicolas: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...Building off that experience, I will be working for Varsity Tutors, helping people prepare for the LSAT as well as the SAT and the ACT. I have really enjoyed my experience assisting students in
test preparation. In my experience, students first approach tests in an ad hoc fashion, and often with a fair bit of...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Reading
• SAT Math
• LSAT Logical Reasoning
• +26 subjects
Max: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...is passionate about education, particularly in helping high school students achieve their academic goals as they work their way towards college. I volunteer in New Haven as a math teacher and
coach at a public middle school (my team of 7th graders just won the 2016 New Haven regional championship!), and I've been tutoring middle...
Education & Certification
Subject Expertise
• SAT Mathematics
• SAT Reading
• SAT Writing and Language
• SAT Math
• +104 subjects
Isabel: Oxford SAT Mathematics tutor
Certified SAT Mathematics Tutor in Oxford
...and I believe an understanding of both can help us appreciate our surroundings. My favorite subjects to tutor are algebra and geometry because it is so easy to make use of them in everyday life.
When I tutor, I like to have my student identify what is confusing, work through the problem with my student,...
Education & Certification
• Harvard College - Bachelors, History of Art and Architecture
Subject Expertise
• SAT Mathematics
• SAT Math
• Geometry
• Arithmetic
• +11 subjects
Private SAT Mathematics Tutoring in Oxford
Receive personally tailored SAT Mathematics lessons from exceptional tutors in a one-on-one setting. We help you connect with the best tutor for your particular needs while offering flexible
scheduling to fit your busy life.
Your Personalized Tutoring Program and Instructor
Identify Needs
Our knowledgeable directors help you choose your tutor with your learning profile and personality in mind.
Customize Learning
Your tutor can customize your lessons and present concepts in engaging, easy-to-understand ways.
Increased Results
You can learn more efficiently and effectively because the teaching style is tailored to you.
Online Convenience
With the flexibility of online tutoring, sessions with your tutor can be arranged at a time that suits you.
Call us today to connect with a top Oxford SAT Mathematics tutor
5 Best Ways to Get Indices of True Values in a Binary List in Python
💡 Problem Formulation: Python developers often work with binary lists, where 0s represent False and 1s represent True values. The challenge is to extract the indices of all the True (1) values
within such a list. For example, given [1, 0, 1, 0, 1], the desired output would be [0, 2, 4].
Method 1: Using a for-loop with enumerate
This traditional method involves iterating over the list with a for-loop using the enumerate function to return the index and value, and collecting indices where values are true. It’s straightforward
and easily understandable even by beginners.
Here’s an example:
binary_list = [1, 0, 1, 0, 1]
indices = []
for index, value in enumerate(binary_list):
    if value:
        indices.append(index)
print(indices)
Output: [0, 2, 4]
This snippet declares a binary list and then iterates over it using a for-loop combined with enumerate, which yields both the index and value. If a true value (1) is found, the index is added to the
result list.
Method 2: Using a list comprehension
List comprehensions provide a more succinct and Pythonic alternative to for-loops, allowing the task to be completed in a single line of code. This method excels in readability and efficiency for
shorter lists.
Here’s an example:
binary_list = [1, 0, 1, 0, 1]
indices = [i for i, val in enumerate(binary_list) if val]
print(indices)
Output: [0, 2, 4]
The example features the use of a list comprehension to iterate over the enumerated binary list, selecting indices (i) where the corresponding value (val) is true (non-zero).
Method 3: Using numpy.where
The numpy library provides the numpy.where function, which is highly optimized and best suited for large datasets. This is the go-to method for performance-critical applications.
Here’s an example:
import numpy as np
binary_list = np.array([1, 0, 1, 0, 1])
indices = np.where(binary_list == 1)[0]
print(indices)
Output: [0 2 4]
By leveraging the numpy library, this code first converts the list into a numpy array, then uses np.where to find indices where entries equal 1, returning a numpy array with the results.
Method 4: Using the filter and lambda function combination
Combining filter with a lambda function allows for a functional programming approach to filtering indices. It’s a less common method but useful for scenarios that might later involve more complex
filtering conditions.
Here’s an example:
binary_list = [1, 0, 1, 0, 1]
indices = list(filter(lambda i: binary_list[i], range(len(binary_list))))
print(indices)
Output: [0, 2, 4]
The code example employs a lambda function to check each index in the binary list for a true value within a filter function call, yielding a filter object, which is then converted to a list of indices.
Bonus One-Liner Method 5: Using enumerate with a generator
Employing a generator expression with enumerate creates an iterator that yields indices of true values in the binary list, reducing memory usage for large lists.
Here’s an example:
binary_list = [1, 0, 1, 0, 1]
indices = (i for i, val in enumerate(binary_list) if val)
indices = list(indices)
print(indices)
Output: [0, 2, 4]
This compact example uses a generator expression in combination with enumerate for a memory-efficient resolution, and finally, converts the generator to a list.
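One caveat worth adding to Method 5 (not covered in the original post): a generator can only be consumed once, so convert it to a list first if the indices are needed more than once.

```python
binary_list = [1, 0, 1, 0, 1]
gen = (i for i, val in enumerate(binary_list) if val)
first = list(gen)
second = list(gen)  # the generator is already exhausted at this point
print(first, second)  # [0, 2, 4] []
```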
• Method 1: Using a for-loop with enumerate. Straightforward and ideal for beginners. Not as succinct as other methods.
• Method 2: Using a list comprehension. More Pythonic and readable for small to medium lists. Efficiency might decrease with larger datasets.
• Method 3: Using numpy.where. Highly efficient for large arrays. Requires numpy and additional memory for converting lists to numpy arrays.
• Method 4: Using the filter and lambda function combination. Good for expandable filtering conditions. May have less performance and readability compared to list comprehensions.
• Method 5: Using enumerate with a generator. Memory-efficient for very large lists. Less immediate in terms of understanding compared to a list comprehension.
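To compare the trade-offs summarized above, here is a rough benchmark sketch. The list size and repetition count are arbitrary, and the resulting timings are machine-dependent, so treat the numbers as indicative only.

```python
import timeit
import numpy as np

binary_list = [1, 0] * 10_000  # 20k elements, alternating True/False

def loop_method():
    out = []
    for i, v in enumerate(binary_list):
        if v:
            out.append(i)
    return out

def comprehension_method():
    return [i for i, v in enumerate(binary_list) if v]

arr = np.array(binary_list)  # conversion cost paid once, outside the loop

def numpy_method():
    return np.where(arr == 1)[0]

for fn in (loop_method, comprehension_method, numpy_method):
    print(f"{fn.__name__}: {timeit.timeit(fn, number=10):.4f}s")
```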
Practice Coding with the exercise "Ascii Graph"
The program:
Display the given list of points in an orthonormal basis as an ASCII graph.
The X axis is represented by dash characters -.
The Y axis is represented by vertical bar (pipe) characters |.
The origin of the graph is represented by a plus character +.
Every point of the given list is represented by a star character *.
Every empty cell of the graph is represented by a dot character ..
If one of the given points is on an axis, the star character * must be chosen over the one corresponding to the axis.
The size of the graph to display is defined by the following rules:
The minimal coordinate on the X axis is: (Minimal X axis coordinate of any given point and/or origin) - 1
The maximal coordinate on the X axis is: (Maximal X axis coordinate of any given point and/or origin) + 1
The minimal coordinate on the Y axis is: (Minimal Y axis coordinate of any given point and/or origin) - 1
The maximal coordinate on the Y axis is: (Maximal Y axis coordinate of any given point and/or origin) + 1
Input
Line 1: An integer N, the number of points on the graph.
Next N lines: Two integers x and y, separated by a space, giving the coordinates of each given point.
Output
Strings, each one representing one line of the ASCII graph.
Constraints
0 ≤ N ≤ 20
-20 ≤ x ≤ 20
-20 ≤ y ≤ 20
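A minimal Python sketch of one possible solution (an illustration, not a reference implementation): it includes the origin when padding the ranges by one, draws the axes, and lets stars override axis characters as the rules require.

```python
def ascii_graph(points):
    xs = [0] + [x for x, _ in points]
    ys = [0] + [y for _, y in points]
    x_min, x_max = min(xs) - 1, max(xs) + 1
    y_min, y_max = min(ys) - 1, max(ys) + 1
    rows = []
    for y in range(y_max, y_min - 1, -1):   # top row first
        row = []
        for x in range(x_min, x_max + 1):
            if (x, y) in points:
                row.append('*')             # points win over the axes
            elif x == 0 and y == 0:
                row.append('+')
            elif x == 0:
                row.append('|')
            elif y == 0:
                row.append('-')
            else:
                row.append('.')
        rows.append(''.join(row))
    return rows

# Example with the two points (1, 1) and (-2, -1):
for line in ascii_graph({(1, 1), (-2, -1)}):
    print(line)
# ...|..
# ...|*.
# ---+--
# .*.|..
# ...|..

# On CodinGame you would read the points from standard input instead:
# n = int(input())
# points = {tuple(map(int, input().split())) for _ in range(n)}
```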
5 girls and 10 boys sit at random in a row having 15 chairs num... | Filo
5 girls and 10 boys sit at random in a row having 15 chairs numbered as 1 to 15. The probability that the end seats are occupied by the girls and that between any two girls an odd number of boys sit, is
There are four gaps in between the girls where the boys can sit. Since the end seats are occupied by girls, let the number of boys in these gaps be 2x₁ + 1, 2x₂ + 1, 2x₃ + 1, 2x₄ + 1 (each odd), so that
(2x₁ + 1) + (2x₂ + 1) + (2x₃ + 1) + (2x₄ + 1) = 10, i.e., x₁ + x₂ + x₃ + x₄ = 3 with xᵢ ≥ 0
The number of solutions of the above equation
= coefficient of x³ in (1 − x)⁻⁴ = C(6, 3) = 20
Thus the boys and girls can sit in 20 × 5! × 10! ways
Total ways = 15!
Hence, the required probability = (20 × 5! × 10!) / 15! = 20/3003
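As a sanity check on the counting argument, the answer can be verified by brute force over all C(15, 5) choices of the girls' seats (the arrangement factors 5! and 10! cancel when working with seat sets):

```python
from itertools import combinations
from fractions import Fraction
from math import comb

favourable = 0
for seats in combinations(range(1, 16), 5):        # girls' chair numbers
    if seats[0] != 1 or seats[-1] != 15:
        continue                                   # girls must take both ends
    gaps = [b - a - 1 for a, b in zip(seats, seats[1:])]
    if all(g % 2 == 1 for g in gaps):              # odd number of boys per gap
        favourable += 1

print(favourable, Fraction(favourable, comb(15, 5)))  # 20 20/3003
```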
Updated On Dec 3, 2022
Topic Probability
Subject Mathematics
Class Class 11
A Design And Analysis Of Computer Experiments-based Approach To Approximate Infinite Horizon Dynamic Programming With Continuous State Spaces
Graduation Semester and Year
Document Type
Degree Name
Doctor of Philosophy in Industrial Engineering
Industrial and Manufacturing Systems Engineering
First Advisor
Victoria Chen
Dynamic programming (DP) is an optimization approach that transforms a complex problem into a sequence of simpler sub-problems solved at different stages. The original DP approach used Bellman's equation to compute the 'cost-to-go' function. This method is useful when considering a few states and decisions. However, when dealing with high-dimensional data sets with continuous state spaces, the limitation called the 'curse of dimensionality' obstructs the solution, as the size of the state space grows exponentially. Given recent advances in computational power, approximate dynamic programming (ADP) has been introduced; rather than seeking to compute the future value function exactly at each point of the state space, it opts for an approximation of the future value function over the domain of the state space. Two main components of the ADP method that have been challenging in existing ADP studies are discretization of the state space and estimation of the cost-to-go, or future value, function.
The first part of this dissertation research seeks to develop a solution method for infinite-horizon dynamic programming, called the Design and Analysis of Computer Experiments (DACE)-based approach to ADP. Multivariate Adaptive Regression Splines (MARS), a flexible, nonparametric statistical modeling tool, is used to approximate future value functions in stochastic dynamic programming (SDP) problems with continuous state variables. The training data set is updated sequentially based on the conditions. This sequential grid discretization explores the state space and provides a statistically parsimonious ADP methodology that 'adaptively' captures the important variables from the state space. Three different algorithms are presented in this dissertation, based on the conditions of the sampling process of the training data set. Comparisons are presented on a forward simulation with 12 time periods.
The second part of the dissertation research develops a batch-mode reinforcement learning (RL) method using MARS as an approximator to solve the same problem as in the first part. The main difference between these two methods is the input variables used to approximate the future value function. In the batch-mode RL method, the state-action space is used, so the estimated function (output) is a function of both state and action variables. By contrast, the DACE-based ADP uses only state variables, and the estimated future value function is based only on them. A study of state-action discretization is presented in this dissertation. Two different designs are used, Monte Carlo sampling and a Sobol' sequence design. Comparisons are presented on the same forward simulation.
The third part develops a two-stage framework for adaptive design for controllability, with a system of plug-in hybrid electric vehicle charging stations as a case study. The second-stage dynamic control problem is formulated and initially solved as a mean value problem using linear programming. After that, a DACE approach is used to develop a metamodel of the second-stage solution based on the possible solutions from the first stage. The metamodel is then returned to the first stage, at which point the final solution is obtained. DACE helps reduce reliance on time-consuming computer models by replacing the loop between the first and second stages with a constraint generated from the gradient of the approximation function. Moreover, the metamodel can give a more accessible description of the second stage.
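As a generic illustration (not code from the dissertation), Bellman's cost-to-go recursion can be solved exactly on a small discrete problem by value iteration; it is precisely this exhaustive sweep over every state that becomes infeasible as the state space grows, which is what motivates the ADP approximations described above. The toy MDP below is hypothetical.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-9, max_iter=10_000):
    """Solve Bellman's equation V(s) = max_a [R(s,a) + gamma * V(P(s,a))]
    on a small deterministic MDP by repeated sweeps over the state space.

    P[s][a] is the successor state of taking action a in state s;
    R[s][a] is the immediate reward.
    """
    V = [0.0] * len(P)
    for _ in range(max_iter):
        V_new = [
            max(R[s][a] + gamma * V[P[s][a]] for a in range(len(P[s])))
            for s in range(len(P))
        ]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new
    return V

# Tiny example: from state 0 you may move to state 1 (reward 1) or stay
# (reward 0); state 1 is absorbing with reward 0.
V = value_iteration(P=[[1, 0], [1]], R=[[1.0, 0.0], [0.0]])
```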
Engineering | Operations Research, Systems Engineering and Industrial Engineering
Degree granted by The University of Texas at Arlington
Recommended Citation
Kulvanitchaiyanunt, Asama, "A Design And Analysis Of Computer Experiments-based Approach To Approximate Infinite Horizon Dynamic Programming With Continuous State Spaces" (2014). Industrial,
Manufacturing, and Systems Engineering Dissertations. 35.
Lego Blocks | HackerRank
You have an infinite number of 4 types of lego blocks of sizes given as (depth x height x width):
d h w
Using these blocks, you want to make a wall of height and width . Features of the wall are:
- The wall should not have any holes in it.
- The wall you build should be one solid structure, so there should not be a straight vertical break across all rows of bricks.
- The bricks must be laid horizontally.
How many ways can the wall be built?
The height is and the width is . Here are some configurations:
These are not all of the valid permutations. There are valid permutations in all.
Function Description
Complete the legoBlocks function in the editor below.
legoBlocks has the following parameter(s):
• int n: the height of the wall
• int m: the width of the wall
- int: the number of valid wall formations modulo
The first line contains the number of test cases .
Each of the next lines contains two space-separated integers and .
STDIN Function
----- --------
4 t = 4
2 2 n = 2, m = 2
3 2 n = 3, m = 2
2 3 n = 2, m = 3
4 4 n = 4, m = 4
For the first case, we can have:
For the second case, each row of the wall can contain either two blocks of width 1, or one block of width 2. However, the wall where all rows contain two blocks of width 1 is not a solid one as it
can be divided vertically. Thus, the number of ways is and .
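The standard dynamic-programming solution has three stages: count single-row tilings of each width with blocks of width 1 to 4 (a tetranacci-style recurrence), raise that count to the n-th power to count walls where vertical breaks are allowed, then subtract walls whose leftmost full-height break follows a solid sub-wall. The modulus 10^9 + 7 below is the usual HackerRank choice; it is elided in the extracted problem text, so treat it as an assumption.

```python
MOD = 10**9 + 7  # assumed: the standard HackerRank modulus (elided above)

def lego_blocks(n, m):
    # row[w]: ways to tile one row of width w with blocks of width 1..4
    row = [0] * (m + 1)
    row[0] = 1
    for w in range(1, m + 1):
        row[w] = sum(row[w - b] for b in range(1, 5) if b <= w) % MOD
    # total[w]: walls of height n and width w, vertical breaks allowed
    total = [pow(r, n, MOD) for r in row]
    # solid[w]: walls with no straight vertical break, by subtracting walls
    # whose leftmost break follows a solid left part of width k
    solid = [0] * (m + 1)
    for w in range(1, m + 1):
        solid[w] = (total[w]
                    - sum(solid[k] * total[w - k] for k in range(1, w))) % MOD
    return solid[m]
```

For the four sample inputs, this returns 3, 7, 9, and 3375 respectively, matching the second sample's reasoning: with n = 3 and m = 2 there are 2^3 = 8 unrestricted walls, minus the one wall that splits vertically, giving 7.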
Divide and Conquer: Easy Methods to Help You Solve Fractions - Dynalabs News
Fractions can be a daunting topic for many, whether you’re a seasoned mathematician or a novice learner. The mere mention of fractions can induce stress, confusion, and frustration, often leading to
a sense of defeat before even attempting to solve them. But fret not, dear reader, for there are methods to help you conquer these tricky little numbers. In this article, we’ll explore easy and
effective tactics that will reduce your anxiety and make solving fractions feel like a breeze. So, grab a pencil and paper and let’s divide and conquer.
The Art of Fractional Division: A Primer for Beginners
Many people find fractions to be a confusing and daunting subject. However, with some guidance and practice, dividing fractions can be easy.
When dividing fractions, the first step is to flip the second fraction and multiply it by the first. This can be remembered by the acronym KCF, which stands for “Keep, Change, Flip”.
A simple example: dividing 2/3 by 1/4. Using the KCF method, we keep the first fraction and change the division symbol to multiplication. Then, we flip the second fraction to become 4/1. The equation
now becomes: 2/3 x 4/1 = 8/3, or 2 and 2/3.
Practice with simple examples and gradually move on to more complex ones to build your skills in the art of fractional division.
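The KCF example above can be checked with Python's standard-library `fractions` module (a sketch added for illustration, not part of the original article):

```python
from fractions import Fraction

# Keep 2/3, Change division to multiplication, Flip 1/4 into 4/1
result = Fraction(2, 3) * Fraction(4, 1)
print(result)  # 8/3, i.e. 2 and 2/3
```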
Simplifying Fractions: Tips and Hacks for Quick Calculation
When fractions have large numbers or complicated expressions, they can be simplified to make calculations easier.
The first step in simplification is to find a common factor between the numerator and the denominator, and then divide both by that factor.
Another way to simplify fractions is to divide both the numerator and denominator by their greatest common factor (GCF). To find the GCF, list all the factors of both numbers, and identify the largest factor they have in common. For example, the GCF of 12 and 16 is 4.
It is also useful to memorize common fractions and their simplified equivalents, such as 1/2 = 2/4 = 4/8.
When working with mixed numbers, convert them into improper fractions and simplify before converting back if needed.
With practice, simplifying fractions will become second nature and can save time and effort in calculations.
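The GCF-based simplification described above can be written as a short helper (an illustrative sketch using the standard library's `math.gcd`):

```python
from math import gcd

def simplify(numerator, denominator):
    # Divide both parts by their greatest common factor (GCF)
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

print(simplify(12, 16))  # (3, 4), since the GCF of 12 and 16 is 4
```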
Adding and Subtracting Fractions: Tricks to Take the Hassle Out
Adding and subtracting fractions can be tricky, especially when the denominators are different. However, there are some tricks to make this process easier.
When adding or subtracting fractions with the same denominator, simply add or subtract the numerators and keep the denominator the same.
When adding or subtracting fractions with different denominators, first find a common denominator. This can be done by finding the least common multiple (LCM) of the denominators and multiplying both
the numerator and denominator of each fraction by the appropriate factor.
An alternative method is to use the cross-product method, which involves multiplying the numerator of one fraction by the denominator of the other, and vice versa. Then add or subtract the resulting
numerators and simplify if needed.
With regular practice, adding and subtracting fractions can become much less of a hassle.
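The cross-product method for addition described above can be sketched as follows (the final `gcd` reduction is an extra simplification step, per the earlier section):

```python
from math import gcd

def add_fractions(a, b, c, d):
    """a/b + c/d via the cross-product method, reduced to lowest terms."""
    numerator = a * d + c * b   # cross-multiply, then add
    denominator = b * d
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

print(add_fractions(1, 2, 1, 3))  # (5, 6)
```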
Multiplying and Dividing Fractions: Methods for Accurate Results
Multiplying and dividing fractions can seem complicated, but with the right approach, accuracy can be achieved without much struggle.
To multiply fractions, simply multiply the numerators and the denominators. If the resulting fraction can be simplified, do so.
Dividing fractions involves flipping the second fraction and multiplying it by the first, using the same KCF method used in fractional division.
Another method for division involves multiplying the first fraction by the reciprocal (flipped) of the second fraction. This is done by flipping the second fraction and multiplying it by the first.
The resulting fraction can then be simplified.
To ensure accuracy when multiplying and dividing fractions, double-check that all the numbers are correctly copied and written, simplify if possible, and check for common factors before presenting
your answer.
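Multiplication and reciprocal-based division can be captured in a couple of lines (a sketch; block and function names are my own):

```python
from fractions import Fraction

def multiply(a, b):
    return a * b

def divide(a, b):
    # Flip the second fraction (take its reciprocal) and multiply
    return a * Fraction(b.denominator, b.numerator)

print(divide(Fraction(2, 3), Fraction(1, 4)))  # 8/3
```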
Comparing Fractions: A Guide to Comparing Apples with Oranges
When comparing fractions, it is useful to find a common denominator to make the comparison easier.
If the denominators are the same, compare the numerators. If one numerator is larger, then that fraction is larger.
For fractions with different denominators, find a common denominator and compare the numerators. Another option is to convert fractions to decimals or percentages to make comparison easier.
A third approach is to convert fractions to a common unit, such as time or money. For example, 3/4 is equivalent to 75 cents if 1 whole is $1.
Practice comparing fractions with a variety of denominators and numerators to gain mastery.
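Comparing via a common denominator amounts to cross-multiplying, which avoids building the common denominator explicitly (an illustrative sketch):

```python
def compare_fractions(a, b, c, d):
    """Return -1, 0, or 1 as a/b is <, ==, or > c/d (b and d positive)."""
    left, right = a * d, c * b  # both fractions scaled onto denominator b*d
    return (left > right) - (left < right)

print(compare_fractions(3, 4, 2, 3))  # 1, i.e. 3/4 > 2/3
```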
Conquering Complex Fractions: Techniques for Tackling Tricky Problems
Complex fractions may appear daunting at first, but with some practice and patience, they can be mastered.
A complex fraction is one that has one or more fractions in the numerator and/or denominator. Start by simplifying all expressions in the numerator and denominator as much as possible.
Another technique is to multiply both the numerator and the denominator of the complex fraction by the common denominator of its inner fractions, which clears them in one step. When multiplying a whole number by a fraction in this process, first write the whole number as a fraction so the usual multiplication rule applies.
Combining steps or operations can be helpful when dealing with complex fractions, such as combining multiplication and division in the same step.
Practice with a variety of complex fractions to increase confidence and skill.
Mastering Fractional Equations: A Comprehensive Tutorial for Success
Fractional equations are mathematical expressions involving fractions and variables. To solve them, it is important to follow a few key steps.
First, clear the fractions by multiplying both sides of the equation by the common denominator. This transforms the equation into one with integers only.
Next, simplify the resulting equation and solve for the variable. If fractional solutions are still present, simplify further or convert them to decimals or mixed numbers.
It is important to check for extraneous solutions, which result from operations that cancel out a solution. These may occur when clearing the fractions, or when combining like terms.
Practice with fractional equations to develop mastery and confidence in this important mathematical concept.
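As a worked sketch of the clearing step (the equation x/2 + x/3 = 5 is my own illustrative example, not from the article): the LCM of the denominators 2 and 3 is 6, so multiplying both sides by 6 gives the integer equation 3x + 2x = 30, hence x = 6.

```python
from fractions import Fraction

# x/2 + x/3 = 5; multiply through by LCM(2, 3) = 6:
# 3x + 2x = 30, so x = 6. Verify by substituting back:
x = 6
check = Fraction(x, 2) + Fraction(x, 3)
print(check)  # 5
```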
Putting it all Together: Solving Real-World Problems with Fractional Math
Fractional math is not just an abstract concept; it has practical applications in everyday life.
For example, when cooking or baking, fractions are used to measure ingredients. When figuring out how much each person will get of a food, such as pizza, fractions are also used.
In construction, fractions are used for measurements. In finance, they are used for interest rates and percentages.
When working with real-world problems, start by identifying the question to be answered and the information provided. Then, use the appropriate fractional math method to solve the problem.
With practice, fractional math can be mastered and used to solve a wide variety of real-world problems and challenges.
In conclusion, the world of fractions may seem intimidating and overwhelming, but with these easy methods for solving them, you can undoubtedly conquer them. By taking the divide-and-conquer approach and breaking the problem down into manageable steps, you too can become a master of fractions. Remember to always simplify and reduce, use equivalent fractions when necessary, and never be afraid to seek help or use a calculator. Practice makes perfect, and with these tips, you will soon be solving fractions like a pro. So go forth, embrace the world of fractions, and divide and conquer.
5084 Speed Of A Common Snail to Millimeters Per Second
How many Millimeters Per Second in 5084 Speed Of A Common Snail? How to convert 5084 Speed Of A Common Snail to Millimeters Per Second(mm/s) ? What is 5084 Speed Of A Common Snail in Millimeters Per
Second? Convert 5084 Speed Of A Common Snail to mm/s. 5084 Speed Of A Common Snail to Millimeters Per Second(mm/s) conversion. 5084 Speed Of A Common Snail equals 5084 Millimeters Per Second, or 5084
Speed Of A Common Snail = 5084 mm/s.
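Since the page states that 5084 snail-speeds equal 5084 mm/s, this unit is defined as exactly 1 mm/s and the conversion is the identity (a trivial sketch based on that statement):

```python
# The "speed of a common snail" unit equals 1 mm/s, so converting to
# mm/s just multiplies by 1.
def snail_to_mm_per_s(speed_in_snails):
    return speed_in_snails * 1.0

print(snail_to_mm_per_s(5084))  # 5084.0
```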
The URL of this page is: https://www.unithelper.com/speed/5084-snail-mm_s/
TM PeriodDuration
From Earth Science Information Partners (ESIP)
The duration datatype represents a duration of time. The lexical representation for duration is the [ISO 8601] extended format PnYnMnDTnHnMnS, where nY represents the number of years, nM the number of months, nD the number of days, 'T' is the date/time separator, nH the number of hours, nM the number of minutes, and nS the number of seconds. The number of seconds can include decimal digits to arbitrary precision.
The duration P0Y0M0DT6H0M0S represents a frequency of every 6 hours:
• P = period
• 0Y = 0 years
• 0M = 0 month
• 0D = 0 days
• T = time
• 6H = 6 hours
• 0M = 0 minutes
• 0S = 0 seconds
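A minimal parser for this lexical form can be written with a regular expression (a sketch, not an official ESIP tool; since years and months have no fixed length in seconds, it only accepts them when they are zero):

```python
import re
from datetime import timedelta

_DUR = re.compile(
    r"P(?:(\d+)Y)?(?:(\d+)M)?(?:(\d+)D)?"
    r"(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?)?$"
)

def parse_duration(s):
    """Parse an ISO 8601 duration like 'P0Y0M0DT6H0M0S' into a timedelta."""
    m = _DUR.match(s)
    if not m:
        raise ValueError(f"not an ISO 8601 duration: {s!r}")
    y, mo, d, h, mi, sec = (float(g) if g else 0.0 for g in m.groups())
    if y or mo:
        raise ValueError("nonzero years/months need calendar context")
    return timedelta(days=d, hours=h, minutes=mi, seconds=sec)

print(parse_duration("P0Y0M0DT6H0M0S"))  # 6:00:00
```

Note that the 'M' before the 'T' separator is months while the 'M' after it is minutes; the regex distinguishes them by position.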
Please contribute!
Add a Horizontal Line to an Excel Chart - Peltier Tech
A common task is to add a horizontal line to an Excel chart. The horizontal line may reference some target value or limit, and adding the horizontal line makes it easy to see where values are above
and below this reference value. Seems easy enough, but often the result is less than ideal. This tutorial shows how to add horizontal lines to several common types of Excel chart.
We won’t even talk about trying to draw lines using the items on the Shapes menu. Since they are drawn freehand (or free-mouse), they aren’t positioned accurately. Since they are independent of the
chart’s data, they may not move when the data changes. And sometimes they just seem to move whenever they feel like it.
The examples below show how to make combination charts, where an XY-Scatter-type series is added as a horizontal line to another type of chart.
Add a Horizontal Line to an XY Scatter Chart
An XY Scatter chart is the easiest case. Here is a simple XY chart.
Let’s say we want a horizontal line at Y = 2.5. It should span the chart, starting at X = 0 and ending at X = 6.
This is easy, a line simply connects two points, right?
We set up a dummy range with our initial and final X and Y values (below, to the left of the top chart), copy the range, select the chart, and use Paste Special to add the data to the chart (see
below for details on Paste Special).
When the data is first added, the autoscaled X axis changes its maximum from 6 to 8, so the line doesn’t span the entire chart. We have to format the axis and type 6 into the box for Maximum. We
probably also want to remove the markers from our horizontal line.
Paste Special
If you don’t use Paste Special often, it might be hard to find. If you copy a range and use the right click menu on a chart, the only option is a regular Paste, and Excel doesn’t always correctly
guess how it should paste the data. So I always use Paste Special.
To find Paste Special, click on the down arrow on the Paste button on the Home tab of Excel’s ribbon. Paste Special is at the bottom of the pop-up menu.
You can also use the Excel 97-2003 menu-based shortcut, which is Alt + E + S (for Edit menu > Paste Special).
The tooltip below Paste Special in the menu indicates that you could also use Ctrl + Alt + V, but this shortcut doesn’t do anything for charts.
When the Paste Special dialog appears, make sure you select these options: Add Cells as a New Series, Y Values in Columns, Series Names in First Row, Categories (X Values) in First Column.
Click OK and the new series will appear in the chart.
Add a Horizontal Line to a Column or Line Chart
When you add a horizontal line to a chart that is not an XY Scatter chart type, it gets a bit more complicated. Partly it’s complicated because we will be making a combination chart, with columns,
lines, or areas for our data along with an XY Scatter type series for the horizontal line. Partly it’s complicated because the category (X) axis of most Excel charts is not a value axis.
As with the XY Scatter chart in the first example, we need to figure out what to use for X and Y values for the line we’re going to add. The Y values are easy, but the X values require a little
understanding of how Excel’s category axes work. Since the category axes of column and line charts work the same way, let’s do them together, starting with the following simple column and line charts.
Note in the charts above that the first and last category labels aren’t positioned at the corners of the plot area, but are moved inwards slightly. This is because column and line charts use a
default setting of Between Tick Marks for the Axis Position property. We can change the Axis Position to On Tick Marks, below, and the first and last category labels line up with the ends of the
category axis. The line chart looks okay, but we have cut off the outer halves of the first and last columns.
Let’s focus on a column chart (the line chart works identically), and use category labels of 1 through 5 instead of A through E. Excel doesn’t recognize these categories as numerical values, but we
can think of them as labeling the categories with numbers.
Now let’s label the points between the categories. Not only do we have halfway points between the categories (1.5, 2.5, 3.5, 4.5), we also have a half category label below the first category (0.5)
and another after the last category (5.5).
If the Axis Position property were set to On Tick Marks, our horizontal line starts at 1 (the first category number of 1) and ends at 5 (the last category number of 5). This would be wrong for a
column chart, but might be acceptable for a line chart.
Here is our desired horizontal line, stretching from 0.5 to 5.5
So let’s use this data and the same approach that we used for the scatter chart, at the beginning of this tutorial.
Copy the range, and paste special as new series. We’ve added another set of columns or another line to the chart.
Right click on the added series, and choose Change Series Chart Type from the pop-up menu.
In the Change Chart Type dialog, select the XY Scatter With Straight Lines And Markers chart type. We’re using markers to temporarily mark the ends of the line, and we’ll remove the markers later; in
general we will change directly to XY Scatter With Straight Lines.
The new series don’t line up at all, though, because Excel decided we should plot the scatter series on the secondary axes. We could rescale the secondary axes, then hide them, but that makes a
complicated situation even more complicated.
So we need to uncheck the Secondary Axis box next to the Scatter series in the Change Chart Type dialog.
And now everything lines up as expected: the markers on the horizontal lines are at the edges of the plot area.
We should remove those markers now, and in the future select the chart type without markers.
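The endpoint arithmetic used throughout this tutorial can be captured in a small helper (a sketch for illustration, not part of the original article): with n categories numbered 1 through n, a Between Tick Marks axis spans 0.5 to n + 0.5, while an On Tick Marks axis spans 1 to n.

```python
def reference_line_endpoints(n_categories, y, between_ticks=True):
    """(X, Y) endpoints for an XY-scatter line spanning a category chart.

    With the default 'Between Tick Marks' axis setting the plot area runs
    from 0.5 to n + 0.5; with 'On Tick Marks' it runs from 1 to n.
    """
    if between_ticks:
        return [(0.5, y), (n_categories + 0.5, y)]
    return [(1, y), (n_categories, y)]

print(reference_line_endpoints(5, 2.5))  # [(0.5, 2.5), (5.5, 2.5)]
```

These are exactly the X values (0.5 and 5.5) used for the five-category column chart above.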
“Lazy” Horizontal Line
You may ask why not make a combination column-line chart, since column charts and line charts use the same axis type. And many charts with horizontal lines use exactly this approach. I call it the
“lazy” approach, because it’s easier, but it provides a line that doesn’t extend beyond all the data to the sides of the chart.
Start with your chart data, and add a column of values for the horizontal line. You get a column chart with a second set of columns, or a line chart with a second line.
Change the chart type of the added series to a line chart without markers. Doesn’t look very good for the column chart (left) since the horizontal line ends at the centerlines of the first and last
column. You could probably get away with it for the line chart, even though the horizontal line doesn’t extend to the sides of the chart.
If we change the Axis Position so the vertical axis crosses On Tick Marks, the horizontal lines for both charts span the entire chart width. In the column chart, this comes at the expense of the
outer halves of the first and last columns. The line chart looks okay, though.
Add a Horizontal Line to an Area Chart
As with the previous examples, we need to figure out what to use for X and Y values for the line we’re going to add. The category axis of an area chart works the same as the category axis of a column
or line chart, but the default settings are different. Let’s start with the following simple area chart.
Notice that the first and last category labels are aligned with the corners of the plot area and the filled area series extends to the sides of the plot area. This is because the default setting of
the Axis Position property is On Tick Marks. We can change it to Between Tick Marks, which makes the area chart look a bit strange.
Below is the data for our horizontal line, which will start at 1 (the first category number of 1) and end at 5 (the last category number of 5), without the half-category cushion at either end. Copy
the data, select the chart, and Paste Special to add the data as a new series.
Right click on the added series, and change its chart type to XY Scatter With Straight Lines And Markers (again, the markers are temporary). The resulting line extends to the edges of the plotted
area, but Excel changed the Axis Position to Between Tick Marks.
Change the Axis Position setting back to On Tick Marks, and remove the markers from the line.
“Lazy” Horizontal Line
In the column chart, and perhaps for the line chart, the “lazy” approach did not give a suitable horizontal line, since the line did not extend to the edges of the plot area. Let’s see how it works
for an area chart.
Make a chart with the actual data and the horizontal line data.
Right click on the second series, and change its chart type to a line. Excel changed the Axis Position property to Between Tick Marks, like it did when we changed the added series above to XY Scatter.
Change the Axis Position back to On Tick Marks, and the chart is finished.
For the area chart, the appearance of the lazy horizontal line is identical to the more complicated line that uses an XY Scatter series. Since it’s easier and just as good, it’s probably better to
use the lazy approach.
An alert reader noted in the comments that the line produced by this method is placed in front of the bars, and it might be better to place such reference lines behind the data. I have written a new
post describing an approach that does just this: Horizontal Line Behind Columns in an Excel Chart.
I showed similar approaches in an old post, Add a Target Line.
More Combination Chart Articles on the Peltier Tech Blog
1. Juan Aguero says
Thank you very much for the tutorial. In Add a Horizontal Line to a Column or Line Chart section, how did you change the horizontal axis from number to text (A,B,C…)
2. Jon Peltier says
Juan –
To begin with, the range I used to populate the chart had the letters in the first column, and Excel used them for the axis labels. In the middle somewhere I changed the letters to numbers in the
worksheet, so the chart showed the numbers instead. Then later I changed the numbers in the sheet back to letters.
3. Song says
Thank you for the tutorial.
For column chart, I usually add a horizontal line by adding a line chart with trendline, and format the trendline, then it looks like the horizontal line extends to the sides of the chart.
4. Jon Peltier says
Song –
Thanks for sharing your approach. It’s much easier than dealing with an XY series on top of a column chart. The benefit of the XY series is the actual data points extend to the edges of the
chart, so it’s easy to add a label out there.
5. Jan Carlsson says
Hi Jon,
Is there any way to move the reference line behind the bars, instead of “cutting through” them? I switched the series order, which was the only way I could think of, but it didn’t help. The
reason for my request is based on best practice derived from Stephen Few. It might seem like a non-problem, but it does distract the perception of the bars to some extent. (Note – I’m not using
the “lazy” ref line solution). Regards // Janne
6. Jon Peltier says
Hi Janne –
No matter how you change the order of series, or even put the bars on the secondary axis and keep the lines on the primary, the lines always appear in front of the columns.
It’s a complicated workaround, but I can use the border of an area chart series. Area charts are plotted behind all other series. But you need to use a date axis so the sides and baseline of the
area chart series are outside the plot area. When I get a chance I’ll write it up here.
7. Jonny says
Hi Jon,
Do you know how i would be able to make a single cell into a horizontal line for my graph?
I have data which is for a year long (365 cells) and i am trying to make that one cell be a horizontal line for all of that data but I cannot do it. Even when I try to use paste special, it does
not work. It is coming up as a tiny dot compared to a line.
8. Jon Peltier says
Jonny –
You know the expression, “Two points make a line.” One cell can give you one point, or often just half a point (the Y value). Write a simple link formula in an adjacent cell (e.g., =C17), so both cells have the value of the cell you want to use; alternatively, off in a vacant area of the worksheet, link two cells to this particular cell. Do yourself a favor and also use two cells for the X values, as shown in this article.
This is an example of another saying, “spend five minutes with your data first, or spend five hours with your chart later.”
9. Jon Peltier says
Janne –
Rather than squeeze a new tutorial into a reply to your comment, I have written a new blog post:
Horizontal Line Behind Columns in an Excel Chart
10. Chris says
Hi Jon,
How would I be able to add a horizontal line across my chart, from a data point generated by a function, rather than from just integers I enter into the sheet?
Eg. the function take a sum based on the previous cell [sum(220-H5)]
11. Jon Peltier says
Chris –
If you add the data points for your line and use an XY Scatter type to plot the line, you can use whatever formulas you want. The Y values are easy, and the X values are defined by the X axis
scale of the category axis.
12. David says
How about a horizontal line in an existing scatter plot with two sets of data already charted, and the line represents the current date so that every time it is opened, the line moves to
represent the date?
13. Jon Peltier says
David –
Scatter plots are easy, since the X and Y are just what you see along the axes. So you add a few formulas to determine the X and Y values of the endpoints of your line, then add a series using
these cells.
14. Roger Plant says
I have created both a horizontal line and a vertical line overlaying a chart in Excel. I want to be able to move both lines so the cross hairs lie over a particular point in the chart. How do I
do this in Excel VBA?
15. Mark Barnard says
Mr. Peltier,
Thank you for this tutorial. Very easy to follow and understand. I added my limit line easily.
Is there any way to label my line.
Thank you for the effort you’ve extended in your tutorials. They’re wonderful.
16. Brett says
Hi, I am trying to do this, but the problem is that my X Axis is not number values, It is dates. I track data across a 6 week period and so of course the dates change every week. I have 5
different charts on one tab, each representing a category. All categories except one have a goal of 95 and the one has a goal of 100. Data appears from week to week above the charts and those
data values are found on the Y Axis. Basically I need a goal line to go across all the weeks. Is there any help with this when it is dates?
17. Kris says
For the section ‘Add a Horizontal Line to a Column or Line Chart’ I am confused at the following section:
“Now let’s label the points between the categories. Not only do we have halfway points between the categories, we also have a half category label below the first category and another after the
last category.”
How do you label the points between the categories?
18. Jon Peltier says
Generally you can’t label the in-between points on the axis. To illustrate the halfway points in this article, I faked it by creating a chart with categories at 0.5, 1.0, 1.5, etc. (instead of 1,
2, 3, etc.) that had blank values for the halfway categories.
[…] In Add a Horizontal Line to an Excel Chart, I showed how to add a horizontal line to an Excel chart. I started with this data and column chart… […]
Drawing Light ... – George Grammatikakis: “Coma Berenices” – The unification of interactions - TO EN
Coma Berenices
The unification of interactions
A dream of Physics, which remains largely unfulfilled, has always been the unification of the four fundamental interactions: that is to say, the formulation of a uniform mathematical form capable of describing the whole, and each interaction separately as a special case. As bold as this may seem, it has a precedent. While magnetism and electricity are different phenomena on the surface, they constitute one and the same interaction; and they are described with absolute accuracy by four equations, the elegant Maxwell equations of electromagnetism.
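For reference, the four equations alluded to are, in their standard differential (SI) form, supplied here since the essay does not write them out:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\end{aligned}
```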
In seeking to unify the interactions Einstein himself spent much of his life; and this unification has no justification or aim, other than the belief in the deeper symmetry and simplicity of the
natural world. It is therefore noteworthy that particularly in recent years the attempt to unify the interactions has already shown impressive progress.
First and foremost, the mechanism of the interactions seems to be common, since it is essentially based on the exchange of a "carrier" among the elementary particles. The properties of this carrier, which sometimes has mass or charge and sometimes not, give each interaction its distinct character. This holds, however, only as long as the energies of the particles are those of the current Universe; at higher energies the carriers of the interactions begin to resemble one another. Then the interactions acquire the same strength and can be described in the same mathematical language. (...)
Persistent theoretical searches soon led to the formulation of the Grand Unified Theories. (...) What remained was an obvious direction. The Grand Unified Theories expanded, in terms of their
applications, to the conditions prevailing during the first split second after the Big Bang. It was exactly then that the energy of the "experiment" for the creation of the Universe was the one required by the theory for the unification of at least three of the interactions.
Comments on The 20% Statistician: Justify Your Alpha by Decreasing Alpha Levels as a Function of the Sample Size
(Blogger Atom comment feed; feed author Daniel Lakens; updated 2024-11-11; 6 comments)
1. Anonymous (2023-09-10): Thanks Daniel. What do I do when the sample size is 20,000 (as when using data from AP Votecast)?
2. Anonymous (2021-09-09): This is excellent. I was wondering how to justify a smaller alpha level in my analysis of data using 100000+ records. Thank you!
3. Sajjad Ahmed (2021-07-31): This comment has been removed by a blog administrator.
4. Dodger (2019-05-14): Hi Daniel, how can this standardization of P-values based on sample size be coupled to the multiple-testing adjustment by Bonferroni or BH?
5. Daniel Lakens (2018-12-07): As the blog explains, this is about solving a problem with large N - so not intended to be used to increase the alpha for smaller N. Standardization for 100 is a pretty random choice - for these N's, there is no substantial mismatch yet, according to Good. He mentions it is just a useful thing to all use - but feel free to use another number. Or devise another scaling.
6. Martin Dietz (2018-12-05): Hi Daniel, I couldn't help thinking about your idea of scaling alpha by the square root of the sample size divided by the constant 100. I completely fail to understand your choice of constant, which obviously assigns a false positive rate higher than the traditional criterion of p < 0.05 to independent frequentist null hypothesis tests with a sample size below that arbitrary constant. Wouldn't you prefer an adaptive false positive rate that starts with the traditional criterion or any other initial probability and decreases with sample size, for example alpha = alpha/log(n) or alpha = alpha/n^(1/3)? Best, Martin Dietz | {"url":"https://daniellakens.blogspot.com/feeds/9099819490920201929/comments/default","timestamp":"2024-11-12T02:55:19Z","content_type":"application/atom+xml","content_length":"13290","record_id":"<urn:uuid:0edd2564-5216-498b-bd22-77f0d3dd461d>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00274.warc.gz"}
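The alpha-scaling rules debated in this comment thread can be sketched in a few lines of Python. The function below is purely illustrative (the name, and the n0 = 100 default, come from the discussion itself, not from any published code):

```python
import math

def scaled_alpha(alpha0, n, rule="sqrt", n0=100):
    """Shrink a significance level alpha0 as the sample size n grows.

    rule="sqrt":  alpha0 / sqrt(n / n0)  -- the scaling discussed in the post
    rule="log":   alpha0 / log(n)        -- Dietz's first suggestion
    rule="cbrt":  alpha0 / n**(1/3)      -- Dietz's second suggestion
    """
    if rule == "sqrt":
        return alpha0 / math.sqrt(n / n0)
    if rule == "log":
        return alpha0 / math.log(n)
    if rule == "cbrt":
        return alpha0 / n ** (1 / 3)
    raise ValueError(f"unknown rule: {rule}")

print(scaled_alpha(0.05, 100))    # 0.05 -- unchanged at the reference size
print(scaled_alpha(0.05, 20000))  # ~0.0035 -- much stricter for large N
```

Note that the sqrt rule makes alpha *larger* than alpha0 whenever n < 100, which is exactly the objection raised in the last comment; the log and cube-root variants avoid that behavior for all but the smallest samples.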
Starter Amp Draw Chart
Starter Amp Draw Chart - A good starter draws roughly 125 to 150 amps when cranking a four-cylinder engine, and 150 to 175 amps when cranking a V6 engine. Average amp draws vary across
vehicles and engines. The reason I ask is that yesterday, while working in the bilge, I noticed that the factory wire harness (battery to engine starter) is only #4 AWG. The amount of current that a
starter solenoid draws can vary depending on the size and type of the vehicle, as well as the specific design of the solenoid. Use an ammeter/voltage meter to test the starter's current draw.
However, starter solenoids typically draw between 50 and 150 amps of current. Conditions matter: an ambient temperature of 15 degrees, thick oil, hydraulic pumps not de-stroking, and other
accessories driven by the engine; over the years I've had locked-up alternators and A/C compressors that gave the same won't-crank symptom. The only way to get an accurate answer is if someone
tested both starters on the same vehicle, on the same day. If the starter is only 75% efficient, it would take 166 amps to get the same 2 hp. The 50 amps would be under full load turning a cold
90 hp engine over. Set the voltage meter to INT 18V and adjust the ammeter to read zero. Noname (old starter): 0.060 ohms to 0.090 ohms.
Related chart images:
- All About Motor Soft Starters: What They Are and How They Work
- Starter amp draw chart (waltoneckhoff)
- Amp Draw Chart
- DIY Auto Service: Starter Diagnosis and Repair (AxleAddict)
- How Many Amps Does A Starter Draw?
- Understand Your Electric Requirements Before You Install
- Star-Delta (Y-Δ) Starter, Power & Control Wiring (Engineering Tech)
- NEMA Motor Starter Size Chart (earthbase)

Additional notes from the gallery captions: Does anyone know the average amperage draw of a starter for a 1983 Johnson 115 hp motor? Smaller engines will require fewer amps than bigger ones.
Disable the fuel or ignition so the engine will not start during the test. A typical starter will draw anywhere between 50 and 200 amps based on the factors mentioned above. Knowing how much
current a starter is using will tell you if it's working properly. If the applied voltage drops to 10 volts because of the load of our 75% efficient starter, it will now take 199 amps to make
2 hp. What size do I get to start a typical 13 to 15 liter diesel engine in cold weather?
Starter Amp Draw Chart - Starter sizes: 00, 0, 1, 2, 3, 4, 5, 6 and 7. Testing starter motor current draw: when the starter motor is called to action, a tremendous amount of current is needed.
I am moving my batteries to the front of my boat and I am trying to determine the proper battery cable size for the run back to the engine.
This Test Requires The Specifications Of The Vehicle.
The current draw depends on the type of starter and the application; ambient temperature, oil thickness, hydraulic pumps, and other engine-driven accessories all matter. 300 amps is probably a
good rule of thumb. A good starter will normally draw 125 to 150 amps when cranking a four-cylinder engine.
Would Lead Me To Believe The Newer Starter Would Draw Less Amps Overall.
Does this sound like high current draw, or does it sound about right? The newer starter spins faster and starts the motor sooner. Observe both testers (ammeter/voltage meter) as you crank the
engine.
I Believe The Typical Starting Amperage For A Truck Is Around 800A (12V System) On A 70 Degree Day With Good Batteries And Cables.
A faulty starter can have a high amp draw that may be traced to a damaged starter solenoid. Also check the maximum current ratings for copper and aluminum wire. The type of oil used in a vehicle
has an impact on the number of amps that will be required to start it.
At 746 Watts/Hp, A 100% Efficient 2 Hp Starter Would Draw About 125 Amps At 12V.
To run the test, set the voltage meter to INT 18V and adjust the ammeter to read zero, then set the voltage meter to measure battery voltage (12.6 V). But you have to enter some other variables
that can change the amount of draw.
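The 746 W/hp arithmetic quoted above can be packaged as a small calculator. This is illustrative only; it reproduces the example figures in the text rather than measuring anything:

```python
WATTS_PER_HP = 746  # mechanical watts in one horsepower

def starter_amps(hp, volts, efficiency=1.0):
    """Current needed to deliver `hp` of cranking power at `volts`,
    given the starter's electrical-to-mechanical efficiency (0-1)."""
    electrical_watts = hp * WATTS_PER_HP / efficiency
    return electrical_watts / volts

print(round(starter_amps(2, 12)))        # 124 -- the "about 125 A" 100%-efficient case
print(round(starter_amps(2, 12, 0.75)))  # 166 -- the 75%-efficient case
print(round(starter_amps(2, 10, 0.75)))  # 199 -- same starter if voltage sags to 10 V
```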
Starter Amp Draw Chart Related Post : | {"url":"https://sandbox.independent.com/view/starter-amp-draw-chart.html","timestamp":"2024-11-08T02:49:53Z","content_type":"application/xhtml+xml","content_length":"24424","record_id":"<urn:uuid:bd555efe-3bd5-40ef-9530-7564ae859cb0>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00688.warc.gz"} |
Fluids | AP Physics 2 Class Notes | Fiveable
You are probably familiar with the quantity of force, the interaction that causes a change in an object's motion. In fluid dynamics, we also look at the property of pressure, which is the ratio of
the force to the perpendicular area over which it acts. In this course it is typically measured in atmospheres (atm) 😺
The pressure exerted on an object submerged in a liquid is known as hydrostatic pressure. If we take the weight of the column of liquid above the object and divide it by its area, we get the
hydrostatic pressure 🚿 which equals ρgh. It is important to notice that the pressure felt by an object depends only on the density of the liquid and how deep the object is submerged from the top
of the liquid. It does not depend at all on the mass of the object.
The total, or absolute, pressure an object experiences has two components: gauge pressure, which is due to the liquid, and atmospheric pressure, which is due to the atmosphere 🤓
Therefore, total pressure equals gauge pressure (ρgh) added to the atmospheric pressure. Pay attention to the fact that gauge pressure depends on the density of the fluid and how deep the object is
from the top of the fluid. If the atmospheric pressure is not given in the problem, you can assume it to be 1 atmosphere.
Here are some key points about the pressure equation:
• The pressure equation is a mathematical equation that describes the relationship between the pressure of a fluid and its density, velocity, and height above a reference point.
• The general form of the pressure equation is:
P = ρgh + 1/2 * ρv^2
Where P is the pressure, ρ is the density of the fluid, g is the acceleration due to gravity, h is the height above the reference point, and v is the velocity of the fluid.
• The first term on the right side of the equation (ρgh) is the hydrostatic pressure, which is the pressure exerted by a static fluid due to its weight.
• The second term on the right side of the equation (1/2 * ρv^2) is the dynamic pressure, which is the pressure exerted by a moving fluid due to its kinetic energy.
• The pressure equation is used to analyze a variety of fluid flow problems, including fluid flow through pipes and fluid flow over surfaces.
• The pressure equation can be used to calculate the pressure at any point in a fluid, given the density, velocity, and height of the fluid at that point.
Q. What is the gauge pressure in an open fish tank if the absolute pressure is 5 atm?
A. Absolute pressure is the sum of the atmospheric and gauge pressures. Since the atmospheric pressure due to air can be assumed to be 1 atmosphere and the absolute pressure is 5 atm, the gauge
pressure is 5 - 1 = 4 atm.
Caution: atmospheric pressure won't always be 1, especially if you're in a closed or sealed container (like in closed pipes). Atmospheric pressure can be assumed to be one in a container that is open
to the environment!
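A short Python sketch of the absolute = atmospheric + gauge relationship, using the fish-tank numbers above (the constants are standard values; the function names are made up for illustration):

```python
RHO_WATER = 1000.0   # kg/m^3
G = 9.8              # m/s^2
ATM = 101325.0       # Pa per atmosphere

def gauge_pressure(depth_m, rho=RHO_WATER, g=G):
    """Hydrostatic (gauge) pressure rho*g*h at the given depth."""
    return rho * g * depth_m

def absolute_pressure(depth_m, p_atm=ATM):
    """Absolute pressure = atmospheric pressure + gauge pressure."""
    return p_atm + gauge_pressure(depth_m)

# Fish-tank example: absolute 5 atm in an open tank -> gauge = 5 - 1 = 4 atm
print((5 * ATM - ATM) / ATM)   # 4.0 atmospheres of gauge pressure
print(gauge_pressure(10))      # ~98000 Pa under 10 m of water
```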
It is also important to keep in mind that, unlike force, pressure is a scalar. Pressure at a point acts perpendicular to the surface it contacts. Therefore, moving left or right across a liquid at
the same depth won't affect the pressure the object feels.
Pascal’s principle states that pressure at a point in a fluid is equal in all directions. This fact is used to lift heavy objects using hydraulic lifts 🏋️♀️ Since the pressures have to be equal, we
can apply a small force to a small area and obtain a much larger push back from the liquid over a much larger area.
Here are some key points about Pascal's principle:
• Pascal's principle states that the pressure applied to a confined fluid is transmitted undiminished to every part of the fluid and to the walls of the container.
• This means that if you apply a force to one part of a confined fluid, the pressure will increase throughout the entire fluid, and the force will be transmitted to every point in the fluid and to
the walls of the container.
• Pascal's principle is a fundamental principle of fluid mechanics, and it is the basis for many practical devices, including hydraulic lifts, hydraulic brakes, and hydraulic presses.
• Pascal's principle can be derived from the laws of physics, specifically the law of conservation of energy and the equation of state for a fluid.
• Pascal's principle is often used in conjunction with other principles, such as the principle of buoyancy and the principle of continuity, to analyze and solve problems involving fluid flow.
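The hydraulic-lift idea can be sketched directly from Pascal's principle; the piston areas below are made-up numbers:

```python
def lift_output_force(input_force, small_area, large_area):
    """Pascal's principle: pressure is transmitted undiminished,
    so F1 / A1 = F2 / A2 and therefore F2 = F1 * (A2 / A1)."""
    pressure = input_force / small_area   # same pressure everywhere in the fluid
    return pressure * large_area

# A 100 N push on a 0.25 m^2 piston supports 800 N on a 2.0 m^2 piston.
print(lift_output_force(100, 0.25, 2.0))  # 800.0
```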
Pressure and velocity are inversely correlated. This is one of the most asked and frequently missed principles on exams. Areas of low pressure have high velocity, and areas of high pressure have low
velocity. Our intuition would tell us that fast-moving things would have more pressure, but, instead, this principle focuses on the pressure on the walls of the container. Slow-moving liquids exert
greater pressure on the walls of the container than faster-moving liquids. This is called the Bernoulli effect.
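The dynamic-pressure term from the pressure equation gives a quick check of this inverse pressure-velocity relationship (the air density value here is illustrative):

```python
def bernoulli_pressure_drop(rho, v_slow, v_fast):
    """Constant-height Bernoulli: P_slow - P_fast = 1/2 * rho * (v_fast**2 - v_slow**2).
    A positive result means the faster region has the lower pressure."""
    return 0.5 * rho * (v_fast ** 2 - v_slow ** 2)

# Air (rho ~ 1.2 kg/m^3) speeding up from 10 m/s to 20 m/s:
print(bernoulli_pressure_drop(1.2, 10, 20))  # ~180 Pa lower on the fast side
```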
The Bernoulli effect is what leads to air flow and the phenomenon known as lift, allowing for inventions like the airplane. So the next time you fly on an airplane, remember to thank Bernoulli. ✈️
The concept of pressure is first discussed in this unit with regard to fluids, but it also comes up later in Unit 2 and in Thermodynamics. In this section, we mostly focus on the interactions of an
object in liquids. Later we will shift our focus to gases.
Here are some key points about the Bernoulli effect:
• The Bernoulli effect is a principle of fluid dynamics that states that an increase in the velocity of a fluid results in a decrease in the pressure of the fluid.
• The Bernoulli effect is a result of the conservation of energy in a fluid system. As the velocity of the fluid increases, the kinetic energy of the fluid increases, and the potential energy of
the fluid decreases. This results in a decrease in the pressure of the fluid.
• The Bernoulli effect can be observed in a variety of situations, including fluid flow through pipes, fluid flow over surfaces, and fluid flow through a venturi tube.
• The Bernoulli effect is often used to explain the lift generated by an airplane wing and the lift generated by a golf ball.
• The Bernoulli effect can be described mathematically using the Bernoulli equation, which relates the pressure, velocity, and height of a fluid in a system. | {"url":"https://hours-zltil9zhf-thinkfiveable.vercel.app/ap-physics-2/unit-1/ap-physics-2-fluids-pressure-forces/study-guide/kXdIyZDAwqOTSmAo0Ar7","timestamp":"2024-11-11T08:04:16Z","content_type":"text/html","content_length":"156037","record_id":"<urn:uuid:cbaa17cd-f303-4a28-84b2-bca1864541e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00008.warc.gz"} |
Hinge Force - Important Concepts and Tips for JEE
A hinge is a mechanical bearing that interconnects two rigid bodies, usually allowing only a limited angle of rotation between them. Two rigid bodies connected by an ideal hinge rotate
relative to each other about a fixed axis of rotation. All other translations or rotations are prevented. This is why a hinge has only one degree of freedom. Hinges may be constructed from a
flexible material or from moving components. In biological terms, many linkages or joints are hinges, like the elbow joint. Let us have a detailed discussion in this article about what hinge
force is, how to find it, and its direction.
Derivation of Hinge Force Formula
Consider a rigid equilateral triangular frame constructed from three uniform thin rods (mass = m and length = l each) that is free to rotate in the vertical plane. The frame is hinged at one of its
vertices H.
Impulsive Hinge
The frame is released from rest from the position illustrated in the figure above. The individual weights act downwards at points x, y, and z as shown in the figure, each with magnitude w = mg.
The moment concerning the x-axis is $M_{x}=\dfrac{1}{2} m gl$
The moment concerning the y-axis is $M_{y}=\dfrac{1}{2} m g l \sin 30^{0}$
The moment concerning the z-axis is $M_{z}=\dfrac{\sqrt{3}}{2} m g l \cos 30^{0}$
All the three torques applied by the weights are in a clockwise direction about the point H.
$\tau_{\text {total }}=m g l\left(\dfrac{1}{2}+\dfrac{1}{2} \sin 30^{0}+\dfrac{\sqrt{3}}{2} \cos 30^{\circ}\right)$
$\tau_{\text {total }}= m g l\left(\dfrac{1}{2}+\dfrac{1}{2} \times \dfrac{1}{2}+\dfrac{\sqrt{3}}{2} \times \dfrac{\sqrt{3}}{2}\right)$
$\tau_{\text {total }} =\dfrac{3}{2} m g l$
Now, the moment of inertia (MI) will be
MI of $I_{H A}=\dfrac{m l^{2}}{3}$
MI of $I_{H B}=\dfrac{m l^{2}}{3}$
MI of $I_{A B}=\dfrac{m l^{2}}{12}+m\left(\dfrac{\sqrt{3}}{2} l\right)^{2}=\dfrac{5 m l^{2}}{6}$
So, the total moment of inertia will be
$I_{\text {total }}=I_{H A}+I_{H B}+I_{A B}=\dfrac{m l^{2}}{3}+\dfrac{m l^{2}}{3}+\dfrac{5 m l^{2}}{6}=\dfrac{3}{2} m l^{2}$
Equating $\tau_{\text{total}}$ to $I_{\text{total}}$ multiplied by the angular acceleration $\alpha$, we get:
$\dfrac{3}{2} m g l=\dfrac{3}{2} m l^{2} \alpha \Rightarrow \alpha=\dfrac{g}{l}$
Now for the hinge force, we have
$N_{1}=3 m g \cos 60^{\circ}=\dfrac{3}{2} m g$
$3 m g \sin 60^{\circ}-N_{2}=3 m \times \dfrac{g}{l} \times \dfrac{l}{\sqrt{3}}$
$\Rightarrow N_{2}=\dfrac{\sqrt{3}}{2} m g$
So, the magnitude of the hinge force will be $\sqrt{N_{1}^{2}+N_{2}^{2}}=\sqrt{3}\, m g$
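The derivation above is easy to verify numerically. The values m = 1 kg, l = 1 m, g = 9.8 m/s² below are arbitrary check values, not from the text:

```python
import math

m, g, l = 1.0, 9.8, 1.0

tau_total = 1.5 * m * g * l   # total torque about H from the derivation
I_total = 1.5 * m * l ** 2    # total moment of inertia about H
alpha = tau_total / I_total   # angular acceleration, should equal g/l

N1 = 3 * m * g * math.cos(math.radians(60))
N2 = 3 * m * g * math.sin(math.radians(60)) - 3 * m * alpha * (l / math.sqrt(3))
hinge_force = math.hypot(N1, N2)   # sqrt(N1**2 + N2**2)

print(alpha)                  # 9.8 (= g/l)
print(hinge_force / (m * g))  # ~1.732 (= sqrt(3))
```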
Hinge Force Diagram
Various Hinge Diagrams
In this hinge diagram, there are many hinge arrangements. Let us talk about all these in a very simple manner.
Butterfly Hinge
A butterfly hinge is a type of strap hinge whose leaf plates are arranged decoratively, resembling the wings of a butterfly. It is invariably surface-mounted.
It is often made from brass or another decorative metal and is mostly used on ornate boxes or attractive cabinets. It is also called the parliament hinge.
Butt Hinge
Butt hinges, also known as mortise hinges, are the type most commonly used on doors. A butt hinge is made of two identical metal leaves connected by a central pin and barrel system.
One leaf is mortised (sunk) into the door and the other is attached to the frame. When properly positioned, the two leaves of a butt hinge should sit flush against each other,
which permits the door to sit perfectly flush with the frame.
H Hinge
These barrel hinges are arranged like an H and are extensively used on flush-mounted doors. Small H hinges of 3-4 inches (76-102 mm) are used as cabinet hinges; larger
hinges of about 6-7 inches (150-180 mm), on the other hand, are used for passage doors or cupboard doors.
HL Hinge
These are common for cupboard doors, room doors, and passage doors.
Strap Hinge
A strap hinge is a hinge with a long, narrow leaf by which it is fastened to the surface of a door and the adjacent wall. It is contrasted with a butt hinge.
Rattail Hinge
A rattail hinge is a pivoting type of hinge in which the pin is extended so that it can be fastened to the sheathing of a door.
Snipe Hinge
Early American pilgrim furniture hinges consisting of a pair of half-round iron straps doubled back like cotter pins, connected through the eyes, and clinched into the wood at the serrated outer ends.
Let us now understand what is meant by hinge reaction.
Hinge Reaction
Determining the hinge reactions simplifies the analysis of the force that each support exerts on the trap door along each axis of a Cartesian coordinate system. Each reaction is presumed to
act in the positive direction.
Coordinate System
Support 1:
The first support appears at first glance to be a standard hinge. Upon closer examination, it is clear that this support is actually a bearing with a circular shaft that can slide along the z-axis
(which a standard hinge cannot). This type of support has two unknown reaction forces and two unknown reaction moments along the coordinate axes that are perpendicular to the axis of the bearing.
However, there are no reaction forces or moments along the axis of the bearing. Hence, the bearing can both translate and rotate along this axis.
Support 1 of a Trap Door
This is the image of support 1. Now let us see the image of how the reaction would be at support 1.
Reaction at Support 1
Support 2:
The second support is a standard hinge. There are two unknown reaction forces and two unknown reaction moments along the coordinate axes that are perpendicular to the axis of the hinge. There is also
a reaction force along the axis of the hinge (i.e. it does not slide along the z-axis). However, there is no reaction moment along the axis of the hinge. Hence, the hinge may only rotate about this
axis.
This is the image of support 2. Now let us see the image of how the reaction would be at support 2.
Reactions in support 2
This figure is the reaction at support 2 and now let us see support 3.
Support 3:
The third support is the bolt. This support does not allow any rotation about any axis. As with support 2, there are also two unknown reaction forces and two unknown reaction moments along the
coordinate axes that are perpendicular to the axis of the bolt. There is also a reaction moment along the axis of the bolt. Hence, the bolt allows only translation along its own
axis.
This is the image of support 3. Now let us see the image of how the reaction would be at support 3.
Reaction in Support 3
This figure is the reaction at support 3. Now let us understand the direction of hinge force with the help of a diagram.
Direction of Hinge Force
The understanding of the direction of a hinge force can be simply facilitated by the diagram shown below.
Direction of a Hinged Force
Since the bar in the first diagram is in equilibrium along the horizontal, the reaction at the hinge must be opposite to the horizontal component of the applied force at the free end, as we saw
earlier for the reaction forces acting on a hinged end. However, if numerous forces are acting, we can assume a general reaction with both a horizontal and a vertical component. Applying the
equilibrium conditions will tell us whether the assumed direction was right or wrong: a positive answer implies the assumption was correct, and a negative answer implies the opposite.
In the second diagram, the only non-hinge force with a horizontal component is the force labelled T in the diagram. Consequently, the horizontal component of the hinge force must be
equal and opposite to the horizontal component of T.
The hinge force is important in engineering mechanics for drawing the free-body diagram of a hinged frame. This article clearly describes the different types of hinge joints, gives a brief
idea of each, and discusses the reaction forces acting on a hinged body. Through the different support arrangements, we can now understand the reaction moments along the coordinate axes;
depending on the support arrangement, these reaction moments are perpendicular or parallel to the support's axis. We can also gain a descriptive idea of the hinge force from
this article.
FAQs on Hinge Force - JEE Important Topic
1. Is there any normal force acting on the hinge?
By definition, the normal force is always perpendicular to the common surface at which the objects make contact. If there were no hinge, this surface would be the wall. There can also be a
frictional force, in which case the reaction force at the wall can point in any direction away from the wall. With a hinge, the contact force between the hinge and the rod can point in any
direction, even towards the wall or away from it.
2. How much weight can a hinge hold?
Standard-duty hinges for low- or medium-frequency doors can hold up to 200 lb without frame or door reinforcement. Heavy-duty hinges can hold up to 200 lb on high-frequency doors,
up to 400 lb on medium-frequency doors, and up to 600 lb on low-frequency doors, all without frame or door reinforcement. | {"url":"https://www.vedantu.com/jee-main/physics-hinge-force","timestamp":"2024-11-11T03:29:20Z","content_type":"text/html","content_length":"306945","record_id":"<urn:uuid:ae999a0f-9149-41b8-968e-c0df506ff2bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00572.warc.gz"}
Lesson 21
Predicting Populations
Lesson Narrative
In this lesson, students use the main function types they have studied thus far in the course (linear and exponential) to model different populations. In the first activity, data for three city
populations are given and students are asked to produce a linear or exponential model for each (if appropriate) and then make predictions for populations at other dates. The cities have been chosen
so that one is well modeled by an exponential model, another by a linear model, and the third by neither.
In the second activity, students examine world population. The task is more open-ended and only limited data is provided, likely requiring students to gather more data. In addition, the world
population is not consistently well modeled by a linear or exponential function, but for certain periods of time, exponential and/or linear functions can be appropriate (in particular, in recent
years the growth has been strikingly linear).
Students engage in different parts of the modeling cycle (MP4). This can be adjusted further, for example, by choosing other cities for the first activity and having students find the data. They will
have to think carefully (MP1) about how to choose an appropriate linear or exponential model because none of the data is exactly exponential or linear. They will also attend to precision (MP6) in
choosing the parameters in their models.
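For teachers who want a quick computational illustration, the two-point linear and exponential models students build can be sketched as below. The city numbers are invented for illustration and are not the lesson's data:

```python
def linear_model(t0, p0, t1, p1):
    """Population model with constant absolute growth through two points."""
    rate = (p1 - p0) / (t1 - t0)          # people per year
    return lambda t: p0 + rate * (t - t0)

def exponential_model(t0, p0, t1, p1):
    """Population model with constant percent growth through two points."""
    b = (p1 / p0) ** (1 / (t1 - t0))      # annual growth factor
    return lambda t: p0 * b ** (t - t0)

lin = linear_model(2000, 100_000, 2010, 120_000)       # +2,000 people/year
exp_m = exponential_model(2000, 50_000, 2010, 60_500)  # +21% per decade

print(lin(2020))    # 140000.0
print(exp_m(2020))  # ~73205 (= 50,000 * 1.21**2)
```

Comparing a city's actual data against both predictions is one way to decide which model, if either, is appropriate.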
Learning Goals
Teacher Facing
• Choose and write a linear or exponential function to model real-world data.
• Determine and explain (in writing) how well a function models the given data.
• Use given population data to calculate or estimate growth rates and make predictions.
Student Facing
Let's use linear and exponential models to represent and understand population changes.
Required Preparation
If students are to present their mathematical models on visual displays, prepare tools for creating visual displays.
Acquire devices that can run Desmos (recommended) or other graphing technology. It is ideal if each student has their own device. (Desmos is available under Math Tools.)
Student Facing
• I can determine how well a chosen model fits the given information.
• I can determine whether to use a linear function or an exponential function to model real-world data.
CCSS Standards
Building Towards
Additional Resources
Google Slides For access, consult one of our IM Certified Partners.
PowerPoint Slides For access, consult one of our IM Certified Partners. | {"url":"https://curriculum.illustrativemathematics.org/HS/teachers/1/5/21/preparation.html","timestamp":"2024-11-03T06:49:15Z","content_type":"text/html","content_length":"79014","record_id":"<urn:uuid:e5156a67-a106-462e-b29b-55af9426f975>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00102.warc.gz"} |
Chromatic semitone
The chromatic semitone is the difference between two adjacent notes on the chromatic scale and is thus approximately one twelfth of an octave.
For example, the difference between C and C# is one chromatic semitone.
The term "semitone" usually makes sense only when discussing notes on the chromatic scale and scales that use notes from the chromatic scale, such as common heptatonic scales like the major scale or
the natural minor scale. Thus, "semitone" usually means specifically the "chromatic semitone".
The chromatic scale is a scale of 12 semitones covering one octave. One octave (for example, A = 440 Hz to A = 880 Hz) is bounded by two frequencies whose ratio is 2 (880 / 440 = 2). One
semitone is therefore approximately the ratio 2^(1/12). If the twelve semitones of the chromatic scale are equal, then the chromatic scale is equal tempered and each semitone is exactly 2^(1/12). In
other words, the ratio of the frequencies of C and C# in the same octave is C# / C = 2^(1/12). Similarly, D / Db = 2^(1/12), and so on. If the chromatic scale is not equal tempered, for example if
it is a just tempered scale, then the semitones are not equal to each other and are only approximately, not precisely, equal to 2^(1/12).
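The 2^(1/12) ratio can be verified in a couple of lines (starting from A4 = 440 Hz):

```python
SEMITONE = 2 ** (1 / 12)   # equal-tempered semitone ratio, ~1.0595

a4 = 440.0
a5 = a4 * SEMITONE ** 12   # twelve semitones up is one octave
print(a5)                  # ~880.0

c5 = a4 * SEMITONE ** 3    # C5 is three semitones above A4
print(c5 * SEMITONE / c5)  # ~1.0595, the C#/C ratio 2^(1/12)
```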
See also:
Scale, Scale (index) | {"url":"https://www.recordingblogs.com/wiki/chromatic-semitone","timestamp":"2024-11-07T18:58:56Z","content_type":"text/html","content_length":"21495","record_id":"<urn:uuid:ca4d0f0f-5123-4c0c-a7f3-a86091226ceb>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00315.warc.gz"} |
Mathematica: Mathematica is a popular software tool used for advanced mathematics and computation. It provides a comprehensive set of tools for symbolic and numerical computations, visualization, and
MATLAB: MATLAB is a widely used tool for numerical computation, visualization, and programming. It is particularly useful for engineering, science, and mathematical applications.
Maple: Maple is another popular tool for advanced mathematics and computation. It provides a variety of tools for symbolic and numerical computations, visualization, and programming.
SageMath: SageMath is a free, open-source software tool for advanced mathematics and computation. It provides a variety of tools for symbolic and numerical computations, visualization, and programming.
Maxima: Maxima is a free, open-source computer algebra system that is particularly useful for symbolic computations in mathematics, science, and engineering.
GeoGebra: GeoGebra is a free, open-source dynamic mathematics software tool that combines geometry, algebra, and calculus. It provides an interactive environment for exploring mathematical concepts
and visualizing mathematical relationships.
R: R is a popular programming language and software environment for statistical computing and graphics. It provides a wide range of tools for data analysis, modeling, and visualization.
These are just a few examples of the many interesting software tools available for computer math. Depending on your needs and interests, there may be other tools that are better suited to your
particular application.
17.7.2.2 Interpreting Results of Partial Least Squares
Partial Least Squares Report Sheet
Cross Validation
This report only appears when you check to do Cross Validation. It gives summary statistics for fitting models using from 0 to the specified maximum number of extracted factors. If there are more
than 15 independent variables, the maximum number of extracted factors is restricted to 15; otherwise, it can be as large as the number of original independent variables. This report section
contains one table, Cross Validation Summary, and one plot, the PRESS Plot. These results indicate the optimal number of factors, which is of great interest. Many cross-validation
methods exist, such as K-fold cross-validation, 2-fold cross-validation, repeated random sub-sampling validation, and leave-one-out cross-validation. For Partial Least
Squares, however, we usually choose the leave-one-out method, which uses a single observation as validation data and the remainder as training data each time, repeating until every observation
has been used as validation data once.
Cross Validation Summary
In this table, Root Mean PRESS is the root mean of PRESS, the predicted residual sum of squares. From the table, we generally see the values of Root Mean PRESS decrease (non-strictly)
to a minimum and then increase again. The number of factors at which the minimum root mean is reached is the so-called Optimal number of factors. Actually, the
information of most interest can be found in the notes below the table.
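The leave-one-out PRESS procedure described above can be sketched as follows. This is an illustration only: it uses an ordinary least-squares fit as a stand-in for the PLS model, and all names and data are our own, not from the software being documented:

```python
import numpy as np

def loo_root_mean_press(X, y):
    """Root mean PRESS via leave-one-out cross-validation."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i                       # leave observation i out
        coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        pred = X[i] @ coef                             # predict the held-out observation
        press += (y[i] - pred) ** 2                    # predicted residual sum of squares
    return np.sqrt(press / n)                          # root mean PRESS

# Made-up data: 30 observations, intercept plus two predictors.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(30), rng.normal(size=(30, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=30)
print(loo_root_mean_press(X, y))
```

In the real report, this quantity would be computed for each candidate number of extracted factors, and the factor count minimizing it would be the optimal one.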
PRESS Plot
This plot shows directly how the minimum root mean is reached.
Variance Explained
This section includes one table, Percent of Variance, and two Variance Explained plots.
Percent of Variance
This table gives the percent variation and cumulative percent variation explained for both X and Y. The more factors are involved, the larger the percent explained for both X
Effects and Y Responses.
Variance Explained Plot
These two plots show Variance Explained for X Effects (%) and Variance Explained for Y Responses (%), respectively.
Coefficients Plots
For each response in Y, the corresponding plot shows the coefficients of X based on the original data.
Variable Importance
This plot shows the Variable Importance in Projection (VIP) for each predictor variable. The VIP value is a measure of the importance of a variable in explaining the responses. A reference line, which equals 0.8,
is drawn in the plot. A variable is considered 'important' if its VIP value is greater than 0.8.
Loadings Plot
The Loadings Plot is a plot of the relationship between the original variables and the subspace dimensions. It is used for interpreting relationships among variables.
Scores Plot
The Scores Plot is a projection of the data onto the subspace. It is used for interpreting relations among observations.
Diagnostics Plots
For each response in Y, there is a linear fit plot, residual scatter plots, and a normal percentile plot. These plots are used for model diagnosis.
Distance Plots
The distance plots show, for each observation, its distance to the X model and to the Y model.
T Square Plot
The van der Voet $T^2$ statistics are used to test whether models with various numbers of extracted factors significantly differ from the optimum model. And this T Square Plot shows scatter of these | {"url":"https://d2mvzyuse3lwjc.cloudfront.net/doc/Origin-Help/PLS-Interpreting-Results","timestamp":"2024-11-05T17:18:12Z","content_type":"text/html","content_length":"137583","record_id":"<urn:uuid:7efcba1a-c9b9-488c-8b32-471610c97b7e>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00762.warc.gz"} |
Fast Fourier Transform (FFT) - Electrical Engineering Textbooks
Fast Fourier Transform (FFT)
One wonders if the DFT can be computed faster: does another computational procedure -- an algorithm -- exist that can compute the same quantity, but more efficiently? We could seek methods that
reduce the constant of proportionality, but do not change the DFT's O(N²) complexity. Here, we have something more dramatic in mind: can the computations be restructured so that a smaller complexity results?
In 1965, IBM researcher Jim Cooley and Princeton faculty member John Tukey developed what is now known as the Fast Fourier Transform (FFT). It is an algorithm for computing the DFT that has order
O(N log N) for certain length inputs. Now when the length of data doubles, the spectral computational time will not quadruple as with the DFT algorithm; instead, it approximately doubles. Later research showed
that no algorithm for computing the DFT could have a smaller complexity than the FFT. Surprisingly, historical work has shown that Gauss in the early nineteenth century developed the same algorithm,
but did not publish it! After the FFT's rediscovery, not only was the computation of a signal's spectrum greatly sped up, but the fact that the FFT is an algorithm also gave computations a
flexibility not available to analog implementations.
Before developing the FFT, let's try to appreciate the algorithm's impact. Suppose a short-length transform takes 1 ms. We want to calculate a transform of a signal that is 10 times longer. Compare
how much longer a straightforward implementation of the DFT would take in comparison to an FFT, both of which compute exactly the same quantity.
If a DFT required 1 ms to compute, a signal having ten times the duration would require 100 ms to compute. Using the FFT, a 1 ms computing time would increase by a factor of about
10 log₂ 10 ≈ 33, a factor of 3 less than the DFT would have needed.
To derive the FFT, we assume that the signal's duration is a power of two: N = 2^L. Consider what happens to the even-numbered and odd-numbered elements of the sequence in the DFT calculation:

S(k) = [ Σ_{m=0}^{N/2−1} s(2m) e^(−i2πmk/(N/2)) ] + e^(−i2πk/N) [ Σ_{m=0}^{N/2−1} s(2m+1) e^(−i2πmk/(N/2)) ]

Each term in square brackets has the form of an N/2-length DFT. The first one is a DFT of the even-numbered elements, and the second of the odd-numbered elements. The first DFT is combined with the second multiplied by the complex exponential
e^(−i2πk/N). The half-length transforms are each evaluated at frequency indices k = 0, …, N − 1. Normally, the number of frequency indices in a DFT calculation ranges between zero and the transform length minus one. The computational advantage of the FFT comes from recognizing the periodic
nature of the discrete Fourier transform. The FFT simply reuses the computations made in the half-length transforms and combines them through additions and the multiplication by
e^(−i2πk/N), which is not periodic over N/2. Figure 1 illustrates this decomposition. As it stands, we now compute two length-N/2
transforms (complexity 2·O(N²/4)), multiply one of them by the complex exponential (complexity O(N)), and add the results (complexity O(N)). At this point, the total complexity is still dominated by the half-length DFT calculations, but the proportionality coefficient has been reduced.

Now for the fun. Because N/2 = 2^(L−1), each of the half-length transforms can be reduced to two quarter-length transforms, each of these to two eighth-length ones, etc. This decomposition continues until we are left with length-2
transforms. This transform is quite simple, involving only additions. Thus, the first stage of the FFT has N/2
length-2 transforms (see the bottom part of Figure 1). Pairs of these transforms are combined by adding one to the other multiplied by a complex exponential. Each pair requires 4 additions and 2
multiplications, giving a total number of computations equaling 6·N/4 = 3N/2. This number of computations does not change from stage to stage. Because the number of stages, the number of times the length can be divided by two, equals
log₂ N, the number of arithmetic operations equals (3N/2) log₂ N, which makes the complexity of the FFT O(N log N).
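The even/odd decomposition above, applied recursively down to the shortest transforms, can be sketched as follows. This is a minimal illustration, not the textbook's code; the names are ours:

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    evens = fft(x[0::2])   # N/2-length DFT of the even-numbered elements
    odds = fft(x[1::2])    # N/2-length DFT of the odd-numbered elements
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # the complex exponential
        out[k] = evens[k] + w * odds[k]         # butterfly: sum
        out[k + n // 2] = evens[k] - w * odds[k]  # butterfly: difference
    return out

def dft(x):
    """Direct O(N^2) DFT, for comparison."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]
```

For any power-of-two length the two functions agree to rounding error, while the recursive version performs O(N log N) rather than O(N²) operations.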
Length-8 DFT decomposition
Doing an example will make computational savings more obvious. Let's look at the details of a length-8 DFT. As shown on Figure 2, we first decompose the DFT into two length-4 DFTs, with the outputs
added and subtracted together in pairs. Considering Figure 2 as the frequency index goes from 0 through 7, we recycle values from the length-4 DFTs into the final calculation because of the
periodicity of the DFT output. Examining how pairs of outputs are collected together, we create the basic computational element known as a butterfly (Figure 2).
By considering together the computations involving common output frequencies from the two half-length DFTs, we see that the two complex multiplies are related to each other, and we can reduce our
computational work even further. By further decomposing the length-4 DFTs into two length-2 DFTs and combining their outputs, we arrive at the diagram summarizing the length-8 fast Fourier transform
(Figure 1). Although most of the complex multiplies are quite simple (multiplying by e^(−iπ/2) = −i
means swapping real and imaginary parts and changing their signs), let's count those for purposes of evaluating the complexity as full complex multiplies. We have
N/2 = 4 complex multiplies and N = 8 complex additions for each stage and log₂ N = 3
stages, making the number of basic computations (3N/2) log₂ N = 36,
as predicted.
Note that the ordering of the input sequence in the two parts of Figure 1 aren't quite the same. Why not? How is the ordering determined?
The upper panel has not used the FFT algorithm to compute the length-4 DFTs while the lower one has. The ordering is determined by the algorithm.
Other "fast" algorithms were discovered, all of which make use of how many common factors the transform length N has. In number theory, the number of prime factors a given integer has measures how
composite it is. The numbers 16 and 81 are highly composite (equaling 2^4 and 3^4,
respectively), the number 18 is less so (2·3^2),
and 17 not at all (it's prime). In over thirty years of Fourier transform algorithm development, the original Cooley-Tukey algorithm is far and away the most frequently used. It is so
computationally efficient that power-of-two transform lengths are frequently used regardless of the actual length of the data.
Suppose the length of the signal were 500. How would you compute the spectrum of this signal using the Cooley-Tukey algorithm? What would the length N of the transform be?
The transform can have any length greater than or equal to the actual duration of the signal. We simply “pad” the signal with zero-valued samples until a computationally advantageous signal length results.
Recall that the FFT is an algorithm to compute the DFT. Extending the length of the signal this way merely means we are sampling the frequency axis more finely than required. To use the Cooley-Tukey
algorithm, the length of the resulting zero-padded signal can be 512, 1024, etc. samples long.
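The zero-padding described in the answer can be sketched as follows (the helper name and the length-500 signal are illustrative):

```python
def next_power_of_two(n):
    """Smallest power of two greater than or equal to n."""
    m = 1
    while m < n:
        m *= 2
    return m

signal = [0.5] * 500   # a signal of length 500, as in the exercise
padded = signal + [0.0] * (next_power_of_two(len(signal)) - len(signal))
print(len(padded))     # → 512, a computationally advantageous length
```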
Investment Valuation Ratio (IVR) | Types
An Investment Valuation Ratio, also called an Assessment Ratio, is a ratio investors use to value a company's stock. Valuation ratios are popular and widely used by investors to
guide investment decisions in the stock market because they clearly show the relationship between the cost of an investment and the benefits obtained from it. With these
Investment Valuation Ratios, investors can determine whether a company's stock is "expensive" or "cheap".
The types of Investment Valuation Ratio
There are several types of commonly used Valuation Ratios for assessing the "value" of a company, including the Dividend Yield, the Price to Earnings Ratio (PER), the Price to Sales Ratio (PSR), the Price to Book
Value Ratio (PBVR), and the Price to Cash Flow Ratio (PCFR).
Dividend Yield
The Dividend Yield, or Dividend Yield Ratio, is an investment valuation ratio that compares the cash dividend distributed to shareholders with the stock price (Dividend per Share / Market Value per
Share). A Dividend Yield is usually expressed as a percentage (%). It is often used to calculate the cash an investor receives from an investment in a stock. With the Dividend
Yield, investors can find out how much dividend they get for each dollar they invest in a stock.
Price to Earnings Ratio (PER)
The Price to Earnings Ratio, often abbreviated PER (P/E Ratio), is an investment valuation ratio that compares a company's current price per share with its earnings per share (Price per Share /
Earnings per Share). By calculating the P/E Ratio, we can figure out how much the market is willing to pay for the income or profits of an enterprise. The Price to Earnings
Ratio is also called the price-to-income ratio.
Price to Sales Ratio (PSR)
The Price to Sales Ratio (PSR or P/S Ratio) is a valuation ratio that compares a company's stock price with its annual sales (Price per Share / Revenue per Share). Like the Price to
Earnings Ratio (PER) and the Price/Earnings to Growth Ratio (PEG), the Price to Sales Ratio (PSR) is typically used to measure the value of a company's shares.
Price to Book Value Ratio (PBVR)
The Price to Book Value (PBV) ratio is an investment valuation ratio often used by investors to compare the market value of a company's shares with its book value. The PBV ratio
indicates how much shareholders pay for the net assets of the company. This ratio helps investors compare the market value, or price they pay per share, with a traditional
measure of a company's worth. The Price to Book Value Ratio is also referred to as the price-to-book ratio.
Price to Cash Flow Ratio (PCFR)
The Price to Cash Flow Ratio (PCFR or P/CF Ratio) is an investment valuation ratio that compares a company's share price with its cash flow. In other words, the Price to
Cash Flow Ratio indicates the amount investors pay for the cash flow generated by the company. It is often referred to as the price-to-cash-flow ratio.
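The five ratios above can be computed directly from per-share figures. The numbers here are hypothetical, purely for illustration of the formulas:

```python
# Hypothetical per-share figures for an example company.
price_per_share = 50.0
dividend_per_share = 2.0
earnings_per_share = 4.0
sales_per_share = 25.0
book_value_per_share = 20.0
cash_flow_per_share = 5.0

dividend_yield = dividend_per_share / price_per_share * 100  # in %
per = price_per_share / earnings_per_share    # P/E Ratio
psr = price_per_share / sales_per_share       # P/S Ratio
pbv = price_per_share / book_value_per_share  # P/B Ratio
pcfr = price_per_share / cash_flow_per_share  # P/CF Ratio
print(dividend_yield, per, psr, pbv, pcfr)    # → 4.0 12.5 2.0 2.5 10.0
```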
Copyright (c) 2007 Yitzak Gale, Eric Kidd
License BSD-style (see the file LICENSE)
Maintainer R.Paterson@city.ac.uk
Stability experimental
Portability portable
Safe Haskell Safe
Language Haskell98
The MaybeT monad transformer extends a monad with the ability to exit the computation without returning a value.
A sequence of actions produces a value only if all the actions in the sequence do. If one exits, the rest of the sequence is skipped and the composite action exits.
For a variant allowing a range of exception values, see Control.Monad.Trans.Except.
The MaybeT monad transformer
newtype MaybeT m a Source #
The parameterizable maybe monad, obtained by composing an arbitrary monad with the Maybe monad.
Computations are actions that may produce a value or exit.
The return function yields a computation that produces that value, while >>= sequences two subcomputations, exiting if either computation does.
• runMaybeT :: m (Maybe a)
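A short usage sketch (ours, not part of the Haddock page) showing the short-circuiting behaviour described above:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT (..))
import Text.Read (readMaybe)

-- Each step may exit: if either readMaybe returns Nothing, the rest of
-- the sequence is skipped and runMaybeT yields Nothing.
parseSum :: String -> String -> MaybeT IO Int
parseSum a b = do
  x <- MaybeT (pure (readMaybe a))
  y <- MaybeT (pure (readMaybe b))
  lift (putStrLn "both inputs parsed")
  pure (x + y)

main :: IO ()
main = do
  runMaybeT (parseSum "2" "3") >>= print     -- prints the message, then Just 5
  runMaybeT (parseSum "2" "oops") >>= print  -- no message; prints Nothing
```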
MonadTrans MaybeT Source #
Monad m => Monad (MaybeT m) Source #
Functor m => Functor (MaybeT m) Source #
MonadFix m => MonadFix (MaybeT m) Source #
Monad m => MonadFail (MaybeT m) Source #
(Functor m, Monad m) => Applicative (MaybeT m) Source #
Foldable f => Foldable (MaybeT f) Source #
Traversable f => Traversable (MaybeT f) Source #
Eq1 m => Eq1 (MaybeT m) Source #
Ord1 m => Ord1 (MaybeT m) Source #
Read1 m => Read1 (MaybeT m) Source #
Show1 m => Show1 (MaybeT m) Source #
MonadZip m => MonadZip (MaybeT m) Source #
MonadIO m => MonadIO (MaybeT m) Source #
(Functor m, Monad m) => Alternative (MaybeT m) Source #
Monad m => MonadPlus (MaybeT m) Source #
(Eq1 m, Eq a) => Eq (MaybeT m a) Source #
(Ord1 m, Ord a) => Ord (MaybeT m a) Source #
(Read1 m, Read a) => Read (MaybeT m a) Source #
(Show1 m, Show a) => Show (MaybeT m a) Source #
Lifting other operations
STAT20060: A Casino Offers A Variety Of Gambling Opportunities To Its Customers, Including An Extensive Range Of “One-Armed Bandit” Machines: Statistics & Probability Assignment, UCD, Ireland
University University College Dublin (UCD)
Subject STAT20060: Statistics & Probability
Q1 A casino offers a variety of gambling opportunities to its customers, including an extensive range of “one-armed bandit” machines. These machines sometimes pay out a jackpot and the mean time
between such pay-outs for the casino is 3 hours and 14 minutes. If these can reasonably be modeled with a Poisson distribution how likely is it that the number of jackpots in a 10-hour period will be
at least 4?
Q2 The amount paid out in a jackpot by the one-armed bandits can be modeled as being normally distributed with a mean €121 and a standard deviation €48. What is the probability that the next jackpot
pay-out will be between €83 and €155 inclusive?
Q3 A bet based on a card game is available to players whereby a number of cards are removed by the dealer from a deck of 52, leaving 10 hearts, 9 diamonds, and 12 clubs. From these, a total of 7 cards
are drawn and the player wins if the number of hearts selected is 3. How likely is it that a player will win such a bet?
Q4 A different card game involves the dealer taking any 7 cards from the pack. The player must blindly select and replace one of these 7 cards a total of 3 times. They win the game if they manage to
select any card more than once. How likely is it that a player will win this game?
Q5 Players can participate in a daily lotto game where they pick 5 numbers from the set 1, 2 ,… 8. The winning selection is determined by drawing balls from a drum – any player whose selection
matches the drawn ball is a winner. If 120 players have entered this draw on one particular day what is the probability that there will be at least 4 winners?
Probability assignment 2022 instructions
Often when we wish to estimate the probability of an event we can employ constructs such as probability models, counting principles etc. to get an exact answer. If this approach fails an alternative
might be to simulate the experiment in question and estimate the probability empirically.
The probability assignment requires that you take five problems in probability and analyze the probability of a particular event resulting from a (statistical) experiment. In particular, you must
calculate the exact or true probability (obtained using probability models/counting principles, etc.) , develop a computer simulation of repeated instances of the experiment and estimate the
probability empirically. You should carry out an assessment of how good the estimate is by quantifying its precision (as measured by the standard deviation)
and its accuracy (how close the estimate is to the true value). Finally, an error check statistic should be calculated, which you will find useful in verifying that your workings are correct.
The particular questions you will be required to consider can be found on Moodle – simply locate the file with your name / ID. The five will be based around one of a number of possible scenarios
(e.g. holiday resort, film festival, etc.). You will likely have the same scenario as some of your classmates, but the
parameters, and hence the answers, will be different.
Each of the five problems will be tackled by developing a function in R to be called q1() for question 1, q2() for question 2, etc. The five values to be determined for each question should be
returned from the function.
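As a sketch of the exact-versus-empirical approach the instructions describe, Q1 can be worked in Python rather than the required R (the rate comes from the question's mean of 3 h 14 min = 194 min between pay-outs; everything else, including variable names, is our own):

```python
import math
import random

# Expected number of jackpots in a 10-hour (600-minute) window.
rate = 10 * 60 / 194

# Exact answer: P(X >= 4) = 1 - P(X <= 3) for X ~ Poisson(rate).
exact = 1 - sum(math.exp(-rate) * rate**k / math.factorial(k) for k in range(4))

# Empirical answer: simulate Poisson arrivals by summing exponential
# inter-arrival times over one 10-hour window per trial.
random.seed(1)
trials = 200_000
hits = 0
for _ in range(trials):
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)  # time measured in 10-hour windows
        if t > 1.0:
            break
        n += 1
    if n >= 4:
        hits += 1

print(exact, hits / trials)  # the two values should agree closely
```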
Shrinkage Optimization in Talc- and Glass-Fiber-Reinforced Polypropylene Composites
Department of Mechanical Engineering, Yonsei University, Seoul 03722, Korea
Author to whom correspondence should be addressed.
Submission received: 8 February 2019 / Revised: 28 February 2019 / Accepted: 4 March 2019 / Published: 6 March 2019
The shrinkage of reinforced polymer composites in injection molding varies, depending on the properties of the reinforcing agent. Therefore, the study of optimal reinforcement conditions, to minimize
shrinkage when talc and glass fibers (GF) (which are commonly used as reinforcements) are incorporated into polypropylene (PP), is required. In this study, we investigated the effect of reinforcement
factors, such as reinforcement type, reinforcement content, and reinforcement particle size, on the shrinkage, and optimized these factors to minimize the shrinkage of the PP composites. We measured
the shrinkage of injection-molded samples, and, based on the measured values, the optimal conditions were obtained through analysis of variance (ANOVA), the Taguchi method, and regression analysis.
It was found that reinforcement type had the largest influence on shrinkage among the three factors, followed by reinforcement content. In contrast, the reinforcement size was not significant,
compared to the other two factors. If the reinforcement size was set as an uncontrollable factor, the optimum condition for minimizing directional shrinkage was the incorporation of 20 wt % GF and
that for differential shrinkage was the incorporation of 20 wt % talc. In addition, a shrinkage prediction method was proposed, in which two reinforcing agents were incorporated into PP, for the
optimization of various dependent variables. The results of this study are expected to provide answers about which reinforcement agent should be selected and incorporated to minimize the shrinkage of
PP composites.
1. Introduction
Injection molding is used in the production of complex plastic products [
]. In the case of plastics, in this process, molten resin is injected into a mold and is then left to cool and solidify into the final plastic product. During this process, the cooling in mold causes
shrinkage, and the product is reduced in size, compared to the mold dimensions. This phenomenon occurs as a result of the difference in the cooling rate at the surface of the product and that of the
interior [
]. If an injection-molded part experiences a large direction-dependent shrinkage deviation, its dimensional stability is seriously affected. This might also cause warpage in this part. Therefore,
various studies have been conducted to reduce such shrinkage and warpage problems [
Zafar et al. applied a microcellular foaming process to reduce linear and volumetric shrinkage of the injection-molded parts [
]. The authors used acetal copolymer as a material, and by applying the foaming process, they were able to reduce the shrinkage and weight of the injection-molded samples. Jin et al. carried out a
finite element analysis to predict the residual stress and distortion in smartphone baseplates manufactured by die-casting and injection-molding processes [
]. The results of the finite element analyses were compared with the actual experimental values, and it was found that the thickness of the plate caused an uneven residual stress, which, in turn,
created local distortions. Bensingh et al. analyzed the volumetric shrinkage and deflection in an injection-molded Bi-aspheric lens, using a computer numerical simulation [
]. Polycarbonate was used as the material, and the optimal injection-molding process conditions to minimize volumetric shrinkage were identified. In addition, experiments with the optimal process
parameters were carried out, and it was found that the injection-molded Bi-aspheric lens had a shallow and steep surface profile accuracy.
Direction-dependent shrinkage deviation is more pronounced when a reinforcing agent is incorporated into a polymer, forming a polymer composite, than for the polymer alone. In
particular, a reinforcing agent with a high aspect ratio exhibits a large shrinkage deviation, depending on the direction in which it aligns with the flow of the polymer [
]. As a result, shrinkage in the direction parallel to that of the polymer flow (that is, the flow direction (FD)) is different from that in the direction perpendicular to that of polymer flow (that
is, the transverse direction (TD)). Depending on the reinforcement type, polymer composites exhibit different shrinkage tendencies, so the choice of reinforcement is very important. Therefore, many
studies have been conducted to confirm the shrinkage of reinforced polymers [
]. For example, Juraeva et al. predicted the mechanical properties and volumetric shrinkage of injection-molded automobile components [
]. They investigated various polymer composites compounded from six types of base materials (polyoxymethylene, polyamide, polyphthalamide, polyphenylene, and polyetherimide) and two types of
reinforcements (glass and carbon fibers), which were simulated to obtain the optimum volumetric shrinkage, tensile strength, and flexural strength, among other factors. In addition, Cadena-Perez et
al. measured the shrinkage and warpage of the glass-fiber-reinforced polypropylene (PP), as a function of the compatibilizers used in the injection molding [
]. They confirmed that the shrinkage and warpage were reduced by increasing the content of the compatibilizing agents.
Many studies have been carried out on the shrinkage of polymer composites containing only one reinforcing agent to date, but there is a lack of research into the differences in the shrinkage of
reinforced polymer composites, depending on the type of reinforcement used in injection molding. Therefore, in this study, we aimed to investigate how shrinkage changes with the reinforcement type,
reinforcement content, and reinforcement size, when talc and glass fiber (GF) are incorporated with the base material, PP. Talc and GF are commonly used reinforcements in industry and are worthy of
analysis. PP is worthy of analysis as a semi-crystalline polymer resin that has a thermal behavior that is different from that of amorphous polymers [
]. We compared the variation in the shrinkage trends at different locations in the injection-molded parts. This is because the shrinkage tendencies of talc- and GF-reinforced PP composites, in
different regions, have not been extensively studied. In addition, optimization using the Taguchi method, which sets the reinforcement particle size as a noise factor, and regression analysis as a
function of the reinforcement content, are new approaches that have not been reported before.
For this purpose, talc-reinforced PP and GF-reinforced PP were injection-molded under different conditions, and the differences in the shrinkage in the FD and TD, as well as shrinkage deviation
between directions, were measured. On the basis of the experimental results, the optimal conditions for shrinkage prevention were identified through analysis of variance (ANOVA), the Taguchi method,
and regression analysis. In particular, in this study, the particle size of the reinforcing agent was set as an uncontrollable factor (noise factor in the Taguchi method). This was expected to
provide robust conditions for factors that are not precisely controlled, such as the particle size, in the actual injection-molding process. It was expected that this study would provide accurate
guidelines for minimizing shrinkage in the injection molding of polymer composites.
2. Materials and Methods
2.1. Materials
PP supplied by Hanwha Total (Seoul, Korea) was used as the base material. Two types of talc (Hanwha Total, Seoul, Korea) were used as a reinforcing material. Talc is a thin plate-like reinforcing
agent [
]. The average particle sizes (D50) of the two types of talc were 5.2 and 1.8 μm (denoted big and small, respectively). Each type had a specific gravity of 2.7. In addition, two types of GF (Owens
Corning, Toledo, Ohio, America) were used as reinforcing materials. GF is an elongated thread-like reinforcing agent [
]. The average nominal diameters of the two types of GF were 13 and 10 μm (denoted big and small, respectively), and the chop lengths were both 4.3 mm. In addition, the specific gravities were both
2.6. In accordance with the reinforcement factors, the base material and the reinforcing agents were compounded, using a twin-screw extruder (TEK 25, SMPLATEK, Ansan, Korea) to produce
talc-reinforced PP composites (PP/T) and glass-fiber-reinforced PP composites (PP/GF).
2.2. Sample Preparation
The test specimen for shrinkage was prepared in the shape of a thin rectangle of dimensions 150 mm × 100 mm × 1.8 mm. Test samples (parts) were injection molded with a 120-ton injection molding
machine (WOOJIN SELEX-E120, Chungcheong, Korea). The injection temperature was 200 °C and the mold temperature was 40 °C. The injection pressure was 8 MPa. In addition, the holding pressure was 6.4
MPa (80% of the injection pressure) and the back pressure was 2 MPa. The injection speed was 40 mm/s, and the rotation speed of the injection molding screw was 60 rpm. The cooling time of the test
samples was 40 s. More than seven samples were injection-molded, for each experimental condition.
2.3. Shrinkage Measurements
Forty-eight hours after injection molding, the shrinkage was determined by measuring the lengths of the test specimens with Vernier calipers. Seven samples were measured for each experimental condition, the maximum and minimum values were excluded, and the shrinkage was calculated from the remaining five samples at each condition, determined as
$\mathrm{Shrinkage}\,(\%) = \frac{l_{mold} - l_{part}}{l_{mold}} \times 100,$
where $l_{mold}$ represents the mold length and $l_{part}$ represents the part length. Locations where shrinkage was measured and averaged on the test specimen are shown in
Figure 1
. There were two shrinkage measurement locations for the FD (red line in
Figure 1
) and TD (blue line in
Figure 1
). Differential shrinkage, which represents the deviation of the shrinkage between flow and transverse directions, was calculated using Equation (2).
$\mathrm{Differential\ shrinkage}\,(\%) = \left| \mathrm{Shrinkage\ in\ FD} - \mathrm{Shrinkage\ in\ TD} \right|$
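The shrinkage and differential shrinkage equations above reduce to two one-line computations. A minimal sketch (function names are ours, not the paper's):

```python
def shrinkage_percent(l_mold: float, l_part: float) -> float:
    """Linear shrinkage in percent from mold and part lengths."""
    return (l_mold - l_part) / l_mold * 100.0

def differential_shrinkage(s_fd: float, s_td: float) -> float:
    """Absolute difference between FD and TD shrinkage, in percent."""
    return abs(s_fd - s_td)

# A 150 mm mold producing a 148.5 mm part shrinks by 1.0%.
print(shrinkage_percent(150.0, 148.5))                  # 1.0
# PP/T at 20 wt % (Section 3.1): |1.257 - 1.216| = 0.041%.
print(round(differential_shrinkage(1.257, 1.216), 3))   # 0.041
```

The second call reproduces the 0.041% differential shrinkage quoted for PP/T at 20 wt % in Section 3.1.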
2.4. Distribution of Reinforcements
We confirmed the distribution of the reinforcements (talc and GF) in the base material, PP, using X-ray micro-computed tomography (Micro-CT). Using a Micro-CT scanner (Skyscan 1272, Bruker Co., Billerica, Massachusetts, United States), we captured cross-sectional images of the skin-core structure, to examine the distribution of the reinforcements and their orientation in PP.
2.5. Design of Experiments
The experiments were designed to find the optimal type and content of reinforcement that would minimize shrinkage. There were three factor types in the experiment. The type and content of the
reinforcement were controllable factors, whereas the size of the reinforcement was set as an uncontrollable noise factor (
Table 1
). In this study, we conducted experiments on all conditions, in random order (
Table 2
). The shrinkage in the FD and TD, as well as the differential shrinkage, were compared at each condition.
2.6. Optimization Methods
From the experimental results, ANOVA, the Taguchi method, and regression analysis were applied to find the optimum conditions for preventing the shrinkage in the FD and TD, as well as the
differential shrinkage.
ANOVA is a method for identifying which factors are the most influential. The dispersion of the characteristic values is expressed as a sum of squares divided by the corresponding number of degrees of freedom, and the variance attributable to each factor is compared with the variance of the error [
The Taguchi method finds an optimal condition that is robust against the noise factors, which cannot themselves be controlled, by adjusting the levels of the controllable factors. This robust condition corresponds to a high signal-to-noise (S/N) ratio. Smaller shrinkage in the FD and TD and smaller differential shrinkage are better, so the smaller-the-better S/N ratio equation is
shown in Equation (3) [
$S/N\ \mathrm{ratio}\,(\mathrm{dB}) = -10 \log_{10} \left( \frac{\sum_{i=1}^{n} y_i^2}{n} \right),$
where $n$ is the number of replications and $y_i$ is the experimental value.
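The smaller-the-better S/N ratio in Equation (3) can be checked directly from any set of replicate values. A minimal sketch (function name is ours); feeding it the ten condition 8 FD replicates from Table 9 reproduces the 11.358 dB figure quoted in Section 3.5:

```python
import math

def sn_ratio_smaller_better(values):
    """Smaller-the-better S/N ratio: -10 * log10( sum(y_i^2) / n )."""
    mean_square = sum(y * y for y in values) / len(values)
    return -10.0 * math.log10(mean_square)

# Condition 8 (GF, 20 wt %) FD shrinkage replicates, N1 then N2 (Table 9):
cond8_fd = [0.271, 0.276, 0.260, 0.260, 0.250,
            0.260, 0.306, 0.281, 0.281, 0.255]
print(round(sn_ratio_smaller_better(cond8_fd), 3))  # 11.358
```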
Regression analysis is an analytical method that determines the correlation between several independent variables and dependent variables. A model showing the correlation between variables can be
obtained by statistical methods and can be used to predict untested experimental conditions [
3. Results and Discussion
3.1. Effect of the Reinforcement on Shrinkage
We measured and analyzed the shrinkage in three directions, upon the incorporation of talc and GF into PP, in which the reinforcement size factor was small.
Figure 2
Figure 3
show the shrinkage tendencies in FD and TD. The shrinkage was reduced with the addition of both talc and GF. The shrinkage of PP decreased in all directions, as the content of both talc and GF
increased. Therefore, it was reasonable to add talc or GF, to reduce the shrinkage of PP. The reason for this was that both types of reinforcement had a lower coefficient of thermal expansion than
the polymer matrix [
In particular, the addition of GF significantly reduced the shrinkage of PP in the two directions, compared with the addition of talc. The shrinkages of PP/T were 1.257% and 1.216% (FD and TD), and
the shrinkages of PP/GF were 0.277% and 0.462% (FD and TD), at a reinforcement content of 20 wt %. The shrinkages of PP/T were thus about 4.54 times (FD) and 2.63 times (TD) those of PP/GF. Therefore, the incorporation of GF into PP was more effective than the incorporation of talc, for minimizing shrinkage.
The shrinkage of PP/GF was reduced sharply at a GF content of 5 wt %, but the shrinkage of PP/T showed a relatively slight reduction at a talc content of 5 wt %. When the reinforcement content was 5
wt %, the shrinkage of PP/T decreased by 0.006% (FD) and 0.016% (TD), compared to that of PP alone, but the shrinkage of PP/GF decreased by 0.967% (FD) and 0.606% (TD), compared to that of PP. If the
shrinkage of PP by the incorporation of a small amount of reinforcement (5 wt %) was required, GF was more useful than talc. This was expected because GF ($5 \times 10^{-6}$ 1/°C) had a lower thermal expansion coefficient than talc ($10^{-5}$ 1/°C) [
The shrinkage in the thickness direction decreased when the reinforcements were incorporated, as compared to when no reinforcement was added. However, no constant trend was observed with the
increasing reinforcement content (
Figure 4
), indicating that the shrinkage in the thickness direction was not significantly affected by the reinforcement content. Since the reinforcements were oriented close to the FD, and the thickness was
much smaller than the dimensions of the FD and TD, the minimization of shrinkage in the thickness direction, upon the addition of reinforcement, was insignificant. In addition, the shrinkage in the
thickness direction (more than 4%) was much larger than the shrinkage in the FD or TD as a whole. This trend was similar to that observed in other studies on the shrinkage of PP [
The differential shrinkage of PP/GF was about five times larger than that of PP/T, on average (
Figure 5
). When talc was incorporated, the differential shrinkage was similar to that of PP without reinforcement. The differential shrinkage of PP with no incorporated reinforcement was 0.063%, and the
differential shrinkages of PP/T were from 0.041% (at 20 wt %) to 0.073% (at 5 wt %). The base material, PP, itself underwent anisotropic shrinkage (high differential shrinkage), and this could be
improved by incorporating talc. PP is a semicrystalline polymer and is different from the amorphous polymers, which undergo isotropic shrinkage. Semicrystalline polymers show a sharp melting
transition during melting and partial crystallization, during cooling [
]. As the crystallization progressed, forming a regular lattice chain, the volume of PP reduced sharply. Therefore, the shrinkage of PP became larger as the crystallized portions increased. On the
other hand, when GF was incorporated, the differential shrinkage was much higher than that of PP without reinforcement. The differential shrinkages of PP/GF were 0.186% (at 20 wt %) to 0.321% (at 15
wt %).
More specifically, as the content of talc increased, the shrinkage of PP/T decreased similarly in both directions (
Figure 6
). The shrinkage of PP/T decreased by 0.111% (FD) and 0.105% (TD), on average, for every 5 wt % increase in talc content. On the other hand, the shrinkage of PP/GF in the FD decreased more than that in the TD, as the GF content increased (
Figure 7
). The shrinkage of PP/GF decreased by 0.356% (FD) and 0.294% (TD), on average, for every 5 wt % increase in GF content.
As the differential shrinkage increased, warpage of the injection molded parts occurred, so it was best to minimize the differential shrinkage. Compared with talc, GF minimized the shrinkage to a
greater extent, but the effect on the differential shrinkage was rather worse. Therefore, considering warpage, talc was a better choice than GF (
Figure 8
). This was because the GF had a relatively higher aspect ratio than talc, which made it easier to align along the FD. The shrinkage in FD was greatly reduced if the orientation of GF was close to
FD. However, since there was no remarkable reduction in shrinkage in the TD, a differential shrinkage occurred between the directions, and this worsened as the aspect ratio increased [
3.2. Shrinkage as a Function of Location
We confirmed the tendency of shrinkage as a function of the measurement location, when talc and GF were incorporated into PP, where the reinforcement size factor was small. As shown in
Figure 9
Figure 10
, the shrinkages on the left and right sides of the FD and bottom (near the gate) and top (far from the gate) of the TD were compared (the geometric measurement locations are shown in
Figure 1
). As a whole, the addition of the reinforcement increased the shrinkage difference. In addition, the difference in shrinkage (by position) in the TD was larger than that in the FD. The shrinkage
measurement positions in the FD were isotropic from the gate and exhibited a comparably similar reinforcement orientation. However, the shrinkage measurement positions in the TD were at different
distances from the gate and exhibited a different reinforcement orientation.
In the FD, PP/T and PP/GF showed similar shrinkage in the left and right positions, but the shrinkage difference of PP/GF was smaller than that of PP/T. The average shrinkage difference of PP/T
was 0.123%, while that of PP/GF was 0.022%. GF, with a high aspect ratio, was more uniformly orientated in the FD and exhibited a more uniform distribution, relative to the randomly oriented talc.
In the TD, PP/T showed a similar shrinkage on the bottom and top, although the shrinkage of PP/GF at the two positions was different. The average shrinkage difference of PP/T was 0.165%, while that
of PP/GF was 0.259%. It could be seen that the shrinkage difference of PP/GF was larger than that of PP/T, which could be attributed to the orientation of GF.
3.3. Distribution of Reinforcement
We investigated the different shrinkage trends of PP/T and PP/GF that were caused by the reinforcement orientation. For this, we examined the reinforcement distributions by Micro-CT (
Figure 11
Figure 12
Figure 13
). The reinforcement content was 20 wt %, and the size was small in this measurement.
It is clear from
Figure 11
that all the talc in the cross-sections of PP/T was randomly distributed. No preferred orientation appeared in the flow, transverse, or thickness directions.
On the other hand, GF in the cross-sections of PP/GF was very distinctly oriented (
Figure 12
). In particular, glass fibers exhibited different orientations in the skin layers near the surface of the sample and in the core inside it (
Figure 13
). The skin layers were divided into (i) the outer skin layers and (ii) the core of the skin layers, and the GF orientation differed between them: the fibers were randomly oriented in the outer skin layers but were aligned parallel to the flow direction in the core of the skin layers. In the core, the central layer of the sample, they were oriented perpendicular to the flow direction. The GF orientation was consistent within each layer, and this orientation was found to cause a significant differential shrinkage between the FD and the TD. This fiber orientation in the injection mold was similar to that observed in other studies [
3.4. Analysis of Variance (ANOVA)
One-way ANOVA was performed for the three factors (reinforcement type, reinforcement content, and reinforcement size) and the three outcomes (shrinkages in the FD and TD, and the differential
shrinkage) of the reinforcements. The number of replications was five. F-tests were conducted to determine whether the mean values for each level of the factor were the same or not. If the p-value of
a factor was less than 0.05, the factor could be assumed to be significant.
Table 3
shows the ANOVA results for the shrinkage in the FD. For the reinforcement type factor, the p-value was 0, making it the most significant of the three factors. In addition, the content and size of the reinforcement were not significant factors because their p-values were greater than 0.05. The ANOVA results for shrinkage in the TD showed that the reinforcement type factor (p-value: 0) was the most significant factor (
Table 4
). The reinforcement content was the next most important factor (p-value: 0.005), and the reinforcement size was not a significant factor.
The ANOVA results for differential shrinkage are shown in
Table 5
. The p-value of the reinforcement type factor was 0, making it the most significant factor for differential shrinkage. The other two factors were relatively insignificant (p-values: 0.290 and 0.567).
The analysis results for the three shrinkage values indicated that the type of reinforcement was the most significant factor (average p-value: 0). The next most significant factor was the
reinforcement content, but the influence was not significant, compared to the effect of the reinforcement type. Finally, the reinforcement size was found to be an insignificant factor, overall.
Therefore, to improve the shrinkage of PP, the factors should be considered in the order of reinforcement type and then reinforcement content. The reinforcement size does not require consideration unless there is a large difference between the particle sizes.
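The F-ratios underlying Tables 3-5 come from the standard between-group/within-group variance decomposition. A small, self-contained sketch with illustrative data (not the paper's raw measurements; function name is ours):

```python
def one_way_f(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # sum of squares between group means and the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # sum of squares of values around their own group mean
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k)), k - 1, n - k

# One factor at two levels, five replicates each (illustrative values):
f_ratio, df_b, df_w = one_way_f([[1.66, 1.62, 1.64, 1.59, 1.60],
                                 [0.53, 0.53, 0.62, 0.54, 0.60]])
```

A large F-ratio with a small p-value (looked up from the F distribution with df_b and df_w degrees of freedom) marks the factor as significant, which is how the dominance of the reinforcement type shows up in Tables 3-5.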
Two-way ANOVA was performed to examine how the two variables (reinforcement type and reinforcement content) affected shrinkage and whether there was any interaction between these factors.
First, the influence of the factors on the shrinkage in the FD and TD was analyzed; the order of influence was reinforcement type, reinforcement content, and the interaction between reinforcement type and reinforcement content. As the p-values of all three factors were less than 0.05, the factors were all significant for shrinkage in the FD and TD (
Table 6
Table 7
). However, the p-values of the factors were all close to zero; thus, the relative importance of the factors could be identified from the differences in the F-ratios. The influence of each of the three factors was
very different, based on F-ratio values. The F-ratios of the reinforcement type factor were 9652.18% (FD) and 3827.91% (TD), those of the reinforcement content factor were 243.19% (FD) and 239.12%
(TD), and those of the interaction between reinforcement type and reinforcement content were 5.64% (FD) and 8.13% (TD). This difference could be confirmed by comparing the main effects (
Figure 14
Figure 15
).
Next, the influence of the three factors on differential shrinkage was confirmed in the order of reinforcement type, reinforcement content, and the interaction between reinforcement type and
reinforcement content. As the p-values of the three factors were less than 0.05, all factors were significant for differential shrinkage (
Table 8
). The F-ratio of each factor showed that the influence of the reinforcement type was overwhelming, but the effect of the reinforcement content was relatively small, compared to the effect on the
shrinkage in the FD and TD. The F-ratio of the reinforcement factor was 708.69%, that of the reinforcement content factor was 13.37%, and that of the interaction between the reinforcement type and
reinforcement content was 6.10%. This meant that the variation in the differential shrinkage was not large and did not depend on the reinforcement content. Thus, differential shrinkage could not be
controlled by varying the reinforcement content. This difference in influence could also be seen from the main effect comparison (
Figure 16
).
Finally, we examined the interaction between the reinforcement type and reinforcement content for the three outcomes. The magnitude of the interaction was readily apparent in the interaction plot (
Figure 17
). In the interaction plot, a small interaction is indicated by parallel lines, whereas intersecting lines indicate a large interaction [
]. The interaction plots were almost parallel, so the interaction between factors was low.
3.5. The Taguchi Method
The optimal conditions for the controllable factors for shrinkage were analyzed by the Taguchi method. The controllable factors were the reinforcement type (A) and the reinforcement content (B). The
noise factor was the reinforcement size (N). The levels of the factors were 2 for A (talc and GF), 4 for B (5, 10, 15, and 20 wt %), and 2 for N (big and small size).
The optimum conditions were calculated, based on the condition that the S/N ratio was high.
Table 9
Table 10
Table 11
show all shrinkage values and S/N ratios for each noise factor. In the case of shrinkage in the FD and TD, it was found that the optimum condition was obtained when 20 wt % of GF was incorporated
into the PP. The S/N ratios of shrinkage in FD and TD were the largest (11.358 dB (FD) and 6.621 dB (TD)) at condition number 8 with 20 wt % incorporated GF. On the other hand, differential shrinkage
was found to be optimum when 20 wt % of talc was incorporated into the PP. The S/N ratio of differential shrinkage was the largest (26.755 dB) at condition number 4, with 20 wt % talc incorporated.
These results suggest that GF was a better reinforcing agent than talc, for the reduction of directional shrinkage, whereas talc was a better reinforcement than GF, for the reduction of differential shrinkage.
3.6. Regression Analysis
According to the above analysis, if we want to minimize shrinkage in the FD and TD, we can incorporate as much GF as possible, and, if we want to minimize differential shrinkage, we can incorporate
as much talc as possible. However, if we want to minimize both directional shrinkages and the differential shrinkage, both reinforcing agents must be added.
To find the optimum mixed reinforcing conditions, regression analysis was performed with the reinforcement content as a variable. For the regression analysis, only the small size reinforcing agents
were used. Regression equations for the shrinkage of PP/T in the FD and TD and those of PP/GF in the FD and TD were calculated as a function of the reinforcement content. The experimental values of
PP/T showed a good fit with the linear regression model, and the experimental values of PP/GF showed a good fit with an exponential non-linear regression model (
Figure 18
Figure 19
). The regression equation for the PP composite, reinforced with talc and GF, was as follows:
$S_{\parallel} = \frac{w_{talc}}{w_{total}} S_{talc,\parallel} + \frac{w_{GF}}{w_{total}} S_{GF,\parallel},$
$S_{\perp} = \frac{w_{talc}}{w_{total}} S_{talc,\perp} + \frac{w_{GF}}{w_{total}} S_{GF,\perp}.$
Here, $S_{\parallel}$ is the total shrinkage in the FD, and $S_{\perp}$ is the total shrinkage in the TD. $S_{talc,\parallel}$ and $S_{talc,\perp}$ are the shrinkage regression equations for PP/T in the FD and TD, and $S_{GF,\parallel}$ and $S_{GF,\perp}$ are the shrinkage regression equations for PP/GF in the FD and TD. In addition, $w_{talc}$ and $w_{GF}$ represent the contents of incorporated talc and GF, and $w_{total}$ is the total reinforcement content ($w_{total} > 0$).
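The blend equations above are a mass-fraction rule of mixtures over the two single-reinforcement regression predictions. A sketch (the fitted regression coefficients are not reproduced in this excerpt, so the two model outputs are passed in as plain numbers; the function name is ours):

```python
def blend_shrinkage(w_talc, w_gf, s_talc, s_gf):
    """Weight each single-reinforcement regression prediction
    (s_talc, s_gf, in %) by its mass fraction of the total content."""
    w_total = w_talc + w_gf
    if w_total <= 0:
        raise ValueError("total reinforcement content must be positive")
    return (w_talc / w_total) * s_talc + (w_gf / w_total) * s_gf

# 11 wt % talc + 9 wt % GF, with illustrative regression outputs:
s_fd = blend_shrinkage(11, 9, s_talc=1.2, s_gf=0.3)  # ≈ 0.795
```

Sweeping w_talc from 20 down to 0 (with w_gf = 20 - w_talc) against the fitted models is how a table like Table 12 is generated.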
In industry, there are restrictions on the optimal conditions for various factors. For example, in a case where the reinforcement content is limited to 20 wt %, the shrinkage in the FD and TD should be 1.2% or less, and the differential shrinkage should be 0.1% or less, the incorporation of one kind of reinforcement alone, whether talc or GF, will be unsatisfactory. However, the addition of 11 wt % of talc and 9 wt % of GF can satisfy the required condition (
Table 12
). As another example, if the shrinkage in the FD and TD was 1% or less, the differential shrinkage should have been 0.15% or less, and the total reinforcing agent content should not have exceeded 20
wt %; again, this condition could not be achieved with a single reinforcing agent. However, the addition of 7 wt % of talc and 13 wt % of GF into the PP, could achieve this value. In this way, we
could predict optimal conditions for various dependent variables, using the regression model.
4. Conclusions
In this study, we investigated the optimal reinforcement factors to minimize the shrinkage of talc- and GF-reinforced PP composites, after injection molding. The reinforcement factors were
reinforcement type (talc and GF), reinforcement content (0, 5, 10, 15, and 20 wt %), and reinforcement size (big and small). The type and content of the reinforcement were controllable variables, and
reinforcement size was set as an uncontrollable variable. The samples for all experimental conditions were injection-molded to measure the differential shrinkage, as well as the shrinkages in the
flow direction, transverse direction, and thickness direction. In addition, the shrinkage of PP/T and PP/GF, at different measurement locations and the distribution of reinforcements in the samples,
were confirmed. For the measured shrinkage results, optimal conditions were obtained by applying ANOVA, the Taguchi method, and regression analysis. The reinforcement type was confirmed to be the most influential factor among reinforcement type, reinforcement content, and reinforcement size, and the reinforcement size was found to be relatively insignificant, compared to the
other factors. Therefore, when a reinforcing agent was incorporated into PP to minimize the shrinkage of PP, the factors should be considered in order of reinforcement type, reinforcement content,
and reinforcement size. The optimum condition with minimum shrinkage in the FD and TD, robust to the noise factor, was found to be 20 wt % of GF, and the optimal condition with minimum differential
shrinkage was found to be 20 wt % of talc. Finally, regression analysis models were prepared to identify the conditions where the shrinkage in the FD and TD, as well as the differential shrinkage,
satisfy specified values, simultaneously. It is expected that this study will be helpful for understanding the optimum reinforcement factors to minimize the shrinkage of PP composites.
Author Contributions
Conceptualization, Y.R.; methodology, B.C.K.; software, Y.R.; formal analysis, Y.R.; investigation, Y.R.; resources, B.C.K.; data curation, J.S.S.; writing—original draft preparation, Y.R.;
writing—review and editing, J.S.S.; visualization, Y.R.; supervision, S.W.C.; project administration, S.W.C.; funding acquisition, S.W.C.
Funding
This research was funded by the Korea Institute for Advancement of Technology, grant number P0005775, and the APC was funded by the Korea Institute for Advancement of Technology.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 2. Shrinkage of talc-reinforced polypropylene composites (PP/T) and glass-fiber-reinforced polypropylene composites (PP/GF) in the flow direction, depending on the reinforcement type.
Figure 9. Shrinkage in the flow direction, as a function of the measurement location: (a) PP/T and (b) PP/GF.
Figure 10. Shrinkage in the transverse direction, as a function of the measurement location: (a) PP/T and (b) PP/GF.
Figure 17. Interaction plot for the mean (a) shrinkage in the flow direction, (b) shrinkage in the transverse direction, and (c) differential shrinkage.
Figure 18. Regression analysis plot for the shrinkage of PP/T: (a) Shrinkage in the flow direction, and (b) shrinkage in the transverse direction.
Figure 19. Regression analysis plot for the shrinkage of PP/GF: (a) Shrinkage in the flow direction, and (b) shrinkage in the transverse direction.
Level Controllable Factor Noise Factor
Reinforcement Type Reinforcement Content (wt %) Reinforcement Size
1 Talc 5 Big
2 Glass fiber 10 Small
Number Reinforcement Type Reinforcement Content (wt %) Reinforcement Size
1 - 0 -
2 Talc 5 Big
3 Talc 5 Small
4 Talc 10 Big
5 Talc 10 Small
6 Talc 15 Big
7 Talc 15 Small
8 Talc 20 Big
9 Talc 20 Small
10 Glass fiber 5 Big
11 Glass fiber 5 Small
12 Glass fiber 10 Big
13 Glass fiber 10 Small
14 Glass fiber 15 Big
15 Glass fiber 15 Small
16 Glass fiber 20 Big
17 Glass fiber 20 Small
Factor Degrees of Freedom Sum of Squares Mean Square F-Ratio (%) p-Value
Reinforcement type 1 22.197 22.197 919.82 0.000
Reinforcement content 3 1.678 0.5593 1.90 0.137
Reinforcement size 1 0.0026 0.0026 0.01 0.927
Factor Degrees of Freedom Sum of Squares Mean Square F-Ratio (%) p-Value
Reinforcement type 1 10.866 10.866 366.91 0.000
Reinforcement content 3 2.036 0.6788 4.63 0.005
Reinforcement size 1 0.0237 0.0237 0.14 0.709
Factor Degrees of Freedom Sum of Squares Mean Square F-Ratio (%) p-Value
Reinforcement type 1 0.8248 0.8248 423.83 0.000
Reinforcement content 3 0.04670 0.01557 1.27 0.290
Reinforcement size 1 0.004117 0.004117 0.33 0.567
Factor Degrees of Freedom Sum of Squares Mean Square F-Ratio (%) p-Value
Reinforcement type 1 22.197 22.197 9652.18 0.000
Reinforcement content 3 1.6778 0.5593 243.19 0.000
Reinforcement type × content 3 0.0389 0.0130 5.64 0.002
Factor Degrees of Freedom Sum of Squares Mean Square F-Ratio (%) p-Value
Reinforcement type 1 10.866 10.866 3827.91 0.000
Reinforcement content 3 2.0363 0.6788 239.12 0.000
Reinforcement type × content 3 0.0693 0.0231 8.13 0.000
Factor Degrees of Freedom Sum of Squares Mean Square F-Ratio (%) p-Value
Reinforcement type 1 0.8249 0.8249 708.69 0.000
Reinforcement content 3 0.0467 0.01557 13.37 0.000
Reinforcement type × content 3 0.0213 0.0071 6.10 0.001
No. A B (wt %) Shrinkage (%) S/N Ratio (dB)
1 Talc 5 N1 1.664 1.618 1.644 1.587 1.598 −4.394
N2 1.705 1.700 1.654 1.700 1.710
2 Talc 10 N1 1.592 1.577 1.577 1.582 1.592 −3.835
N2 1.526 1.516 1.526 1.547 1.511
3 Talc 15 N1 1.414 1.373 1.419 1.414 1.399 −3.021
N2 1.419 1.444 1.434 1.439 1.404
4 Talc 20 N1 1.296 1.317 1.307 1.296 1.327 −2.164
N2 1.251 1.256 1.251 1.261 1.266
5 GF 5 N1 0.531 0.531 0.618 0.541 0.597 3.684
N2 0.740 0.730 0.750 0.740 0.704
6 GF 10 N1 0.541 0.500 0.556 0.531 0.536 6.374
N2 0.434 0.393 0.398 0.413 0.459
7 GF 15 N1 0.286 0.265 0.286 0.311 0.271 10.351
N2 0.316 0.316 0.296 0.332 0.347
8 GF 20 N1 0.271 0.276 0.260 0.260 0.250 11.358
N2 0.260 0.306 0.281 0.281 0.255
No. A B (wt %) Shrinkage (%) S/N Ratio (dB)
1 Talc 5 N1 1.595 1.551 1.572 1.545 1.551 −4.041
N2 1.602 1.595 1.636 1.639 1.633
2 Talc 10 N1 1.511 1.507 1.507 1.507 1.511 −3.489
N2 1.470 1.460 1.497 1.487 1.484
3 Talc 15 N1 1.328 1.328 1.345 1.355 1.352 −2.668
N2 1.379 1.386 1.372 1.386 1.362
4 Talc 20 N1 1.264 1.254 1.264 1.261 1.264 −1.861
N2 1.210 1.213 1.230 1.200 1.227
5 GF 5 N1 0.848 0.852 0.865 0.845 0.875 0.463
N2 1.034 1.021 1.048 1.031 1.021
6 GF 10 N1 0.821 0.818 0.804 0.794 0.825 2.252
N2 0.737 0.696 0.710 0.754 0.744
7 GF 15 N1 0.510 0.453 0.470 0.446 0.460 4.997
N2 0.629 0.632 0.673 0.652 0.629
8 GF 20 N1 0.466 0.466 0.480 0.470 0.470 6.621
N2 0.450 0.483 0.463 0.443 0.473
No. A B (wt %) Shrinkage (%) S/N Ratio (dB)
1 Talc 5 N1 0.069 0.067 0.072 0.043 0.046 23.038
N2 0.103 0.104 0.018 0.060 0.077
2 Talc 10 N1 0.082 0.070 0.070 0.075 0.082 23.958
N2 0.056 0.056 0.029 0.059 0.027
3 Talc 15 N1 0.086 0.045 0.074 0.058 0.047 24.676
N2 0.040 0.059 0.062 0.054 0.042
4 Talc 20 N1 0.032 0.063 0.043 0.036 0.063 26.755
N2 0.040 0.042 0.020 0.061 0.039
5 GF 5 N1 0.318 0.321 0.248 0.304 0.278 10.559
N2 0.294 0.291 0.297 0.291 0.316
6 GF 10 N1 0.280 0.318 0.248 0.263 0.289 10.600
N2 0.303 0.303 0.312 0.340 0.284
7 GF 15 N1 0.225 0.188 0.184 0.135 0.189 11.576
N2 0.312 0.316 0.377 0.321 0.282
8 GF 20 N1 0.196 0.191 0.220 0.210 0.220 14.092
N2 0.189 0.177 0.182 0.162 0.218
No. Talc (wt %) GF (wt %) Shrinkage in FD (%) Shrinkage in TD (%) Differential Shrinkage (%)
1 20 0 1.290 1.245 0.045
2 19 1 1.308 1.274 0.034
3 18 2 1.313 1.292 0.021
4 17 3 1.307 1.300 0.006
5 16 4 1.290 1.300 0.009
6 15 5 1.265 1.291 0.026
7 14 6 1.231 1.274 0.043
8 13 7 1.190 1.249 0.059
9 12 8 1.142 1.218 0.076
10 11 9 1.089 1.181 0.092
11 10 10 1.030 1.137 0.108
12 9 11 0.965 1.088 0.123
13 8 12 0.897 1.034 0.137
14 7 13 0.824 0.974 0.150
15 6 14 0.747 0.910 0.163
16 5 15 0.667 0.842 0.175
17 4 16 0.583 0.769 0.186
18 3 17 0.497 0.693 0.196
19 2 18 0.407 0.612 0.205
20 1 19 0.315 0.528 0.214
21 0 20 0.220 0.441 0.221
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Ryu, Y.; Sohn, J.S.; Kweon, B.C.; Cha, S.W. Shrinkage Optimization in Talc- and Glass-Fiber-Reinforced Polypropylene Composites. Materials 2019, 12, 764. https://doi.org/10.3390/ma12050764
AMA Style
Ryu Y, Sohn JS, Kweon BC, Cha SW. Shrinkage Optimization in Talc- and Glass-Fiber-Reinforced Polypropylene Composites. Materials. 2019; 12(5):764. https://doi.org/10.3390/ma12050764
Chicago/Turabian Style
Ryu, Youngjae, Joo Seong Sohn, Byung Chul Kweon, and Sung Woon Cha. 2019. "Shrinkage Optimization in Talc- and Glass-Fiber-Reinforced Polypropylene Composites" Materials 12, no. 5: 764. https://
Want to calculate how much money you need to spend on gas?
This calculator is designed to assist you in determining your gas expenses. Simply input the total distance traveled and the cost per mile, and it will calculate your overall gas cost!
Question asked by Filo student
JEE-Mathematics, Do yourself - 13: 1. If $f(x) = x|x|$, then $f^{-1}(x)$ equals-
d. Does not exist
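The worked solution sits behind the tutoring widget, but the question can be checked numerically. Assuming $f:\mathbb{R}\to\mathbb{R}$, $f(x) = x|x|$ equals $x^2$ for $x \ge 0$ and $-x^2$ for $x < 0$, so it is strictly increasing and invertible; a quick sketch verifies the candidate inverse $f^{-1}(y) = \operatorname{sign}(y)\sqrt{|y|}$:

```python
import math

def f(x):
    return x * abs(x)

def f_inv(y):
    # sign(y) * sqrt(|y|); copysign transfers the sign of y onto the root
    return math.copysign(math.sqrt(abs(y)), y)

# round-trip check on negative, zero, and positive inputs
for x in (-3.0, -0.5, 0.0, 2.0):
    assert abs(f_inv(f(x)) - x) < 1e-12
```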
Updated On: May 2, 2023
Topic: Functions
Subject: Mathematics
Class: Class 12
Examples of centrifuge calculations
As with sedimentation centrifuges, the calculations for filtering centrifuges share common patterns with filter calculations, since the principle of operation is the same; however, the centrifugal field conditions introduce some differences.
The common equation for determining theoretical throughput of centrifuges is shown below:
Q = a·Σ
Q – centrifuge throughput, m³/s;
a – adjustment factor depending on the type of centrifuge (for a filtering centrifuge a is replaced by the filtering constant k, determined experimentally);
Σ – throughput index.
In its turn, the throughput index for a centrifuge is calculated as follows:
Σ = F[av]·K[av]
F[av] = 2·π·L·(R+r) – is the average separation surface, m²;
L – drum length, m;
R – inner radius of a centrifuge rotor, m;
r – inner radius of slurry ring in a centrifuge, m;
K[av] = [ω²·(R+r)] / [2·g] – average separation factor of a centrifuge;
ω – angular velocity of a centrifuge rotor, s^-1;
g – gravitational acceleration, m/s².
However, the actual throughput is often less than the theoretical one due to some factors, such as friction of the liquid layer against the centrifuge drum, etc. To account for all those factors an
adjustment factor (ζ) called an efficiency indicator is introduced into throughput equation for a filtering centrifuge. Thus, the final throughput equation is as follows:
Q = ζ·a·Σ
For a batch type filtering centrifuge the formula is different:
Q = a·√t[f]·V[w]·Σ
a – adjustment factor characterizing the sediment resistance;
t[f] – slurry feed time, s;
V[w] = π·L·(R²-r²) – working volume of the drum, m³.
To obtain maximum average throughput of a filtering centrifuge t[f] is usually assumed to be equal to the aggregate time of centrifugation (t[c]) and sediment discharge (t[d]) processes:
t[f] = t[c]+ t[d]
In the calculations of centrifuge power one should distinguish between starting power (N[start]) and operation period power (N[op]). The starting power is a sum of the following values:
N[start] = N[i]+N[b]+N[a] [kW]
N[i] – initial power of a centrifuge, W;
N[b] – power consumed for bearing losses, W;
N[a] – power consumed for air friction, W.
In its turn, the operation period power is composed of the following elements:
N[op] = N[l]+N[s]+N[b]+N[a]; [kW]
N[l] – centrifuge power for transferring kinetic energy to the liquid phase of the slurry, W;
N[s] – centrifuge power for transferring kinetic energy to the solid phase of the slurry, W.
The initial power of a centrifuge takes into account all the moments of inertia occurring at the start:
N[i] = (I·ω²) /(2·10³·t[s]); [kW]
I – total inertia moment of rotor and load with respect to rotation axis, kg·m²;
ω – angular velocity of a centrifuge rotor, s^-1;
t[s] – starting time of a centrifuge, s.
Power lost to friction in bearings:
N[b] = [f·ω·Σ(P·d)] / [2·10³]; [kW]
f – friction coefficient in bearings;
Σ(P·d) – a sum of products of dynamic loads on bearings (P, N) by respective shaft diameters (d, m).
Power lost to air friction:
N[a] = 12·10^-6·ρ[a]·R[aor]·ω²; [kW]
ρ[a] – air density, kg/m³;
R[aor] – average outer radius of a rotor, m.
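The power components above can be collected into a short sketch (function names are my own; each returns kW, matching the 10³ divisors in the formulas):

```python
def inertia_power_kW(I, omega, t_s):
    """N_i = I*omega^2 / (2*10^3*t_s) -- inertia overcome at start, kW."""
    return I * omega**2 / (2e3 * t_s)

def bearing_power_kW(f, omega, sum_P_d):
    """N_b = f*omega*sum(P*d) / (2*10^3) -- bearing friction losses, kW."""
    return f * omega * sum_P_d / 2e3

def air_power_kW(rho_a, R_aor, omega):
    """N_a = 12e-6 * rho_a * R_aor * omega^2 -- air friction losses, kW."""
    return 12e-6 * rho_a * R_aor * omega**2

def starting_power_kW(I, omega, t_s, f, sum_P_d, rho_a, R_aor):
    """N_start = N_i + N_b + N_a, kW."""
    return (inertia_power_kW(I, omega, t_s)
            + bearing_power_kW(f, omega, sum_P_d)
            + air_power_kW(rho_a, R_aor, omega))
```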
Example 1
Selection of a centrifuge and calculation of its throughput
Conditions: A sedimentation centrifuge is available capable of reaching operating angular velocity of ω = 600 rpm. Specification of the drum: inner radius R = 300 mm, length L = 500 mm. The
centrifuge is used for clarification of water from suspended solid particles having diameter d[p] = 0.5 mm and density ρ[p] = 2,100 kg/m³. For this problem dynamic viscosity is assumed to be equal to
μ = 0.001 Pa·s, and density ρ[l] = 1,000 kg/m³.
Problem: Calculate the centrifuge throughput Q.
Solution: The sought quantity may be calculated using the formula below:
Q = (F·v[gf]·Fr)/g
v[gf] is particle sedimentation velocity in the gravity field, which may be determined as follows (g = 9.81 m/s² – gravitational acceleration):
v[gf] = [d[p]²·(ρ[p]-ρ[l])·g] / [18·μ] = [0.0005²·9.81·(2,100-1,000)] / [18·0.001] = 0.15 m/s
The settlement area of the drum F may be found from its geometry according to:
F = 2·π·R·L = 2·3.14·0.3·0.5 = 0.942 m²
Fr is the Froude number characterizing the correlation between particle sedimentation velocities in the centrifugal field and the gravity field:
Fr = (ω²·R) / g = ((600/60)²·0.3) / 9.81 = 3.06
From here, particle sedimentation velocity in the centrifugal field equals:
v[c] = (v[gf]/g)·Fr = (0.15/9.81)·3.06 = 0.047 m/s
F·Fr is usually replaced with Σ – the throughput index, which may be adjusted depending on the particle precipitation pattern, which in turn is determined by the Reynolds number:
Re = (ρ[l]·v[c]·d[p]) / μ = (1,000·0.047·0.0005) / 0.001 = 23.5
The resulting Re value is in the range 2<Re<500; consequently, the sedimentation pattern is transient, for which the adjusted formula for the throughput index is as follows:
Σ = F·Fr^0.73 = 0.942·3.06^0.73 = 2.13
Let’s insert the resulting values into the initial equation and calculate the target value:
Q = (v[gf]/g)·Σ = (0.15/9.81)·2.13 = 0.033 m³/s.
Result: centrifuge throughput is 0.033 m³/s.
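The arithmetic of Example 1 can be reproduced in a short script, keeping the text's convention of taking ω = rpm/60:

```python
import math

# Example 1, step by step: sedimentation centrifuge throughput.
d_p, rho_p, rho_l, mu = 0.0005, 2100.0, 1000.0, 0.001   # particle & fluid
R, L, g = 0.3, 0.5, 9.81                                # drum geometry
omega = 600 / 60                                        # rotor speed, 1/s

v_gf = d_p**2 * (rho_p - rho_l) * g / (18 * mu)  # settling velocity, gravity
F = 2 * math.pi * R * L                          # settlement area of the drum
Fr = omega**2 * R / g                            # Froude number
v_c = (v_gf / g) * Fr                            # settling velocity, centrifugal
Re = rho_l * v_c * d_p / mu                      # Reynolds number (regime check)
assert 2 < Re < 500                              # transient sedimentation pattern
sigma = F * Fr**0.73                             # adjusted throughput index
Q = (v_gf / g) * sigma                           # centrifuge throughput
```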
Example 2
Calculation of starting power of a filtering centrifuge
Conditions: A filtering centrifuge is available for separating slurry having the density of ρ[s] = 1,100 kg/m³. A drum weighing m[b] = 200 kg has inner radius R = 0.5 m, wall thickness b = 0.005 m
and length L = 0.4 m. The initial load of the drum is 50% of its internal volume. The centrifuge ramp-up time to the operating velocity is t[stay]= 7 s. The centrifuge angular velocity is ω = 1,000
rpm. In the calculations air density ρ[a] is assumed to be equal to 1.3 kg/m³ and bearing friction coefficient f = 0.05. Shaft neck diameter d[n] = 80 mm.
Problem: Calculate the starting power N[start].
Solution: The starting power (N[start]) is a sum of powers for bearing friction losses (N[b]), for air friction (N[a]) and for initial overcoming of inertia at start (N[i]):
N[start] = N[b]+N[a]+N[i]
In order to determine the power consumed for bearing friction losses let’s use the formula based on the mass of the rotating parts of a centrifuge. Let’s assume that only the drum and slurry weights are involved in the rotation:
N[b] = f·g·M·v[n]
M is the total mass of the rotating parts of a centrifuge. The drum mass is known and we only need to determine the initial mass of the slurry. Since the initial filling of the drum is 50%, we can determine the slurry mass m[s] by finding its volume (half of the internal cylinder volume π·R²·L) and multiplying it by density:
m[s] = 0.5·π·R²·L·ρ[s] = 0.5·3.14·0.5²·0.4·1,100 = 172.7 kg
Then the total mass is equal to:
M = m[b]+m[s] = 200+172.7 = 372.7 kg
The circumferential velocity of the shaft neck v[n] is determined from the formula:
v[n] = ω·d[n]/2 = (1,000/60)·(0.08/2) = 0.67 m/s
Let’s calculate the power N[b]:
N[b] = f·g·M·v[n] = 0.05·9.81·372.7·0.67 = 122.5 W
Let’s also calculate the power N[a] assuming that the outer radius of the drum is R[o] = R+b:
N[a] = 0.012·ρ[a]·R[o]·ω² = 0.012·1.3·(0.5+0.005)·(1,000/60)² = 2.2 W
We will calculate N[i] assuming that the entire rotating mass is concentrated on the inner radius of the drum R, then the total moment of inertia may be represented as I = M·R²:
N[i] = (I·ω²)/(2·t[stay]) = (M·R²·ω²)/(2·t[stay]) = (372.7·0.5²·(1,000/60)²)/(2·7) = 1,848.7 W
Now we can find the target value:
N[start] = N[b]+N[a]+N[i] = 122.5+2.2+1,848.7 = 1,973.4 W
Result: The starting power is approximately 1.97 kW | {"url":"https://intech-gmbh.ru/en/centrifuges_calc_examples/","timestamp":"2024-11-14T08:13:43Z","content_type":"text/html","content_length":"49432","record_id":"<urn:uuid:60c86ad5-d899-4139-9110-a0b8909d119f>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00134.warc.gz"} |
If the given sequence is geometric, find the common ratio \(r\). If the sequence is not geometric, say so. See Example 1. $$ 5, 15, 45, 135, \dots $$
Short Answer
Expert verified
The common ratio \( r \) is 3.
Step by step solution
- Identify the Sequence Type
To determine if the given sequence is geometric, first recall that in a geometric sequence, each term after the first is obtained by multiplying the preceding term by a fixed, non-zero number called
the common ratio, denoted as \( r \).
- Compute the Ratio Between Consecutive Terms
Calculate the ratio between the second term and the first term: \( \frac{15}{5} = 3 \).
- Verify the Consistency of the Ratio
Check the ratio between the third term and the second term: \( \frac{45}{15} = 3 \). Finally, verify the ratio between the fourth term and the third term: \( \frac{135}{45} = 3 \).
- Confirm the Sequence is Geometric
Since the ratio is consistent (always equal to 3) between consecutive terms, the sequence is confirmed to be geometric.
- State the Common Ratio
The common ratio \( r \) of the geometric sequence is 3.
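The check carried out in the steps above is easy to automate; here is a small illustrative helper (names are my own):

```python
def common_ratio(seq):
    """Return the common ratio r if seq is geometric, else None."""
    if len(seq) < 2 or 0 in seq[:-1]:
        return None
    r = seq[1] / seq[0]
    for prev, cur in zip(seq, seq[1:]):
        if cur != prev * r:          # the ratio must be constant throughout
            return None
    return r
```

For the sequence in this exercise, `common_ratio([5, 15, 45, 135])` returns `3.0`, while a non-geometric sequence such as `[1, 2, 4, 7]` returns `None`.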
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
Common Ratio
In a geometric sequence, the key characteristic is the common ratio. This is the fixed number you multiply each term by to get the next term in the sequence. To find the common ratio, pick any two
consecutive terms. Let's say we have a sequence where each term follows from the previous one, like in the example: 5, 15, 45, 135,... Here, divide the second term by the first term: \( \frac{15}{5}
= 3 \). This value is the common ratio, often written as \( r \). To confirm, you should check if the same ratio holds for the other terms in the sequence.
Consecutive Terms
Consecutive terms are simply terms that come one right after another in the sequence. In our example of the geometric sequence 5, 15, 45, 135,... the terms 15 and 5, or 45 and 15 are pairs of
consecutive terms. When calculating the common ratio, always pick two consecutive terms. The ratio between consecutive terms should remain constant for the sequence to be geometric. For instance,
calculate the ratio \( \frac{\text{third term}}{\text{second term}} = \frac{45}{15} = 3 \) to double-check the consistency.
Verify Sequence
Verifying if a sequence is geometric involves checking the common ratio throughout the sequence. Start by picking consecutive terms and computing their ratio. If the ratio is consistent across all
pairs of consecutive terms, the sequence is geometric. In our example sequence 5, 15, 45, 135,... you verified \( \frac{15}{5} = 3 \), \( \frac{45}{15} = 3 \), and \( \frac{135}{45} = 3 \). Since the
ratio is the same, it confirms that the sequence is geometric. This ensures the same number (common ratio) consistently multiplies each term to produce the next term, guaranteeing the sequence's
geometric nature. | {"url":"https://www.vaia.com/en-us/textbooks/math/intermediate-algebra-11-edition/chapter-12/problem-2-if-the-given-sequence-is-geometric-find-the-common/","timestamp":"2024-11-14T05:22:14Z","content_type":"text/html","content_length":"247224","record_id":"<urn:uuid:cb1579b2-c7dd-4518-8230-9ad107c0dcda>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00795.warc.gz"} |
Category: Actuarial Modelling
On why it doesn’t really make sense to fit a Pareto distribution with a method of moments.
I was sent some large loss modelling recently by another actuary for a UK motor book. In the modelling, they had taken the historic large losses and fit a Pareto distribution using a method of moments. I thought about it for a while and realised that I didn't really like the approach, for a couple of reasons which I'll go into in more detail below. But when I thought about it some more, I realised I'd actually seen the exact approach before ... in an IFoA exam paper. So even though the method has some shortcomings, it is actually a taught technique. [1]
Following the theme from last time, of London's old vs new side by side. Here's a cool photo which shows the old royal naval college in Greenwich, with Canary Wharf in the background. Photo by Fas
Okay, that's a bit of an exaggeration, but there’s a quirky mathematical result related to these deals which means the target loss cost can often end up clustering in a certain range. Let’s set up
a dummy deal and I’ll show you what I mean.
Source: Jim Linwood, Petticoat Lane Market, https://www.flickr.com/photos/brighton/4765025392
I found this photo online, and I think it's a cool combo - it's got the modern City of London (the Gherkhin), a 60s brutalist-style estate (which when I looked it up online has been described as "a
poor man's Barbican Estate"), and a street market which dates back to Tudor times (Petticoat lane).
As a rule of thumb, news outlets like the Guardian [1] or BBC News [2] don't typically report on the decisions of the Delaware Court of Chancery, a fairly niche 'court of equity' which decides
matters of corporate law in the state of Delaware. That is of course, unless those decisions involve Elon Musk. Recently, the Delaware court handed down a judgement which voided a $56bn pay-out
which was due to Musk for his role as Tesla’s CEO. The reasoning behind striking it down is quite legal and technical, and not really my area of expertise but Matt Levine has a good write up for
those interested. [3]
What I am interested in is thinking about how we would assess the fairness of the pay-out. Now fairness is a slippery concept, but I'm going to present one angle, which I've haven't seen discussed
elsewhere yet, which I think is one possible way of framing the situation.
Source: https://en.m.wikipedia.org/wiki/File:Roadster_2.5_charging.jpg
I received a chain ladder analysis a few days ago that looked roughly like the triangle below, but there's actually a bit of a problem with how the method is dealing with this particular triangle,
have a look at see if you can spot the issue.
I’ve had the textbook 'Modelling Extremal Events: For Insurance and Finance’ sat on my shelf for a while, and last week I finally got around to working through a couple of chapters. One thing I
found interesting, just around how my own approach has developed over the years, is that even though it’s quite a maths heavy book my instinct was to immediately build some toy models and play
around with the results. I recall earlier in my career, when I had just got out of a 4-year maths course, I was much more inclined to understand new topics via working through proofs step-by-step in
long hand, pen to paper.
In case it’s of interest to others, I thought I’d upload my Excel version I built of the classic ruin process. In particular I was interested in how the Cramer-Lundberg theorem fails for
sub-exponential distributions (which includes the very common Lognormal distribution). Therefore the Spreadsheet contains a comparison of this theorem against the correct answer, derived from monte
carlo simulation.
The spreadsheet can be found here:
The first tab uses an exponential distribution, and the second uses a Lognormal distribution. Screenshot below.
I also coded a similar model in Python via Jupyter Notebook, which you can read about below.
I was thinking more about the post I made last week, and I realised there’s another feature of the graphs that is kind of interesting. None of the graphs adequately isolates what we in insurance
would term ‘severity’ inflation. That is, the increase in the average individual verdict over time.
You might think that the bottom graph of the three, tracking the ‘Median Corporate Nuclear Verdict’ does this. If verdicts are increasing on average year by year due to social inflation, then surely
the median nuclear verdict should increase as well right?!?
Actually, the answer to this is no. Let's see why.
We previously introduced a method of deriving large loss claims inflation from a large loss claims bordereaux, and we then spent some time understanding how robust the method is depending on how
much data we have, and how volatile the data is. In this post we're finally going to play around with making the method more accurate, rather than just poking holes in it. To do this, we are once
again going to simulate data with a baked-in inflation rate (set to 5% here), and then we are going to vary the metric we are using to extract an estimate of the inflation from the data. In
particular, we are going to look at using the Nth largest loss by year, where we will vary N from 1 - 20.
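A minimal version of that experiment might look like the following sketch — the distribution, parameters and sample sizes here are my own illustrative choices, not the post's actual settings:

```python
import math
import random

random.seed(0)
rate, years, losses_per_year, n_th = 0.05, 10, 2000, 10

# Simulate lognormal losses with a baked-in 5% trend, then regress the
# log of the Nth-largest loss per year against time to recover the rate.
stats = []
for year in range(years):
    losses = [random.lognormvariate(10, 1) * (1 + rate) ** year
              for _ in range(losses_per_year)]
    stats.append(math.log(sorted(losses)[-n_th]))   # Nth largest loss

xs = list(range(years))
x_bar, y_bar = sum(xs) / years, sum(stats) / years
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, stats))
         / sum((x - x_bar) ** 2 for x in xs))
estimated_inflation = math.exp(slope) - 1
```

With constant exposure the Nth-largest statistic grows at roughly the underlying trend, so the fitted slope recovers something close to 5% — with considerable sampling noise, which is the point of the posts above.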
Photo by Julian Dik. I was recently in Losbon, so here is a cool photo of the city. Not really related to the blog post, but to be honest it's hard thinking of photos with some link to inflation, so
I'm just picking nice photos as this point!
We've been playing around in the last few posts with the 'Nth largest' method of analysing claims inflation. I promised previously that I would look at the effect of increasing the volatility of our
severity distribution when using the method, so that's what we are going to look at today. Interestingly it does have an effect, but it's actually quite a subdued one as we'll see.
I'm running out of ideas for photos relating to inflation, so here's a cool random photo of New York instead. Photo by Ronny Rondon
In the last few posts I’ve been writing about deriving claims inflation using an ‘N-th largest loss’ method. The thought popped into my head after posting, that I’d made use of a normal
approximation when thinking about a 95% confidence interval, when actually I already had the full Monte Carlo output, so could have just looked at the percentiles of the estimated inflation values.
Below I amend the code slightly to just output this range directly.
In my last couple of post on estimating claims inflation, I’ve been writing about a method of deriving large loss inflation by looking at the median of the top X losses over time. You can read the
previous posts here:
Part 1:
Part 2:
One issue I alluded to is that the sampling error of the basic version of the method can often be so high as to basically make the method unusable. In this post I explore how this error varies with
the number of years in our sample, and try to determine the point at which the method starts to become practical.
Photo by Jøn
I previously wrote a post in which I backtested a method of deriving large loss inflation directly from a large loss bordereux. This post is an extension of that work, so if you haven't already,
it's probably worth going back and reading my original post. Relevant link:
In the original post I slipped in the caveat that that the method is only unbiased if the underlying exposure doesn’t changed over the time period being analysed. Unfortunately for the basic method,
that is quite a common situation, but never fear, there is an extension to deal with the case of changing exposure.
Below I’ve written up my notes on the extended method, which doesn't suffer from this issue. Just to note, the only other reference I’m aware of is from the following, but if I've missed anyone out,
apologies! [1]
I was reviewing a pricing model recently when an interesting question came up relating to when to apply the policy features when modelling the contract.
Source: Dall.E 2, Open AI.
I thought it would be fun to include an AI generated image which was linked to the title 'capped vs uncapped estimators'. After scrolling through tons of fairly creepy images of weird looking
robots with caps on, I found the following, which is definitely my favourite - it's basically an image of a computer 'wearing' a cap. A 'capped' estimator...
I wrote a quick script to backtest one particular method of deriving claims inflation from loss data. I first came across the method in 'Pricing in General Insurance' by Pietro Parodi [1], but I'm
not sure if the method pre-dates the book or not.
In order to run the method all we require is a large loss bordereaux, which is useful from a data perspective. Unlike many methods which focus on fitting a curve through attritional loss ratios, or
looking at ultimate attritional losses per unit of exposure over time, this method can easily produce a *large loss* inflation pick. Which is important as the two can often be materially different.
Source: Willis Building and Lloyd's building, @Colin, https://commons.wikimedia.org/wiki/User:Colin
Quota Share contracts generally deal with acquisition costs in one of two ways - premium is either ceded to the reinsurer on a ‘gross of acquisition cost’ basis and the reinsurer then pays a large
ceding commission to cover acquisition costs and expenses, or premium is ceded on a ‘net of acquisition’ costs basis, in which case the reinsurer pays a smaller commission to the insurer, referred
to as an overriding commission or ‘overrider’, which is intended to just cover internal expenses.
Another way of saying this is that premium is either ceded based on gross gross written premium, or gross net written premium.
I’ve been asked a few times over the years how to convert from a gross commission basis to the equivalent net commission basis, and vice versa. I've written up an explanation with the accompanying
formulas below.
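As a sketch of one common equivalence (my own formulation, not necessarily the post's exact formulas): assuming the only difference between the two bases is the acquisition cost ratio a, a gross ceding commission c and an overrider o leave the reinsurer in the same position when 1 − c = (1 − a)·(1 − o), i.e. c = a + o·(1 − a):

```python
def overrider_from_gross(c, a):
    """Net-basis overrider equivalent to a gross ceding commission c,
    given acquisition cost ratio a (both expressed as fractions of GWP)."""
    return (c - a) / (1 - a)

def gross_from_overrider(o, a):
    """Gross-basis ceding commission equivalent to an overrider o."""
    return a + o * (1 - a)
```

For instance, with 20% acquisition costs, a 30% gross ceding commission corresponds to a 12.5% overrider on the net premium.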
Source: @
, Zurich
There's some interesting literature from the world of forecasting and natural sciences on the best way to aggregate predictions from multiple models/sources.
For a well-written, moderately technical introduction, see the following by Jaime Sevilla:
Jaime’s article suggests a geometric mean of odds as the preferred method of aggregating predictions. I would argue however that when it comes to actuarial pricing, I'm more of a fan of the
arithmetic mean, I'll explain why below. | {"url":"https://www.lewiswalsh.net/blog/category/actuarial-modelling","timestamp":"2024-11-09T17:26:44Z","content_type":"text/html","content_length":"69628","record_id":"<urn:uuid:9027121f-547f-4af5-a926-e4b555f1681f>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00064.warc.gz"} |
• titi-gal
Still a work in progress, but I've made the thx-sound-pd.rar. I'm a bit lost on what to do with the main synth to get it closer to the original; on panning, dynamics and EQ I have some ideas to improve, but for the moment I'm a bit tired of it.
• titi-gal
Hey, sorry to ask on this old topic, but I think it is related enough. How did you know that the default array structure is "float-array z"? I couldn't find this anywhere else.
• titi-gal
In the patch below the sliders and number boxes behave as I want, but it has a stack overflow error. Is there a proper way to achieve this? I tried using a range message but it's not the same
• titi-gal
I find myself creating loops with [until] [f] [+ 1] etc... over and over again. Today I finally made a [for] loop abstraction, with start, stop and step arguments, it can also increment or
decrement. I tried to document it thoroughly. Hope to help.
• titi-gal
Loved the sounds, trying to wrap my head around the patch. Thanks for sharing! I will try and see if I can make it work with pedal and more than 3 notes of polyphony.
• titi-gal
That's it. Aliasing. I tried at 48khz sample rate, and it changed. I researched a little and I understand in general terms what is happening, it's caused by the periodicity of my generated signal
interacting with the periodicity of its sampling to generate the audio. But these are new concepts for me, it opened more questions, I will research a bit more, I quite enjoy the results, but
couldn't understand why it was happening, thanks for the clarification.
• titi-gal
Hi, I'm looking for someone with a deeper understanding of PD and audio synthesis to help me understand why my patch sounds the way it sounds. I don't know why the audio is generated this way. I
don't know if it is because of some expected behavior of square waves and the math behind it, or if it is because of some quirk of PD and the way I'm generating the square wave. If you think you
can help, please stay with me, I don't know who to go to or what to study to help me understand this.
I was attempting to make a square wave from scratch, I saw some tutorials and I know there are more "proper" ways of doing this, but the simplest I could think of was -1 and 1 alternating. The
osc I built is a metronome with a route alternating between -1 and 1 at a given hertz. But the result doesn't sound like a single square wave, there are some harmonics, almost like having
sinusoidal waves mixed with the square wave.
Experimenting, I noticed a cycle of the resonances going from 1hz to 1378.125hz where it completely stops, like zero hz, or one tick in several, several minutes. Other cycles multiples of this
frequency exist, like this:
Cycle0 = 1 to n·2^0
Cycle1 = n·(2^0) to n·(2^1)
Cycle2 = n·(2^1) to n·(2^2)
Cycle3 = n·(2^2) to n·(2^3)
? Cycle_n = n·2^(n-1) to n·2^n ?
C0 and C1 are very similar, with subtle differences, but inside of C2 it seems like 2 times C0 or C1 (I didn't check it, but it sounds like this). Inside C3 there are 3 times, and next, C4, 4
times, and so on, the same cycle but more times and faster in each power of two.
I have made a patch that makes it easy to clone the osc to generate each Cycle. For example, clone 10 times to get audio files from C0 through C9.
Where do the resonances come from? Why do these cycles exist? Are they some kind of harmonic series? Why 1378.125hz? Is this expected from the math, or is it a quirk result of some PD object (like inputting very low numbers to the metronome)? Does someone understand why this is happening?
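One plausible (unconfirmed) explanation: at a 44.1 kHz sample rate, harmonics above the Nyquist frequency fold back down, and 44,100 / 32 = 1,378.125 Hz — exactly the observed cycle frequency. A small Python sketch of the folding:

```python
def alias(freq, fs=44100):
    """Frequency that a tone at `freq` folds to after sampling at `fs`."""
    f = freq % fs
    return fs - f if f > fs / 2 else f

# The k-th odd harmonic of a square wave at f0 sits at k*f0; above
# Nyquist (fs/2) it is heard at the aliased frequency instead.
f0 = 44100 / 32               # = 1378.125 Hz
folded = [alias(k * f0) for k in (1, 3, 5, 31, 33)]
```

Because f0 = 44,100/32, every harmonic k·f0 folds back onto a multiple of f0 ((k mod 32)·f0 or its reflection), so all the partials land on the same frequency grid — which may be why that frequency behaves specially, and why the pattern changes at a 48 kHz sample rate.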
square-from-scratch-osc.pd used in main
square-from-scratch-osc-lineramp.pd used in main | {"url":"https://forum.pdpatchrepo.info/user/titi-gal","timestamp":"2024-11-11T07:17:12Z","content_type":"text/html","content_length":"48577","record_id":"<urn:uuid:99a0ce35-f823-4291-b9f8-7b0d61b5c652>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00147.warc.gz"} |
Unit 4 Overview: Magnetic Fields | AP Physics C: E&M Class Notes | Fiveable
Unit 4 on Magnetic Fields is a topic in the field of physics that covers the behavior of magnets and magnetic fields, as well as the interactions between magnets and charged particles. The unit
includes several key topics, including:
1. Magnetic Fields and Magnetic Force: This topic covers the behavior of magnetic fields and how they interact with other magnets or charged particles. It also includes the concept of magnetic force
and the calculation of the force on a charged particle moving through a magnetic field.
2. Applications of Magnetic Fields: This topic covers the practical applications of magnetic fields in areas such as particle accelerators, magnetic resonance imaging (MRI), and electric motors.
3. Electromagnetic Waves and Fields: This topic covers the relationship between magnetic fields and electric fields, and how they combine to create electromagnetic waves. It also includes the
concept of electromagnetic radiation and the behavior of waves in different media.
4. Magnetism and Matter: This topic covers the behavior of magnetic materials and how they respond to external magnetic fields. It also includes the concept of magnetization and the different types
of magnetic materials.
Overall, this unit provides a comprehensive understanding of the behavior of magnetic fields and their interactions with charged particles and other magnetic materials.
When a charged particle moves through a magnetic field, it experiences a magnetic force. The magnetic force is perpendicular to both the magnetic field and the direction of motion of the charged
particle. The force on a charged particle moving in a magnetic field can be described by the following equation:
F = q(v x B)
where F is the magnetic force acting on the charged particle, q is the charge of the particle, v is its velocity, and B is the magnetic field. The vector product v x B is a vector that is
perpendicular to both v and B, and its magnitude is given by the product of the magnitudes of v and B multiplied by the sine of the angle between them.
The direction of the magnetic force on a charged particle is given by the right-hand rule. If you point your thumb in the direction of the velocity of the charged particle, and your fingers in the
direction of the magnetic field, the direction of the magnetic force is given by the direction your palm is facing.
The magnitude of the magnetic force on a charged particle is proportional to the charge of the particle, the magnitude of its velocity, and the strength of the magnetic field. The magnetic force does
not do any work on the charged particle, because it is always perpendicular to the direction of motion of the particle.
The magnetic force on a charged particle can cause it to change its direction of motion, but it cannot change the magnitude of its velocity. This means that a charged particle moving in a magnetic
field will follow a curved path, called a cyclotron motion, as it moves through the magnetic field. The radius of the cyclotron motion depends on the velocity of the charged particle, the strength of
the magnetic field, and the mass and charge of the particle.
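As an illustrative sketch (plain Python, with my own function names), the force law F = q(v x B) can be computed directly from a cross product:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def magnetic_force(q, v, B):
    """F = q (v x B) for a point charge, all in SI units."""
    return tuple(q * c for c in cross(v, B))
```

For example, a positive charge of 1.6e-19 C moving at 1e5 m/s along +x through a 0.5 T field along +z feels a force along −y, consistent with the right-hand rule described above.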
When an electric current flows through a wire, it produces a magnetic field around the wire. This magnetic field can interact with other magnetic fields, including the magnetic field produced by a
permanent magnet or an electromagnet. This interaction can result in a force being exerted on the wire, causing it to move.
The force on a current-carrying wire in a magnetic field is given by the following equation:
F = ILBsinθ
where F is the force on the wire, I is the current flowing through the wire, L is the length of the wire, B is the magnetic field, and θ is the angle between the direction of the current and the
direction of the magnetic field.
The direction of the force is given by the right-hand rule. If you point your thumb in the direction of the current in the wire, and your fingers in the direction of the magnetic field, the direction
of the force is given by the direction your palm is facing.
The magnitude of the force on the wire depends on the strength of the magnetic field, the current flowing through the wire, and the length of the wire that is in the magnetic field. The force is
proportional to the sine of the angle between the direction of the current and the direction of the magnetic field.
This force can be used in a variety of applications, including electric motors and generators, where a magnetic field is used to create motion in a wire or coil. It is also the principle behind many
sensors and devices that measure magnetic fields, such as Hall effect sensors.
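The F = ILBsinθ relation above is a one-liner (a sketch; the angle is taken in degrees for convenience):

```python
import math

def wire_force(I, L, B, theta_deg):
    """Force F = I*L*B*sin(theta) on a straight current-carrying wire, N."""
    return I * L * B * math.sin(math.radians(theta_deg))
```

For instance, 2 A through 0.5 m of wire perpendicular (θ = 90°) to a 0.1 T field gives a force of 0.1 N, while a wire parallel to the field (θ = 0°) feels no force.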
A long, straight current-carrying wire produces a magnetic field around it that is circular in shape and is known as a solenoid. The magnetic field produced by a long current-carrying wire can be
calculated using Ampere's Law.
Ampere's Law states that the magnetic field around a long, straight wire is directly proportional to the current flowing through the wire and inversely proportional to the distance from the wire. The
equation for the magnetic field produced by a long, straight wire is given by:
B = μ₀(I / 2πr)
where B is the magnetic field, I is the current flowing through the wire, r is the distance from the wire, and μ₀ is the permeability of free space, which is a constant with a value of approximately
4π x 10^-7 N/A^2.
The direction of the magnetic field is given by the right-hand rule. If you wrap your fingers around the wire in the direction of the current, your thumb points in the direction of the magnetic
The magnetic field produced by a long, straight wire is strongest close to the wire and decreases as the distance from the wire increases. The magnetic field also becomes weaker as the current
through the wire decreases.
The magnetic field produced by a long current-carrying wire can be used in a variety of applications, including in electromagnets and solenoids. By wrapping a wire around a magnetic core and passing
a current through the wire, the magnetic field produced can be used to create a strong magnetic field that can be used in a variety of industrial and scientific applications.
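The field of a long straight wire can likewise be sketched in a couple of lines:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, N/A^2

def wire_field(I, r):
    """B = mu0 * I / (2*pi*r) around a long straight wire, in tesla."""
    return MU_0 * I / (2 * math.pi * r)
```

For example, 10 A at a distance of 5 cm gives B = 4 × 10⁻⁵ T (40 μT), comparable to the Earth's magnetic field — and halving the current or doubling the distance halves the field, as the equation above implies.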
The Biot-Savart Law and Ampere's Law are two important laws in electromagnetism that relate to the calculation of magnetic fields.
The Biot-Savart Law states that the magnetic field produced by a current-carrying wire at a point in space is proportional to the current in the wire and the length of the wire. The magnetic field is
also proportional to a vector quantity known as the vector potential, which depends on the distance from the wire and the direction of the magnetic field. The Biot-Savart Law can be used to calculate
the magnetic field at any point in space due to a current-carrying wire.
Ampere's Law relates the magnetic field around a closed loop to the current passing through the loop. It states that the line integral of the magnetic field around a closed loop is proportional to
the current passing through the loop. The proportionality constant is known as the permeability of free space, μ₀. Ampere's Law can be used to calculate the magnetic field around a current-carrying
wire or a group of current-carrying wires.
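The two laws can be cross-checked numerically: summing Biot-Savart contributions along a very long straight wire should reproduce the μ₀I/(2πr) result that Ampere's Law gives directly. A sketch of that check (the wire length, segment count, and current/distance values are illustrative choices):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space

def biot_savart_straight_wire(current, r, half_length=50.0, n=100000):
    """Sum Biot-Savart contributions dB = (mu_0 / 4pi) * I * |dl x r_hat| / d^2
    along a wire on the z-axis, evaluated at distance r from the wire."""
    dz = 2 * half_length / n
    total = 0.0
    for i in range(n):
        z = -half_length + (i + 0.5) * dz      # midpoint of each segment
        d2 = r * r + z * z                     # squared distance to the field point
        total += dz * r / (d2 * math.sqrt(d2)) # |dl x r_hat| / d^2 = dz * r / d^3
    return (MU_0 / (4 * math.pi)) * current * total

numeric = biot_savart_straight_wire(10.0, 0.05)
analytic = MU_0 * 10.0 / (2 * math.pi * 0.05)  # Ampere's Law result for the same wire
```

For a wire much longer than the 5 cm evaluation distance, the two values agree to a small fraction of a percent.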
Both the Biot-Savart Law and Ampere's Law are important in the study of electromagnetism and can be used to calculate the magnetic fields produced by a variety of different current-carrying systems,
including long straight wires, solenoids, and even more complex systems like magnetic circuits and motors.
It is important to note that the Biot-Savart Law and Ampere's Law are both based on experimental observations and are fundamental laws of electromagnetism. They are used in a variety of fields,
including electrical engineering, physics, and materials science, to understand and design magnetic systems.
What is convex optimization - ThinkLike.AI
What is convex optimization
Convex optimization is a subfield of mathematical optimization that studies the minimization of convex functions over convex sets. It is an important tool in machine learning and has been used to
solve a variety of problems, from image classification to natural language processing.
At its core, convex optimization is about finding the minimum value of a convex function. A convex function is one whose graph never curves downward: the straight line segment joining any two points on the graph lies on or above the graph, so the function has no separate local minima. For example, an upward-opening parabola is a convex function.
In machine learning, convex optimization is used to solve optimization problems, such as finding the best parameters for a model or the most efficient way to solve a problem. It is also used to
optimize the performance of a model, such as finding the best weights for a neural network.
Convex optimization is based on the principle of convexity: a set of points is convex if the straight line segment connecting any two points in the set lies entirely within the set. Applied to a function, this means that the chord connecting any two points on the function's graph lies on or above the graph.
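The defining inequality can be spot-checked numerically: for a convex f, f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) for all λ in [0, 1]. A small sketch (the sample functions and test points are arbitrary):

```python
def is_chord_convex(f, xs, samples=50):
    """Spot-check the convexity inequality on every pair of points from xs."""
    for i, x in enumerate(xs):
        for y in xs[i + 1:]:
            for k in range(samples + 1):
                lam = k / samples
                z = lam * x + (1 - lam) * y
                # chord value must be >= function value (small tolerance for floats)
                if f(z) > lam * f(x) + (1 - lam) * f(y) + 1e-9:
                    return False
    return True

pts = [-3.0, -1.0, 0.0, 0.5, 2.0, 4.0]
parabola_ok = is_chord_convex(lambda x: x * x, pts)   # x^2 passes the check
cubic_ok = is_chord_convex(lambda x: x ** 3, pts)     # x^3 fails (concave for x < 0)
```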
The main advantage of convex optimization is that it is guaranteed to find the global minimum of a function. This is because a convex function on a convex set has no separate local minima: any local minimum is automatically the global one, so a solver cannot get stuck at a merely local solution. This makes it a powerful tool for solving optimization problems, as it is guaranteed to find the best solution.
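This global-minimum guarantee can be illustrated with plain gradient descent on a convex function: wherever it starts, it converges to the same minimizer. A minimal sketch (the step size, starting points, and example function f(x) = (x − 3)² + 1 are arbitrary choices):

```python
def gradient_descent(grad, x0, lr=0.1, steps=500):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = (x - 3)^2 + 1 is convex, with its global minimum at x = 3
def grad_f(x):
    return 2 * (x - 3)

# Very different starting points still reach the same (global) minimizer
from_left = gradient_descent(grad_f, -100.0)
from_right = gradient_descent(grad_f, 50.0)
```

On a non-convex function the same routine would land at different local minima depending on the start, which is exactly what convexity rules out.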
In addition, convex optimization is relatively easy to implement, as it is based on simple mathematical principles. This makes it a popular choice for solving optimization problems in machine learning.
In conclusion, convex optimization is an important tool in machine learning, as it can be used to solve a variety of optimization problems. It is based on the principle of convexity and is guaranteed
to find the global minimum of a function. It is also relatively easy to implement, making it a popular choice for solving optimization problems in machine learning.
Under Siege: The Golden Mean in Architecture by Michael Ostwald for the Nexus Network Journal vol. 2 no. 2 April 2000
Under Siege: The Golden Mean in Architecture
Michael J. Ostwald
Department of Architecture
Faculty of Architecture, Building and Design, University of Newcastle
University Drive, Callaghan, NSW 2308 AUSTRALIA
Edmund Husserl's Origin of Geometry was originally written as an appendix to his The Crisis of European Sciences and Transcendental Phenomenology. Significantly, in the Origin of Geometry Husserl
examines geometry from a historical perspective not a mathematical one. For Husserl the elevation of geometric forms to transcendental status (as, for example, the basis of universal beauty)
presupposes the connection between such forms and some "ideal" or "original" form. An example of this is found in Vitruvius, who maintains that the Euclidean circle and square are perfect for the
generation of architecture because they approximate the geometry of the spreadeagled human body - a body made in the image of god.[1] Similarly, as a "golden" spiral generated from the diagonal of a
half-square approximates both the shape of a nautilus shell and the distribution pattern of seeds in a sunflower it is presumed to be connected to the essence of nature.[2] In both cases the
geometric sign (the square and circle) or proportional set (the Golden Mean) is significant for its mimetic relationship to some other form, not for any intrinsic property. While Husserl does not
deny that geometric signs and sets contain intriguing mathematical properties, their historic significance outside of mathematics must, for him, be governed by "objectivity" or else they will lose
any and all connection to the historic ideal.
In 1962 Jacques Derrida wrote a lengthy introduction to Husserl's Origin of Geometry in which he similarly maintains that the search for connections between geometry and historic forms (which include
architectural as well as philosophical or epistemological forms) must be tempered with an awareness of the inherent futility of such an endeavour.[3] This is not to suggest that such enterprises
should never be undertaken, but rather that they must occur with due recognition of the fundamental difficulties involved. The architectural theorist Catherine Ingraham echoes this sentiment in her
detailed philosophical analysis of the role of the line and geometry in architectural representation.[4] Ingraham, who examines lines traced on drawings, photographs and maps, argues that any
analysis of this kind must take due account of what she calls the "burdens of linearity" [5]: the problems that beset all attempts to relate geometry to architecture.
Ultimately Husserl, Derrida and Ingraham separately affirm that tacit assumptions about the relationship between geometric forms and other forms - say geometry and architecture - must be constantly
questioned if they are to retain any validity. What is interesting about this position is that it resonates with the stance taken by a number of recent authors investigating the Golden Mean in
architecture. This paper briefly describes the Golden Mean and its history before summarising some of these arguments in an attempt to both inform the reader and to respond to Husserl's and Derrida's
Golden Proportions?
Explanations of the Golden Mean typically commence with a brief description of the Fibonacci sequence. An equally simple definition, which is often paraphrased in various texts is as follows. If a
line AB is divided by a point C such that the ratio of the whole line AB to the longer segment AC is equal to the ratio of the longer segment AC to the smaller segment CB then the ratio AB : AC (and
also AC : CB) is known as the Golden Mean (φ, or phi).[6] If the length of AB is 1.000 then the Golden Mean is approximately 1.618.
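The defining relation AB : AC = AC : CB can be verified directly: solving it gives the closed form φ = (1 + √5)/2 ≈ 1.618. A quick numeric check, using the unit segment length from the definition above:

```python
import math

phi = (1 + math.sqrt(5)) / 2   # closed form for the Golden Mean

ab = 1.000
ac = ab / phi                  # longer segment AC
cb = ab - ac                   # shorter segment CB

ratio_whole_to_long = ab / ac  # AB : AC
ratio_long_to_short = ac / cb  # AC : CB
# Both ratios come out to phi ~ 1.6180339887...
```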
Such "divine" or "golden" systems of proportions first became the subject of serious scholarship in the fifteenth century in the work of Luca Pacioli.[7] In the seventeenth century Johannes Kepler
described the knowledge of these proportional systems as essential to the appreciation of art and nature. Indeed, Kepler could be seen to be at least partially responsible for propagating many
studies of geometrically defined aesthetic systems that were undertaken in the following two hundred years. By the nineteenth century, despite the protestations of John Ruskin, the practice of
tracing lines on drawings of facades in order to uncover invisible proportional systems had become commonplace.[8] Heinrich Wölfflin's pioneering analysis of Renaissance and Baroque churches set the
standard for this approach to the formal analysis of proportion in plan and facade. By the mid-twentieth century, when Rudolf Wittkower published his influential Architectural Principles in the Age
of Humanism, art and architectural historians were tracing the Golden Mean in countless historic buildings, paintings and sculptures.[9] Not only was this practice limited to historic buildings but
comparative analyses began to be undertaken from one period to another. One seminal example of this kind of research is Colin Rowe's 1947 essay "The Mathematics of the Ideal Villa" which traces
parallel proportional systems in the work of Palladio and Le Corbusier. In the aftermath of the publication of Rowe's research the Golden Mean enjoyed a popular resurgence in architectural practice
as a universal aesthetic panacea (neither Rowe's intent, nor, ironically, actually supported in his paper).
Throughout the thirty years that followed symposiums held in America, Canada and Europe called for the Golden Mean to be recognised as underlying a universal system of beauty. However, throughout the
seventies a small but growing number of criticisms of its role in architecture emerged. Notably one of the first of these is contained in Rowe's 1973 addendum to "The Mathematics of the Ideal Villa".
In this text Rowe criticises the "Wölfflinian" search for proportional systems noting that "its limitations should be obvious".[10] He goes on to enumerate these limitations concluding that the
practice of tracing proportional systems in architecture "cannot seriously deal with questions of iconography and content and, because it is so dependent on close analysis if protracted [such
analysis] can only impose enormous strain upon both its consumer and its producer".[11] Rowe's change of heart reflects a growing awareness in this field of study that the relationship between
geometry and architecture is neither so predictable nor so static as often thought. During the eighties and early nineties a small but growing number of conferences and symposia began to more openly
criticise the hegemony of the Golden Mean. Most recently, in June of 1998, the role of the Golden Mean in architecture was hotly debated at a conference in Mantua in Italy. This conference, the
second in a series begun in Florence in 1996, was entitled "Nexus: Architecture and Mathematics".
At the centre of much of the debate at the 1998 Nexus conference was a controversial paper by Frascari and Ghirardini which argues that mathematicians and historians have been over-zealous in their
attempts to uncover the Golden Mean in architecture. In contrast, the mathematician Vera de Spinadel took the more common stance of accepting that the Golden Mean is the geometric basis for many
historic architectural works and the theologian Gert Sperling rejected Frascari's and Ghirardini's thesis in his geometric and numeric analysis of the Pantheon.[12] While many other important issues
were raised at the Nexus conference in Mantua, it is this debate surrounding the validity of the Golden Mean that is particularly noteworthy.
Marco Frascari and Livio Volpi Ghirardini's paper "Contra Divinam Proportionem" commences with the claim that "[a] golden or divine magnifying glass that distorts rather than clarifies has been
applied to everything in the name of aesthetic and mystical impulses."[13] For Frascari and Ghirardini the search for the Golden Mean has been carried out by fanatics (ironically dubbed by them the
"faithful") who have ignored the reality of architecture and the construction process to find the Golden Mean in almost every famous building from antiquity to the present day. Frascari and
Ghirardini explicitly criticise the tradition, arising from the German philosopher Adolf Zeising and the mathematician Siegmund Gunter, which traces the Golden Mean over photographs of historic
buildings and objects. Frascari and Ghirardini argue that "[w]ithout any doubt Zeising and Gunter were very skilful at measuring pictures, but it is clear that neither of them had ever measured a
building".[14] Architecture must be measured with both a degree of mathematical precision and with an appreciation of the innate dimensional accuracy of its material form. Stone, metal and brick all
possess different capacities to retain a finished dimension. "In metrical terms, every constructive part of building has its geometric order: masonry, in decimeters; wood carpentry, in centimeters;
metal works, in millimeters. Every part is exactly approximate."[15] When buildings are measured without such an appreciation of the materiality of architecture the search for the Golden Mean is
invariably meaningless. For Frascari and Ghirardini the Golden Mean must therefore remain an "untamable and intangible measure since, in order for it to be real and efficient, it must be explicitly
exact. However architecture does not permit this categorical exactness because there are always mitigating factors such as play in the joints and the density of materials."[16]
The measurement of architecture is always problematic because, as architecture can never provide an "exact" Golden Mean, any argument must be derived from approximate dimensions. The result of this
reliance on imprecise dimensions is that arguments for the presence of the Golden Mean in architecture are often completely inconsistent in their use of measured dimensions. For example, the
dimensions of one wall of a building could be measured from the floor to the ceiling and a second wall from the floor to the base of the cornice. The argument might then be made that both are perfect
examples of the Golden Mean. This is plainly an inconsistent and flawed method yet it is all too common in arguments surrounding proportional systems in architecture. One case in point is the
Pantheon which has been measured many times including a very recent and highly detailed survey. Yet, scholars analysing the Pantheon too often use these highly accurate dimensions only when they suit
their own arguments and ignore them when they do not.[17] Masi's analysis of the "Pantheon as an Astronomical Instrument" even talks about "exact" dimensions such as "9m" or "30 ft" as if the two
approximate measurements are somehow identical![18]
A related problem arises when over-precise measurements are used and complex and hermetic arguments are proposed to explain tiny inconsistencies in construction. Thus a "square" panel of tiling which
actually has one side 1.6 millimetres shorter than the other is described as an attempt by the master mason to hide the Golden Ratio within the walls of a building. In a recent book Paul-Alan Johnson
criticises such methods in some detail. "The equation of geometrical with architectural figures" he argues, "is only what we choose to make of it. Precision per se is not enough no matter how
satisfying it is for the analyst."[19] All measurements must be treated with consistency and due regard for the dimensionality of the materials being measured. Scholars, in this hybrid field where
architecture and mathematics meet, too freely use those measurements of buildings which suit their arguments and simply ignore those that do not. The main problem is, as Frascari and Ghirardini
identify, "for the φ believers, any point is good for making the point."[20]
A further problem arising from the reliance on approximate dimensions is that there are well documented proportional systems which have been used in architecture throughout history that are
sufficiently close to the Golden Mean that they may seem interchangeable with it. As Frascari and Ghirardini explain, a common proportional system utilised by architects relies on the ratio 5 : 3 (or
approximately 1.66) which, owing to the limits of materials and the craft of building, is readily mistaken for the Golden Mean (or approximately 1.618). Frascari and Ghirardini also discuss many
examples wherein the documented ratio employed by architects and builders is 5 : 3 (or 8 : 5) and suggest that these proportions may better explain those measured in buildings than the proportions of
the Golden Mean. Pierre von Meiss reiterates this line of argument in his Elements of Architecture noting that the "Golden [Mean] is very close to the ratio of 5 : 8" and that "Le Corbusier [even]
takes the credit for reducing the Golden [Mean] to rational numbers applicable to architecture."[21] Robin Evans's recent book examines this same concept in detail, describing how Le Corbusier
initially tried to use the Golden Mean to generate ideal architectural proportions but found the results "miserable". Le Corbusier's solution, documented at length in his two volumes of the Modulor,
was to work with ratios of 5 : 3 or 8 : 5 to overcome the "startling ugliness" of the architectural solutions generated through the use of the Golden Mean. None of which is to suggest that the Golden
Mean has never been used in architecture nor that the ratio 5 : 3 is dominant but rather that a more thoughtful, consistent and critical analysis is necessary before any claim regarding proportional
systems in architecture can be made. Le Corbusier and Palladio were each familiar with the Golden Mean and there is some evidence to suggest that they each utilised its properties in their designs.
However, in the case of the former at least, the overwhelming body of evidence points away from the use of the Golden Mean in any sustained way. As Evans concludes: "[t]heories of proportion as traditionally formulated are quite inadequate to the task of describing complex shapes." Le Corbusier's variant of the Golden Mean "lurks behind the wall as if it were responsible for it, as if it made all of the difference in the world while hardly making any difference at all".[22]
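The numeric closeness that Frascari and Ghirardini point to is easy to quantify: ratios of consecutive Fibonacci numbers converge on φ, and the simple builders' ratios 5 : 3 and 8 : 5 sit within a few percent of it — well inside ordinary construction tolerances. A quick check (the twenty-term Fibonacci run is an arbitrary illustration):

```python
phi = (1 + 5 ** 0.5) / 2     # ~1.6180339887

ratio_5_3 = 5 / 3            # ~1.6667
ratio_8_5 = 8 / 5            # 1.6

# Percentage difference of each builders' ratio from phi
pct_diff_5_3 = abs(ratio_5_3 - phi) / phi * 100   # about 3%
pct_diff_8_5 = abs(ratio_8_5 - phi) / phi * 100   # about 1.1%

# Consecutive Fibonacci ratios (1:1, 2:1, 3:2, 5:3, 8:5, ...) converge on phi
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])
ratios = [fib[i + 1] / fib[i] for i in range(len(fib) - 1)]
```

A building measured to the nearest centimetre simply cannot distinguish 1.6 or 1.667 from 1.618 at ordinary room scales, which is the crux of the argument above.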
Frascari and Ghirardini's arguments are also furthered by those of Rocco Leonardis who claims that the very phrase "the Golden Mean" is problematic. For Leonardis the word "Golden" implies that the
ratio is somehow rare or especially valuable - neither of which is necessarily true. An apprentice or student with the right tools and a modicum of effort can produce the proportions of the Golden
Mean by accident. With a straight edge ruler and a pair of compasses anyone given enough time will generate a pentagram. Producing a pentagram (or some related geometric expression of the Golden
Mean) in no way suggests that an amateur geometer understands anything about mathematics. Eminent historian of mathematics, Georges Ifrah, makes this same point in some detail when he recalls that
[I] once knew a professor of mathematics who [ ] tried to persuade his students that abstract geometry was historically prior to its practical applications, and that the pyramids and buildings of
ancient Egypt "proved" that their architects were highly sophisticated mathematicians. But the first gardener in history to lay out a perfect ellipse with three stakes and a length of string
certainly held no degree in the theory of cones! Nor did Egyptian architects have anything more than simple devices - "tricks", "knacks" and methods of an entirely empirical kind, no doubt
discovered by trial and error - for laying out their ground plans. They knew, for example, that if you took three pieces of string measuring respectively three, four, and five units in length,
tied them together, and drove stakes into the ground at the knotted points, you got a perfect right angle. This "trick" demonstrates Pythagoras's theorem [ ] but it does not presuppose knowledge
of the abstract formulation, which the Egyptians most certainly did not have.[23]
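The Egyptian rope trick in Ifrah's example is easy to verify: a 3-4-5 triangle satisfies a² + b² = c², so the angle opposite the longest side is exactly 90 degrees. A one-line check in Python:

```python
import math

a, b, c = 3.0, 4.0, 5.0
is_right = math.isclose(a * a + b * b, c * c)   # Pythagoras: 9 + 16 = 25

# Angle opposite the longest side, via the law of cosines
angle_deg = math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))
```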
Johnson reiterates this view and records that throughout history most architects have only possessed "a rudimentary understanding of geometry and design using more or less straightforward permutations on regular polygons and the circle [ ] At the risk of oversimplification, for more than two millennia basilica, domed and vaulted structures have been generated principally by the projection or rotation of three primary figures - circle, rectangle, triangle".[24] Like the "sacred cut" and the "vesica piscis", the Golden Mean is a simple geometric construct which can be used to
shape windows and floor plans, to locate paving patterns and to divide courtyards. That these geometric constructs have been used in architecture throughout the ages is undoubtable. But that these
forms represent a more complex awareness of numeric or harmonic symbolism in architecture is debatable. In these rare instances where there is documented evidence that the architect was aware of the
Golden Mean and possibly even its mathematics, then a case can be made. In other cases, scholars, whether architects or mathematicians, must be more circumspect.
The interpretation of architecture, like the interpretation of art or mathematics must be undertaken rigorously. Without scholarly rigour simple errors of dimension or geometry can be used to derive
an entire thesis that is completely spurious. As an example of this Evans suggests an examination of two investigations of the proportions of the same facade undertaken by both Wölfflin and Wittkower
almost fifty years apart. Fundamentally Evans finds that each of the studies of Alberti's facade for the Santa Maria Novella in Florence rely on "auxiliary lines (imaginary lines) to reveal
privileged relations and virtual figures that can not easily be inferred from direct inspection".[25] Moreover it appears that each analysis is traced on inaccurate drawings of the facade, and in
addition elements from the third dimension (those which are well behind the facade) are transposed to the same plane. It is no wonder, given these gross liberties, that both Wölfflin and Wittkower
are able to propose conflicting and equally credible interpretations of the proportions of Santa Maria Novella.
Rowe presciently argued in 1973 that proportional analysis should only be undertaken where there is clear, visible evidence and even then it should not be misconstrued as representing any powerful
proof of a real relationship between geometry and architecture. Like Husserl and Derrida, Rowe is aware of the fundamental inconsistencies present in attempts to connect geometry with some other
form. The Nexus conference was not the first conference to raise these issues and it will not be the last; it did however in many ways respond to the calls of Husserl and Rowe.
1. Vitruvius, The Ten Books on Architecture, Morris Hicky Morgan, trans. (Cambridge, Massachusetts: Harvard University Press, 1914), 73. [First written circa 13 BC.]
2. Robert Lawler, Sacred Geometry (London: Thames and Hudson, 1982), 56-57.
3. Jacques Derrida, Edmund Husserl's Origin of Geometry: An Introduction, John P. Leavey Jr., trans. (Lincoln: University of Nebraska Press, 1989). [First edition 1962.]
4. This issue is discussed in more detail in Michael J. Ostwald and R. John Moore, "The Mapping of Architectural and Cartographic Faults: Troping the Proper and the Significance of (Coast) Lines," Architectural Theory Review (1998), 4-45.
5. Catherine T. Ingraham, Architecture and the Burdens of Linearity (New Haven: Yale University Press, 1998).
6. See Steven Vajda, Fibonacci And Lucas Numbers, And The Golden Section (New York: John Wiley and Sons, 1989); Charles F. Linn, The Golden Mean: Mathematics And The Fine Arts (New York: Doubleday, 1974).
7. Alberto Pérez-Gómez and Louise Pelletier, Architectural Representation and the Perspective Hinge (Cambridge, Massachusetts: MIT Press, 1997).
8. See, for example, the discussion of "right line" in R. John Moore and Michael J. Ostwald, "Choral Dance: Ruskin and Dædalus," Assemblage, vol. 32 (April 1997), 88-107.
9. The classic example of this approach is found in Rob Krier, Architectural Composition (London: Academy Editions, 1988).
10. Colin Rowe, The Mathematics of the Ideal Villa and Other Essays (Cambridge, Massachusetts: MIT Press, 1976), 16. [Rowe's addendum was made in 1973.]
11. Ibid.
12. Vera W. de Spinadel, "The Metallic Means and Design," Nexus II: Architecture and Mathematics, Kim Williams, ed. (Fucecchio, Florence: Edizioni Dell'Erba, 1998), 143-158; Gert Sperling, "The Quadrivium in the Pantheon of Rome," in Nexus II: Architecture and Mathematics, 127-142.
13. Marco Frascari and Livio Volpi Ghirardini, "Contra Divinam Proportionem," Nexus II: Architecture and Mathematics, Kim Williams, ed. (Fucecchio, Florence: Edizioni Dell'Erba, 1998), 65-66.
14. Ibid., 67.
15. Ibid., 68-69.
16. Ibid., 69.
17. See the discussion in Sperling, "The Quadrivium in the Pantheon of Rome," cited in note 12.
18. Fausto Masi, The Pantheon as an Astronomical Instrument (Rome: Edizioni Internazionali di Letteratura e Scienze, 1996), 6.
19. Paul-Alan Johnson, The Theory of Architecture: Concepts, Themes and Practices (New York: Van Nostrand Reinhold, 1994), 359.
20. Frascari and Ghirardini, "Contra Divinam Proportionem," 70.
21. Pierre von Meiss, Elements of Architecture: From Form to Place (New York: Van Nostrand Reinhold, 1990), 63.
22. Robin Evans, The Projective Cast: Architecture and its Three Geometries (Cambridge, Massachusetts: MIT Press, 1995), 292.
23. Georges Ifrah, The Universal History of Numbers (London: Harvill, 1998), 92.
24. Johnson, The Theory of Architecture, 358.
25. Evans, The Projective Cast, 248.
Robin Evans, The Projective Cast: Architecture and its Three Geometries (Cambridge, Massachusetts: MIT Press, 1995).
Georges Ifrah, The Universal History of Numbers (London: Harvill, 1998).
Catherine T. Ingraham, Architecture and the Burdens of Linearity (New Haven: Yale University Press, 1998).
Paul-Alan Johnson, The Theory of Architecture: Concepts, Themes and Practices (New York: Van Nostrand Reinhold, 1994).
Charles F. Linn, The Golden Mean: Mathematics And The Fine Arts (New York: Doubleday, 1974).
Alberto Pérez-Gómez and Louise Pelletier, Architectural Representation and the Perspective Hinge (Cambridge, Massachusetts: MIT Press, 1997).
Colin Rowe, The Mathematics of the Ideal Villa and Other Essays (Cambridge, Massachusetts: MIT Press, 1976).
Steven Vajda, Fibonacci And Lucas Numbers, And The Golden Section (New York: John Wiley and Sons, 1989).
Kim Williams, ed., Nexus II: Architecture and Mathematics (Fucecchio, Florence: Edizioni Dell'Erba, 1998).
ABOUT THE AUTHOR. Dr Michael J Ostwald lectures in architectural history and theory at the University of Newcastle in Australia. He has written extensively on the relationship between architecture
and geometry.
The correct citation for this article is:
Michael J. Ostwald, "Under Siege: The Golden Mean in Architecture", Nexus Network Journal, vol. 2 (2000), pp. 75-81. http://www.nexusjournal.com/Ostwald.html
Copyright ©2000 Kim Williams
What’s on the Digital SAT Formula Sheet?
The SAT has always provided a reference sheet to help students tackle the math portion of the exam, but how has it changed since the exam’s digital transformation?
In this article, we’re breaking down what’s on the digital SAT formula sheet—and what isn’t. Read on for Piqosity’s SAT formula sheets, covering many of the crucial formulas that test-takers are
expected to remember ahead of SAT test day.
Math on the Digital SAT
The digital SAT Math section tests four distinct content categories: Algebra, Advanced Math, Problem Solving & Data Analysis, and Geometry & Trigonometry.
While the Reading & Writing portion presents categories as different sections of the test, categories are shuffled throughout the SAT Math test. SAT math questions are organized by difficulty
instead, with a gradual move from easier questions to more difficult ones.
Around 70% of the math questions students encounter fall under Algebra and Advanced Math, which cover topics from Algebra I & Algebra II courses. The remaining ~30% of questions fall under Problem Solving & Data Analysis, normally covered across math and science courses, and Geometry & Trigonometry, which students typically learn in Geometry and Pre-Calculus.
As for the types of questions students encounter on the digital SAT Math portion: about 70% of math questions are basic equations and problems, while about 30% are word problems (which contextualize mathematical problems, capped at 50 words in length).
One major update to the SAT format is that calculators are now allowed throughout the whole SAT Math test, a contrast to the two distinct “No Calculator” and “Calculator” portions of the
pencil-and-paper Math SAT. With a total of 70 minutes to answer 44 questions, students have about 95 seconds per math question—this allots more time to input problems into the calculator for those
longer, multi-step questions.
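The per-question figure follows directly from the section timing: 70 minutes for 44 questions works out to roughly 95 seconds each.

```python
total_seconds = 70 * 60                            # 70-minute math section
questions = 44
seconds_per_question = total_seconds / questions   # about 95.45 seconds per question
```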
Note: Although the math section covers the same content and allows calculator use throughout, it is still divided in half. A key aspect of the digital SAT format is that each section has a “normal”
Module 1 that is the same for every test-taker, and an “adaptive” second Module that is easier or harder (depending on the tester’s Module 1 performance).
Together with permitting calculators and providing scratch paper (as many sheets as you need), College Board gives test-takers another resource to help them on the math section: a Digital SAT formula sheet.
What’s on the Digital SAT Formula Sheet?
The reference sheet on the SAT includes formulas to help students complete math questions. While these are formulas that students should already be familiar with, the availability of a formula sheet
will help them solve problems more efficiently and correctly—they won’t need to dig in their memory for a formula they don’t immediately recall.
The digital SAT formula sheet largely covers geometric formulas and principles: it only includes formulas that are helpful on Geometry & Trigonometry questions.
The Digital SAT math reference sheet includes formulas for circumference; the area of circles, rectangles, and triangles; and the volume of rectangular prisms, cylinders, spheres, cones, and pyramids. This sheet also includes the Pythagorean theorem as well as a breakdown of the angles in special right triangles.
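The reference-sheet formulas are straightforward to express in code; a few of them as a sketch (function names and example values are our own illustrative choices):

```python
import math

def circle_circumference(r): return 2 * math.pi * r
def circle_area(r): return math.pi * r ** 2
def triangle_area(base, height): return 0.5 * base * height
def cylinder_volume(r, h): return math.pi * r ** 2 * h
def sphere_volume(r): return (4 / 3) * math.pi * r ** 3
def cone_volume(r, h): return (1 / 3) * math.pi * r ** 2 * h

def hypotenuse(a, b):
    """Pythagorean theorem: c^2 = a^2 + b^2."""
    return math.sqrt(a * a + b * b)

def thirty_sixty_ninety(short_leg):
    """Special right triangle: a 30-60-90 triangle has sides x, x*sqrt(3), 2x."""
    return short_leg, short_leg * math.sqrt(3), 2 * short_leg
```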
Key Formulas Not on the Digital SAT Formula Sheet
Since the digital SAT formula sheet only provides some essential formulas used in geometry problems, what are the formulas you’re expected to know throughout the rest of the SAT math test?
Algebra Formulas You Should Know
The Digital SAT Math Test’s Algebra questions cover linear equations, functions, and inequalities. Students should be familiar with all basic forms of representing lines and linear growth, including:
• Standard Form
• Point-Slope Form
• Slope-Intercept Form
• Slope
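For reference, the four linear representations listed above take these standard forms (variable names may differ slightly by textbook):

```latex
\begin{align*}
\text{Standard form:} \quad & Ax + By = C \\
\text{Point-slope form:} \quad & y - y_1 = m(x - x_1) \\
\text{Slope-intercept form:} \quad & y = mx + b \\
\text{Slope:} \quad & m = \frac{y_2 - y_1}{x_2 - x_1}
\end{align*}
```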
Digital SAT Advanced Math Formulas
The Advanced Math questions on the SAT draw from Algebra and Algebra II courses. Students should be familiar with linear and quadratic systems of equations, polynomials, nonlinear functions,
exponential functions, radicals, absolute value equations, and more higher level algebra concepts.
Some of the Advanced Math functions and formulas students should make sure to know ahead of the SAT are:
• Quadratic equations (Standard, Factored, and Vertex Forms) and the Quadratic Formula
• Factoring polynomials
• Rules of radical operations
• Properties of radicals
• Exponential functions
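The quadratic representations above are worth writing out explicitly, since moving between them is a common SAT task (standard conventions):

```latex
\begin{align*}
\text{Standard form:} \quad & y = ax^2 + bx + c \\
\text{Vertex form:} \quad & y = a(x - h)^2 + k \\
\text{Quadratic formula:} \quad & x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{align*}
```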
Problem-Solving and Data Analysis Formulas
The SAT’s Problem-Solving and Data Analysis questions cover topics from percentages to probability to chart and graph analysis. These questions are the SAT’s equivalent of the ACT science questions, testing students’ ability to interpret data that’s represented in different ways.
SAT-takers should be familiar with formulas and methods for analyzing data sets, such as finding averages, calculating percentages and probabilities, and converting values between percentages,
fractions, and decimals.
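Concretely, the core data-analysis relationships to have memorized are (standard definitions):

```latex
\begin{align*}
\text{Mean:} \quad & \bar{x} = \frac{x_1 + x_2 + \dots + x_n}{n} \\
\text{Percent:} \quad & \% = \frac{\text{part}}{\text{whole}} \times 100 \\
\text{Probability:} \quad & P(\text{event}) = \frac{\text{favorable outcomes}}{\text{total possible outcomes}}
\end{align*}
```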
Geometry & Trig Formulas Not on the Digital SAT Formula Sheet
As you now know, the SAT math reference sheet consists of only formulas that cover Geometry & Trigonometry topics; however, it doesn’t include the full extent of formulas and definitions you’ll need
to know for this section.
Geometry and Trigonometry SAT questions draw mainly from Geometry and Pre-Calculus courses. To be prepared for these questions, you’ll need to have a solid understanding of angles, circles,
and triangles, including:
• Equation of a circle
• Length of an arc
• Area of a sector
• Angles in parallel & intersecting lines
• Finding sin, cos, and tan
• Types of triangles
• Angles in triangles
• Surface Area of 3D Shapes
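Several of the circle and trigonometry items above reduce to a handful of standard formulas (for a central angle $\theta$ measured in degrees):

```latex
\begin{align*}
\text{Circle, center }(h,k)\text{, radius }r\text{:} \quad & (x - h)^2 + (y - k)^2 = r^2 \\
\text{Arc length:} \quad & s = \frac{\theta}{360^\circ} \cdot 2\pi r \\
\text{Sector area:} \quad & A = \frac{\theta}{360^\circ} \cdot \pi r^2 \\
\text{Trig ratios:} \quad & \sin\theta = \frac{\text{opp}}{\text{hyp}}, \quad \cos\theta = \frac{\text{adj}}{\text{hyp}}, \quad \tan\theta = \frac{\text{opp}}{\text{adj}}
\end{align*}
```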
How to Prepare for the Digital SAT Math Test
Knowing what is and isn’t on the digital SAT formula sheet will help you approach the math portion more prepared! The formula sheets above are a great resource to help you study for the exam. Our
advice is to jot down some of these key formulas from your memory as soon as you get your scratch paper! If you make your own reference sheet, you won’t have to dig through your memory in the midst
of the exam.
If you want to do better on digital SAT math questions, the tried-and-true method is a cycle of taking a practice test, studying your weak concepts using lessons and practice questions, then
repeating with another practice test through test day! Then, you can study and retake the SAT if you still aren’t at your goal score.
If you’re looking for an abundance of affordable resources to help you on your math (and reading) SAT journey, Piqosity is here to help. Along with our full-length, online ELA and Math courses for
grades 5-11, we offer full SAT, ACT, and ISEE test prep courses, each of which includes 12 practice exams, dozens of concept lessons, personalized practice software, and more.
In addition to our new digital SAT course, we’re also offering two free digital PSATs! These DPSAT practice tests are designed to help you prepare ahead of time for the October exam’s format,
available for anyone who signs up for a Piqosity community account.
Our free community account allows you to try out all of Piqosity’s features—no credit card required! When you’re ready to upgrade, Piqosity’s year-long accounts start at only $89.
Likelihood Ratio Versus Predictive Value
June 5, 2018
Source: Schneider AE, Cannon BC, Johnson JN, et al. Left axis deviation in children without previously known heart disease. Pediatrics. 2018;141(3):e20171970; doi:10.1542/peds.2017-1970. See AAP
Grand Rounds commentary by Dr. David Spar (subscription required).
Researchers at a single institution (Mayo Clinic) reviewed medical records of all children without known heart disease from 1 to 18 years of age who had ECG evidence of left axis deviation (LAD) over
a 13-year period. They found that LAD findings on ECG were more likely to be clinically significant in children who had either a high degree of LAD (defined as less than negative 42 degrees axis on
ECG), ECG evidence of chamber enlargement or hypertrophy, or abnormal cardiac physical exam. In general, the greater the number of the 3 features that were present, the greater the likelihood that
the LAD was clinically important. They acknowledged some study limitations that are expected with retrospective studies, most prominently here the fact that only about half of the children had
echocardiograms performed. Given that clinicians decided on ordering an echocardiogram, this could result in significant bias since it is likely the children with echocardiogram performed had a
greater likelihood of significant disease. If this Mayo Clinic population consisted primarily of children referred by another provider, they could represent a more severe end of the disease spectrum,
i.e., spectrum bias. Also, the researchers did not investigate the indication for the ECG in all cases, but the subset where this was illuminated had a variety of indications including clinical
illness such as dyspnea or stroke.
However, I'm most interested in their use of predictive values to support their conclusion that children with LAD and at least 1 of the 3 factors above should be considered for an echocardiogram.
Here's why.
The authors supplied us with sensitivities, specificities, and predictive values for heart disease based on the number of variables present: for no variables present, the positive predictive value
(PPV) was 32% and the negative predictive value was 98%. For 1 variable, PPV was 18% and NPV 86%, while the values were 64% and 88% for 2 variables and 86% and 88% for all 3 present. What's wrong
with that?
The problem with predictive values is that they will vary with the frequency of the disease in the patient population being studied. In this study, that means we're talking about children referred to
Mayo Clinic for evaluation. That can be a very different population from children presenting to a primary care provider, or even from children presenting to a pediatric cardiology practice at a
tertiary medical center. On the other hand, likelihood ratios (LR) do not depend on the frequency of the disease in the population, although they can be affected by spectrum bias to some extent.
It's quite easy to calculate LRs from the data the authors provided. For zero variables present, the LR for a positive test (LR+) is 2.63 and for a negative test (LR-) is 0.12. The corresponding
values for 1, 2, and 3 variables present are 1.23 and 0.9, 9.67 and 0.73, and 25 and 0.76.
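The calculation really is straightforward. A minimal sketch using the standard definitions, LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity (the sensitivity and specificity below are illustrative values, not the study's exact numbers):

```python
def likelihood_ratios(sensitivity, specificity):
    """Standard definitions: LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_plus = sensitivity / (1 - specificity)
    lr_minus = (1 - sensitivity) / specificity
    return lr_plus, lr_minus

# Illustrative values only (not taken from the study):
lr_plus, lr_minus = likelihood_ratios(sensitivity=0.90, specificity=0.80)
print(round(lr_plus, 2), round(lr_minus, 2))  # 4.5 0.12
```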
The path of least resistance is to look at predictive values because the numbers are in percentages that make inherent sense. LRs, on the other hand, provide us with no intuitive meaning for the
numbers, except for individuals accustomed to using LRs. For example, an LR+ of 9.67 doesn't mean 9 times as likely, or 9%, or anything directly linked to the number 9 at all. Instead, LRs allow us
to determine the post-test probability of the disease being present, assuming we know the pre-test probability. In this study, the overall rate of heart disease in the population was about 15%. An LR
equal to 1 represents a test that has no role in diagnosing a particular condition. (Note that this is what the study found in children with just 1 risk variable present, more on that later.) The
higher an LR+, the greater the post-test probability, so that the 9.67 number when calculated out results in moving the pre-test probability from 15% to a post-test probability of 63%. Whether or not
that is important is a matter of individual debate. If you feel that the 15% rate is already high enough to order an echocardiogram on your patient, then it doesn't matter that the post-test
probability is 63% or 18% (which happens to be the post-test probability if just 1 variable is present). I'd argue that 15% is pretty high, and I wouldn't want to miss serious heart disease, so I'd
order an echocardiogram regardless.
To get at the other side of this, how low would the post-test probability need to be such that I wouldn’t order an echocardiogram? Here’s where the LR- becomes useful, and the only LR- in this study
that results in any appreciable change is that associated with no variables present, where the post-test probability becomes 2%. Are you willing to accept missing 2% of the children and not order
echocardiograms on all the children, regardless of the number of variables present? I'm not sure, but complicating the decision is that, in this study, only about half the children had
echocardiograms performed, based on the judgment of their clinician. Presumably, all who had concerning symptoms received an echocardiogram, leaving the more asymptomatic of the group not represented
in the study data.
One final point. The abstract concludes that "Clinicians should consider obtaining an echocardiogram in patients with LAD and ..." any of the 3 variables present. However, their data for just 1
variable present do not change post-test probability significantly, raising it from 15% to 18%. This is easily seen since the LR+ for 1 variable present is 1.23, pretty close to 1. I think their
conclusion should have suggested echocardiogram for children with at least 2 variables present if they wanted to draw any conclusions from their data.
The information in the study is of interest but is of little utility without knowing echocardiogram information about the other half of the population. Mostly, though, I wish the investigators would make it a little easier on the reader by reporting and explaining LRs because predictive values can be seductively misleading.