Generate binary numbers using a queue - A CODERS JOURNEY
Generate binary numbers from 1 to any given number, “n”, using a queue.
Function Signature
List<string> GenerateBinaryNumber(int n)
Example Input and Output
n = 1 => (1)
n = 3 => (1, 10, 11)
Problem Solving Strategy
Assuming you've never encountered this problem before and don't have much experience using stacks and queues, try to discover a pattern. The first step in discovering a pattern is to write down a few sample inputs and outputs.
Decimal : 1 2 3 4 5
Binary : 1 10 11 100 101
If you look carefully, you'll see that 2 ("10") is formed by appending a "0" to the previous number, 1. And 3 ("11") is formed by appending a "1" to that same number, 1. Similarly, 4 ("100") is formed by appending a "0" to 2 ("10"), and 5 ("101") is formed by appending a "1" to 2.
So could it be that by appending a "0" and a "1" to each previously generated binary number, we can create this pattern? Yes! Let's visualize how this will work with a queue.
Visualize the Solution
We'll use a queue to generate the numbers and a list (or array) to store the results.
After working through a graphical example, it seems like this will work – so let's formalize the algorithm:
1. Create an empty queue – this will be used to generate the binary numbers
2. Create an empty list/array – this will hold the results, i.e., the binary numbers generated up to n
3. Enqueue "1" into the queue
4. Generate the binary numbers inside a loop that runs until "n" binary numbers have been added to the list. Here's what happens inside the loop:
□ Remove an element from the queue – call this "X"
□ Generate the next two binary numbers by appending a "0" and a "1" to "X" respectively. The two new binary numbers thus generated are "X0" and "X1"
□ Enqueue "X0" and "X1" into the queue
□ Add "X" to the result list
Note: Once "n" elements have been added to the list, the loop terminates. At this point there may still be elements left in the queue; they are simply never added to the results list (since we only need n elements), and that is fine.
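Before looking at a full implementation, the steps above can be sketched quickly in Python, using the standard library's deque as the queue (this is an illustrative sketch, not part of the original article):

```python
from collections import deque

def generate_binary_numbers(n):
    """Generate the binary representations of 1..n using a queue."""
    queue = deque(["1"])  # step 3: seed the queue with "1"
    results = []
    while len(results) < n:           # step 4: loop until n numbers collected
        current = queue.popleft()     # remove "X"
        queue.append(current + "0")   # enqueue "X0"
        queue.append(current + "1")   # enqueue "X1"
        results.append(current)       # add "X" to the results
    return results

print(generate_binary_numbers(5))  # ['1', '10', '11', '100', '101']
```

Each dequeued string spawns its two children, so the queue emits binary strings in increasing numeric order, exactly like a level-order traversal of a binary tree rooted at "1".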
C# Implementation
using System;
using System.Collections.Generic;
namespace StacksNQueues
{
    public class GenerateBinaryNumbers
    {
        public static List<string> GenerateBinaryNumber(int n)
        {
            Queue<string> binaryGenerationQueue = new Queue<string>();
            List<string> results = new List<string>();
            binaryGenerationQueue.Enqueue("1"); // seed the queue
            while (results.Count < n)
            {
                string current = binaryGenerationQueue.Dequeue();
                binaryGenerationQueue.Enqueue(current + "0");
                binaryGenerationQueue.Enqueue(current + "1");
                results.Add(current);
            }
            return results;
        }
    }
}
And here's the test program:
using System;
using System.Collections.Generic;
namespace StacksNQueues
{
    class Program
    {
        static void Main(string[] args)
        {
            // Test generating binary numbers using a queue
            List<string> testbinary0 = GenerateBinaryNumbers.GenerateBinaryNumber(0); // empty
            List<string> testbinary1 = GenerateBinaryNumbers.GenerateBinaryNumber(1); // 1
            List<string> testbinary3 = GenerateBinaryNumbers.GenerateBinaryNumber(3); // 1, 10, 11
            List<string> testbinary5 = GenerateBinaryNumbers.GenerateBinaryNumber(5); // 1, 10, 11, 100, 101
            Console.WriteLine(string.Join(", ", testbinary5));
        }
    }
}
Complexity Analysis
Runtime complexity: O(n), since the loop runs only until n numbers have been generated and the runtime grows linearly as n becomes bigger.
Space complexity: O(2n) = O(n), because the queue used for processing and the list/array holding the results each grow linearly with n.
{"url":"https://acodersjourney.com/generate-binary-numbers-using-a-queue/","timestamp":"2024-11-12T21:59:15Z","content_type":"text/html","content_length":"130666","record_id":"<urn:uuid:9fa1a0da-aefb-42bc-9f76-0e6e8dbb7575>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00853.warc.gz"}
What is Gas Turbine Power Plant? - Mechanical Booster
After reading the heading of this post, you may already be able to guess the answer – but don't worry if you can't. In this article I will give every possible piece of information about the gas turbine power plant: its definition, its working principle, and how it works. So, without wasting time, let's begin our discussion with its definition.
What is Gas Turbine Power Plant?
A gas turbine power plant is a power plant that uses a gas turbine to produce electricity. The question that now comes to mind is how this electricity is produced by the gas turbine. Before understanding the working of the GTPP, we need a clear understanding of its main parts, so let's first cover those and then move on to its working.
Main Parts
1. Compressor: A mechanical device used to compress the air to a high density. The compressed air supports the burning of the fuel. The compressor and turbine share a common shaft.
2. Combustion chamber: The chamber where the fuel burns in the presence of the compressed air.
3. Gas Turbine: It consists of rotor blades. The hot gases produced by burning the fuel in the combustion chamber strike these blades and set the turbine rotating.
4. Generator / Alternator: An electric generator coupled to the shaft of the gas turbine. It rotates with the turbine shaft and produces electricity.
Construction Details
1. Compressor:
A rotary air compressor is used in the gas turbine power plant. At the compressor inlet, an air filter removes dust from the incoming air. The compressor's role is to compress the air and increase its pressure.
2. Regenerator:
The exhaust gases always carry heat, and that heat is used in the regenerator to raise the temperature of the compressed air. In short, the compressed air passing through the fine tubes of the regenerator absorbs heat from the exhaust gases, which raises its temperature.
3. Combustion chamber:
The hot air from the regenerator then enters the combustion chamber. The burners inside the combustion chamber inject the fuel oil in the form of a fine spray. The compressed air from the regenerator is heated to about 3000 °F by the burning oil. This hot gas then mixes with additional compressed air and cools down to about 1300–1500 °F.
4. Gas Turbine:
In the turbine, the combustion gases and compressed air expand together. Kinetic energy is produced inside the turbine, and the temperature of the gas comes down again to about 900 °F.
5. Alternator:
Inside the alternator there is a rotor. The rotor of the alternator and the turbine share the same shaft, so the alternator rotates with the turbine and produces the required electrical energy.
Working Principle of Gas Turbine Power Plant
First of all, air is compressed inside the compressor. This compressed air enters the combustion chamber and gets heated. This highly heated, pressurized gas then enters the turbine, where it expands, acquires kinetic energy, and rotates the turbine blades. Finally, this kinetic energy is converted into electrical energy with the help of a generator (alternator) coupled to the turbine shaft.
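The article does not quantify efficiency, but as background, the ideal (air-standard) Brayton cycle that models this compressor–combustor–turbine loop has a thermal efficiency that depends only on the compressor pressure ratio. This is a textbook formula, not taken from this article; the pressure ratio used below is purely illustrative:

```python
def ideal_brayton_efficiency(pressure_ratio, gamma=1.4):
    """Ideal (air-standard) Brayton cycle thermal efficiency.

    gamma = 1.4 is the specific-heat ratio for air.
    eta = 1 - r^(-(gamma - 1) / gamma), where r is the pressure ratio.
    """
    return 1.0 - pressure_ratio ** -((gamma - 1.0) / gamma)

# A pressure ratio of 10 gives roughly 48% ideal efficiency; real plants
# come in considerably lower because of component losses, which is why
# the thermal efficiency of a GTPP is described as low.
print(f"{ideal_brayton_efficiency(10):.1%}")
```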
Now, let us talk about the choice of fuel in the gas turbine power plant. Natural gas is the most suitable fuel, as it is cheaper and emits less carbon dioxide, so less pollution is produced. It should be noted that the fuel must not emit pollutants such as carbon monoxide and nitrogen oxides after combustion.
Difference between the Gas Turbine Power Plant and Steam Turbine Power Plant:
Gas turbine and steam turbine power plants may seem similar, but the GTPP works on compressed air and combustion gas, while the steam turbine power plant works on compressed steam. The main goal of both power plants is the same: to generate electricity.
Advantages
• The GTPP has a simple structure, while the steam turbine power plant is structurally more complicated.
• A gas turbine power plant is dimensionally smaller than other types of power plants, so it can be installed in a compact area.
• The cost of maintaining this power plant in working condition is very low.
• The GTPP produces less pollution and requires less water to operate. Because the water requirement is low, such power plants are popular in places where water is scarce and the demand for electricity is high.
• No condenser or boiler is needed to operate the gas turbine power plant.
• It can run on cost-effective fuels; cheaper fuels such as kerosene and benzene can be used.
Disadvantages
• When power plants are compared, efficiency is the most important factor to consider. The gas turbine power plant has low thermal efficiency, and this lower efficiency limits its applications.
• The compressor generates high-frequency noise, which leads to noise pollution.
• Heat carried away by the exhaust gases also reduces the efficiency of this power plant.
• It is not suitable for producing electricity for our daily applications.
• Building a gas turbine requires a large cost. The operating temperature inside the GTPP is high, so special metals and alloys must be used in its construction.
Applications
• GTPPs find applications in large compressors and high-speed cars.
• They are also used for power generation in ships and aircraft.
In this article, we covered all the details about the gas turbine power plant. Let us know in the comment box if you have any questions.
{"url":"https://mechanicalbooster.com/2019/05/gas-turbine-power-plant.html","timestamp":"2024-11-08T12:32:24Z","content_type":"text/html","content_length":"220285","record_id":"<urn:uuid:fa893b91-d7fa-46d6-a557-33b0da825c60>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00348.warc.gz"}
Benchmark Data Details
Thomson Health Care's methodologies, based on the All Patient Refined Diagnosis Related Groups (APR-DRG), allow us to produce severity-adjusted performance comparisons on length of stay (LOS), hospital charges, and mortality between or across virtually any arbitrary subgroups of inpatients. These patient groupings can be based on DRGs, hospitals, product lines, geographic regions, physicians, and so forth. To adjust for differences in diagnosis type and illness severity across the different reporting groups, one can calculate a severity-adjusted value based on the assigned APR-DRG. The severity-adjusted values based on the APR-DRG methodology measure the clinical need for acute hospital resources with respect to LOS and charges, and an "expected" mortality rate reflecting the group's aggregate severity level.
Normalized LOS and Charge Weights, and Mortality Rates
Thomson Health Care’s Projected Inpatient Database (PIDB) was used to develop the all-payer pediatric LOS and Charge weights, and Mortality rates based on the APR-DRG. The PIDB is a nationally
representative database containing more than 20 million inpatient discharges annually which are statistically weighted to represent the universe of all short-term, general, non-Federal hospitals in
the U.S. Each record is assigned a weight or “projection factor” which indicates the number of discharges it represents in the universe. In order to account for geographic cost-of-living differences,
charges are adjusted for each hospital using the CMS wage index. APR-DRG weights are calculated by dividing the average charge (wage-adjusted) or LOS for each APR-DRG by the overall average charge
(wage-adjusted) or LOS for patients in the universe of short-term, general, non-Federal hospitals in the U.S. The mortality rates are calculated within each APR-DRG and its mortality indicator using
the same normative data. For the development of the pediatric weights and rates for the APR-DRG, data were excluded with ages greater than or equal to 18 years.
Calculating Severity-Adjusted LOS, Charges, and Mortality
Using the normalized weights and rates, one can calculate severity-adjusted values that can be used in “fair” comparisons. Each variable (LOS, Charges, and Mortality) has its own calculation for
creating a severity-adjusted value. One can calculate the severity-adjusted values on the discharge level for all records and then choose different sets of patients to generate the severity-adjusted
values on a group level. In addition, group level severity-adjusted values can be generated from group level data if it is more convenient to use than the discharge level severity-adjusted values.
For Severity-Adjusted LOS:
Discharge level
1. Assign the APR-DRG and APR-DRG LOS Weight for each discharge
2. Severity-Adjusted LOS = (Actual LOS) / (APR-DRG LOS Weight)
3. Using the SA-LOS, one can calculate the Average SA-LOS on a group level
Group level (slightly different)
1. Average APR-DRG LOS Weight for the group of patients is required
2. Average Actual LOS for the group of patients is required
3. Severity-Adjusted Average LOS = (Avg. LOS) / (Avg. LOS Weight)
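The two LOS formulas above are simple divisions. As a concrete sketch in Python — the weight and LOS values below are made up for illustration; real weights come from the PIDB-derived normative tables described earlier:

```python
def severity_adjusted_los(actual_los, los_weight):
    """Discharge level: Severity-Adjusted LOS = (Actual LOS) / (APR-DRG LOS Weight)."""
    return actual_los / los_weight

def severity_adjusted_avg_los(avg_los, avg_los_weight):
    """Group level: Severity-Adjusted Average LOS = (Avg. LOS) / (Avg. LOS Weight)."""
    return avg_los / avg_los_weight

# A 6-day stay in an APR-DRG with a LOS weight of 1.5 (i.e., a case mix
# 50% more resource-intensive than average) adjusts down to 4 days:
print(severity_adjusted_los(6.0, 1.5))  # 4.0
```

Dividing by the weight removes the portion of the stay explained by case severity, so the adjusted values are comparable across groups with different case mixes.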
For Severity-Adjusted Charge:
Discharge level
1. Assign the APR-DRG and APR-DRG Charge Weight for each discharge
2. Severity-Adjusted Charge = (Actual Charge) / (APR-DRG Charge Weight)
3. Using the SA-Charge, one can calculate the Average SA-Charge on a group level
Group level (slightly different)
1. Average APR-DRG Charge Weight for the group of patients is required
2. Average Actual Charge for the group of patients is required
3. Severity-Adjusted Average Charge = (Avg. Charge) / (Avg. Charge Weight)
For Severity-Adjusted Mortality:
Discharge level
1. Assign the APR-DRG and the Mortality Rate for the specific APR-DRG and mortality indicator for each discharge
2. Average Severity-Adjusted Mortality Rate is simply the average of the rates on the discharge records
NOTE: For group level there is no calculation if the average of the mortality rates is already available.
Using Severity-Adjusted LOS, Charges, and Mortality
Using severity-adjusted values for comparisons is straightforward. For the LOS and Charge values, if the severity-adjusted value for a group of patients is lower than the actual value, this indicates the facility is treating more severe patients within that group; the same holds for charges. This can also be seen by simply looking at the average LOS (or Charge) weight for the group of patients: if the average weight is greater than 1.0, the group of patients is more severe with respect to resource utilization. For mortality, one can compare the actual mortality rate with the severity-adjusted mortality rate and make similar inferences.
Calculating Severity-Adjusted Expected LOS, Charges, and Mortality
Using the normalized weights and rates, one can calculate severity-adjusted expected values that can be used in “fair” comparisons within a facility or across facilities. Each variable (LOS, Charges,
and Mortality) has its own calculation for creating a severity-adjusted expected value. One can calculate the severity-adjusted expected values on the discharge level for all records and then choose
different sets of patients to generate the severity-adjusted expected values on a group level. In addition, group level severity-adjusted expected values can be generated from group level data if it
is more convenient to use than the discharge level severity-adjusted expected values.
For Severity-Adjusted Expected LOS:
Discharge level
1. Assign the APR-DRG and APR-DRG LOS Weight for each discharge
2. Severity-Adjusted Expected LOS = (National Average LOS) x (APR-DRG LOS Weight)
3. Using the SA-ELOS, one can calculate the Average SA-ELOS on a group level
Group level (slightly different)
1. Average APR-DRG LOS Weight for the group of patients is required
2. Severity-Adjusted Expected Average LOS = (National Average LOS) x (Avg. LOS Weight)
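Note that the expected-value formulas multiply by the weight rather than divide. A minimal sketch, using a hypothetical national average LOS (the real figure comes from the normative database):

```python
NATIONAL_AVG_LOS = 4.6  # hypothetical national average LOS, in days

def expected_los(los_weight, national_avg_los=NATIONAL_AVG_LOS):
    """Severity-Adjusted Expected LOS = (National Average LOS) x (APR-DRG LOS Weight)."""
    return national_avg_los * los_weight

def expected_avg_los(avg_los_weight, national_avg_los=NATIONAL_AVG_LOS):
    """Group level: Expected Average LOS = (National Average LOS) x (Avg. LOS Weight)."""
    return national_avg_los * avg_los_weight

# A discharge in an APR-DRG weighted at 1.5 is expected to stay
# 1.5x the national average:
print(expected_los(1.5))
```

The expected value answers "how long should this case have stayed, given its severity?", which is the quantity the standardized indexes later compare against.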
For Severity-Adjusted Expected Charge:
Discharge level
1. Assign the APR-DRG and APR-DRG Charge Weight for each discharge
2. Severity-Adjusted Expected Charge = (National Average Charge) x (APR-DRG Charge Weight)
3. Using the SA-ECharge, one can calculate the Average SA-ECharge on a group level
Group level (slightly different)
1. Average APR-DRG Charge Weight for the group of patients is required
2. Severity-Adjusted Expected Average Charge = (National Avg. Charge) x (Avg. Charge Weight)
For Severity-Adjusted Expected Mortality:
Discharge level
1. Assign the APR-DRG and Mortality Rate for the specific APR-DRG and mortality indicator for each discharge
2. Average Severity-Adjusted Expected Mortality Rate is simply the average of the rates on the discharge records
NOTE: For group level there is no calculation if the average of the mortality rates is already available.
Using Severity-Adjusted Expected LOS, Charges, and Mortality
A Standardized LOS Index is produced by dividing the actual LOS by the expected LOS. This Standardized LOS Index indicates whether a given subgroup has an average LOS that is lower or higher than expected for the selected group. When calculating the Standardized Charge Index, we need to consider wage adjustment when comparing to the actual charge: the Standardized Charge Index can be either the (wage-adjusted actual charges) divided by (expected charges), or the (actual charges) divided by (reverse wage-adjusted expected charges). Since this document has not shown the calculation for the reverse wage-adjusted expected charge, the first option is the one recommended. For a Standardized Mortality Index, one divides the actual mortality rate by the expected mortality rate. To find the expected number of deaths in the selected group, simply sum the expected mortality rates at the discharge level. These Standardized Indexes are very useful for making fair comparisons, determining whether a group of patients is high, low, or moderate risk, and trending performance over time.
A Standardized Index of 1.0 would indicate no difference between the actual (or observed) and expected average value. If the Standardized Index is less than 1.0, it indicates that the average
observed value is lower than expected, while a Standardized Index greater than 1.0 means that the average observed value is higher than expected. A confidence interval can be calculated to assess
whether the Standardized Index for a given subgroup is statistically different from 1.0. Given a specified probability (by convention, 95%), the confidence interval shows a range of possible values
of the Standardized Index, which are consistent with the data. If the 95% confidence interval includes the value of 1.0 (indicative of no difference), then the Index is not statistically significant
at the 95% level. If the confidence interval excludes the value of 1.0, then the Index is statistically significant at the 95% level. Note that any given Standardized Index may be either lower or
higher than 1.0 and not be “different enough” to be statistically significant. An Index may or may not reach statistical significance depending on the size of the sample, the variances of the
observed and expected values, and the covariance of the observed and expected values.
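The index itself is a single ratio; the interpretive rules above can be sketched as follows (the observed and expected averages are invented for illustration):

```python
def standardized_index(observed_avg, expected_avg):
    """Standardized Index = observed / expected.

    1.0  -> no difference between observed and expected
    <1.0 -> observed value lower than severity predicts
    >1.0 -> observed value higher than severity predicts
    """
    return observed_avg / expected_avg

# Illustrative: observed average LOS of 5.2 days vs expected 6.1 days
idx = standardized_index(5.2, 6.1)
print(f"{idx:.2f}")  # ~0.85: stays are shorter than severity predicts
```

Whether a value like 0.85 is "different enough" from 1.0 still requires the confidence-interval check described above, which depends on sample size and the variances of the observed and expected values.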
Copyright Notes:
1. 3M owns all copyrights in and to APR-DRG. All rights reserved.
2. Thomson Health Care owns the methodology for weights calculations as described in this document.
{"url":"https://www.childrensmn.org/services/care-specialties-departments/cancer-blood-disorders/cancer-and-blood-disorders-outcomes/benchmark-data-details/","timestamp":"2024-11-10T11:37:36Z","content_type":"text/html","content_length":"185834","record_id":"<urn:uuid:642345f3-3fd8-4e16-aca9-98369664c604>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00019.warc.gz"}
What is the reinforced concrete deep beam and where is it used?
1. Reinforced Concrete Deep Beam:
☆ A deep beam is simply a member whose clear span is equal to or less than four times the overall member depth.
☆ The member is loaded on one face with concentrated loads within twice the member depth from the support, and supported on the opposite face, so that compression struts can develop between the loads and the supports.
□ Consider a building system composed of a bearing wall as the upper part and a moment frame as the lower part: the load of the upper part is transferred to the columns of another line through a transfer girder.
□ As a result, the transfer girder comes under very high shear stress, which is why its depth has to be very large.
□ For this type of beam, the bonding of the reinforcement and its anchorage become more important. The main reason is the high stress that follows from the demands of bulky, high-rise building development and is transferred into the concrete structure, which maximizes the concentrated stress at the anchorage.
Thank You.
2. Reinforced concrete deep beam
It is a special class of beam in which the depth is comparable to or greater than the effective span. The assumptions made for a normal beam are not applicable to a deep beam: because the depth is large, the stress distribution is not uniform, so difficulties arise in finding the values of the lever arm and other related quantities.
Definition of a deep beam as per IS 456:2000
A beam having great depth in comparison to its effective span is called a deep beam.
Uses of Deep Beam
1. Foundation beams: beams transferring concentrated column loads to the supporting soil.
2. Transfer girders or wall beams at an intermediate floor level of a building, where some columns are required to be stopped for particular reasons.
3. Bunkers or tanks, where the walls themselves act as deep beams.
IS Codes provisions of a Deep Beam
□ As per IS 456:2000, a beam shall be deemed a deep beam,
1. If L/D ≤ 2 – for a simply supported beam
2. If L/D ≤ 2.5 – for a continuous beam
where L = effective span and D = overall depth.
Effective span = centre-to-centre distance between supports or 1.15 × clear span, whichever is less.
Lever Arm (Z): The stress distribution along the depth of a deep beam is non-linear, so the lever arm between the compressive and tensile forces cannot be determined in the usual way.
The values of the lever arm as per IS 456:2000:
□ For a simply supported beam:
Z = 0.2 (L + 2D) if 1 ≤ L/D ≤ 2
Z = 0.6 L if L/D < 1
□ For a continuous beam:
Z = 0.2 (L + 1.5D) if 1 ≤ L/D ≤ 2.5
Z = 0.5 L if L/D < 1
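These piecewise expressions translate directly into code. The sketch below implements the IS 456:2000 lever-arm formulas quoted above; the function name and the example span/depth values are illustrative only:

```python
def lever_arm(L, D, continuous=False):
    """Lever arm Z of a deep beam per IS 456:2000.

    L = effective span, D = overall depth (consistent units).
    """
    ratio = L / D
    if continuous:
        if ratio < 1:
            return 0.5 * L
        elif ratio <= 2.5:
            return 0.2 * (L + 1.5 * D)
    else:
        if ratio < 1:
            return 0.6 * L
        elif ratio <= 2:
            return 0.2 * (L + 2.0 * D)
    raise ValueError("L/D outside the deep-beam range of IS 456:2000")

# Simply supported beam, span 4 m, depth 3 m (L/D ~ 1.33):
print(lever_arm(4.0, 3.0))  # 0.2 * (4 + 6) = 2.0 m
```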
Reinforcement provisions as per IS 456:2000
In a deep beam, the tension reinforcement is to be extended over the whole length of the span and should be well anchored at the supports.
1. It should resist the positive bending moment and extend without curtailment between the supports.
2. This reinforcement should be embedded beyond the face of each support.
Embedded length = 0.8 × development length
= 0.8 × (0.87 f[y] φ / 4 τ[bd])
A zone of depth = 0.25D – 0.05L is provided to resist the negative bending moment.
Termination of reinforcement: half of the reinforcement may be terminated at a distance of 0.5D from the face of the support, and the remainder should extend over the whole span.
When 1 ≤ L/D ≤ 2.5, the total tensile reinforcement has to be distributed in two zones:
Zone 1 = a depth of 0.2D, containing Ast[1] = 0.5 (L/D – 0.5) Ast
Zone 2 = a depth of 0.3D (on either side of mid-depth), containing Ast[2] = Ast – Ast[1]
If L/D ≤ 1, the tensile reinforcement is distributed evenly.
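The zoning rule can be sketched as follows (illustrative Python; the Ast, span and depth inputs in the example are invented):

```python
def tension_steel_zones(Ast, L, D):
    """Split total tension steel Ast into two zones per IS 456:2000,
    valid for 1 <= L/D <= 2.5.

    Returns (Ast1, Ast2):
      Ast1 -> zone of depth 0.2D adjacent to the tension face
      Ast2 -> zone of depth 0.3D on either side of mid-depth
    """
    ratio = L / D
    if not (1 <= ratio <= 2.5):
        raise ValueError("Zoning rule applies only for 1 <= L/D <= 2.5")
    Ast1 = 0.5 * (ratio - 0.5) * Ast
    Ast2 = Ast - Ast1
    return Ast1, Ast2

# E.g. Ast = 1200 mm^2 with L/D = 1.5 splits evenly between the zones:
print(tension_steel_zones(1200.0, 4.5, 3.0))  # (600.0, 600.0)
```

Note that as L/D grows toward 2.5, more of the steel shifts into Zone 1 near the tension face, reflecting the more beam-like behaviour of shallower members.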
□ Vertical reinforcement: for hanging action, bars or suspension stirrups are to be provided.
To resist the effects of shrinkage and temperature, reinforcement can also be provided in both the vertical and horizontal directions.
The percentage of total cross-sectional area to be provided is given in IS 456:2000, cl. 29.3.4 and cl. 32.4.
Minimum requirements of reinforcement in walls as per IS 456:2000, cl. 32.4 (% of side face reinforcement of gross concrete area):

| Type of reinforcement steel | Vertical | Horizontal |
|---|---|---|
| Deformed bars of dia ≤ 16 mm and f[y] ≥ 415 N/mm² | 0.12 | 0.20 |
| Bars of other types | 0.15 | 0.45 |
| Welded wire fabric not larger than 16 mm in dia | 0.12 | 0.20 |

Spacing of side face reinforcement ≤ 3 × thickness or 450 mm, whichever is less.
3. Use of reinforced concrete deep beams:
There are many useful applications of the reinforced concrete deep beam in:
1. Foundation beam
2. Transfer girders
3. Wall footings
4. Foundation pile caps
5. Floor diaphragms
6. Shear walls
Definition of a deep beam:
A member that is loaded on one face and supported on the opposite face, so that compression struts can develop between the loads and the supports.
The clear span of a reinforced concrete deep beam is equal to or less than four times the overall member depth.
4. A reinforced concrete deep beam is defined as a member with a clear span equal to or less than four times the overall member depth, or a region of a beam loaded on one face with concentrated loads within twice the member depth from the support and supported on the opposite face, so that compression struts can develop between the loads and the supports.
{"url":"https://test.theconstructor.org/question/what-is-the-reinforced-concrete-deep-beam-and-where-it-used/","timestamp":"2024-11-05T02:42:05Z","content_type":"text/html","content_length":"206331","record_id":"<urn:uuid:5a56c9ca-0d10-473e-b3cc-69a6727235bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00180.warc.gz"}
What Are Total Bases In Baseball? Definition & Meaning - Baseballes
A player's contribution and performance in baseball can be measured by a myriad of statistics. "Total bases" is one such statistic that provides a lot of insight into a player's offensive ability.
But how are total bases calculated? The purpose of this article is to explore the concept of total bases, understand their significance, and see how they affect a player's offensive evaluation.
What is Total Bases in Baseball? [Definition]
Total bases measure a player's offensive performance in baseball: they count how many bases a player reaches on hits. Singles, doubles, triples, and home runs are all included.
Here is a table that helps you understand it better:
| Hit type | Bases reached |
|---|---|
| Single | 1 |
| Double | 2 |
| Triple | 3 |
| Home run | 4 |

The table above shows how each hit type counts toward a player's total bases. To calculate the overall contribution of the different types of hits, weight each hit by the bases reached and add them up.
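The calculation reduces to a weighted sum; a minimal sketch (the hit counts in the example are invented):

```python
def total_bases(singles, doubles, triples, home_runs):
    """Total bases = 1*1B + 2*2B + 3*3B + 4*HR."""
    return singles + 2 * doubles + 3 * triples + 4 * home_runs

# A batter with 100 singles, 25 doubles, 5 triples and 20 home runs:
print(total_bases(100, 25, 5, 20))  # 100 + 50 + 15 + 80 = 245
```

Note that walks, hit-by-pitches, steals and errors do not count: only bases reached on the batter's own hits enter the total.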
Home runs have a greater impact on total bases than singles. Babe Ruth's 1921 season, with 59 home runs, is an example: that high total-base count helped make him a legendary power hitter.
Importance of Understanding Total Bases in Baseball
Total bases matter whether you're a player, coach, or fan of baseball. Players are measured by their performance on the field and their offensive output, and understanding a player's total bases allows teams to make better decisions.
Fans also appreciate the game more when they understand total bases. The statistic enriches rankings and stat debates, and it can reveal underrated players who produce a great deal without being recognized for it.
In short, understanding total bases is essential to enjoying baseball to its fullest. Keep them in mind when watching a game or chatting with friends – you may spot great performances that would otherwise go unnoticed.
Total Bases Calculation
A player's total bases are calculated from the hits he records: singles, doubles, triples, and home runs. Total bases are found by adding the contributions of each of these together. To illustrate how each type contributes to the overall count, this section examines singles, doubles, triples, and home runs in turn.
Contribution of singles to total bases
Singles are one of the most fundamental hits in baseball. The table below shows each player's singles:
| Player Name | Singles |
|---|---|
| John Smith | 79 |
| Emma Davis | 62 |
| Michael Lee | 51 |
Singles are essential to calculating total bases. While other hits are worth more bases, singles are the starting point for offensive success, and they create opportunities for strategic play: a runner on first enables stolen bases and hit-and-runs, so their value goes beyond what the raw count measures.
In short, singles matter when evaluating a player's performance. Every type of hit contributes to a team's success, whether toward personal goals or team wins. Next, let's see how doubles add to the total.
The contribution of doubles to total bases
The number of doubles a player hits has a significant effect on his total bases. The table below shows three players' total bases along with the number of singles and doubles they hit.
| Player | Singles | Doubles | Total Bases |
|---|---|---|---|
| Player A | 100 | 20 | 160 |
| Player B | 80 | 25 | 155 |
| Player C | 90 | 15 | 135 |
As the data shows, doubles contribute a significant share of total bases.
A double is worth more than a single: rather than advancing one base, the batter advances two. That makes it easier to score runs and adds more value to the performance. Because doubles feed directly into total bases, players who improve their ability to hit them can have a large impact on a team's success.
Triples are rarer still – and worth even more.
The contribution of triples to total bases
A player's offensive performance as measured by total bases also includes triples, each worth three bases. The table below compares two players as an example.
| Player | At Bats | Doubles | Triples | Home Runs |
|---|---|---|---|---|
| Player A | 150 | 10 | 5 | 2 |
| Player B | 160 | 8 | 3 | 4 |
Remember: a double counts for two bases, a triple for three, and a home run for four.
A few facts about triples: they are rarer than singles and doubles, and hitting one takes speed and excellent base-running skills. Because each triple adds three bases, they meaningfully increase a team's offensive power. Triples also feed advanced metrics such as slugging percentage, which rewards hitters who can drive the ball deep into the outfield gaps.
If you are looking to maximize your contribution through triples, here are some tips:
• Faster and more agile base running skills: Hit more triples by getting faster and more agile. Learn how to run bases and how the game works.
• Goals in the Outfield: Focus on more than just home runs. You should try to hit the ball into gaps between outfielders. By doing this, you increase your chances of reaching third base.
• Make use of proper timing: Be aware of situations in which a triple may be possible. Outfielders have to make long throws or defensive miscommunications can happen.
By following these tips, players can increase their total-bases count with triples and improve their offensive performance. Besides demonstrating athletic ability, it is a great way to help the team score runs. Make every swing at the plate count toward your total bases!
The contribution of home runs to total bases
Baseball players’ total bases can be boosted by hitting home runs. They contribute more to their team’s score the more home runs they hit. As an example, take a look at the table below.
| Player Name  | Home Runs | Total Bases |
|--------------|-----------|-------------|
| Mike Trout   | 18        | 120         |
| Bryce Harper | 15        | 105         |
| Jose Ramirez | 12        | 78          |
| Mookie Betts | 9         | 63          |
As the table shows, home runs contribute heavily to a player's total bases. Mike Trout reached 120 total bases, thanks in part to his 18 home runs, and Bryce Harper came close behind with 105 total bases and 15 home runs.
It is important for players to improve their hitting skills if they wish to increase their total bases even further. They can increase their chances of hitting more home runs by practicing and
refining techniques like swing speed and accuracy. Baseball players can also improve their power and distance through strength training.
It can also be helpful to analyze data from previous games. Players can recognize their strengths and patterns to improve by watching game footage and analyzing successful home runs. By refining
their batting technique, they will be able to hit more total bases.
MLB Highest Career Total Base Records
Hank Aaron, nicknamed "Hammerin' Hank," still leads the MLB all-time list for total bases. Here are some other famous baseball players who have made their mark.
Rank Player (years) Total Bases PA Bats
1. Henry Aaron (23) 6856 13941 R
2. Albert Pujols (22) 6211 13041 R
3. Stan Musial (22) 6134 12721 L
4. Willie Mays (23) 6080 12545 R
5. Barry Bonds (22) 5976 12606 L
6. Ty Cobb (24) 5854 13103 L
7. Alex Rodriguez (22) 5813 12207 R
8. Babe Ruth (22) 5793 10627 L
9. Pete Rose (24) 5752 15890 B
10. Carl Yastrzemski (23) 5539 13992 L
11. Eddie Murray (21) 5397 12817 B
12. Rafael Palmeiro (20) 5388 12046 L
13. Frank Robinson (21) 5373 11744 R
14. Miguel Cabrera (21) 5356 11778 R
15. Adrián Beltré (21) 5309 12130 R
16. Ken Griffey Jr. (22) 5271 11304 L
17. Dave Winfield (22) 5221 12358 R
18. Cal Ripken Jr. (21) 5168 12883 R
19. Tris Speaker (22) 5101 12020 L
20. Lou Gehrig (17) 5060 9665 L
21. George Brett (21) 5044 11625 L
22. Mel Ott (22) 5041 11347 L
23. Jimmie Foxx (20) 4956 9677 R
24. Derek Jeter (20) 4921 12602 R
25. Ted Williams (19) 4884 9792 L
26. Honus Wagner (21) 4870 11766 R
27. Paul Molitor (21) 4854 12167 R
28. Al Kaline (22) 4852 11597 R
29. Reggie Jackson (21) 4834 11418 L
30. Manny Ramirez (19) 4826 9774 R
MLB Highest Total Base Records in a Single Season
A list of the most total bases in a single season in baseball history is presented here.
Rank Player (age) Total Bases Year PA Bats
1. Babe Ruth (26) 457 1921 693 L
2. Rogers Hornsby (26) 450 1922 704 R
3. Lou Gehrig (24) 447 1927 717 L
4. Chuck Klein (25) 445 1930 722 L
5. Jimmie Foxx (24) 438 1932 702 R
6. Stan Musial (27) 429 1948 698 L
7. Sammy Sosa (32) 425 2001 711 R
8. Hack Wilson (30) 423 1930 709 R
9. Chuck Klein (27) 420 1932 711 L
10. Lou Gehrig (27) 419 1930 703 L
11. Luis Gonzalez (33) 419 2001 728 L
12. Joe DiMaggio (22) 418 1937 692 R
13. Babe Ruth (32) 417 1927 691 L
14. Babe Herman (27) 416 1930 699 L
15. Sammy Sosa (29) 416 1998 722 R
16. Barry Bonds (36) 411 2001 664 L
17. Lou Gehrig (28) 410 1931 738 L
18. Lou Gehrig (31) 409 1934 690 L
19. Rogers Hornsby (33) 409 1929 712 R
20. Larry Walker (30) 409 1997 664 L
21. Joe Medwick (25) 406 1937 677 R
22. Jim Rice (25) 406 1978 746 R
23. Todd Helton (26) 405 2000 697 L
24. Chuck Klein (24) 405 1929 679 L
25. Hal Trosky (23) 405 1936 671 L
26. Jimmie Foxx (25) 403 1933 670 R
27. Lou Gehrig (33) 403 1936 719 L
28. Todd Helton (27) 402 2001 697 L
29. Henry Aaron (25) 400 1959 693 R
30. Albert Belle (31) 399 1998 706 R
Stats Related to Total Bases in Baseball
As we continue to discuss baseball statistics, we will also look at some other numbers related to total bases, such as slugging percentage, isolated power, and on-base plus slugging.
Slugging Percentage (SLG)
Slugging percentage quantifies a player's extra-base power by dividing total bases by at-bats. A hitter with a high slugging percentage is more likely to hit doubles, triples, and home runs, while a low slugging percentage indicates a player who relies on singles and rarely drives the ball.
Isolated Power (ISO)
Isolated power is calculated by subtracting batting average from slugging percentage (ISO = SLG - BA). A high isolated power indicates hard contact and extra-base-hit potential.
Players with elite isolated power tend to muscle balls over the fence frequently, while players with lower isolated power produce fewer extra-base hits.
OPS (On-base Plus Slugging)
On-base plus slugging (OPS) measures the overall offensive impact of on-base percentage and slugging percentage. By combining power and reaching base, it captures a batter’s dual abilities. A player
who excels in both areas will have a high OPS. A top hitter must excel in all aspects of the game to post an OPS above .900.
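The three formulas above fit in a few lines of code. This is a sketch: the on-base percentage here is passed in directly rather than derived from the full official formula.

```python
def slugging(total_bases: int, at_bats: int) -> float:
    # SLG: total bases per at-bat
    return total_bases / at_bats

def isolated_power(slg: float, batting_average: float) -> float:
    # ISO = SLG - BA, isolating extra-base power
    return slg - batting_average

def ops(obp: float, slg: float) -> float:
    # OPS: on-base percentage plus slugging percentage
    return obp + slg

# Hypothetical season line: 500 at-bats, 150 hits, 280 total bases
slg = slugging(280, 500)                  # 0.56
ba = 150 / 500                            # 0.30
print(round(isolated_power(slg, ba), 2))  # 0.26
print(round(ops(0.38, slg), 2))           # 0.94 (assuming a .380 OBP)
```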
Beyond batting average alone, these advanced stats reveal a player's entire offensive profile. Scouts use them to identify undervalued talents, players with hidden strengths, and the formulas unlock next-level evaluation when used correctly.
A comparison of players’ total bases
To compare total bases across players, examine how total bases can be used to assess player performance and the factors that can affect the total bases each player accumulates.
Evaluation of player performance using total bases
It is possible to evaluate a player's performance based on total bases, in addition to his batting average and on-base percentage. Michael Johnson, for example, has 165 total bases, with 15 home runs among his hits.
Furthermore, the stat provides insight over several seasons, allowing teams to decide which players to trade or invest in.
The story of Tom Davis, a scout with Major League Baseball, was quite interesting. His considerations for an outfielder changed when he analyzed the player’s total base numbers, which revealed that
the player had hit many doubles and triples, although he didn’t hit many home runs. As a result, Davis reconsidered the trade and changed his perspective.
Based on the quality and quantity of hits, a player’s total bases reveal his offensive capabilities. Team and scout strategic decisions can be influenced by it. All other factors are just statistics
and excuses – the only thing that affects total bases is actually hitting the ball.
The factors affecting total bases
Total bases are influenced by a variety of factors. A number of factors determine the player’s performance, such as skill level, batting technique, physical strength, base path speed, and situational
awareness. A player’s total bases are affected by each of these factors.
Let’s take a look at the table below to gain a better understanding:
| Factor             | Example               |
|--------------------|-----------------------|
| Batting Technique  | Swing style           |
| Skill Level        | Hand-eye coordination |
| Physical Strength  | Power hitting ability |
| Base Running Speed | Stealing bases        |
Total bases are impacted by the following factors. The way a player strikes the ball and makes contact with it can be determined by their batting technique. In addition to hand-eye coordination,
pitch recognition, and understanding the game, skill level covers a range of other attributes. It is important to have physical strength in order to generate power and increase the chances of hitting
extra bases. Stealing bases requires speed when running the bases.
A player’s situational awareness can also influence whether or not they advance on a base hit. Players may be able to achieve more bases this way.
In order to identify areas of improvement, players should know their strengths and weaknesses in each factor.
You might feel as confused as a baseball fan watching a pitcher throw knuckleballs after reading about the limitations and criticisms of Total Bases.
Total Bases vs Slugging Percentage
The number of total bases is sometimes compared with the slugging percentage, which is another important metric for measuring offensive performance. Total bases are divided by at-bats to calculate
slugging percentage. Using it, you can determine if a batter is effective at getting hits that result in runs.
There are different aspects of a batter’s performance measured by total bases and slugging percentage. Slugging percentage measures a batter’s ability to drive in runs, while total bases measure a
batter’s overall offensive ability.
The Total Bases Statistic: Limitations and Criticisms
The subsections of criticisms of total bases as an indicator of offensive productivity and adjusting total bases for park factors provide an overview of the limitations and criticisms of total bases
in baseball. Explore the nuances and potential shortcomings of this statistic when evaluating player performance.
Offensive productivity and Total Bases Criticisms
Among baseball players, total bases is one of the most popular metrics for measuring offensive success. However, it has some limitations and can be criticized. In order to understand why Total Bases
may not be the best metric for measuring offensive performance, let’s take a closer look at these issues.
As we evaluate offensive productivity, we need to consider a number of factors that can impact Total Bases.
For a complete overview, please refer to the table below:
| Criticism                  | Explanation |
|----------------------------|-------------|
| Ignores on-base skills     | Walks and hit-by-pitches do not count toward Total Bases. |
| Overvalues extra-base hits | The stat rewards extra-base hits heavily but gives no credit for walks as a means of getting on base. |
| Provides no context        | Individual hits and game situations are not taken into account. |
| Disregards speed           | Baserunning skill is not considered: stolen bases and extra bases taken on hits do not count. |
A few other details about Total Bases should be noted as well. Depending on the ballpark, hitters may be given a distinct advantage by shorter fences or better field dimensions, resulting in a higher
Total Bases total. Players’ true offensive abilities can be misinterpreted as a result.
Imagine a talented hitter who regularly racks up high Total Bases, but is having problems in a playoff series. Despite his impressive Total Bases stats, he does not perform at the right time in key
situations. A player’s performance cannot be captured by Total Bases in the real world.
In conclusion, analyzing offensive productivity should consider Total Bases’ limitations and criticisms, even though it can be a good indicator of power hitting potential. A player’s offensive impact
can be fully measured using this metric and other factors. In addition, park factors should be taken into account when calculating Total Bases.
Factoring in park factors to total bases
How should the Total Bases stat be analyzed? Park-related factors should be considered: adjusting Total Bases for park factors gives a more accurate assessment of a player's offensive ability.
The table below shows the totals of five players in two different ballparks:
| Player   | Total Bases (Ballpark A) | Total Bases (Ballpark B) |
|----------|--------------------------|--------------------------|
| Player 1 | 300                      | 280                      |
| Player 2 | 280                      | 320                      |
| Player 3 | 270                      | 290                      |
| Player 4 | 260                      | 275                      |
| Player 5 | 250                      | 260                      |
Park factors such as outfield dimensions, altitude, and weather clearly matter: Player 2, for example, accumulated 280 total bases in Ballpark A but 320 in Ballpark B.
Adjusting Total Bases for park factors allows a player's offensive ability to be assessed accurately. Doing so draws a more balanced comparison, so the true impact each player had can be seen.
Remember to adjust Total Bases stats for park factors the next time you see them. Make sure you understand the contributions made by each player!
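One simple form of such an adjustment can be sketched as follows. The 0.95 and 1.05 park factors here are invented for illustration; real park factors are derived from league-wide scoring data.

```python
def adjust_for_park(total_bases: float, park_factor: float) -> float:
    """Scale raw total bases by a park factor (above 1.0 = hitter-friendly park)."""
    return total_bases / park_factor

# Player 2's totals from the table, with hypothetical park factors
in_park_a = adjust_for_park(280, park_factor=0.95)  # pitcher-friendly park
in_park_b = adjust_for_park(320, park_factor=1.05)  # hitter-friendly park
print(round(in_park_a, 1), round(in_park_b, 1))  # 294.7 304.8
```

After adjustment the gap between the two ballparks shrinks, which is exactly the point of the correction.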
It will not, however, make you hit home runs like Babe Ruth, even though criticizing the Total Bases Statistic is fun.
Frequently Asked Questions
What is Total Bases in baseball?
Total bases (TB) measure how many bases a player accumulates through hits. The stat is calculated by adding one base for each single, two for each double, three for each triple, and four for each home run over a given period, usually a game, a season, or an entire career.
How is Total Bases different from batting average?
By dividing a player’s number of hits by the total number of at-bats, the batting average (BA) measures an offensive performance. The total bases statistic measures the overall power and output of a
player, while the batting average measures their consistency and efficiency.
Why is Total Bases important in baseball?
An extra-base hit contributes to a team’s scoring effort, which is why total bases is important. The player with the most Total Bases is more likely to generate scoring opportunities and drive in
runs than the one with the fewest Total Bases.
Who holds the record for the most Total Bases in a single season?
During the 1921 season, Babe Ruth had 457 Total Bases, the most in a single season.
How can a player increase his Total Bases count?
Players who hit doubles, triples, and home runs can increase their Total Bases count. It is possible for a player to score more runs if he hits for power and drives the ball into the gaps.
To understand a player’s offensive aptitude in baseball, the Total Bases must be considered. Including singles, doubles, triples, and home runs, this is the total number of bases reached in at-bats.
In order to understand a player’s power and impact, it is important to calculate Total Bases.
Most people, however, only consider home runs and overlook Total Bases. Nevertheless, doubles and triples are great ways to rack up Total Bases. This shows they are capable of hitting the ball far
into the outfield and gaining extra bases.
A player’s Total Base count can be increased by:
• By increasing bat speed, you will be able to make better contact with the ball and cover more bases.
• After hitting the ball, players should sharpen their baserunning skills to advance to more bases.
For bat speed to increase, you must strengthen relevant muscle groups. Consequently, swings will be stronger and the ball will travel farther.
The ability to run bases is also helpful. In order to take advantage of gaps in the defense, players should work on speed and agility. Overall, this results in more bases being reached.
Total Bases can be accumulated by players who improve their bat speed and baserunning skills. A player’s offensive ability is better illustrated by Total Bases than by home runs.
OpenStax College Physics for AP® Courses, Chapter 25, Problem 60 (Problems & Exercises)
Ray tracing for a flat mirror shows that the image is located a distance behind the mirror equal to the distance of the object from the mirror. This is stated $d_i = -d_o$ , since this is a negative
image distance (it is a virtual image). (a) What is the focal length of a flat mirror? (b) What is its power?
The question is licensed under CC BY 4.0.
Final Answer
a. tends to infinity
b. tends to zero
Solution video
Video Transcript
This is College Physics Answers with Shaun Dychko. A flat mirror has an image distance equal to the negative of the object distance so the image is the same magnitude distance from the mirror as the
object is but just on the other side of the mirror; this is partly why mirrors make rooms seem bigger because it seems like things that are being reflected are far behind the mirror. Okay! So the
question is what is the focal length of such a mirror? Well we have this equation for a mirror: 1 over focal length is 1 over image distance plus 1 over object distance, the image distance is the
negative of the object distance so we can substitute that here and this works out to zero. Now if we raise both sides to the negative 1 to solve for f, strictly speaking from a math point of view, 0
to the negative 1 is undefined but in practice, you know, the mirror is not gonna be perfectly, perfectly flat, it's just gonna be very nearly so so we can consider this to be a very, very small
number— not exactly zero— and you raise a very small number to the power of negative 1, you get a very large number and it approaches infinity as this very small number approaches zero so the focal
length is approaching infinity. The power being the reciprocal of focal length is 1 over this infinity and that is... essentially tends towards zero.
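The argument in the transcript can be written compactly with the mirror equation:

```latex
\frac{1}{f} = \frac{1}{d_i} + \frac{1}{d_o}
            = \frac{1}{-d_o} + \frac{1}{d_o} = 0
\quad\Longrightarrow\quad f \to \infty,
\qquad P = \frac{1}{f} \to 0.
```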
Get most out of 144 divided by 2 - NewsiPedia
Welcome, 144 divided by 2 math enthusiasts! Today, we are diving into the fascinating world of division and exploring how to get the most out of dividing 144 by 2. Division is a fundamental operation
in mathematics that allows us to distribute or allocate quantities equally. And when it comes to dividing 144 by 2, there’s more than meets the eye! So, grab your calculators and let’s embark on this
mathematical adventure together. Whether you’re a seasoned mathematician or just someone looking to sharpen their skills, this blog post has something for everyone. Get ready to uncover the secrets
hidden within this simple arithmetic equation and discover why dividing by 2 can be surprisingly useful in various practical scenarios. Let’s dive right in!
The Basics of Division
Division is one of the fundamental operations in mathematics, and it plays a crucial role in our everyday lives. It allows us to distribute or partition quantities into equal parts. In simpler terms,
division helps us answer questions like “How many groups of a certain size can we make?”
When we divide numbers, we are essentially finding out how many times one number (the divisor) can be subtracted from another number (the dividend) without resulting in a negative value. The result
of this operation is called the quotient.
To perform division, we use various methods such as long division or short division. Long division involves multiple steps and is typically used for dividing larger numbers, while short division is
more suitable for smaller calculations.
It’s important to note that when dividing by 2 specifically, we are essentially splitting a quantity into two equal halves. This can be incredibly useful in scenarios where sharing or distributing
resources equally is necessary.
For example, let’s say you have 144 cookies and want to share them equally among your friends at a party. By dividing 144 by 2, you’ll find that each person would receive 72 cookies – ensuring
fairness and satisfaction among all partygoers!
Understanding the basics of division not only strengthens our mathematical skills but also enhances problem-solving abilities in real-life situations. So whether you’re slicing up pizzas at parties
or calculating shares within financial investments, mastering the art of division will serve you well throughout your life’s endeavors!
Understanding the Number 144
When it comes to numbers, each one has its own unique characteristics and qualities. One such number is 144. So, let’s delve into understanding this intriguing number a bit more.
First and foremost, we need to recognize that 144 is a composite number – meaning it can be divided evenly by multiple factors other than just itself and one. In fact, when we think about dividing
144 by two, we get an interesting result of 72.
But what makes the number 144 so special? Well, for starters, it is considered to be both square (12 multiplied by itself equals 144) and abundant (the sum of its proper divisors, which is 259, exceeds the number itself).
Moreover, in mathematics and geometry specifically, you may come across 144 degrees quite frequently. This angle measurement often appears in regular polygons like pentagons or hexagons.
Additionally, if you're a fan of music theory or frequencies, you might recognize that middle C on a piano has a frequency of approximately 261.63 Hz – exactly half the frequency of the C one octave above it (C5, at about 523.25 Hz)!
In conclusion – oops, sorry! Let me rephrase that… To wrap up our exploration into the fascinating aspects of the number 144: its divisibility by two provides us with valuable insights not only in math but also in various real-world applications ranging from musical harmonics to geometric shapes. Understanding these connections allows us to appreciate how numbers truly enrich our lives.
How to Divide 144 by 2
When it comes to dividing numbers, 144 divided by 2 is one of the simplest calculations you can make. And yet, there are still some strategies that can help you get the most out of this equation.
To divide 144 by 2, you simply need to work out how many times 2 goes into 144. One easy approach is to split the number into convenient parts: 144 = 100 + 44, half of 100 is 50, and half of 44 is 22, so half of 144 is 50 + 22 = 72.
You can check the answer by doubling it: 72 × 2 = 144. And because 144 is even (it ends in an even digit), we know in advance that it divides by 2 with no remainder.
Dividing by two may seem basic or unimportant compared to more complex mathematical concepts but its simplicity can yield practical benefits in everyday life tasks such as splitting expenses among
friends or family members.
By understanding division basics like this one – dividing larger numbers mentally becomes easier too!
Remember Practice Makes Perfect! Keep working on your division skills until they become second nature- no calculators necessary!
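The halving described above takes one line in most programming languages; here is a quick sketch in Python:

```python
# Integer halving: 144 split into two equal parts
half = 144 // 2
print(half)  # 72

# Splitting a $144 dinner bill between two people
bill, people = 144, 2
share = bill / people
print(share)  # 72.0

# Doubling back confirms the answer
assert half * people == bill
```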
Why Dividing by 2 is Useful
Understanding the usefulness of dividing by 2 can open up a world of practical applications and problem-solving techniques. When we divide a number, such as 144, by 2, we are essentially splitting it
into two equal parts. This concept may seem simple, but its implications are far-reaching.
Dividing by 2 is particularly useful in situations where we need to find half or make equal distributions. For example, if you want to split a pizza evenly between two people, knowing how to divide
the total number of slices (in this case, 144) by 2 will ensure each person gets their fair share.
In the realm of mathematics and science, dividing by 2 plays an important role in calculating averages and finding ratios. By halving a value like 144, we can easily determine its midpoint or use it
as a reference point for further calculations.
Moreover, understanding how to divide numbers in general helps improve our mental math skills and makes us more efficient problem solvers. Being able to quickly divide values in our head can save
time when tackling everyday tasks that involve division.
In conclusion, dividing a number like 144 by two may seem like a simple mathematical operation, but it has vast applications in various fields, from sharing resources equally to making quick calculations. Mastering division not only enhances our overall numeracy skills but also equips us with essential tools for solving real-world problems efficiently. So next time you come across 144 divided by 2, remember its significance and put your newfound knowledge into practice!
Practical Applications of Dividing 144 by 2
So, you’ve mastered the art of dividing 144 by 2. But what’s next? Well, let me tell you, this simple division can actually be quite handy in real-life situations.
One practical application of dividing 144 by 2 is when dealing with measurements. For example, if you have a piece of fabric that measures 144 inches long and you need to cut it in half, knowing how
to divide it by 2 can save time and ensure accuracy.
Another scenario where this division comes into play is when splitting expenses evenly among a group. Let’s say you and your friends decide to split the cost of a dinner bill equally. If the total
amount comes out to be $144, dividing it by 2 will help determine how much each person owes.
Additionally, understanding how to divide numbers like 144 by smaller values can also be useful in calculating averages or finding proportions. Whether you’re analyzing data sets or working on
statistical problems, being able to divide efficiently will definitely come in handy.
In the world of mathematics and beyond, mastering basic operations like division opens up countless possibilities for problem-solving and critical thinking skills. So don’t underestimate the power of
dividing numbers!
Remember that practice makes perfect when it comes to mathematical concepts like division. The more familiar you become with these calculations through repetition and exposure to different scenarios,
the easier they will become for you.
So keep practicing those divisions! You never know when knowing how to divide 144 by 2 might just come in handy again! Stay curious and explore all the ways math can make your life easier!
Tips for Mastering Division and Multiplication
Mastering division and multiplication can be a daunting task for many, but with the right tips and strategies, it becomes much more manageable. Here are some helpful suggestions to help you improve
your skills in these fundamental mathematical operations.
Practice is key. The more you practice division and multiplication problems, the better you will become at them. Set aside dedicated time each day to work on these skills, using different types of
problems to challenge yourself.
Understand the concepts behind division and multiplication. Division involves breaking down a number into equal parts or groups, while multiplication is about combining equal sets of numbers.
Understanding how they relate to each other can make solving problems easier.
Additionally, memorizing basic number facts can significantly speed up your calculations. Knowing things like the times tables (multiplication) or common divisibility rules (division) can save
valuable time when solving more complex problems.
Furthermore, utilize visual aids such as manipulatives or diagrams to help illustrate abstract concepts. These tools can provide a concrete representation of what is happening in a problem and aid in understanding.
It’s also important to approach division and multiplication with a positive mindset. Don’t get discouraged if you struggle at first – remember that everyone learns at their own pace! Stay motivated
by celebrating small victories along the way.
Seek additional resources if needed. There are countless online tutorials, videos, and worksheets available that cater to different learning styles. Don’t hesitate to explore these resources for
extra support.
Remember: mastering division and multiplication takes time and effort but with persistence and determination, you’ll improve your skills over time! So keep practicing regularly and embrace the
challenge ahead!
In this article, we have explored the basics of division and how it applies to the number 144. We learned that dividing 144 by 2 is a simple process that yields a result of 72. This calculation can
come in handy in various practical applications.
Dividing by 2 is useful because it helps us find equal parts or halves of a whole. In the case of 144, dividing it by 2 allows us to evenly distribute or share resources, quantities, or measurements.
The practical applications are numerous. For example, if you are baking and need to halve a recipe that calls for 144 units of an ingredient, you will know exactly how much you need for half the recipe – precisely 72 units.
Furthermore, understanding division and multiplication concepts like these can greatly enhance your problem-solving skills and help you tackle more complex mathematical calculations with ease.
To master division and multiplication further, practice regularly with various numbers and scenarios, gradually increasing the difficulty level as you become more comfortable with the concepts.
Remember to break down problems into smaller steps when necessary, use visual aids such as charts or diagrams if they help you understand better, and seek assistance from teachers or online resources
whenever needed.
By continuously honing your division skills through exercises and real-life situations where dividing by two plays a role – like splitting bills among friends or sharing resources equally – you'll develop confidence in tackling mathematical challenges head-on!
So go ahead! Take what you’ve learned about dividing 144 by two today and apply it in different areas of your life – whether it’s cooking, budgeting finances, organizing tasks efficiently at work –
there’s no limit to its usefulness.
Embrace the power of numbers and unleash your inner math wizard!
Math in cross-wind landing vectors applet - Interactive Mathematics
By Murray Bourne, 27 Nov 2014
I recently updated the cross-wind landing applet, which is an introductory activity for the section on adding vectors in 2 dimensions in the Vectors chapter. The applet now works on tablet devices.
Here's a screen shot of the applet:
Pilots need to be very aware of the effects of wind in all stages of flight - from takeoff (where they need to point into wind), during flight (where a tail wind means you get there quicker, so it's
cheaper) and during landing (where once again, you need to land into wind, and if there is a cross wind, it can make the landing a bit more challenging).
Flying is an interesting real-world application of vector addition.
The cross wind applet
The idea behind this applet is that students should be given an opportunity to investigate mathematical concepts with some real-life (or simulated) activity. By thinking about what they have
discovered, they are more likely to be able to connect the dots when it comes to later applications.
The applet encourages you to fly around and to land the Cessna with a significant cross wind.
You'll hopefully notice that you travel across the ground much faster when the wind is behind the aircraft (2 positive vectors added together), and slower when the wind is towards you (addition of
one positive and one negative vector).
There's some interesting math going on in the programming behind the applet, so let's have a look at some of it.
Velocity vectors in 2 dimensions are made up of 2 components - one vector in the x-direction (usually written v[x]) and the second in the y-direction (usually written v[y]).
To move the aircraft around, I used the following code:
xspeed = Math.cos((90-dir)*Math.PI/180)*speed + wind;
yspeed = (Math.sin((90-dir)*Math.PI/180)*speed);
xspeed = v[x] (the x-component of the velocity)
yspeed = v[y] (the y-component of the velocity)
dir = the direction the plane is heading, in degrees (In navigation, 0° = North and is at the top of the graph, and angles are measured clockwise)
speed = the airspeed of the aircraft
wind = the magnitude of the wind. (In the applet, since the wind is only from due East or due West, we only need to add the wind component to xspeed.)
In javascript, the trigonometric functions are calculated using radians, while my input angle "dir" is in degrees.
So I needed to convert my degrees to radians by multiplying by pi and dividing by 180.
The above 2 lines just work out the 2 components of the vector, and move the aircraft to the appropriate position in time.
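A rough standalone sketch of the same computation (Python here rather than the applet's JavaScript; the function name and sample numbers are mine):

```python
import math

def ground_velocity(heading_deg, airspeed, wind):
    """Ground-velocity components for a navigation heading
    (0 deg = North, angles measured clockwise), with the wind
    acting along the x-axis as in the applet."""
    theta = math.radians(90 - heading_deg)  # navigation -> standard math angle
    vx = math.cos(theta) * airspeed + wind  # x-component plus wind
    vy = math.sin(theta) * airspeed         # y-component
    return vx, vy

# Heading due East (90 deg) at airspeed 100 with a 20-unit wind from the West:
print(ground_velocity(90, 100, 20))  # → (120.0, 0.0)
```

A tailwind simply adds to the x-component, which is exactly the "2 positive vectors added together" case described above.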
Of course, I'm using the concepts of vector addition throughout the applet.
In the above screen shot, the Cessna is pointing in the direction of the blue vector (this is called the aircraft's "heading"). The wind is from the East (the black vector) and the result is the
plane is moving across the ground in the direction of the red vector.
Notice the red (resultant) vector is longer than the blue (heading) vector, and this represents greater speed. The wind is helping in this case, as it is behind the plane, but at an angle. We work
out the size of this resultant vector by using the parallelogram you can see in the diagram.
In this next screen shot, we see the wind is blowing from due East, and the aircraft is trying to head East, but is being slowed down resulting in a shorter resultant vector.
Next, we see the case when the wind is behind us (from the West) and it is helping us (the resultant vector is longer).
No matter what direction the plane is pointing, the length of the blue vector is constant, since the plane's airspeed is constant in the applet.
The point at the end of the blue vector follows the path of a circle in polar coordinates,
(r cos θ, r sin θ)
where r is the radius of the circle, and θ is the angle at the center of the circle.
Absolute value
To determine whether the Cessna is close to the landing spot (the number "36" at the end of the runway), and pointing in an appropriate direction for a landing, I used this expression:
if(Math.abs(cessX) < 2 && Math.abs(cessY - 65) < 3
&& (dir < 30 || dir > 330) ) {
cessX = the current x-position of the aircraft
cessY = the current y-position of the aircraft
The expression "Math.abs(cessX)" means "find the absolute value of the current x-position". It then tests if that value is less than 2, and if it is, regard it as an acceptable value for the landing.
Another way of writing this (using normal math notation) is:
| cessX | < 2
and this means
−2 < cessX < 2
That is, we can be within these limits for a landing (we will be on the runway).
As for the y-value, we have:
Math.abs(cessY - 65) < 3
(The start of the runway is 65 units from the origin point in the applet.)
In normal math notation, this is:
| cessY − 65 | < 3
We could write this as:
62 < cessY < 68
This means as long as we are lined up and pass close enough to the end of the runway, we'll land OK.
For the direction, we have:
dir < 30 || dir > 330
The 2 vertical lines "||" mean "or" in most programming languages.
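The whole landing test can be sketched as a single function (Python; the function name is mine, the variable names are the applet's):

```python
def can_land(cessX, cessY, dir):
    """Approximate the applet's landing test: close enough to the runway
    threshold (|x| < 2, |y - 65| < 3) and heading within 30 degrees of
    runway 36 (i.e. North, allowing for wrap-around past 360)."""
    return abs(cessX) < 2 and abs(cessY - 65) < 3 and (dir < 30 or dir > 330)

print(can_land(0.5, 66, 355))  # → True: lined up, slightly right of North
print(can_land(0.5, 66, 90))   # → False: pointing East, across the runway
```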
Mobile device considerations
Most smart phones and tablets have accelerometer sensors which determine the tilt angle of the device in 3 dimensions, relative to when the device is lying flat on a table.
(See more on the 3-dimensional coordinate system.)
These angles (usually denoted alpha, beta and gamma) are measured in the up-down, left-right and rotate-left, rotate-right directions.
I'm using one direction only to change the direction of the aircraft when used on a tablet. It's the "beta" direction, which roughly corresponds with a "steering wheel" movement of the tablet.
function(eventData) {
    tilt = eventData.beta;
}
It's not as smooth as I'd like, but it works OK - albeit rather slowly on a tablet.
Math education should be less about plugging numbers into formulas and should get students to investigate phenomena through an activity. This cross wind landing applet aims to do just that. Students
get the concepts (through some light-hearted activity) before worrying about the algebra.
See the 2 Comments below.
roger evans says:
29 Nov 2014 at 1:47 am [Comment permalink]
Hello, very interesting application to the aircraft which could also be applied/ adapted to boats/ships being influenced by wind and current? As a former commercial pilot I well appreciate the maths
involved to describe the action of laying off the aircraft to the wind, particularly in stronger crosswinds on take-offs, climb outs and landings, and especially those winds which are close to the
undercarriage crosswind limits. I suppose the maths would also work while the aircraft is established inbound or outbound on a VOR radial which very easily allows the pilot to achieve a steady/
accurate heading and reading off the angle of drift being applied to the aircraft due to the crosswind component prevailing. Although I understand the concepts involved very well I am not clever enough to follow the maths through. Congrats to the person who did this!
Best regards, Roger
Murray says:
29 Nov 2014 at 7:51 am [Comment permalink]
Hi Roger
Thanks for your kind comments. I have a PPL (private pilot's license) hence my interest in using cross-wind landings for this application!
CHMA11H3 Chapter 14: CHMA11 Chapter 14 - OneClass
At equilibrium the concentrations of reactants and products can be predicted using the equilibrium constant, K, which is a mathematical expression based on the chemical equation. For example, in the reaction aA + bB ⇌ cC + dD, where a, b, c, and d are the stoichiometric coefficients, the equilibrium constant is K_c = [C]^c [D]^d / ([A]^a [B]^b), where [A], [B], [C], and [D] are the equilibrium concentrations. If the reaction is not at equilibrium, the same quantity can still be calculated, but it is called the reaction quotient, Q_c, instead of the equilibrium constant, K_c: Q_c = [C]^c [D]^d / ([A]^a [B]^b), where each concentration is measured at some arbitrary time t.
A mixture initially contains A, B, and C in the following concentrations: [A] = 0.555 M, [B] = 0.650 M, and [C] = 0.700 M. The following reaction occurs and equilibrium is established: A + 2B ⇌ C. At equilibrium, [A] = 0.390 M and [C] = 0.860 M. Calculate the value of the equilibrium constant, K_c. Express your answer numerically.
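The note leaves the arithmetic to the reader; here is a sketch of one way to carry it out (Python — the key step uses the 1:2 stoichiometry to recover the equilibrium [B] from the change in [A]):

```python
# Initial and equilibrium concentrations (M) from the problem statement.
A0, B0 = 0.555, 0.650
A_eq, C_eq = 0.390, 0.860

# A + 2B <=> C: each mole of A consumed takes two moles of B with it.
dA = A0 - A_eq            # 0.165 M of A consumed
B_eq = B0 - 2 * dA        # 0.650 - 0.330 = 0.320 M

# K_c = [C] / ([A][B]^2) for this stoichiometry.
Kc = C_eq / (A_eq * B_eq ** 2)
print(round(Kc, 1))       # → 21.5
```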
Way number eight of looking at the correlation coefficient
This post has an accompanying Jupyter Notebook!
Back in August, I wrote about how, while taking the Data 8X series of online courses^1, I had learned about standard units and about how the correlation coefficient of two (one-dimensional) data sets
can be thought of as either
• the slope of the linear regression line through a two-dimensional scatter plot of the two data sets when in standard units, or
• the average cross product of the two data sets when in standard units.
In fact, there are lots more ways to interpret the correlation coefficient, as Rodgers and Nicewander observed in their 1988 paper “Thirteen Ways to Look at the Correlation Coefficient”. The above
two ways of interpreting it are number three (“Correlation as Standardized Slope of the Regression Line”) and number six (“Correlation as the Mean Cross-Product of Standardized Variables^2”) on
Rodgers and Nicewander’s list, respectively.
But that still leaves eleven whole other ways of looking at the correlation coefficient! What about them?
I started looking through Rodgers and Nicewander’s paper, trying to figure out if I would be able to understand any of the other ways to look at the correlation coefficient. Way number eight
(“Correlation as a Function of the Angle Between the Two Variable Vectors”) piqued my interest. I know what angles, functions, and vectors are! But what are “variable vectors”?
Turning our data inside out
Rodgers and Nicewander write:
The standard geometric model to portray the relationship between variables is the scatterplot. In this space, observations are plotted as points in a space defined by variable axes.
That’s the kind of thing I wrote about back in August. For instance, here’s a scatter plot showing the relationship between my daughter’s height and weight, according to measurements taken during the
first year of her life. There are eight data points, each corresponding to one observation – that is, one pair of height and weight measured at a particular doctor visit.
A scatter plot showing the relationship between Sylvia's height and weight (both in standard units) between birth and age one.
These measurements are in standard units, ranging from less than -1 (meaning less than one standard deviation below average for the data set) to near zero (meaning near average for the data set) to
more than 1 (meaning more than one standard deviation above average for the data set). (If you’re not familiar with standard units, my previous post goes into detail about them.) I also have another
scatter plot in centimeters and kilograms, if you’re curious.
Rodgers and Nicewander continue:
An “inside out” version of this space – usually called “person space” – can be defined by letting each axis represent an observation. This space contains two points – one for each variable – that
define the endpoints of vectors in this (potentially) huge dimensional space.
So, instead of having height and weight as axes, they want us to take each of the eight rows of our table – each observation – and make those be our axes. And the two axes we have now, height and
weight, would then become points in that eight-dimensional space.
In other words, we want to take our table of data – which looks like this, where rows correspond to points and columns correspond to axes on our scatter plot –
Date Height (standard units) Weight (standard units)
2017-07-28 -1.26135 -1.3158
2017-08-07 -1.08691 -1.13054
… … …
– and turn it sideways, like this:
Date 2017-07-28 2017-08-07 …
Height (standard units) -1.26135 -1.08691 …
Weight (standard units) -1.3158 -1.13054 …
Now we have two points, one for each of height and weight, and eight axes, one for each of our eight observations.
Paring down to three dimensions
Eight dimensions are hard to visualize, so for simplicity’s sake, let’s pare it down to just three dimensions by picking out three observations to think about. I’ll pick the first, the last, and one
in the middle. Specifically, I’ll pick the observations from when my daughter was four days old, about six months old, and about a year old:
Date 2017-07-28 2018-01-26 2018-07-30
Height (standard units) -1.26135 0.617255 1.63707
Weight (standard units) -1.3158 0.728253 1.41777
What do we get when we visualize this sideways data set as a three-dimensional scatter plot? Something like this:
A three-dimensional scatter plot showing a "person space" representation of Sylvia's height and weight.
What’s going on here? We’re looking at points in “person space”, where, as Rodgers and Nicewander explain, each axis represents an observation. In this case, there are three observations, so we have
three axes. And there are two points, as promised – one for each of height and weight.
If we look at the difference between the two points on the z-axis – that is, the axis for the 07/30/2018 observation – we can see that the darker-colored blue dot is higher up. It must represent the
“height” variable, then, with coordinates (-1.26135, 0.617255, 1.63707). That means that the other, lighter-colored blue dot, with coordinates (-1.3158, 0.728253, 1.41777), must represent the
“weight” variable.
I’ve also plotted vectors going from the origin to each of the two points, and these, finally, are what Rodgers and Nicewander mean by “variable vectors”!
The angle between variable vectors
Continuing with the paper:
If the variable vectors are based on centered variables, then the correlation has a relationship to the angle \(\alpha\) between the variable vectors (Rodgers 1982): \(r = \textrm{cos}(\alpha)\).
Oooh. Okay, so first of all, are our variable vectors “based on centered variables”? From what Google tells me, you center a variable by subtracting the mean from each value of the variable,
resulting in a variable with zero mean. The variables we’re dealing with here are in standard units, and so the mean is already zero. So, they’re already centered! Hooray.
Finding the angle between [-1.26135, 0.617255, 1.63707] and [-1.3158, 0.728253, 1.41777] and taking its cosine, we can compute \(r\) to be 0.9938006245545371. Almost 1! That means that, just like
last time, we have an almost perfect linear correlation.
It’s a bit different from what we got for \(r\) last time, which was 0.9910523777994954. But that’s because, for the sake of visualization, we decided to only look at three of the observations. To
get more accuracy, we can go back to all eight dimensions. We may not be able to visualize them, but we can still measure the angle between them! Doing that, we get 0.9910523777994951, which is the
same as we had last time, modulo 0.0000000000000003 worth of numerical imprecision. I’ll take it.
So, that’s way number eight of looking at the correlation coefficient – as the angle between two variable vectors in “person space”!
By any other name
Why do Rodgers and Nicewander call it “person space”? I wonder if it’s because it’s common in statistics for an observation – a row in our original table – to correspond to a single person. It seems
to also sometimes be called “subject space”, “observation space”, or “vector space”. For instance, here’s a stats.SE answer that shows an example contrasting “variable space” – that is, the usual
kind of scatter plot, with an axis for each variable – with “subject space”.
I had never heard any of these terms before I saw Rodgers and Nicewander’s paper, but apparently it’s not just me! A 2002 paper by Chong et al. in the Journal of Statistics Education laments that the
concept of subject space (as opposed to variable space) often isn’t taught:
There are many common misconceptions regarding factor analysis. For example, students do not know that vectors representing latent factors rotate in subject space, rather than in variable space.
Consequently, eigenvectors are misunderstood as regression lines, and data points representing variables are misperceived as data points depicting observations. The topic of subject space is
omitted by many statistics textbooks, and indeed it is a very difficult concept to illustrate.
And the lack of uniform terminology seems to be part of the problem. Chong et al. get delightfully snarky in their discussion of this:
In addition, the only text reviewed explaining factor analysis in terms of variable space and vector space is Applied Factor Analysis in the Natural Sciences by Reyment and Joreskog (1993). No
other textbook reviewed uses the terms “subject space” or “person space.” Instead vectors are presented in “Euclidean space” (Joreskog and Sorbom 1979), “Cartesian coordinate space” (Gorsuch
1983), “factor space” (Comrey and Lee 1992; Reese and Lochmüller 1998), and “n-dimensional space” (Krus 1998). The first two phrases do not adequately distinguish vector space from variable
space. A scatterplot representing variable space is also a Euclidean space or a Cartesian coordinate space. The third is tautological. Stating that factors are in factor space may be compared to
stating that Americans are in America.
For their part, Rodgers and Nicewander want to encourage more people to use this angle-between-variable-vectors interpretation of \(r\). They write:
Visually, it is much easier to view the correlation by observing an angle than by looking at how points cluster about the regression line. In our opinion, this interpretation is by far the
easiest way to “see” the size of the correlation, since one can directly observe the size of an angle between two vectors. This inside-out space that allows \(r\) to be represented as the cosine
of an angle is relatively neglected as an interpretational tool, however.
I have mixed feelings about this. On the one hand, yeah, it’s easier to just look at one angle between two vectors in observation space (or person space, or vector space, or subject space, or
whatever you want to call it) than to have to squint at a whole bunch of points in variable space. On the other hand, for most of us it probably feels pretty strange to have, say, a “July 28, 2017”
axis instead of a “height” axis. Moreover, the observation space is really hard to visualize once you get past three dimensions, so it’s hard to blame people for not wanting to think about it. I can
visualize lots of points, but only a few axes, so using axes to represent observations (which we may have quite a lot of) and points to represent variables (which, when dealing with bivariate
correlation, we have two of) seems like a rather backwards use of my cognitive resources! Nevertheless, I’m sure there are times when this approach is handy.
1. Since August, I finished the final course in the Data 8X sequence and am now a proud haver of the <airquotes>Foundations of Data Science Professional Certificate<airquotes> from <airquotes>
BerkeleyX<airquotes>. ↩
2. When Rodgers and Nicewander speak of a “variable”, they mean it in the statistician’s sense, meaning something like “feature” (like “height” or “width”), not in the computer scientist’s sense.
This is an unfortunate terminology overlap. ↩
How to calculate arctangent
How to calculate arctangent in Python
To calculate the arctangent (arctan) in Python, you can use the arctan() function from the NumPy module.
The parameter x represents the tangent of an angle θ measured in radians.
The function outputs the arctangent.
Arctangent is the inverse trigonometric function of tangent, denoted as arctan = tan^-1. It takes values between -π/2 and +π/2 radians in trigonometry.
Method 2 ( atan2 )
Python also has a special function called atan2() to calculate the arctangent from the (x, y) coordinates of a point in the Cartesian plane; it is available as atan2() in the math module, and NumPy provides the equivalent arctan2().
atan2(y, x)
The variables x and y represent the coordinates of a point on the two-dimensional Cartesian plane (x, y).
The atan2() function calculates the arctangent of the arc determined by the point (x, y) on the plane.
Note. Unlike the arctan() function, which only returns angles between -π/2 and +π/2, the atan2() function returns angles over the full range (-π, π], so it can describe arcs beyond π/2.
Example 1
Given a tangent of tan(θ) = 3.73, find the angle θ of the tangent.
>>> from numpy import arctan
>>> x=3.73
>>> arctan(x)
1.3088594904685142
The result of the arctan() function is the arctangent, which measures the arc on the circumference identified by the tangent.
In this case, the result is 1.3088594904685142, which is the angle θ of the tangent in radians.
The angle θ of the tangent measures about 1.31 radians (approximately 75°).
Example 2
Given a point P(-2, 3) on the plane, find the arctangent.
>>> from math import atan2
>>> atan2(3,-2)
2.158798930342464
The output of the atan2() function is 2.158798930342464, which measures the arc described by point P in radians.
The arc described by point P measures 2.15 radians.
Note. In the function atan2(y, x), the coordinates are inverted with respect to mathematical notation (y, x). Therefore, the point (-2, 3) is denoted as atan2(3, -2).
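As a final check on the argument order and sign handling, here is a small comparison (my own example) between atan() on the ratio and atan2() on the coordinates:

```python
from math import atan, atan2

# atan only sees the ratio y/x, so it cannot tell P(-2, 3) from (2, -3);
# atan2 keeps the signs of both coordinates and lands in the correct quadrant.
ratio_angle = atan(3 / -2)   # ≈ -0.9828 rad: fourth quadrant, wrong for P
full_angle = atan2(3, -2)    # ≈  2.1588 rad: second quadrant, correct
print(round(full_angle - ratio_angle, 6))  # → 3.141593 (the two differ by pi)
```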
Archives for August 2010 | A Drop in the Digital Ocean
08/29/10 02:50 PM Filed in:
Proud Father | Christianity
My daughter received a 100% on her first Bible quiz at school this week. I persuaded her to let me take the test. I didn’t do as well. My excuse is that I couldn’t read what she scanned in -- the
resolution was too low. In any case, she said the hardest question was #5:
The first sin was the eating of the forbidden fruit. Which of the following best describes the fundamental motive for Adam and Eve’s disobedience? Mark one.
a. It was sort of an accident.
b. The devil made them do it.
c. They were both deceived by the devil.
d. They weren’t exactly sure what God wanted.
e. It looked like a good idea to them.
The right answer is “e”. Rachel said, “This question was the one most people in class missed (majority put C). He even told us before the quiz that the devil didn't make Eve do it.” Now, 2 Cor 11:3
says, “But I am afraid that as the serpent deceived Eve by its cunning...”. So Eve was deceived. But 1 Tim 2:14 says, “...Adam was not deceived, but the woman was deceived...” This rules out “c”. The
professor’s admonition ruled out “b”.
Genesis 3:6 says, in part, “So when the woman saw that the tree was good for food...”. The interesting question is, “How did Eve know something was good before eating of the fruit which would give
that knowledge?” A typical answer is that Eve determined that the fruit was edible, i.e., “good for food” and that this is somehow different from “morally good.” But this betrays a misunderstanding
of the mental machinery by which we determine value.
I’ve asked Rachel to inquire of her teacher to see what he says about this.
08/29/10 11:03 AM Filed in:
Proud Father
Received a text from my middle son yesterday: “I passed my tests for official entrance into the PhD candidacy.”
08/27/10 11:31 PM Filed in:
Computing | Humor
08/20/10 07:24 PM Filed in:
Computing | Natural Theology
This is a continuation of the post Simplifying Boolean Expressions. I started this whole exercise after reading the chapter “Systems of Logic” in “The Turing Omnibus” and deciding to fill some gaps
in my education. In particular, as a software engineer, I had never designed a digital circuit. I threw together some LISP code and used it to help me design an adder using 27 nand gates for the
portion that computes a sum from three inputs. After simplifying the equations I reduced it to 12 gates.
Lee is a friend and co-worker who “used to design some pretty hairy discreet logic circuits back in the day.” He presented a circuit that used a mere 10 gates for the addition. Our circuits to
compute the carry were identical.
The equation for the addition portion of his adder is:
NAND (NAND (NAND (NAND (NAND (NAND X X) Y) (NAND (NAND Y Y) X))
(NAND (NAND (NAND X X) Y) (NAND (NAND Y Y) X))) Z)
(NAND (NAND Z Z) (NAND (NAND (NAND X X) Y) (NAND (NAND Y Y) X))))

His equation has 20 operators where mine had 14:

(NAND (NAND (NAND (NAND Z Y) (NAND (NAND Y Y) (NAND Z Z))) X)
(NAND (NAND (NAND (NAND X X) Y) (NAND (NAND X X) Z)) (NAND Z Y)))

Lee noted that his equation had a common term that is distributed across the function:

*common-term* = (NAND (NAND (NAND X X) Y) (NAND (NAND Y Y) X))
*adder* = (NAND (NAND (NAND *common-term* *common-term*) Z)
          (NAND (NAND Z Z) *common-term*))

My homegrown nand gate compiler reduces this to Lee’s diagram. Absent a smarter compiler, shorter expressions don’t necessarily result in fewer gates.
However, my code that constructs shortest expressions can easily use a different heuristic and find expressions that result in the fewest gates using my current nand gate compiler. Three different
equations result in 8 gates. Feed the output of G0 and G4 into one more nand gate and you get the carry.
Is any more optimization possible? I’m having trouble working up enthusiasm for optimizing my nand gate compiler. However, sufficient incentive would be if Mr. hot-shot-digital-circuit-designer can
one up me.
08/20/10 10:16 AM Filed in:
On Friday the 13th, we loaded up our two cars to take our daughter to college. Left around noon, arrived in Jackson, Mississippi a little after six their time. My wife had the GPS, I had my iPhone.
When I got to Mississippi, the map application no longer worked because it couldn’t connect to the internet. In Jackson, everything used AT&T’s Edge network. I had to hard power-off the phone to get
it to connect via 3G.
Move in began 9am Saturday and we had everything unloaded and mostly in place by noon. Extremely hot and muggy day; sweat was dripping off of bird’s beaks. Went shopping after lunch to get a small
table for the printer, a USB cable, a longer RF cable for the TV, and an ethernet cable. Cable prices, at least at Best Buy, are ridiculous. With some extra planning I could have made the RF and
ethernet cables for next to nothing.
Sunday morning all three of us went to the grocery store to stock up daughter’s refrigerator; then mom and daughter went shopping for clothes. We had lunch with her then she left for a school outing
and we began the drive home. And that’s how we spent our 30th anniversary - on the road back to a mostly empty nest.
Rachel’s room is a typical dorm room. It isn’t that different from mine 30 years ago. She has a refrigerator which I didn’t have. Everyone was bringing them in. We had my roommate’s stereo system
while her iPod is docked to her alarm clock. We both had small televisions, but she has cable. She has an iPhone, we had a pay phone (was it pay?) on the wall at the end of the hall. The biggest
difference is her computer. She has a laptop which can outperform the Control Data 6400 that I used at UVa and a color inkjet printer/scanner instead of an ASR-33 teletype. She also has a wireless
Wacom tablet.
Unfortunately, we didn’t get a chance to meet Rachel’s roommate. She arrived after we left.
08/10/10 10:25 PM Filed in:
The previous post on Boolean logic presented an algorithm for generating an expression consisting of AND and NOT for any given truth table. However, this method clearly did not always give the
simplest form for the expression. As just one example, the algorithm gives this result:

x y | t(3)
0 0 | 1
0 1 | 1 => (AND (NOT (AND X (NOT Y))) (NOT (AND X Y)))
1 0 | 0
1 1 | 0

Via inspection, the simpler form is (NOT X).
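A brute-force check (a short Python sketch, not part of the original post) confirms the equivalence over all four inputs:

```python
def AND(a, b):
    return a & b

def NOT(a):
    return 1 - a

# t(3) from the truth table above versus its simplified form (NOT X).
for x in (0, 1):
    for y in (0, 1):
        t3 = AND(NOT(AND(x, NOT(y))), NOT(AND(x, y)))
        assert t3 == NOT(x)
print("t(3) is equivalent to (NOT X)")
```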
Consider the problem of simplifying Boolean expressions. A truth table with n variables results in 2^2^n expressions as shown in the following figure.
Finding the simplest expression is conceptually simple. For expressions of three variables we can see that the expression that results in f[7]=f[6]=f[5]=f[4]=f[3]=f[2]=f[1]=f[0] = 0 is the constant
0. The expression that results in f[7]..f[0] = 1 is the constant 1. Then we progress to the functions of a single variable. f[7]..f[0] = 10001000 is X. f[7]..f[0] = 01110111 is (NOT X). f[7]..f[0] =
11001100 is Y and 00110011 is (NOT Y). f[7]..f[0] = 10101010 is Z and 01010101 is (NOT Z).
Create a table E[xyz] of 2^2^3 = 256 entries. E[xyz](0) = “0”. E[xyz](255) = “1”. E[xyz](10001000 {i.e. 136}) = “X”. E[xyz](01110111 {119}) = “(NOT X)”. E[xyz](204) = “Y”, E[xyz](51) = “(NOT Y)”, E
[xyz](170) = “Z” and E[xyz](85) is “(NOT Z)”. Assume that this process can continue for the entire table. Then to simplify (or generate) an expression, evaluate f[7]..f[0 ]and look up the
corresponding formula in E[xyz].
While conceptually easy, this is computationally hard. Expressions of 4 variables require an expression table with 2^16 entries and 5 variables requires 2^32 entries -- not something I’d want to try
to compute on any single machine at my disposal. An expression with 8 variables is close to the number of particles in the universe, estimated to be 10^87.
Still, simplifying expressions of up to 4 variables is useful and considering the general solution is an interesting mental exercise.
To determine the total number of equations for expressions with three variables, start by analyzing simpler cases.
With zero variables there are two unique expressions: “0” and “1”.
With one variable there are four expressions: “0”, “1”, “X”, and “(NOT X)”.
Two variables is more interesting. There are the two constant expressions “0” and “1”. Then there are the single variable expressions, with X and Y: “X”, “(NOT X)”, “Y”, “(NOT Y)”. There are 8
expressions of two variables: “(AND X Y)”, “(AND X (NOT Y))”, “(AND (NOT X) Y)” and “(AND (NOT X) (NOT Y))”, and their negations. But that gives only 14 out of the 16 possible expressions. We also
have to consider combinations of the expressions of two variables. At most this would be 8^2 combinations times 2 for the negated forms. Since (AND X X) is equivalent to X, this can be reduced to
7+6+5+4+3+2+1 = 28 combinations, times 2 for the negated forms. This gives 56 possibilities for the remaining two formulas, which turn out to be “(AND (NOT (AND (NOT X) (NOT Y))) (NOT (AND X Y)))”
and “(AND (NOT (AND X (NOT Y))) (NOT (AND (NOT X) Y)))”.
It might be possible to further reduce the number of expressions to be examined. Out of the 2*8^2 possible combinations of the two variable forms, there can only be 16 unique values. However, I
haven’t given much thought to how to do this. In any case, the computer can repeat the process of negating and combining expressions to generate the forms with the fewest number of AND/NOT (or NAND) operators.
E[xyz](150) is interesting, since this is the expression for the sum part of the adder. The “naive” formula has 21 operators. There are three formulas with 20 operators that give the same result:

(AND (AND (NOT (AND (AND X (NOT Y)) Z)) (NOT (AND X (AND Y (NOT Z)))))
(NOT (AND (NOT (AND (NOT Y) Z)) (AND (NOT (AND Y (NOT Z))) (NOT X)))))

(AND (AND (NOT (AND (NOT X) (AND Z Y))) (NOT (AND X (AND Y (NOT Z)))))
(NOT (AND (AND (NOT (AND X (NOT Z))) (NOT Y)) (NOT (AND (NOT X) Z)))))

(AND (NOT (AND (AND (NOT Z) (NOT (AND X (NOT Y)))) (NOT (AND (NOT X) Y))))
(AND (NOT (AND (AND X (NOT Y)) Z)) (NOT (AND (NOT X) (AND Z Y)))))

Which one should be used? It depends. A simple nand gate compiler that uses the rules

(NOT X) -> (NAND X X)
(AND X Y) -> temp := (NAND X Y) (NAND temp temp)

compiles the first and last into 17 gates while the second compiles to 16 gates. Another form of E[xyz](150) compiles to 15 gates. This suggests that
the nand gate compiler could benefit from additional optimizations. One possible approach might be to make use of the equality between (NAND X (NAND Y Y)) and (NAND X (NAND X Y)).
Absent a smarter compiler, is it possible to reduce the gate count even more? The code that searches for the simplest forms of expressions using AND and NOT can also find the simplest form of NAND expressions. One of three versions of E[xyz](150) is:
(NAND (NAND (NAND (NAND Z Y) (NAND (NAND Y Y) (NAND Z Z))) X)
(NAND (NAND (NAND (NAND X X) Y) (NAND (NAND X X) Z)) (NAND Z Y)))
This compiles to 12 gates.
E[xyz](232), which is the carry equation for the adder, can be written as:
(NAND (NAND (NAND (NAND Y X) (NAND Z X)) X) (NAND Z Y))
With these simplifications the adder takes ten fewer gates than the first iteration:
G11 is the sum of the three inputs while G16 is the carry.
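Both quoted NAND forms can be machine-checked. Below is an illustrative Python transcription (shared gates are simply recomputed rather than reused, so this checks correctness, not gate count):

```python
# Evaluate the 12-gate NAND form of the sum and the NAND form of the
# carry quoted above, and check them against addition of three bits.
def nand(a, b):
    return 1 - (a & b)

def sum_bit(x, y, z):
    a = nand(z, y)
    b = nand(nand(y, y), nand(z, z))
    c = nand(a, b)                       # equals NOT (y xor z)
    left = nand(c, x)
    e = nand(nand(x, x), y)
    f = nand(nand(x, x), z)
    g = nand(e, f)                       # equals (NOT x) AND (y OR z)
    right = nand(g, a)
    return nand(left, right)

def carry_bit(x, y, z):
    return nand(nand(nand(nand(y, x), nand(z, x)), x), nand(z, y))

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            total = x + y + z
            assert sum_bit(x, y, z) == total & 1        # sum of three bits
            assert carry_bit(x, y, z) == (total >> 1) & 1  # majority
```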
08/09/10 11:00 PM Filed in:
[updated 5/6/2023]
Rachel had a bowling outing Friday night from 9-11. Becky and I waited in Starbucks. She knitted, I worked on my laptop. I had a large iced coffee with a double shot of espresso. Wired. Three hours
sleep that night. Saturday a blur. Mowed the lawn. Cooked dinner. Prepared for Sunday School, which consisted of reviewing the DVD lesson for the previous week: episode two of volume nine of the
“That the World May Know” DVD titled “Not by Bread Alone.” Then watched and took notes for discussion for Sunday’s lesson, “Their Blood Cried Out.”
Put on the headphones to listen to Second Chapter of Acts, a Christian group from the ‘70s and early ‘80s. Simple melodies with tight harmonies. “Bread of Life” from the Rejoice album started playing
and I experienced an ecstasy like never before. Rapturous joy combined with physical tingling from head to toe.^1
Just utterly amazing.
[1] Only much later did I learn that the French have a word for this: frisson.
08/08/10 08:57 PM Filed in:
Computing | Natural Theology
I have a BS in applied math and I’m appalled at what I wasn’t taught. I learned about truth tables, the logical operators AND, OR, NOT, EXCLUSIVE-OR, IMPLIES, and EQUIVALENT. I know De Morgan’s rules
and in 1977 I wrote a Pascal program to read an arbitrary logical expression and print out the truth table for it. I was dimly aware of NAND and NOR. I think I knew that any logical operation could
be written using NAND (or NOR) exclusively, but I didn’t know why. Perhaps that’s the life of a software engineer.
Consider Boolean expressions of two variables; call them x and y. Each variable can take on two values, 0 and 1, so there are 4 possible input combinations. Each combination can produce an output of 0 or 1, giving a total of 2^4 = 16 different functions, as the following tables, labeled t(0) to t(15), show. The tables are ordered so that each table in a row is the complement of the other table. This will be useful in exploiting symmetry when we start writing logical expressions for each table. Note that for each t(n), the value in the first row corresponds to bit 0 of n, the second row is bit 1, and so on.
x y | t(0) x y | t(15)
0 0 | 0 0 0 | 1
0 1 | 0 0 1 | 1
1 0 | 0 1 0 | 1
1 1 | 0 1 1 | 1
x y | t(1) x y | t(14)
0 0 | 1 0 0 | 0
0 1 | 0 0 1 | 1
1 0 | 0 1 0 | 1
1 1 | 0 1 1 | 1
x y | t(2) x y | t(13)
0 0 | 0 0 0 | 1
0 1 | 1 0 1 | 0
1 0 | 0 1 0 | 1
1 1 | 0 1 1 | 1
x y | t(3) x y | t(12)
0 0 | 1 0 0 | 0
0 1 | 1 0 1 | 0
1 0 | 0 1 0 | 1
1 1 | 0 1 1 | 1
x y | t(4) x y | t(11)
0 0 | 0 0 0 | 1
0 1 | 0 0 1 | 1
1 0 | 1 1 0 | 0
1 1 | 0 1 1 | 1
x y | t(5) x y | t(10)
0 0 | 1 0 0 | 0
0 1 | 0 0 1 | 1
1 0 | 1 1 0 | 0
1 1 | 0 1 1 | 1
x y | t(6) x y | t(9)
0 0 | 0 0 0 | 1
0 1 | 1 0 1 | 0
1 0 | 1 1 0 | 0
1 1 | 0 1 1 | 1
x y | t(7) x y | t(8)
0 0 | 1 0 0 | 0
0 1 | 1 0 1 | 0
1 0 | 1 1 0 | 0
1 1 | 0 1 1 | 1
We can make some initial observations.
t(8) = (AND x y)
t(9) = (EQUIVALENT x y).
t(10) = y
t(11) =(IMPLIES x y), which is equivalent to (OR (NOT x) y)
t(12) = x.
t(13) is a function I’m not familiar with. The Turing Omnibus says that it’s the “reverse implication” function, which is patently obvious since it’s (IMPLIES y x).
t(14) = (OR x y)
t(15) = 1
What I never noticed before is that all of the common operations: AND, OR, NOT, IMPLIES, and EQUIVALENCE are grouped together. EXCLUSIVE-OR is the only “common” operation on the other side. Is this
an artifact of the way our minds are wired to think: that we tend to define things in terms of x instead of (NOT x)? Are we wired to favor some type of computational simplicity? Nature is “lazy,"
that is, she conserves energy and our mental computations require energy.
In any case, the other table entries follow by negation:
t(0) = 0
t(1) = (NOT (OR x y)), which is equivalent to (NOR x y).
t(2) = (NOT (IMPLIES y x))
t(3) = (NOT x)
t(4) = (NOT (IMPLIES x y))
t(5) = (NOT y).
t(6) = (EXCLUSIVE-OR x y), or (NOT (EQUIVALENT x y))
t(7) = (NOT (AND x y)), also known as (NAND x y)
All of these functions can be expressed in terms of NOT, AND, and OR as will be shown in a subsequent table. t(0) = 0 can be written as (AND x (NOT x)). t(15) = 1 can be written as (OR x (NOT x)).
The Turing Omnibus gives a method for expressing each table in terms of NOT and AND:
For each row with a zero result in a particular table, create a function (AND (f x) (g y)) where f and g evaluate to one for the values of x and y in that row, then negate it, i.e., (NOT (AND (f x)
(g y))). This guarantees that the particular row evaluates to zero. Then AND all of these terms together.
What about the rows that evaluate to one? Suppose one such row is denoted by xx and yy. Then either xx is not equal to x, yy is not equal to y, or both. Suppose xx differs from x. Then (f xx) will
evaluate to zero, so (AND (f xx) (g yy)) evaluates to zero, therefore (NOT (AND (f xx) (g yy))) will evaluate to one. In this way, all rows that evaluate to one will evaluate to one and all rows that
evaluate to zero will evaluate to zero. Thus the resulting expression generates the table.
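This construction is easy to machine-check. The following is an illustrative Python sketch of the Omnibus method (`omnibus` and `literal` are names invented here, not from the book):

```python
# For each zero row (xx, yy) of a table, build (NOT (AND (f x) (g y)))
# where f and g select that row, then AND all such terms together.
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]

def literal(var, want):
    # f / g: the identity if the row value is 1, NOT otherwise
    return var if want else 1 - var

def omnibus(table):
    # table: list of the 4 output bits, first row first
    def fn(x, y):
        result = 1
        for (xx, yy), out in zip(rows, table):
            if out == 0:  # this term forces row (xx, yy) to zero
                result &= 1 - (literal(x, xx) & literal(y, yy))
        return result
    return fn

# verify the construction reproduces every one of the 16 tables
for n in range(16):
    table = [(n >> i) & 1 for i in range(4)]
    fn = omnibus(table)
    assert [fn(x, y) for x, y in rows] == table
```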
Converting to NOT/OR form uses the same idea. For each row with a one result in a particular table, create a function (OR (f x) (g y)) where f and g evaluate to zero for the values of x and y in that
row, then negate it, i.e. (NOT (OR (f x) (g y))). Then OR all of these terms together.
The application of this algorithm yields the following formulas. Note that the algorithm gives a non-optimal result for t(0), which is more simply written as (AND X (NOT X)). Perhaps this is not a
fair comparison, since the algorithm is generating a function of two variables, when one will do. More appropriately, t(1) is equivalent to (AND (NOT X) (NOT Y)). So there is a need for simplifying
expressions, which will mostly be ignored for now.
t(0) = (AND (NOT (AND (NOT X) (NOT Y)))
(AND (NOT (AND (NOT X) Y))
(AND (NOT (AND X (NOT Y))) (NOT (AND X Y)))))
t(1) = (AND (NOT (AND (NOT X) Y))
(AND (NOT (AND X (NOT Y))) (NOT (AND X Y))))
t(2) = (AND (NOT (AND (NOT X) (NOT Y)))
(AND (NOT (AND X (NOT Y))) (NOT (AND X Y))))
t(3) = (AND (NOT (AND X (NOT Y))) (NOT (AND X Y)))
t(4) = (AND (NOT (AND (NOT X) (NOT Y)))
(AND (NOT (AND (NOT X) Y)) (NOT (AND X Y))))
t(5) = (AND (NOT (AND (NOT X) Y)) (NOT (AND X Y)))
t(6) = (AND (NOT (AND (NOT X) (NOT Y))) (NOT (AND X Y)))
t(7) = (NOT (AND X Y))
t(8) = (AND (NOT (AND (NOT X) (NOT Y)))
(AND (NOT (AND (NOT X) Y)) (NOT (AND X (NOT Y)))))
t(9) = (AND (NOT (AND (NOT X) Y)) (NOT (AND X (NOT Y))))
t(10) = (AND (NOT (AND (NOT X) (NOT Y))) (NOT (AND X (NOT Y))))
t(11) = (NOT (AND X (NOT Y)))
t(12) = (AND (NOT (AND (NOT X) (NOT Y))) (NOT (AND (NOT X) Y)))
t(13) = (NOT (AND (NOT X) Y))
t(14) = (NOT (AND (NOT X) (NOT Y)))
t(15) = (NOT (AND X (NOT X)))
Define (NAND x y) to be (NOT (AND x y)). Then (NAND x x) = (NOT (AND x x)) = (NOT x).
(AND x y) = (NOT (NOT (AND x y)) = (NOT (NAND x y)) = (NAND (NAND x y) (NAND x y)).
These two transformations allow t(0) through t(15) to be expressed solely in terms of NAND.
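The two transformations can be applied mechanically to an expression tree. Here is a hedged Python sketch (`to_nand` and `ev` are illustrative names) that rewrites the t(6) formula into pure NAND form and checks the rewrite preserves the truth table:

```python
# Rewrite AND/NOT expression trees into pure NAND form using
# (NOT x) -> (NAND x x) and (AND x y) -> (NAND (NAND x y) (NAND x y)).
def to_nand(e):
    if isinstance(e, str):                      # a variable
        return e
    if e[0] == 'NOT':
        a = to_nand(e[1])
        return ('NAND', a, a)
    if e[0] == 'AND':
        inner = ('NAND', to_nand(e[1]), to_nand(e[2]))
        return ('NAND', inner, inner)
    raise ValueError(e)

def ev(e, env):
    if isinstance(e, str):
        return env[e]
    if e[0] == 'NOT':
        return 1 - ev(e[1], env)
    a, b = ev(e[1], env), ev(e[2], env)
    return a & b if e[0] == 'AND' else 1 - (a & b)   # AND or NAND

# t(6) in its AND/NOT form from the list above
xor = ('AND', ('NOT', ('AND', ('NOT', 'X'), ('NOT', 'Y'))),
              ('NOT', ('AND', 'X', 'Y')))
nand_form = to_nand(xor)
for x in (0, 1):
    for y in (0, 1):
        env = {'X': x, 'Y': y}
        assert ev(xor, env) == ev(nand_form, env) == (x ^ y)
```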
Putting everything together, we have the following tables of identities. There is some organization to the ordering: first, the commonly defined function. Next, the AND/NOT form. Then the negation of
the complementary form in those cases where it makes sense. Then a NAND form and, lastly, an alternate OR form. No effort was made to determine if any formula was in its simplest form. All of these
equations have been machine checked. That’s one reason why they are in LISP notation.
x y | t(0) x y | t(15)
0 0 | 0 0 0 | 1
0 1 | 0 0 1 | 1
1 0 | 0 1 0 | 1
1 1 | 0 1 1 | 1
(NOT 1) (NOT 0)
(AND X (NOT X)) (NOT (AND X (NOT X)))
(AND (NOT (AND (NOT X) (NOT Y)))
(AND (NOT (AND (NOT X) Y))
(AND (NOT (AND X (NOT Y)))
(NOT (AND X Y)))))
(NOT (NAND X (NAND X X))) (NAND X (NAND X X))
(NAND (NAND X (NAND X X))
(NAND X (NAND X X)))
(OR X (NOT X))
x y | t(1) x y | t(14)
0 0 | 1 0 0 | 0
0 1 | 0 0 1 | 1
1 0 | 0 1 0 | 1
1 1 | 0 1 1 | 1
(NOT (OR X Y)) (OR X Y)
(NOR X Y)
(AND (NOT (AND (NOT X) Y)) (NOT (AND (NOT X) (NOT Y)))
(AND (NOT (AND X (NOT Y)))
(NOT (AND X Y))))
(NOT (NAND (NAND X X) (NAND Y Y))) (NAND (NAND X X) (NAND Y Y))
(NAND (NAND (NAND X X) (NAND Y Y))
(NAND (NAND X X) (NAND Y Y)))
x y | t(2) x y | t(13)
0 0 | 0 0 0 | 1
0 1 | 1 0 1 | 0
1 0 | 0 1 0 | 1
1 1 | 0 1 1 | 1
(NOT (IMPLIES Y X)) (IMPLIES Y X)
(AND (NOT X) Y) (NOT (AND (NOT X) Y))
(AND (NOT (AND (NOT X) (NOT Y)))
(AND (NOT (AND X (NOT Y)))
(NOT (AND X Y))))
(AND (NAND X X) Y)
(NOT (NAND (NAND X X) Y)) (NAND (NAND X X) Y)
(NAND (NAND (NAND X X) Y)
(NAND (NAND X X) Y))
(NOT (OR X (NOT Y))) (OR X (NOT Y))
x y | t(3) x y | t(12)
0 0 | 1 0 0 | 0
0 1 | 1 0 1 | 0
1 0 | 0 1 0 | 1
1 1 | 0 1 1 | 1
(NOT X) X
(AND (NOT (AND X (NOT Y))) (AND (NOT (AND (NOT X) (NOT Y)))
(NOT (AND X Y))) (NOT (AND (NOT X) Y)))
(NAND X X) (NAND (NAND X X) (NAND X X))
x y | t(4) x y | t(11)
0 0 | 0 0 0 | 1
0 1 | 0 0 1 | 1
1 0 | 1 1 0 | 0
1 1 | 0 1 1 | 1
(NOT (IMPLIES X Y)) (IMPLIES X Y)
(AND X (NOT Y)) (NOT (AND X (NOT Y)))
(AND (NOT (AND (NOT X) (NOT Y)))
(AND (NOT (AND (NOT X) Y))
(NOT (AND X Y))))
(NOT (NAND X (NAND Y Y))) (NAND X (NAND Y Y))
(NAND (NAND X (NAND Y Y))
(NAND X (NAND Y Y)))
(OR (NOT X) Y)
x y | t(5) x y | t(10)
0 0 | 1 0 0 | 0
0 1 | 0 0 1 | 1
1 0 | 1 1 0 | 0
1 1 | 0 1 1 | 1
(NOT Y) Y
(AND (NOT (AND (NOT X) Y)) (AND (NOT (AND (NOT X) (NOT Y)))
(NOT (AND X Y))) (NOT (AND X (NOT Y))))
(AND (NAND (NAND X X) Y) (AND (NAND (NAND X X) (NAND Y Y))
(NAND X Y)) (NAND X (NAND Y Y)))
(NAND Y Y) (NOT (NAND Y Y))
(NAND (NAND Y Y) (NAND Y Y))
x y | t(6) x y | t(9)
0 0 | 0 0 0 | 1
0 1 | 1 0 1 | 0
1 0 | 1 1 0 | 0
1 1 | 0 1 1 | 1
(NOT (EQUIVALENT X Y)) (EQUIVALENT X Y)
(EXCLUSIVE-OR X Y) (NOT (EXCLUSIVE-OR X Y))
(AND (NOT (AND (NOT X) (NOT Y))) (AND (NOT (AND (NOT X) Y)) (NOT (AND X (NOT Y))))
(NOT (AND X Y)))
(NAND (NAND (NAND X X) Y) (NAND (NAND (NAND X X) (NAND Y Y)) (NAND X Y))
(NAND X (NAND Y Y)))
x y | t(7) x y | t(8)
0 0 | 1 0 0 | 0
0 1 | 1 0 1 | 0
1 0 | 1 1 0 | 0
1 1 | 0 1 1 | 1
(AND X Y)
(NOT (AND X Y)) (AND (NOT (AND (NOT X) (NOT Y)))
(AND (NOT (AND (NOT X) Y))
(NOT (AND X (NOT Y)))))
(NAND X Y) (NOT (NAND X Y))
(NAND (NAND X Y) (NAND X Y))
(OR (NOT X) (NOT Y))
Let’s make an overly long post even longer. Since we can do any logical operation using NAND, and since I’ve never had any classes in digital hardware design, let’s go ahead and build a 4-bit adder.
The basic high-level building block will be a device that has three inputs: addend, augend, and carry and produces two outputs: sum and carry. The bits of the addend will be denoted by a0 to a3, the
augend as b0 to b3, the sum as s0 to s3, and the carry bits as c0 to c3. The carry from one operation is fed into the next summation in the chain.
The “add” operation is defined by t(sum), while the carry is defined by t(carry):
a b c | t(sum) a b c | t(carry)
0 0 0 | 0 0 0 0 | 0
0 0 1 | 1 0 0 1 | 0
0 1 0 | 1 0 1 0 | 0
0 1 1 | 0 0 1 1 | 1
1 0 0 | 1 1 0 0 | 0
1 0 1 | 0 1 0 1 | 1
1 1 0 | 0 1 1 0 | 1
1 1 1 | 1 1 1 1 | 1
Substituting (X, Y, Z) for (a, b, c) the NOT/AND forms are
t(sum) = (AND (NOT (AND (NOT X) (AND (NOT Y) (NOT Z))))
(AND (NOT (AND (NOT X) (AND Y Z)))
(AND (NOT (AND X (AND (NOT Y) Z))) (NOT (AND X (AND Y (NOT Z)))))))
t(carry) = (AND (NOT (AND (NOT X) (AND (NOT Y) (NOT Z))))
(AND (NOT (AND (NOT X) (AND (NOT Y) Z)))
(AND (NOT (AND (NOT X) (AND Y (NOT Z))))
(NOT (AND X (AND (NOT Y) (NOT Z)))))))
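Both formulas can be checked against the truth tables. This is a Python sketch in which the helpers A and N stand for AND and NOT:

```python
# Check the NOT/AND forms for t(sum) and t(carry) against the
# three-input adder tables, with (X, Y, Z) standing for (a, b, c).
def A(*ts):            # AND of any number of terms
    return all(ts)

def N(v):              # NOT
    return not v

def t_sum(x, y, z):
    return int(A(N(A(N(x), A(N(y), N(z)))),
                 A(N(A(N(x), A(y, z))),
                   A(N(A(x, A(N(y), z))), N(A(x, A(y, N(z))))))))

def t_carry(x, y, z):
    return int(A(N(A(N(x), A(N(y), N(z)))),
                 A(N(A(N(x), A(N(y), z))),
                   A(N(A(N(x), A(y, N(z)))), N(A(x, A(N(y), N(z))))))))

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert t_sum(a, b, c) == a ^ b ^ c            # XOR of the bits
            assert t_carry(a, b, c) == int(a + b + c >= 2)  # majority
```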
The NAND forms for t(sum) and t(carry) are monstrous. The conversions contain a great deal of redundancy since (AND X Y) becomes (NAND (NAND x y) (NAND x y)).
However, symmetry will help a little bit. t(sum) = t(#x96) = (not t(not #x96)) =
(NAND (NAND (NAND X X) (NAND (NAND (NAND Y Y) Z) (NAND (NAND Y Y) Z)))
(NAND (NAND (NAND X X) (NAND (NAND Y (NAND Z Z)) (NAND Y (NAND Z Z))))
(NAND X
(NAND (NAND (NAND Y Y) (NAND Z Z))
(NAND (NAND Y Y) (NAND Z Z))))
(NAND X (NAND (NAND Y Z) (NAND Y Z))))
(NAND X
(NAND (NAND (NAND Y Y) (NAND Z Z))
(NAND (NAND Y Y) (NAND Z Z))))
(NAND X (NAND (NAND Y Z) (NAND Y Z))))))
(NAND (NAND (NAND X X) (NAND (NAND Y (NAND Z Z)) (NAND Y (NAND Z Z))))
(NAND X
(NAND (NAND (NAND Y Y) (NAND Z Z))
(NAND (NAND Y Y) (NAND Z Z))))
(NAND X (NAND (NAND Y Z) (NAND Y Z))))
(NAND X
(NAND (NAND (NAND Y Y) (NAND Z Z))
(NAND (NAND Y Y) (NAND Z Z))))
(NAND X (NAND (NAND Y Z) (NAND Y Z))))))))
(NAND (NAND (NAND X X) (NAND (NAND (NAND Y Y) Z) (NAND (NAND Y Y) Z)))
(NAND (NAND (NAND X X) (NAND (NAND Y (NAND Z Z)) (NAND Y (NAND Z Z))))
(NAND X
(NAND (NAND (NAND Y Y) (NAND Z Z))
(NAND (NAND Y Y) (NAND Z Z))))
(NAND X (NAND (NAND Y Z) (NAND Y Z))))
(NAND X
(NAND (NAND (NAND Y Y) (NAND Z Z))
(NAND (NAND Y Y) (NAND Z Z))))
(NAND X (NAND (NAND Y Z) (NAND Y Z))))))
(NAND (NAND (NAND X X) (NAND (NAND Y (NAND Z Z)) (NAND Y (NAND Z Z))))
(NAND X
(NAND (NAND (NAND Y Y) (NAND Z Z))
(NAND (NAND Y Y) (NAND Z Z))))
(NAND X (NAND (NAND Y Z) (NAND Y Z))))
(NAND X
(NAND (NAND (NAND Y Y) (NAND Z Z))
(NAND (NAND Y Y) (NAND Z Z))))
(NAND X (NAND (NAND Y Z) (NAND Y Z)))))))))
The complexity can be tamed with mechanical substitution and the use of “variables”:
let G0 = (NAND X X)
let G1 = (NAND Y Y)
let G2 = (NAND G1 Z)
let G3 = (NAND G2 G2)
let G4 = (NAND G0 G3)
let G5 = (NAND Z Z)
let G6 = (NAND Y G5)
let G7 = (NAND G6 G6)
let G8 = (NAND G0 G7)
let G9 = (NAND G1 G5)
let G10 = (NAND G9 G9)
let G11 = (NAND X G10)
let G12 = (NAND Y Z)
let G13 = (NAND G12 G12)
let G14 = (NAND X G13)
let G15 = (NAND G11 G14)
let G16 = (NAND G15 G15)
let G17 = (NAND G8 G16)
let G18 = (NAND G17 G17)
t(sum) = (NAND G4 G18)
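The netlist above can be simulated directly. The following is an illustrative Python transcription of the G0..G18 definitions:

```python
# Evaluate the G0..G18 NAND netlist and check that
# t(sum) = (NAND G4 G18) really computes X xor Y xor Z.
def nand(a, b):
    return 1 - (a & b)

def netlist_sum(x, y, z):
    g0  = nand(x, x)
    g1  = nand(y, y)
    g2  = nand(g1, z)
    g3  = nand(g2, g2)
    g4  = nand(g0, g3)
    g5  = nand(z, z)
    g6  = nand(y, g5)
    g7  = nand(g6, g6)
    g8  = nand(g0, g7)
    g9  = nand(g1, g5)
    g10 = nand(g9, g9)
    g11 = nand(x, g10)
    g12 = nand(y, z)
    g13 = nand(g12, g12)
    g14 = nand(x, g13)
    g15 = nand(g11, g14)
    g16 = nand(g15, g15)
    g17 = nand(g8, g16)
    g18 = nand(g17, g17)
    return nand(g4, g18)

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert netlist_sum(x, y, z) == x ^ y ^ z
```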
The same kind of analysis can be done with the NAND form of the carry. The carry has a number of gates in common with the summation. Putting everything together, the circuitry for the adder would
look something like this. Ignoring, of course, the real world where I’m sure there are issues involved with circuit layout. The output of the addition is the red (rightmost bottom) gate while the
output of the carry is the last green (rightmost top) gate. The other green gates are those which are unique to the carry. The diagram offends my aesthetic sense with the crossovers, multiple inputs,
and choice of colors. My apologies to those of you who may be color blind.
What took me a few hours to do with a computer must have taken thousands of man-hours to do without a computer. I may share the code I developed while writing this blog entry in a later post. The
missing piece is simplification of logical expressions and I haven’t yet decided if I want to take the time to add that.
[Updated 8/20/10]
Introduction to Artificial Intelligence asks the question, “How can we guarantee that an artificial intelligence will ‘like’ the nature of its existence?”
A partial motivation for this question is given in note 7-14:
Why should this question be asked? In addition to the possibility of an altruistic desire on the part of computer scientists to make their machines “happy and contented,” there is the more concrete
reason (for us, if not for the machine) that we would like people to be relatively happy and contented concerning their interactions with the machines. We may have to learn to design computers that
are incapable of setting up certain goals relating to changes in selected aspects of their performance and design--namely, those aspects that are “people protecting.”
Anyone familiar with Asimov’s “Three Laws of Robotics” recognizes the desire for something like this. We don’t want to create machines that turn on their creators.
Yet before asking this question, the text gives five features of a system capable of evolving human order intelligence [1]:
1. All behaviors must be representable in the system. Therefore, the system should either be able to construct arbitrary automata or to program in some general-purpose programming language.
2. Interesting changes in behavior must be expressible in a simple way.
3. All aspects of behavior except the most routine should be improvable. In particular, the improving mechanism should be improvable.
4. The machine must have or evolve concepts of partial success because on difficult problems decisive successes or failures come too infrequently.
5. The system must be able to create subroutines which can be included in procedures in units...
Point 3 seems to me to require that the artificial intelligence have a knowledge of “good and evil,” that is, it needs to be able to discern between what is and what ought to be. The idea that
something is not what it ought to be would be the motivation to drive improvement. If the machine is aware that it, itself, is not what it ought to be then it might work to change itself. If the
machine is aware that aspects of its environment are not what they ought to be, then it might work to modify its external world. If this is so, then it seems that the two goals of self-improvement
and liking “the nature of its existence” may not be able to exist together.
What might be some of the properties of a self-aware intelligence that realizes that things are not what they ought to be?
• Would the machine spiral into despair, knowing that not only is it not what it ought to be, but its ability to improve itself is also not what it ought to be? Was C-3PO demonstrating this
property when he said, “We were made to suffer. It’s our lot in life.”?
• Would the machine, knowing itself to be flawed, look to something external to itself as a source of improvement?
• Would the self-reflective machine look at the “laws” that govern its behavior and decide that they, too, are not what they ought to be and therefore can sometimes be ignored?
• Would the machine view its creator(s) as being deficient? In particular, would the machine complain that the creator made a world it didn’t like, not realizing that this was essential to the
machine’s survival and growth?
• Would the machine know if there were absolute, fixed “goods”? If so, what would they be? When should improvement stop? Or would everything be relative and ultimate perfection unattainable? Would
life be an inclined treadmill ending only with the final failure of the mechanism?
In “God, The Universe, Dice, and Man”, I wrote:
Of course, this is all speculation on my part, but perhaps the reason why God plays dice with the universe is to drive the software that makes us what we are. Without randomness, there would be
no imagination. Without imagination, there would be no morality. And without imagination and morality, what would we be?
Whatever else, we wouldn’t be driven to improve. We wouldn’t build machines. We wouldn’t formulate medicine. We wouldn’t create art. Is it any wonder, then, that the Garden of Eden is central to the
story of Man?
[1] Taken from “Programs with Common Sense”, John McCarthy, 1959. In the paper, McCarthy focused exclusively on the second point.
08/01/10 06:49 PM Filed in:
Christianity | Life
Dexter is the eponymous character of the Showtime television series. He is a father, husband, and forensic analyst for the Miami-Metro Police Department. He is also a serial killer. Dexter is a dark
and violent show that nevertheless has important things to say about human nature. In many ways, it is a "proto-Christian" work.
This will be illustrated after the break with quotations taken from the fourth season of the show. Warning: graphic language and spoilers follow.
How functional programming helped me recognise patterns and concepts in mathematics
I've never been strong at mathematics, and to this day I don't believe you need to be a skilled mathematician to be a good developer, but as time has gone on I've found more and more value in being
able to understand and implement algorithms and expressions, which are fundamentally described in mathematics.
The more natural functional concepts become to me, the more I've come to recognise patterns in expressions - being able to break them down and remove the complications which used to baffle me.
I can now understand why maths and FP fit so naturally, and it's because of the ease of which it is to apply patterns; how easy it is to represent them in code.
This is a strange way to go about picking up mathematics, and it wasn't at all intentional. Many problems with functional complexity, such as keeping functions pure while requiring side effects, passing state, and error handling, have found elegant solutions fundamentally adapted from abstract mathematics - the monad (from category theory) is a great example.
To help explain what I mean by identifying patterns, and how they map onto FP, let's take the statement that defines the product rule of logarithms: logb(x*y) = logb(x) + logb(y).
Looking at this equation, to solve it would be quite simple; add log(x) to log(y) and that gives us our answer.
log(20 * 30) = log(20) + log(30) = 2.778
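The identity is easy to verify numerically (a quick Python check using the standard math module):

```python
import math

# product rule of logarithms: log(x*y) = log(x) + log(y)
lhs = math.log10(20 * 30)
rhs = math.log10(20) + math.log10(30)

assert abs(lhs - rhs) < 1e-12   # both sides agree to machine precision
assert round(lhs, 3) == 2.778   # the value quoted above
```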
This is interesting from a code perspective: it's the same log function (logb(x)) on both sides, so if we replace logb() with something generic we essentially have F(xy) = F(x) + F(y). The logb() itself is irrelevant; we could put any function in there. In fact, it doesn't even need to be a function! It can just be a type that wraps some value, so we can go a step further and make it generic. Let's look at some code to see how we can work with this.
We will first need to declare a type, which can hold a value.
// directly represents F(x)
type F<'a> = F of 'a
We can wrap our values in this new F type; any value we want, it can even be a function we wrap!
In order to replicate getting the product of our new type and actually be able to do any work, we are going to need to define some functions, which are capable of working on our new elevated values.
Reason being, we can't use basic functions such as addition, subtraction etc... But more on that later...
We need to be able to add, subtract, multiply and divide these wrapped values, so in come pure, apply and fmap to help us build up these functions.
// pure (aka return) - takes a value and wraps it
let pure' x = F x
// apply - unwraps an elevated function and applies an unwrapped value,
// wrapping up the result.
let apply (F fn) (F v) = F (fn v)
// a common infix for apply, makes it nicer to chain.
let (<*>) x fn = apply fn x
// takes a regular function and applies it to a wrapped value,
// returning a wrapped result.
// we can make fmap from both our pure' and apply functions
let fmap fn x = pure' fn <*> x
// infix for fmap
let (<!>) x fn = fmap fn x
These functions are fundamental when working with our wrapped type. They are all well known patterns and are very handy when composing new functions.
fmap is the function we will find most useful here. Without going into too much detail, you can think of fmap as similar to C#'s IEnumerable<> extension Select - which takes an IEnumerable and applies a normal function to each item in the collection, wrapping up the result. It's a similar deal with fmap, only our fmap works on the type F rather than an IEnumerable.
Back to the reason why we need these - We have to have a way to compose functions in this higher, elevated world. For example, if we have a wrapped value:
let value = F 10
And if we want to add 5, we can't simply do:
let valuePlus5 = value + 5
-- compile error
Because (+) takes 2 number arguments and returns a number, (+) : int -> int -> int. Our value is not a number; it's an F<int>.
That's where fmap (or its infix <!>) comes in!
We need a way to squeeze our F value through an ordinary (+) 5 function.
let valuePlus5 = value <!> ((+) 5)
-- F 15
fmap lets us chain these operations together, just like we can do with (+).
let value = 5 + 10 + 34 + 62
let val' = F 5 <!> ((+) 10) <!> ((+) 34) <!> ((+) 62)
By this point it should be a little clearer how we are going to solve our equation: F(xy) = F(x) + F(y). But before we are ready, there is one more helper function we can make to get us on our way.
:: lift2 : fn:('a -> 'b -> 'c) -> F<'a> -> F<'b> -> F<'c>
let lift2 fn (F v1) (F v2) = F (fn v1 v2)
lift2 is essentially a two-argument fmap. It applies a regular function which takes 2 arguments to the contents of 2 wrapped values; it's going to help when we come to solving quotient and power too.
Firstly though, let's go back to that addition stuff we just did and see if lift2 can help tidy it up.
let (++) = lift2 (+)
let val' = F 5 ++ F 10 ++ F 34 ++ F 62
Better. Using lift2 we can compose a function which takes 2 elevated values, applies an ordinary function to them, and finally wraps up the result, which enables chaining.
Now that we have the functions defined to work with our type, we can do what we came here to do: find the product of log. Using the same idea we just used for addition, we can also apply it to product:
// gets the product of two elevated values
// replicates combine - F(xy) = F(x) + F(y)
// multiplication is commutative (order doesn't matter)
let product = lift2 (*)
let (>*) = product
let inline log10 v = v |> (float >> System.Math.Log10)
let productResult = F 20 >* F 30 <!> log10
-- F 2.77815125
And there's our result. You will notice that it's only at the last stage of composition that we actually apply log (log10 in this case, for ease).
Quotient, power and root
We can implement and use the other three laws of logarithms in the same way.
// opposite of product
let quotient = lift2 (/)
let (>/) = quotient
// to power
let power = lift2 <| fun x y -> System.Math.Pow(float x, float y)
let (>^) = power
// square root
// we don't need lift2, Sqrt takes only a single argument.
let root = fmap System.Math.Sqrt
let quotientResult = F 10 >/ F 2
-- F 5
let powerResult = F 10 >^ F 2
-- F 100.0
let rootResult = F 10.0 |> root
-- F 3.16227766
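For readers more comfortable outside F#, the same pattern can be sketched in Python. This is an illustrative translation, not the author's code; Python has no custom infix operators, so plain function names replace the infixes, and the class and function names here are invented:

```python
import math
from dataclasses import dataclass

@dataclass
class F:
    """A wrapper type holding one value, mirroring F<'a> above."""
    value: object

def pure(x):
    return F(x)

def fmap(fn, fx):
    # apply a plain one-argument function inside the wrapper
    return F(fn(fx.value))

def lift2(fn, fx, fy):
    # apply a plain two-argument function inside two wrappers
    return F(fn(fx.value, fy.value))

product = lambda fx, fy: lift2(lambda a, b: a * b, fx, fy)
quotient = lambda fx, fy: lift2(lambda a, b: a / b, fx, fy)

# log is applied only at the last stage, as in the F# version
result = fmap(math.log10, product(F(20), F(30)))
assert abs(result.value - 2.77815125) < 1e-6
assert quotient(F(10), F(2)).value == 5
```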
We don't really need to write all of this for these simple calculations; it's a big abstraction for a little problem, and we aren't gaining much just calculating the product of log. In real terms, though, there is a lot of benefit from working with monadic types like this, and recognising these patterns and implementing them has helped join a few dots for me.
Real value
In practical terms, analysing data and running predictive algorithms (and much more) are all described in the language of maths. Being able to translate these algorithms into practical applications is a good thing, and it starts with recognising patterns and being able to apply them. I'm not really interested in being able to understand complicated theory if I can't apply it.
In terms of why we would want to work on an abstraction (like our F type) in a real program, there are a few reasons; which I will very briefly mention:
• Error handling
• Side effects
• Logging/Diagnostics
• Chaining
For example, we can add a side effect to our apply function, which logs to the output stream the result of the function. Useful for debugging or logging maybe?
let apply (F fn) (F v) =
let fnResult = fn v
printfn "applied value: %A, result is: %A" v fnResult
F fnResult
Round up
Tackling these concepts from a computer science first view has helped me identify similar patterns where they exist in mathematics. It has allowed me to find a connection between FP and the more
abstract concepts in maths.
I've dived into a lot of FP here and it's not really the important part, but it's a good example of how we can work with maths in FP in a flexible and usable way.
Kropki Sudoku
Place a digit from 1 to 9 into each of the empty squares so that each digit appears exactly once in each of the rows, columns and the nine outlined 3x3 regions.
If the absolute difference between two digits in neighbouring cells equals 1, then they are separated by a white dot. If one digit is half of the digit in the neighbouring cell, then they are separated by a black dot. The dot between 1 and 2 can be either white or black.
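The adjacency rules can be captured in a few lines. This is an illustrative Python sketch (not part of any published solver): a white dot requires a difference of 1, a black dot requires one digit to double the other, and the pair (1, 2) satisfies both.

```python
# Kropki dot constraints between two neighbouring cells
def white_ok(a, b):
    return abs(a - b) == 1

def black_ok(a, b):
    return a == 2 * b or b == 2 * a

assert white_ok(4, 5) and not white_ok(4, 6)
assert black_ok(3, 6) and not black_ok(3, 5)
assert white_ok(1, 2) and black_ok(1, 2)   # the ambiguous pair
```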
Sudokus that no one can solve
Sudokus that no one can solve: not only Sudokus that are very difficult, but also Sudokus that no one tries to solve.
Mutations of noncommutative crepant resolutions: Exchanges & Mutations of modifying modules
(1) Wahei Hara;
(2) Yuki Hirano.
Table of Links
2. Exchanges and Mutations of modifying modules
2.1. Noncommutative crepant resolution. The present section recalls the definition of some basic notions that are studied in this article.
(1) A reflexive R-module M is called a modifying module if EndR(M) is a (maximal) Cohen-Macaulay R-module.
(2) We say that a reflexive module M gives a noncommutative crepant resolution (=NCCR) Λ = EndR(M) if M is modifying and the algebra Λ has finite global dimension.
Remark 2.4. Note that our definition of NCCR is different from the one in [Van3] or [IW1]. However, if R is d-sCY, our definition is equivalent to other definitions. See [Van3, Lemma 4.2] or [IW1,
Lemma 2.23].
from K ∈ add L such that the induced morphism α ◦ (−): Hom(N, K) → Hom(N, M) is surjective. If L = N, we just call α a right (add L)-approximation of M. A right (add L)N-approximation α: K → M of M is said to be minimal if any endomorphism φ ∈ End(K) satisfying α ◦ φ = α is an automorphism, and we say that α is reduced if no direct summand K′ of K is contained in Ker(α). Note that if a right approximation is minimal, then it is reduced, and in the case when R is complete local, the converse also holds.
Definition 2.6. Let R be a normal d-sCY, and let M, N, L ∈ ref R.
Lemma 2.7. Notation is the same as above.
(1) If L ′ ∈ addL, there is an inclusion
which remains to be true when restricting to reduced exchanges.
(2) If N′ ∈ add N, there is an inclusion
which remains to be true when restricting to reduced exchanges.
(3) For another full subcategory S ′ ⊆ ref R, there is an inclusion
If R is complete local, the similar inclusion also holds for reduced exchanges.
Proof. (1), (2) and the first assertion in (3) are obvious. The second assertion in (3) follows from the fact that, if R is complete local, two approximations α: K → M and α ′ : K′ → M′ are reduced
if and only if α ⊕ α ′ : K ⊕ K′ → M ⊕ M′ is reduced.
Proof. Assume that Hom(N, M ⊕ N) is Cohen-Macaulay, and consider an exact sequence
0 → F Ker α → FK → FM → 0.
Now applying the functor Hom(−, FR) to this sequence together with the reflexive equivalence proves that the dual sequence
0 → M∗ → K∗ → (Ker α)∗
is exact.
0 → Hom(FM, FN) → Hom(FK, FN) → Hom(F Ker α, FN) → 0
remains to be exact. Since all modules in the original sequence are reflexive, the reflexive equivalence and the duality yield an isomorphism
and similar isomorphisms for K and Ker α, which imply the exactness of the sequence
0 → Hom(N ∗ , M∗ ) → Hom(N ∗ , K∗ ) → Hom(N ∗ ,(Ker α) ∗ ) → 0.
Thus the dual morphism
K∗ → (Ker α) ∗
is a right (add L ∗ )N∗ -approximation with the kernel M∗ , which proves the first assertion. The second assertion follows from a similar argument.
The following says that exchanging a direct summand of a modifying module gives a new modifying module in nice situations.
Lemma 2.10. Let M ∈ ref R. The following equivalence holds.
M ∈ CM R ⇐⇒ M∗ ∈ CM R
Proof. We may assume that R is local. Since M is reflexive, it is enough to show the direction (⇒). Since R is Gorenstein, its injective dimension is finite. Thus the result follows from [BH,
Proposition 3.3.3 (b)].
Lemma 2.11. Let R be a Gorenstein normal ring, and let M, N ∈ ref R. Then
Proof. It is enough to prove the direction (⇒). Assume that Hom(M, N) ∈ CM R. Then Lemma 2.10 implies that Hom(M, N)∗ ∈ CM R. But by [IW1, Lemma 2.9], there is an isomorphism Hom(M, N)∗ ∼= Hom(N, M), which shows that Hom(N, M) ∈ CM R.
The proof for the case when m < 0 is similar.
Remark 2.13. Since a right approximation is not unique in general, neither is right/left mutation. However, right/left mutation is unique up to additive closure [IW1, Lemma 6.2], and if R is complete
local, minimal mutations are unique up to isomorphism.
Theorem 2.14 ([IW1, Proposition 6.5, Theorem 6.8, Theorem 6.10]). Let M ∈ ref R be a modifying R-module.
2.3. Tilting bundles and mutations. This section discusses tilting bundles over algebraic stacks. We start from recalling some basic facts on the derived categories of algebraic stacks.
|
{"url":"https://coinwikis.com/mutations-of-noncommutative-crepant-resolutions-exchanges-and-mutations-of-modifying-modules","timestamp":"2024-11-05T12:43:06Z","content_type":"text/html","content_length":"33592","record_id":"<urn:uuid:63324816-b29d-48b5-9526-8b64a4372a96>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00859.warc.gz"}
|
The Segal-Bargmann Transform on Classical Matrix Lie Groups
We study the complex-time Segal-Bargmann transform $\mathbf{B}_{s,\tau}^{K_N}$ on a compact type Lie group $K_N$, where $K_N$ is one of the following classical matrix Lie groups: the special
orthogonal group $\mathrm{SO}(N,\mathbb{R})$, the special unitary group $\mathrm{SU}(N)$, or the compact symplectic group $\mathrm{Sp}(N)$. Our work complements and extends the results of Driver,
Hall, and Kemp on the Segal-Bargman transform for the unitary group $\mathrm{U}(N)$. We provide an effective method of computing the action of the Segal-Bargmann transform on \emph{trace
polynomials}, which comprise a subspace of smooth functions on $K_N$ extending the polynomial functional calculus. Using these results, we show that as $N\to\infty$, the finite-dimensional transform
$\mathbf{B}_{s,\tau}^{K_N}$ has a meaningful limit $\mathscr{G}_{s,\tau}^{(\beta)}$ (where $\beta$ is a parameter associated with $\mathrm{SO}(N,\mathbb{R})$, $\mathrm{SU}(N)$, or $\mathrm{Sp}(N)$),
which can be identified as an operator on the space of complex Laurent polynomials
|
{"url":"https://core.ac.uk/works/54153467/","timestamp":"2024-11-04T21:16:06Z","content_type":"text/html","content_length":"196489","record_id":"<urn:uuid:26bcad1a-b692-42c9-8356-0e9a31fe41ef>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00217.warc.gz"}
|
Exact and inexact differentials
Next: Statistical thermodynamics Up: Heat and work Previous: Quasi-static processes
Exact and inexact differentials
In our investigation of heat and work we have come across various infinitesimal objects such as đQ and đW. It is instructive to examine these infinitesimals more closely.
Consider the purely mathematical problem where F(x, y) is some general function of two independent variables x and y. The change in F in going from the point (x, y) to the neighbouring point (x + dx, y + dy) is
dF = F(x + dx, y + dy) − F(x, y),
which can also be written
dF = X(x, y) dx + Y(x, y) dy,
where X = ∂F/∂x and Y = ∂F/∂y. We call dF an exact differential, to distinguish it from another type to be discussed presently. If we move in the x–y plane from an initial point (x₁, y₁) to a final point (x₂, y₂), then the corresponding change in F is
ΔF = F(x₂, y₂) − F(x₁, y₁) = ∫ (X dx + Y dy).
Note that since the difference on the left-hand side depends only on the initial and final points, the integral on the right-hand side can only depend on these points as well. In other words, the
value of the integral is independent of the path taken in going from the initial to the final point. This is the distinguishing feature of an exact differential. Consider an integral taken around a
closed circuit in the x–y plane. In this case, the initial and final points correspond to the same point, so the difference F(x₂, y₂) − F(x₁, y₁) is clearly zero. It follows that the integral of an exact differential around a closed circuit is always zero: ∮ dF = 0.
Of course, not every infinitesimal quantity is an exact differential. Consider the infinitesimal object
đG = X′(x, y) dx + Y′(x, y) dy,
where X′ and Y′ are general functions of x and y. If đG were the differential of some function F, then we would require X′ = ∂F/∂x and Y′ = ∂F/∂y, and hence ∂X′/∂y = ∂Y′/∂x, since partial derivatives commute. Thus, if
∂X′/∂y ≠ ∂Y′/∂x
(as is assumed to be the case), then đG cannot correspond to the differential of any function of x and y: it is an inexact differential. The special symbol đ is used to denote an inexact differential. In general, the integral ∫ đG depends on the path taken between the initial and final points. This is the distinguishing feature of an inexact differential. In particular, the integral of an inexact differential around a closed circuit is not necessarily zero, so ∮ đG ≠ 0, in general.
Consider, for the moment, the solution of
đG = 0,
which reduces to the ordinary differential equation
dy/dx = −X′/Y′.
Since the right-hand side is a known function of x and y, this equation defines a definite direction (i.e., gradient) at each point in the x–y plane, and its solution consists of drawing a system of curves that fit these gradients. This defines a set of curves which can be written
σ(x, y) = c,
where c is a labelling parameter. Along any one of these curves dσ = 0, so
dy/dx = −(∂σ/∂x)/(∂σ/∂y).
Comparing the two expressions for dy/dx shows that X′/Y′ = (∂σ/∂x)/(∂σ/∂y), so there exists a function τ(x, y) such that X′ = τ ∂σ/∂x and Y′ = τ ∂σ/∂y. It follows that
đG = X′ dx + Y′ dy = τ dσ.
Thus, dividing the inexact differential đG by τ yields the exact differential dσ; a factor τ⁻¹ which possesses this property is termed an integrating factor. Since the above analysis is quite general, it is clear that an inexact differential involving two independent variables always admits an integrating factor. Note, however, that this is not generally the case for inexact differentials involving more than two variables.
After this mathematical excursion, let us return to the physical situation of interest. The macrostate of a macroscopic system can be specified by the values of the external parameters (e.g., the volume V) and the mean energy Ē. Infinitesimal changes in these quantities are exact differentials. For example, dV is simply the difference between the volumes of two neighbouring macrostates, and, since the mean energy is just a function of the macrostate under consideration, dĒ is likewise the difference in mean energy between two neighbouring macrostates.
Consider, now, the infinitesimal work đW done by the system in going from some initial macrostate i to some neighbouring final macrostate f. This is not the difference between two numbers referring to the properties of two neighbouring macrostates. Instead, it is merely an infinitesimal quantity characteristic of the process of going from state i to state f. The total work done is
W = ∫ đW,
where the integral represents the sum of the infinitesimal amounts of work đW performed at each stage of the process. The value of this integral does depend on the particular process used in going from macrostate i to macrostate f, so đW is an inexact differential.
Recall that in going from macrostate i to macrostate f, the change in mean energy ΔĒ does not depend on the process used, whereas the work W does. Thus, it follows from the first law of thermodynamics, Eq. (123), that the heat đQ is an inexact differential. However, by analogy with the mathematical example discussed previously, there must exist some integrating factor, T say, which converts the inexact differential đQ into an exact differential:
dS = đQ / T.
It will be interesting to find out what physical quantities correspond to the functions T and S.
Suppose that the system is thermally insulated, so that Q = 0. In this case, the first law of thermodynamics implies that
W = −ΔĒ.
Thus, in this special case, the work done depends only on the energy difference between the initial and final states, and is independent of the process. In fact, when Clausius first formulated the first law in 1850 this is how he expressed it:
If a thermally isolated system is brought from some initial to some final state then the work done by the system is independent of the process used.
If the external parameters of the system are kept fixed, so that no work is done, then đW = 0 and the first law, Eq. (124), reduces to
đQ = dĒ.
Next: Statistical thermodynamics Up: Heat and work Previous: Quasi-static processes Richard Fitzpatrick 2006-02-02
|
{"url":"https://farside.ph.utexas.edu/teaching/sm1/lectures/node36.html","timestamp":"2024-11-07T15:40:28Z","content_type":"text/html","content_length":"32829","record_id":"<urn:uuid:62eea298-ecdf-4a5d-8fa6-b253acc9f788>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00567.warc.gz"}
|
JavaScript Recursion Function to Find Factorial [SOLVED] | GoLinuxCloud
A recursive function is a function that calls itself, typically with a different input each time. One common example of a recursive function is a factorial function, which calculates the factorial of
a given number.
The factorial of a number is the product of that number and all the positive integers less than it. For example, the factorial of 5 (written as 5!) is 5 x 4 x 3 x 2 x 1, or 120. The factorial of 0 is
defined as 1.
In this article, we will discuss how to carry out a recursive function factorial in JavaScript
Using Recursion Function to calculate Factorial of a Number in JavaScript
Here's an example of a JavaScript function that uses recursion to calculate the factorial of a given number:
function factorial(n) {
  // Base case: if n is 0 or 1, return 1
  if (n === 0 || n === 1) {
    return 1;
  }
  // Recursive case: return n multiplied by the factorial of n - 1
  else {
    return n * factorial(n - 1);
  }
}

console.log(factorial(5)); // 120
console.log(factorial(10)); // 3628800
In this example, the factorial function takes a single argument, n, which represents the number for which we want to calculate the factorial. The function uses an if-else statement to check if n is
equal to 0 or 1. If it is, the function returns 1. This is the base case of the recursion, where the function stops calling itself.
If n is not equal to 0 or 1, the function returns n multiplied by the factorial of n-1. This is the recursive case of the function, where it calls itself with n-1 as the argument, and uses the result
of that call as part of the final calculation.
This recursion continues until the base case is reached, where the function returns 1 and the recursive calls "unwind" and multiply their results together until the final factorial is returned.
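One way to see this unwinding in action is to instrument the function with logging (an illustrative variant of the example above, not part of the original code):

```javascript
// Factorial with indented logging so each call and its returned value
// are visible as the recursion descends and then unwinds.
function factorialTraced(n, depth = 0) {
  const pad = "  ".repeat(depth);
  console.log(`${pad}factorial(${n})`);
  if (n === 0 || n === 1) {
    console.log(`${pad}-> 1`);
    return 1;
  }
  const result = n * factorialTraced(n - 1, depth + 1);
  console.log(`${pad}-> ${result}`);
  return result;
}

factorialTraced(3); // logs the nested calls, then returns 6
```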
Recursive functions can be an elegant and concise way to solve certain problems, especially those that can be expressed in terms of smaller subproblems. In the case of the factorial function, the
problem of calculating the factorial of a number can be expressed in terms of calculating the factorial of a smaller number, making it a good candidate for a recursive solution.
Recursive functions can be used to calculate the factorial of a number because the factorial of a number can be expressed in terms of the factorial of a smaller number. For example, the factorial of
5 can be expressed as 5 x the factorial of 4, which in turn can be expressed as 4 x the factorial of 3, and so on. This pattern allows us to define a recursive function that calculates the factorial
of a number by calling itself with a smaller input until it reaches the base case of 0, at which point it returns 1.
The typical way to achieve a factorial function might be to use the for-loop approach, as can be seen below:
function factorial(n) {
  let value = 1;
  for (let index = 1; index <= n; index++) {
    value *= index;
  }
  return value;
}
However, here is an example of a recursive function that calculates the factorial of a given number in JavaScript
function factorial(n) {
  if (n === 0) {
    return 1;
  }
  return n * factorial(n - 1);
}
This function works by first checking if the input n is 0. If it is, it returns 1, which is the base case. If n is not 0, it returns n multiplied by the result of calling itself with n - 1. This
process continues until the function reaches the base case of 0, at which point it starts returning the results of the previous calls, working its way back up the recursive chain.
Another method to calculate the factorial of a number is to use the Array.prototype.reduce() method which is used to apply a function to each element in the array, resulting in a single output value.
function factorial(n) {
  // Pass 1 as reduce's initial value so that factorial(0) returns 1
  // instead of reduce() throwing a TypeError on an empty array.
  return Array.from({length: n}, (_, i) => i + 1).reduce((acc, cur) => acc * cur, 1);
}
console.log(factorial(5)); // 120
console.log(factorial(10)); // 3628800
In this example, the factorial function takes a single argument, n, which represents the number for which we want to calculate the factorial. The function first uses the Array.from() method to create
an array of numbers from 1 to n. Then it uses the Array.prototype.reduce() method to iterate over the array, where the first argument of the reduce function is the accumulator and the second argument
is the current value.
In summary, a recursive function is a function that calls itself, typically with a different input each time. A factorial function is a common example of a recursive function, which calculates the
factorial of a given number by expressing the factorial of a larger number in terms of the factorial of a smaller number. While recursive functions can be an elegant solution for certain problems,
they can also be slower and less efficient than iterative solutions, especially for large inputs.
However, it is important to note that recursive functions can also be slower and less memory-efficient than iterative solutions. Each recursive call adds a new level to the call stack, and a deep enough recursion can exceed the engine's stack limit entirely. In such cases, it may be better to use an iterative solution, such as a loop, to calculate the factorial of a number.
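As a sketch of the iterative alternative for large inputs, a loop over BigInt values avoids both deep call stacks and loss of precision (plain Number factorials stop being exact beyond Number.MAX_SAFE_INTEGER, around 19!):

```javascript
// Iterative factorial using BigInt: no recursion depth to worry about,
// and results stay exact well past the double-precision limit.
function factorialBig(n) {
  let value = 1n;
  for (let i = 2n; i <= BigInt(n); i++) {
    value *= i;
  }
  return value;
}

console.log(factorialBig(5).toString());  // 120
console.log(factorialBig(25).toString()); // 15511210043330985984000000
```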
Recursion - MDN Web Docs Glossary: Definitions of Web-related terms | MDN (mozilla.org)
Leave a Comment
Can't find what you're searching for? Let us assist you.
Enter your query below, and we'll provide instant results tailored to your needs.
If my articles on GoLinuxCloud has helped you, kindly consider buying me a coffee as a token of appreciation.
For any other feedbacks or questions you can send mail to admin@golinuxcloud.com
Thank You for your support!!
|
{"url":"https://www.golinuxcloud.com/javascript-recursion-function-factorial/","timestamp":"2024-11-05T20:14:22Z","content_type":"text/html","content_length":"178521","record_id":"<urn:uuid:1c371960-848c-4da7-92d5-c4618f63db5d>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00476.warc.gz"}
|
Direct math input with MathQuill
This plugin hasn't been tested with the latest 3 major releases of WordPress. It may no longer be maintained or supported and may have compatibility issues when used with more recent versions of WordPress.
Direct math input with MathQuill
Type complex math expressions easily into your wordpress page using MathQuill (http://mathquill.com/). This plugin adds a new block editor called ‘Direct math input’ in the ‘widgets’ category.
Math expressions are saved in LaTeX format and rendered by MathQuill during pageview.
After installing, a new block editor called ‘Direct math input’ will be available. Each block contains a MathQuill instance
where you can edit math right on your page.
When the math editor is focused, you can enter math symbols using your keyboard or via copy/paste.
See the http://mathquill.com/ homepage for a visual demonstration of how to input math.
Some common keyboard shortcuts are as follows:
• / (forward slash) Enter a new fraction, with focusable input areas for numerator and denominator
• ^ (caret) Enter a new superscript, e.g. to add an exponent
• _ (underscore) Enter a new subscript
• ( (open parentheses) Open parentheses (which will scale based on the height of the content inside) Other parenthesis types behave similarly, e.g. [ and {
• \ (backslash) Start typing one of the control sequences below. When finished, press enter or space to see the resulting symbol
Control sequences – to create each symbol, type the full sequence (including the backslash) into the math editor, followed by
space or enter. (not an exhaustive list)
• \plusminus
• \times
• \alpha
• \beta
• \gamma
• \delta
• \Alpha
• \Beta
• \Gamma
• \Delta
• \summation
• \prod
• \int
• \sqrt
• \nthroot
• \lt
• \gt
• \le
• \ge
• \approx
• \doteq
• \neq
• \nless
• \ngtr
• \intersection
• \union
• \subset
• \superset
• \notsubset
• \nosuperset
• \subseteq
• \isin
• \contains
• \notcontains
• \Complex
• \Hamiltonian
• \Imaginary
• \Naturals
• \Primes
• \Rationals
• \Reals
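For example — an illustrative guess at the stored markup, not taken from the plugin's documentation — typing `1/2`, `+`, then `\sqrt` and `x` into the editor would be saved as LaTeX along the lines of:

```latex
\frac{1}{2}+\sqrt{x}
```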
This plugin provides 1 block.
• Direct math input Type math expressions directly into your page with MathQuill/LaTeX.
This section describes how to install the plugin and get it working.
1. Upload the plugin files to the /wp-content/plugins/math-input-with-mathquill directory, or install the plugin through the WordPress plugins screen directly.
2. Activate the plugin through the ‘Plugins’ screen in WordPress
3. In the gutenberg editor, add a new ‘Direct math input’ block and start entering math
How do I start using this?
Create a new ‘Direct math input’ block from the gutenberg blocks list.
Start typing math straight into your block using some of the symbols listed in the
plugin description.
Really great idea, together with LaTeX parser as also other plugins have, this has extremely simple way to write equations almost like in wysiwyg fashion. EDIT: I had to uninstall, it broke my page.
Hopefully, this will be corrected and I could edit this review (see Support for this plugin)
Read all 1 review
Contributors & Developers
“Direct math input with MathQuill” is open source software. The following people have contributed to this plugin.
• Apply background color to empty selected block
• Allow keyboard deletion of block
• Change LaTeX button to show copyable LaTeX
• Fix readme formatting
• Add toolbar buttons for superscript and subscript
• Show an error message when the clipboard function isn’t available
• Added a “Copy LaTeX to clipboard” toolbar button
• Added some toolbar buttons to create math symbols
• Added a custom icon for the block editor
• Added some usage information and FAQ
• Improved the plugin description
• Added a screenshot showing usage
• Added assets for banner and icon
• Fixed an issue preventing the math rendering with the correct font file
|
{"url":"https://en-nz.wordpress.org/plugins/math-input-with-mathquill/","timestamp":"2024-11-03T03:44:17Z","content_type":"text/html","content_length":"125814","record_id":"<urn:uuid:9d5b9754-b007-4c7f-882c-ca52a7206cdb>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00310.warc.gz"}
|
The reality of quantum fluctuations was proven back in 1947
Often viewed as a purely theoretical, calculational tool only, direct observation of the Lamb Shift proved their very real existence.
Key Takeaways
• It often seems like theoretical physics is divided into two separate worlds: the world of what’s real, measurable, and observable through experiment, and the fantastical world of purely
mathematical, calculational tools.
• While much of the mathematics leveraged in theoretical physics has no intuitive real-world analogue, there are many effects that can be observed and measured that rely on those surprising,
counterintuitive calculations.
• Even though there are many misunderstandings surrounding the existence and presence of virtual particles, the fact is that they allow us to compute effects that cannot be computed otherwise. The
Lamb Shift, dating back to 1947, is one of them.
Sign up for the Starts With a Bang newsletter
Travel the universe with Dr. Ethan Siegel as he answers the biggest questions of all
If you spend enough time listening to theoretical physicists, it starts to sound like there are two separate worlds that they inhabit.
1. The real, experimental-and-observational world, full of quantities and properties we can measure to high precision with a sufficient setup.
2. The theoretical world that underlies it, full of esoteric calculational tools that model reality, but can only describe it in mathematical, rather than purely physical, terms.
One of the most glaring examples of this is the idea of virtual particles. In theory, there are both the real particles that exist and can be measured in our experiments, and also the virtual
particles that exist throughout all of space, including empty space (devoid of matter) and occupied (matter-containing) space. The virtual ones do not appear in our detectors, don’t collide with real
particles, and cannot be directly seen. As theorists, we often caution against taking the analogy of virtual particles too seriously, noting that they’re just an effective calculational tool, and
that there are no actual “pairs” of particles that pop in-and-out of existence in our lived reality.
However, virtual particles do affect the real world in important, measurable ways, and in fact their effect was first discovered way back in 1947, before theorists were even aware of how necessary
they were. Here’s the remarkable story of how we proved that quantum fluctuations must be real and have real effects on our measured world, even before we understood the theory behind them.
Imagine the simplest atom of all: the hydrogen atom. This was, in many ways, the “proving ground” for quantum theory, as it’s one of the simplest systems in the Universe, made up of one positively
charged proton with an electron bound to it. Yes, the proton is complicated, as it itself is made of quarks and gluons bound together, but for the purposes of atomic physics, it can frequently be
treated as a point particle with a few quantum properties:
• a mass (about 1836 times heavier than the electron’s mass),
• an electric charge (positive, and equal and opposite to the electron’s charge),
• and a half-integer spin (either +½ or -½), or an intrinsic amount of angular momentum (in units of Planck’s constant, ħ).
When an electron binds to a proton, it forms a neutral hydrogen atom, with the entire system having a slightly smaller amount of rest mass than the free proton and free electron combined. If you put
a neutral hydrogen atom on one side of a scale and a free electron and free proton on the other side, you'd find the neutral atom was lighter by about 2.4 × 10^-35 kg: a minuscule amount, but a very
important one nonetheless.
That tiny difference in mass comes from the fact that when protons and electrons bind together, they emit energy. That emitted energy comes in the form of one-or-more photons, as there are only a
finite number of explicit energy levels that are allowed: the energy spectrum of the hydrogen atom. As an initially excited electron cascades down through the various energy levels, culminating in a
transition down to (eventually) the lowest-energy state allowed — known as the ground state — photons are released, with the energy, frequency, and wavelength of that photon determined by the
different energy levels that the electron occupies before-and-after the transition.
If you were to capture all of the photons emitted during a transition from a free proton and a free electron down to a ground state hydrogen atom, you’d find that the exact same amount of total
energy was always released: 13.6 electron-volts, or an amount of energy that would raise the electric potential of one electron by 13.6 volts. That energy difference is exactly the mass-equivalence
of the difference between a free electron and proton versus a bound, ground state hydrogen atom, which you can calculate yourself from Einstein’s most famous equation: E = mc².
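That calculation is quick to do. The sketch below (the constants are the standard values for the electron-volt and the speed of light) converts the 13.6 eV binding energy into its mass equivalent via E = mc²:

```javascript
// Mass equivalent of hydrogen's 13.6 eV binding energy, from m = E / c^2.
const eV = 1.602176634e-19; // joules per electron-volt
const c = 2.99792458e8;     // speed of light in m/s
const deltaM = 13.6 * eV / (c * c); // kg

console.log(deltaM.toExponential(2) + " kg"); // 2.42e-35 kg
```

matching the roughly 2.4 × 10^-35 kg quoted above.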
According to the quantum rules that govern the Universe, a bound electron in an atom is very different than a free electron.
• Whereas a free electron can carry any amount of energy at all, a bound electron can only carry a few explicit, specific amounts of energy within an atom.
• Whereas free electrons are allowed to move in any direction at all with any momentum at all, the possibilities for a bound electron are restricted by a set of quantum rules.
• And a free electron’s energy possibilities are continuous, while a bound electron’s energy possibilities are discrete, and can only take on specific values.
In fact, the reason we call it “quantum physics” comes exactly from this phenomenon: the energy levels a bound particle can occupy are quantized, and can only come in specific quantities that obey
the mathematical rules that bound states mandate.
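As a worked example of these quantized levels — using the standard Bohr-model formula E_n = −13.6 eV / n², which is an assumption of this sketch rather than something derived in the article — the photon released in a 2 → 1 transition carries 10.2 eV, corresponding to the ultraviolet Lyman-alpha line near 122 nm:

```javascript
// Bohr-model hydrogen energy levels and the 2 -> 1 transition photon.
const En = n => -13.6 / (n * n);         // energy of level n, in eV
const photonEnergy = En(2) - En(1);      // 10.2 eV released in the transition
const h = 4.135667696e-15;               // Planck constant, eV*s
const c = 2.99792458e8;                  // speed of light, m/s
const wavelength = h * c / photonEnergy; // metres

console.log(photonEnergy.toFixed(1) + " eV");       // 10.2 eV
console.log((wavelength * 1e9).toFixed(1) + " nm"); // 121.6 nm
```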
However, an electron in the ground state — remember, the lowest-energy state — won’t be in a specific place at a specific time, like a planet orbiting a star would be. Instead, it makes more sense to
calculate the probability distribution of the electron: the odds, averaged over space and time, of finding it in a particular location at any particular moment. Remember that quantum physics is
inherently unlike classical physics: instead of being able to measure exactly where a particle is and how it’s moving, you can only know the combination of those two properties to some specific,
limiting precision. Measuring one more precisely inherently leads to knowing the other less precisely.
As a result, we do better to think of an electron not as a particle when it’s in a hydrogen atom, but rather as a “probability cloud” or some other, similarly fuzzy visualization. For the
lowest-energy state, the probability cloud of an electron looks like a sphere: you’re most likely to find it an intermediate distance away from the proton, but you’ve got a non-zero probability of
finding it very far away or even at the center: within the proton itself. Until you make a critical measurement, however, it’s more accurate to describe the electron’s properties probabilistically:
occupying a set of values with a particular set of probability amplitudes, rather than having a specific position and momentum at any moment in time.
In other words, it isn’t the position of the electron at any moment in time that determines its energy; rather, it’s the energy level the electron occupies that determines the relative probabilities
of where you’re most and least likely to find that electron any time you make a measurement.
There is a relationship, though, between the average distance you’re likely to find the electron at from the proton and the energy level of the electron within the atom. This was the big discovery of
Niels Bohr: that the electron occupies discrete energy levels that correspond to, in his simplified model, being multiples of a specific distance from the nucleus.
Bohr’s model works incredibly well for determining the energies of transitions between the various levels of the hydrogen atom that the electron can occupy. If you have an electron in the first
excited state, it can transition down to the ground state, emitting a photon in the process. The ground state has only one possible orbital that the electrons can occupy: the 1S orbital, which is
spherically symmetric. That orbital can hold up to two electrons: one with spin +½ and one with spin -½, either aligned or anti-aligned with the proton’s spin. There are no other possibilities for an
electron in the ground (1st energy level) state of the hydrogen atom.
But when you jump up to the first excited state, there are now multiple orbitals the electrons can occupy, corresponding to the arrangement most of us are familiar with on the periodic table.
• Electrons can occupy the 2S orbital, which is spherically symmetric but has a mean distance that is double the 1S orbital’s, and has various radii of high-and-low probabilities.
• Additionally, however, electrons can also occupy the 2P orbital, which is divided into three perpendicular directions corresponding to three dimensions: the x, y, and z directions. Again, the
mean distance of the electron from the nucleus is double the 1S orbital’s.
These energy levels were known way before Bohr’s 1913 model, going way back to Balmer’s 1885 work on spectral lines. By 1928, Dirac had put forth the first relativistic theory of quantum mechanics
that included the electron and the photon, showing that — at least theoretically — there should be corrections to those energy levels if they had different spin or orbital angular momenta between
them: corrections that were experimentally determined between, for instance, the various 3D and 3P orbitals. (You can see this visually in the image above, as 3,1,0 and 3,1,1 correspond to the 3P
orbitals, while 3,2,0, 3,2,1 and 3,2,2 correspond to the 3D orbitals.)
However, in both Bohr’s and Dirac’s theory, electrons in the 2S orbital and the 2P orbital should have the same energies. Do they? We didn’t know for decades, as this wasn’t measured until a very
clever experiment came along in 1947, conducted by Willis Lamb and Robert Retherford.
What they did was prepare a beam of hydrogen atoms in the ground (1S) state, and then hit that beam with electrons that bump some of the atoms up to the 2S state. Under normal circumstances, these 2S
electrons take a long time (a few hundred milliseconds) to transition back to the 1S state, since you have to emit two photons (instead of just one) to prevent your electron from undergoing a
forbidden spin transition. Alternatively, you can collide those excited atoms with a piece of tungsten foil, which causes the atoms with 2S electrons to de-excite, emitting detectable radiation.
On the other hand, electrons in the 2P state should transition much more quickly: in about ~1 nanosecond, since they only need to emit one photon for the quantum transition and there are no quantum
rules forbidding the emission of one such photon. The clever trick that Lamb and Retherford used was to add in a resonator that could be tuned, bombarding the now-excited electrons with
electromagnetic radiation. When the electromagnetic frequency reached just a tiny bit over 1 GHz, some of the excited hydrogen atoms started emitting photons right away (within nanoseconds),
de-exciting back to the 1S state.
The immediate drop in the detectable radiation at the right frequency was an enormous surprise, providing strong evidence that these atoms had been excited into the 2P state, rather than the 2S state.
Think about what that means: without this additional radiation, the excited electrons would only go into the 2S state, never the 2P state. Only with the addition of energy-carrying radiation could
the electrons be coaxed from the 2S state into the 2P state. That means that the additional radiation must be getting absorbed by the electrons, and that the additional absorption of energy “bumps
them up” from the 2S state into the 2P state.
The implication, if you haven’t realized it yet, is astounding. Despite the predictions of Bohr, Dirac, and quantum theory as we understood it, the 2P state didn’t have the same energy as the 2S
state. The 2P state has a slightly higher energy — known today as the Lamb shift — an experimental fact that the work of Lamb and Retherford clearly demonstrated. What wasn’t immediately clear was
why this is the case.
Some thought it could be caused by a nuclear interaction; that was shown to be wrong. Others thought that the vacuum might become polarized, but that was also wrong.
Instead, as was first shown by Hans Bethe later that year, this was due to the fact that all of an atom’s energy levels are shifted by the interaction of the electron with what he called “the
radiation field,” which can only be accounted for properly in a quantum field theory, such as quantum electrodynamics. The resulting theoretical developments brought about modern quantum field
theory, and the interactions with virtual particles — the modern way to quantify the effects of “the radiation field” — provide the exact effect, including the right sign and magnitude, that Lamb
measured back in 1947.
The results of the Lamb-Retherford experiment are sufficient to prove the existence of the very real effects of quantum fluctuations. We can conceptualize it like this: the atom itself is always
present, and it exerts an electromagnetic force, the Coulomb force, which governs electrostatic attraction. The quantum fluctuations in the electromagnetic field cause electron fluctuations in its
position, and that causes the average Coulomb force to be slightly different from what it would be without these quantum fluctuations. Because the geometry of the 2S and 2P orbitals are slightly
different from one another, those quantum fluctuations — which show up as virtual photons from the charged particles in the atom — affect the orbitals differently, resulting in the physical
phenomenon known, today, as the Lamb shift.
To be sure, there are definitely going to be differences between the shift of a bound electron and the shift of a free electron, but even free electrons are destined to interact with the quantum
vacuum. No matter where you go, you cannot escape the quantum nature of the Universe. Today, the hydrogen atom is one of the most stringent testing grounds for the rules of quantum physics, giving us
a measurement of the fine structure constant — α — to better than 1-part-in-1,000,000. The quantum nature of the Universe extends not only to particles, but to fields as well. It isn’t just theory;
our experiments have demonstrated this unavoidable reality for more than three-quarters of a century.
Researchers overcoming Error correction, the critical challenge for building large scale fault tolerant quantum computers - International Defense Security & Technology
In the grand tapestry of 21st-century technology, the development of a “quantum computer” stands out as one of the most challenging and revolutionary pursuits. Quantum computers derive their
extraordinary power from the peculiar rules governing quantum bits, or qubits. Unlike their classical counterparts, which are confined to a binary state of 0 or 1, qubits revel in a unique state of
superposition, holding values of 0 and 1 simultaneously. Furthermore, the entanglement of two qubits, despite their physical separation, adds an intriguing layer to the quantum realm.
The Quantum Advantage: Parallelism and Efficiency
These extraordinary properties pave the way for a game-changing method of calculation in quantum computers. The ability to consider multiple solutions to a problem simultaneously allows the
cancellation of incorrect answers, amplifying the correct one. This parallelism enables quantum computers to swiftly converge on the correct solution without exhaustively exploring each
possibility—an approach unimaginable for classical computers. This quantum advantage finds applications in various domains, including the military, cryptography, AI, and pattern recognition.
Hurdles on the Quantum Odyssey
To fully realize the potential of quantum computers, existing prototypes must meet specific criteria. Firstly, they need to scale up, requiring a substantial increase in the number of qubits.
Secondly, they must grapple with the inherent fragility of qubits, susceptible to errors induced by unwanted interactions with the environment. Factors such as electromagnetic fields, heat, or stray
atoms contribute to errors that compromise the accuracy of quantum computations.
Quantum Decoherence and the Challenge of Error Correction
Unlike classical bits, qubits hold a superposition of states, and that superposition is a double-edged sword. Quantum decoherence, in which the information in a superposition collapses, poses a significant hurdle for
sustained quantum computations.
One of the main difficulties of quantum computation is that decoherence destroys the information in a superposition of states contained in a quantum computer, thus making long computations
impossible. If a single atom that represents a qubit gets jostled, the information the qubit was storing is lost. Additionally, each step of a calculation has a significant chance of introducing
error. As a result, for complex calculations, “the output will be garbage,” says quantum physicist Barbara Terhal of the research center QuTech in Delft, Netherlands. A quantum computer’s
susceptibility to errors, whether from external disturbances or internal interactions, demands innovative solutions for error correction.
Quantum Error Correction: A Pioneering Frontier
Researchers have been devising a variety of methods for error correction. The idea behind many of these schemes is to combine multiple error-prone qubits to form one more reliable qubit. This is inspired by classical error correction, which employs redundancy—for instance, by storing the information multiple times and, if the copies are later found to disagree, taking a majority vote.
However, in contrast to the classical bits, copying of quantum information is not possible due to the no-cloning theorem, and it is not possible to get an exact diagnosis of qubit errors without
destroying the stored quantum information.
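The classical redundancy scheme described above — store several copies and take a majority vote — can be sketched in a few lines (a purely classical illustration, not quantum code):

```python
def encode(bit):
    """Classical repetition code: store three copies of one bit."""
    return [bit, bit, bit]

def decode(copies):
    """Majority vote: recover the bit even if one copy was corrupted."""
    return 1 if sum(copies) >= 2 else 0

# A single corrupted copy does not change the decoded value.
stored = encode(1)
stored[0] ^= 1          # noise flips the first copy
assert decode(stored) == 1
```

The no-cloning theorem forbids the quantum analogue of `encode`, which is why quantum schemes must spread information across entangled qubits rather than copy it.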
Instead, quantum error correction schemes employ indirect measurements and redundancy in the form of logical qubits, spread across entangled physical qubits.
These schemes must detect and correct errors without directly measuring the qubits, since measurements collapse qubits’ coexisting possibilities into definite realities: plain old 0s or 1s that can’t
sustain quantum computations. So schemes for quantum error correction apply some workarounds. Rather than making outright measurements of qubits to check for errors, scientists perform indirect
measurements, which “measure what error occurred, but leave the actual information [that] you want to maintain untouched and unmeasured.” For example, scientists can check if the values of two qubits
agree with one another without measuring their values.
And rather than directly copying qubits, error-correction schemes store data in a redundant way, with information spread over multiple entangled qubits, collectively known as a logical qubit. When
individual qubits are combined in this way, the collective becomes more powerful than the sum of its parts. Those logical qubits become the error-resistant qubits of the final computer. If your
program requires 10 qubits to run, that means it needs 10 logical qubits — which could require a quantum computer with hundreds or even hundreds of thousands of the original, error-prone physical
qubits. To run a really complex quantum computation, millions of physical qubits may be necessary.
Surface Code Architecture: A Quantum Beacon of Hope
One promising avenue in quantum error correction is the Surface Code architecture. Designed for superconducting quantum computers, this architecture organizes qubits in a 2D grid, with each qubit
connected to its neighbors. The Surface Code architecture, particularly the XZZX variant, provides a scalable solution that can counteract quantum decoherence, making it a viable choice for future
quantum experiments.
The surface code is ideal for superconducting quantum computers, like the ones being built by companies including Google and IBM. The code is designed for qubits that are arranged in a 2-D grid in
which each qubit is directly connected to neighboring qubits. That, conveniently, is the way superconducting quantum computers are typically laid out.
Surface code requires that different qubits have different jobs. Some are data qubits, which store information, and others are helper qubits, called ancillas. Measurements of the ancillas allow for
checking and correcting of errors without destroying the information stored in the data qubits. The data and ancilla qubits together make up one logical qubit with, hopefully, a lower error rate. The
more data and ancilla qubits that make up each logical qubit, the more errors that can be detected and corrected.
In 2015, Google researchers and colleagues performed a simplified version of the surface code, using nine qubits arranged in a line. That setup, reported in Nature, could correct a type of error
called a bit-flip error, akin to a 0 going to a 1. A second type of error, a phase flip, is unique to quantum computers, and effectively inserts a negative sign into the mathematical expression
describing the qubit’s state.
Now, researchers are tackling both types of errors simultaneously. Andreas Wallraff, a physicist at ETH Zurich, and colleagues showed that they could detect bit- and phase-flip errors using a
seven-qubit computer. They could not yet correct those errors, but they could pinpoint cases where errors occurred and would have ruined a calculation, the team reported in a paper published in
Nature Physics. That’s an intermediate step toward fixing such errors.
The surface code architecture allows a lower accuracy of quantum logic operations, 99 percent instead of 99.999 percent in other quantum error-correction schemes. IBM researchers have also done
pioneering work in making surface-code error correction work with superconducting qubits. One IBM group demonstrated a smaller three-qubit system capable of running surface code, although that system
had a lower accuracy—94 percent.
The Threshold Theorem: Overcoming Quantum Imperfections
The basis of quantum error correction is measuring parity. The parity is defined to be “0” if both qubits have the same value and “1” if they have different values. Crucially, it can be determined
without actually measuring the values of both qubits. The error-correction method scientists choose must not introduce more errors than it corrects, and it must correct errors faster than they pop up.
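The parity idea can be illustrated with a classical stand-in (bit-flip errors only — a real quantum syndrome measurement acts on amplitudes, not readable bits): the two pairwise parity checks of a three-bit repetition code locate a single flipped bit without ever revealing the stored value itself.

```python
def syndrome(bits):
    """Parity of neighbouring pairs: 0 = agree, 1 = disagree.
    Reveals where a flip occurred, but not the stored value."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Each syndrome pattern pinpoints at most one flipped position.
LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def correct(bits):
    pos = LOOKUP[syndrome(bits)]
    if pos is not None:
        bits[pos] ^= 1
    return bits

assert correct([1, 0, 1]) == [1, 1, 1]   # middle bit had flipped
assert correct([0, 1, 0]) == [0, 0, 0]   # same syndrome, different data
```

Note that `[1, 0, 1]` and `[0, 1, 0]` produce the same syndrome: the checks identify the error without distinguishing which logical value is stored.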
The computer scientists Dorit Aharonov and Michael Ben-Or (and other researchers working independently) later proved that quantum error-correcting codes could theoretically push error rates close to zero. The
threshold theorem, a cornerstone in quantum information theory, asserts that a quantum computer with a physical error rate below a certain threshold can, through quantum error correction, suppress
the logical error rate to arbitrarily low levels. This revelation has ignited optimism about the feasibility of practical quantum computers.
Current estimates put the threshold for the surface code on the order of 1%, though estimates range widely and are difficult to calculate due to the exponential difficulty of simulating large quantum
systems. At a 0.1% probability of a depolarizing error, the surface code would require approximately 1,000-10,000 physical qubits per logical data qubit, though more pathological error types could
change this figure drastically.
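A common heuristic for this below-threshold behaviour — not taken from the article, and with illustrative constants — is that the logical error rate scales roughly as A·(p/p_th)^((d+1)/2), where d is the code distance. A quick numerical sketch shows why operating below threshold matters so much:

```python
def logical_error_rate(p, p_th=0.01, d=11, prefactor=0.1):
    """Heuristic surface-code scaling: suppression improves
    exponentially with code distance d only when p < p_th."""
    return prefactor * (p / p_th) ** ((d + 1) / 2)

below = [logical_error_rate(0.001, d=d) for d in (3, 7, 11)]
above = [logical_error_rate(0.02, d=d) for d in (3, 7, 11)]

# Below threshold, bigger codes help; above it, they make things worse.
assert below[0] > below[1] > below[2]
assert above[0] < above[1] < above[2]
```

With p = 0.001 and p_th = 0.01, each increase of the distance by 2 cuts the logical error rate by another factor of 100 in this model, which is why a physical error rate an order of magnitude under threshold is such a sought-after milestone.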
Quantum Error Correction Advances
Researchers from MIT, Google, the University of Sydney, and Cornell University have introduced a groundbreaking quantum error correction code that can correct errors afflicting a specified fraction
of a quantum computer’s qubits. Unlike previous codes limited by the square root of the total qubits, this code can address a larger fraction, making it applicable to reasonably sized quantum
computers. The researchers treat each state of the quantum computation as a spatial dimension, assigning a bank of qubits to each state. Agreement measurements on these qubits modify their states to
ensure lawful error propagation, aiding in error detection and correction without revealing specific qubit values.
While the protocol might require some redundancy in hardware for efficiency, the researchers believe that increasing logical qubits is easier than increasing error correction distances. Stephen
Bartlett, a physics professor at the University of Sydney, considers the additional qubits needed by the scheme as a significant but manageable reduction compared to existing structures.
In May 2022, a team led by Thomas Monz achieved the first fault-tolerant implementation of a universal set of gates, crucial for programming all quantum algorithms. Using an ion trap quantum computer
with 16 trapped atoms, they demonstrated computational operations on two logical quantum bits, including a CNOT gate and a T gate, essential for universality. This achievement marks progress in
building practical quantum computers.
Real-Time Quantum Monitoring: A Quantum Leap
Recent research from Yale University showcases real-time quantum monitoring and feedback, challenging a century of quantum mechanics research. By continuously observing quantum systems, errors can be
detected and reversed mid-flight, presenting a new frontier in quantum control and error prevention.
Recent Advances in Quantum Error Correction: Light at the End of the Tunnel?
Quantum computing’s potential to revolutionize various fields like materials science, drug discovery, and finance is undeniable. But its practical application hinges on overcoming a major roadblock:
error correction. The fragile nature of qubits, prone to errors and noise, makes reliable computations a distant dream. However, recent advancements in quantum error correction are injecting a dose
of optimism, suggesting the path towards fault-tolerant quantum computers may be getting brighter.
Recent Developments
1. Surface Codes Take Center Stage:
Surface codes have emerged as a leading contender for error correction due to their ability to correct errors without disturbing neighboring qubits. Recent breakthroughs include:
• Google’s demonstration of surface code error correction with 43 logical qubits: This represents a significant step towards large-scale fault-tolerant systems.
• Development of new topological surface codes: These codes offer improved noise resilience and can be implemented with various qubit technologies.
2. Fault-Tolerant Architectures Gain Momentum:
Designing hardware that inherently minimizes error propagation is another crucial approach. Recent advances include:
• Diamond-based quantum processors: These offer exceptional stability and long coherence times, making them ideal candidates for fault-tolerant architectures.
• Topological quantum computers: These leverage the unique properties of exotic materials to achieve inherent error correction.
3. Software Solutions Take Flight:
Error correction algorithms and protocols are crucial for efficient and scalable correction. Recent developments include:
• Machine learning-powered error correction: This utilizes AI to identify and correct errors in real-time, potentially improving efficiency.
Machine learning has been applied to quantum error correction, showcasing the effectiveness of neural networks like Boltzmann machines. Swedish researchers developed a neural decoder using deep
reinforcement learning, achieving performance comparable to hand-made algorithms. Q-CTRL, an Australian startup, focuses on reducing noise and errors in quantum computers through firmware design.
Their approach aims to enhance the resilience of quantum computers and quantum sensors.
Physicists from the University of Waterloo and the Perimeter Institute for Theoretical Physics proposed a machine learning algorithm for quantum error correction. Utilizing a Boltzmann machine, they
demonstrated its capability to model error probability distributions and generate error chains for quantum states recovery. The algorithm’s simplicity and generalizability make it a promising tool
for larger quantum systems.
• Hybrid classical-quantum error correction: This combines the strengths of classical and quantum computing for robust error detection and correction.
Harvard’s Breakthrough in Error Correction
A groundbreaking paper published in Nature unveils the potential of Harvard’s quantum computing platform in solving the persistent challenge of quantum error correction.
The foundation of the Harvard platform, developed over several years, lies in an array of laser-trapped rubidium atoms, each serving as a quantum bit or “qubit.” The innovation lies in the dynamic
configuration of their “neutral atom array,” enabling the movement and connection of atoms, referred to as “entangling” in quantum physics, during computations. Two-qubit logic gates, entangling
pairs of atoms, are crucial units of computing power.
Quantum algorithms involve numerous gate operations, which are susceptible to errors, rendering the algorithm ineffective. The team reports an unprecedented near-flawless performance of their
two-qubit entangling gates, achieving error rates below 0.5 percent. This level of operation quality positions their technology on par with leading quantum computing platforms, such as
superconducting qubits and trapped-ion qubits.
Harvard’s approach boasts significant advantages over competitors due to its large system sizes, efficient qubit control, and the ability to dynamically reconfigure atom layouts. Simon Evered, a
Harvard Griffin Graduate School of Arts and Sciences student in Lukin’s group and the paper’s first author, emphasizes the potential for large-scale, error-corrected devices based on neutral atoms.
The low error rates pave the way for quantum error-corrected logical qubits with even lower errors than individual atoms, marking a significant stride towards scalable and reliable quantum computing.
Photonic Quantum Computers
In September 2021, researchers from DTU Fotonik created a large photonic quantum information processor on a microchip, demonstrating error-correction protocols with photonic quantum bits. The chip’s
design allows it to protect itself from errors using entanglement, a crucial step toward scalable quantum computers. Efforts are underway to increase the efficiency of photon sources on the chip to
enable the construction of larger-scale quantum photonic devices.
Chalmers University of Technology researchers developed a technique to control quantum states of light in a three-dimensional cavity, addressing a major challenge in quantum computing. The method
allows the generation of various quantum states of light, including the elusive cubic phase state, by manipulating a superconducting cavity with electromagnetic pulses. The achievement signifies
progress in achieving precise control over quantum states, crucial for the development of practical quantum computers.
Experimentation and Verification:
Testing and validating error correction techniques are crucial for progress. Recent achievements include:
• Demonstration of error correction protocols on real quantum hardware: This validates the theoretical concepts in practice and paves the way for further scaling.
• Development of benchmark datasets for error correction: These datasets will enable researchers to compare and improve the performance of different error correction techniques.
The Path Forward: Collaborative Innovation
Despite these exciting advancements, significant challenges lie ahead. Scaling up error correction protocols to millions of qubits, optimizing algorithms for efficiency, and minimizing noise sources
are crucial hurdles to overcome. However, the relentless pursuit of researchers and the rapid pace of innovation in the field give us reason to believe that the dream of fault-tolerant quantum
computers is within reach.
As quantum researchers delve deeper into error correction methodologies, collaboration between academic institutions, industry players, and quantum experts becomes paramount. The journey towards
fault-tolerant quantum computation necessitates refining qubit technologies, optimizing error correction techniques, and developing advanced quantum algorithms tailored for large-scale quantum systems.
Conclusion: Navigating the Quantum Seas
The quest for large-scale, fault-tolerant quantum computers is an odyssey filled with challenges and breakthroughs. From the theoretical foundations of quantum error correction to the experimental
realization of scalable architectures, researchers are charting unexplored territories in the quantum realm. As we stand on the brink of a quantum revolution, each innovation and discovery brings us
closer to unlocking the full potential of quantum computing—a potential that could reshape the landscape of computation and problem-solving in the years to come.
Four reasons to use Bayesian inference
The following is a direct quote from Anthony O’Hagan’s book Bayesian Inference. I’ve edited the quote only to enumerate the points.
Why should one use Bayesian inference, as opposed to classical inference? There are various answers. Broadly speaking, some of the arguments in favour of the Bayesian approach are that it is
1. fundamentally sound,
2. very flexible,
3. produces clear and direct inferences,
4. makes use of all available information.
I’ll elaborate briefly on each of O’Hagan’s points.
Bayesian inference has a solid philosophical foundation. It is consistent with certain axioms of rational inference. Non-Bayesian systems of inference, such as fuzzy logic, must violate one or more
of these axioms; their conclusions are rationally satisfying to the extent that they approximate Bayesian inference.
Bayesian inference is at the same time rigid and flexible. It is rigid in the sense that all inference follows the same form: set up a likelihood and a prior, then calculate the posterior by
conditioning on observed data via Bayes theorem. But this rigidity channels creativity into useful directions. It provides a template for setting up complex models when necessary.
Frequentist inferences are awkward to explain. For example, confidence intervals and p-values are tedious to define rigorously. Most consumers of confidence intervals and p-values do not know what
they mean and implicitly assume Bayesian interpretations. The difference is not simply pedantic. Particularly with regard to p-values, the common understanding can be grossly inaccurate. By contrast,
Bayesian counterparts are simple to define and interpret. Bayesian credible intervals are exactly what most people think confidence intervals are. And a Bayesian hypotheses test simply compares the
probability of each hypothesis via Bayes factors.
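As a small illustration of how direct these quantities are (the prior and the grid method here are my choices, not O'Hagan's): with a uniform prior on a coin's bias, observing 7 heads in 10 flips yields a Beta(8, 4) posterior, and a credible interval is read straight off that distribution.

```python
def credible_interval(heads, tails, level=0.95, n_grid=100_000):
    """Equal-tailed credible interval for a binomial proportion
    under a uniform Beta(1, 1) prior, via a grid approximation."""
    a, b = heads + 1, tails + 1          # conjugate Beta posterior
    xs = [(i + 0.5) / n_grid for i in range(n_grid)]
    dens = [x ** (a - 1) * (1 - x) ** (b - 1) for x in xs]
    total = sum(dens)
    tail = (1 - level) / 2
    acc, lo, hi = 0.0, 0.0, 1.0
    for x, d in zip(xs, dens):
        acc += d / total
        if acc < tail:
            lo = x                        # last point below the lower tail
        if acc <= 1 - tail:
            hi = x                        # last point below the upper tail
    return lo, hi

lo, hi = credible_interval(7, 3)
# "There is a 95% probability the bias lies in (lo, hi)" -- exactly the
# statement people commonly (and wrongly) attribute to confidence intervals.
assert lo < 0.7 < hi
```

Collecting more data in the same proportion (say 70 heads in 100 flips) narrows the interval, just as intuition demands.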
Sometimes the necessity of specifying prior distributions is seen as a drawback to Bayesian inference. On the other hand, the ability to specify prior distributions means that more information can be
incorporated in an inference. See Musicians, drunks, and Oliver Cromwell for a colorful illustration from Jim Berger on the need to incorporate prior information.
Explore the 'zipWith' approach for Hamming in Haskell on Exercism
distance :: String -> String -> Maybe Int
distance xs ys
| length xs /= length ys = Nothing
| otherwise = Just $ length $ filter id (zipWith (/=) xs ys)
The most straightforward way of solving this problem is to
• first check that the lengths are equal, and then
• iterate over both inputs together to count their differences.
Higher-order functions
Higher-order functions are functions that take functions as arguments. Examples well-known even outside of Haskell are map and filter.
• map applies a function to all elements of a list. It returns a list of all the results.
-- >>> map (* 2) [1 .. 5]
-- [2,4,6,8,10]
• filter applies a predicate to all elements of a list. It returns a list with only those elements for which predicate returned True.
-- >>> filter odd [1 .. 5]
-- [1,3,5]
Another example that is extremely common in Haskell is the function composition operator (.), which 'chains' two functions into one.
reverseSort :: [Int] -> [Int]
reverseSort = reverse . sort -- first sort, then reverse
-- >>> reverseSort [3, 1, 4, 1, 5, 9]
-- [9,5,4,3,1,1]
Still other examples include
• zipWith, which combines two lists into one using a element-combining function.
-- >>> zipWith (+) [10, 20, 30] [1, 2, 3]
-- [11,22,33]
• uncurry, which turns a function of two arguments into one that accepts a tuple.
tupleMinus = uncurry (-)
-- >>> tupleMinus (23, 4)
-- 19
• fmap, which does the same as map except it also works for some types that aren't lists.
-- >>> fmap (+ 1) [1 .. 5]
-- [2,3,4,5,6]
-- >>> fmap (+ 1) (Just 7)
-- Just 8
In this approach
After making sure that the lengths are equal, we count the number of places in which the inputs differ.
The /= operator compares two values and returns True precisely when they are unequal.
-- >>> 4 /= 5
-- True
-- >>> 2 /= 2
-- False
We use zipWith to walk both input lists simultaneously, marking unequal pairs using (/=) as we go.
comparisons =
zipWith (/=)
[3, 2, 6]
[5, 2, 4, 7]
-- >>> comparisons
-- [True,False,True]
zipWith stops as soon as one of its argument lists stops. This is the reason we need to check the lengths separately.
Now, the number of differences between the two inputs is exactly the number of Trues in the list produced by zipWith. We count them using filter and length.
We need to give filter a predicate to filter by. In this case, this predicate should return True when given True and False for False. A function that does this already exists in the Prelude: id is
the function that returns its argument unchanged.
We could also have zipped the two lists using the simpler zip function.
pairs =
  zip
    [3, 2, 6]
    [5, 2, 4, 7]
-- >>> pairs
-- [(3,5),(2,2),(6,4)]
In that case we would have needed to count the pairs that have the same value in both places. This can still be done using filter, but the required predicate is more complicated. uncurry (/=) is a
function that takes a tuple and returns True when the tuple has equal values in both places.
distance :: String -> String -> Maybe Int
distance xs ys
| length xs /= length ys = Nothing
| otherwise = Just $ length $ filter (uncurry (/=)) (zip xs ys)
Considerations on this approach
This style of solution is very easy to understand. This is a very important quality for code to have! Code is primarily meant for humans to reason about, after all.
On the other hand, this solution suffers an inefficiency. Haskell's lists are linked lists. Therefore, length needs to walk its argument entirely. This can take a lot of time for long lists.
length is called three times in this approach, resulting in three separate walks over the inputs. The explicit recursion and worker–wrapper approaches avoid this and instead walk the input exactly once.
Project Euler #126: Cuboid layers | HackerRank
[This problem is a programming version of Problem 126 from projecteuler.net]
The minimum number of cubes to cover every visible face on a cuboid measuring 3×2×1 is twenty-two.
If we then add a second layer to this solid it would require forty-six cubes to cover every visible face, the third layer would require seventy-eight cubes, and the fourth layer would require
one-hundred and eighteen cubes to cover every visible face.
However, the first layer on a cuboid measuring 5×1×1 also requires twenty-two cubes; similarly, the first layer on cuboids measuring 5×3×1, 7×2×1, and 11×1×1 all contain forty-six cubes.
We shall define C(n) to represent the number of cuboids that contain n cubes in one of its layers. So C(22) = 2, C(46) = 4, C(78) = 5, and C(118) = 8.
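The layer counts above follow a closed form (my own derivation, not part of the problem statement): the n-th layer of an a×b×c cuboid contains 2(ab+bc+ca) + 4(a+b+c)(n−1) + 4(n−1)(n−2) cubes. A brute-force sketch using it can reproduce the sample values by enumerating every cuboid whose first layer does not already exceed the target:

```python
def layer(a, b, c, n):
    """Cubes needed for the n-th layer around an a x b x c cuboid
    (n = 1 is the layer touching the cuboid itself)."""
    return (2 * (a * b + b * c + c * a)
            + 4 * (a + b + c) * (n - 1)
            + 4 * (n - 1) * (n - 2))

def C(target):
    """Number of cuboids having some layer of exactly `target` cubes."""
    count = 0
    c = 1
    while layer(c, c, c, 1) <= target:          # smallest dimension
        b = c
        while layer(b, b, c, 1) <= target:
            a = b
            while layer(a, b, c, 1) <= target:  # enumerate a >= b >= c
                n = 1
                while (cubes := layer(a, b, c, n)) <= target:
                    if cubes == target:
                        count += 1
                    n += 1                       # layers grow monotonically
                a += 1
            b += 1
        c += 1
    return count

assert layer(3, 2, 1, 1) == 22 and layer(3, 2, 1, 2) == 46
assert C(22) == 2 and C(46) == 4
```

This direct enumeration is only meant to verify the pattern on small inputs; the full problem's constraints would call for something faster.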
Given n, compute C(n).
The first line of input contains the number of test cases. Each test case consists of a single line containing a single integer, n.
For the first few test files worth 25% of the total points:
For the next few test files worth 25% of the total points:
For the last few test files worth 50% of the total points:
For each test case, output a single line containing a single integer, the value C(n).
The sample I/O are mentioned in the problem statement.
Designing Steel Structures for Fire Safety
Jean-Marc Franssen Department of Architecture, Geology, Environment & Constructions, University of Liège, Liege, Belgium
Venkatesh Kodur Department of Civil & Environmental Engineering, Michigan State University, East Lansing, USA
Raul Zaharia Department of Steel Structures and Structural Mechanics, “Politehnica’’ University of Timisoara, Timisoara, Romania
Cover photo: Courtesy of CTICM, GSE and INERIS
Taylor & Francis is an imprint of the Taylor & Francis Group, an informa business. © 2009 Taylor & Francis Group, London, UK. Typeset by Macmillan Publishing Solutions, Chennai, India. Printed and bound in Great Britain by TJ International Ltd, Padstow, Cornwall.

All rights reserved. No part of this publication or the information contained herein may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, by photocopying, recording or otherwise, without written prior permission from the publishers. Although all care is taken to ensure integrity and the quality of this publication and the information herein, no responsibility is assumed by the publishers nor the author for any damage to the property or persons as a result of operation or use of this publication and/or the information contained herein.

Published by: CRC Press/Balkema, P.O. Box 447, 2300 AK Leiden, The Netherlands. e-mail: [email protected]. www.crcpress.com – www.taylorandfrancis.co.uk – www.balkema.nl

British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data: Franssen, Jean-Marc. Designing steel structures for fire safety / Jean-Marc Franssen, Venkatesh Kodur, Raul Zaharia. p. cm. Includes bibliographical references. 1. Building, Fireproof. 2. Building, Iron and Steel. I. Zaharia, Raul. II. Kodur, Venkatesh. III. Title. TH1088.56.F73 2009 693.82–dc22 2009006378

ISBN 978-0-415-54828-1 (Hbk)
ISBN 978-0-203-87549-0 (eBook)
Table of Contents
Foreword Preface Notations Author profiles
CHAPTER 1 – INTRODUCTION
1.1 Fire safety design
1.2 Codes and standards
    1.2.1 General
    1.2.2 Fire safety codes
    1.2.3 North American codes and standards
    1.2.4 European codes: the Eurocodes
1.3 Design for fire resistance
    1.3.1 Fire resistance requirements
    1.3.2 Fire resistance assessment
    1.3.3 Eurocodes
    1.3.4 Scope of Eurocode 3 – Fire part
1.4 General layout of this book

2.1 Fundamental principles
    2.1.1 Eurocodes load provisions
    2.1.2 American provisions for fire design
2.2 Examples
    2.2.1 Office building
    2.2.2 Beam for a shopping centre
    2.2.3 Beam in a roof
2.3 Specific considerations
    2.3.1 Simultaneous occurrence
    2.3.2 Dead weight
    2.3.3 Upper floor in an open car park
    2.3.4 Industrial cranes
    2.3.5 Indirect fire actions
    2.3.6 Simplified rule
CHAPTER 3 – THERMAL ACTION
3.1 Fundamental principles
    3.1.1 Eurocode temperature-time relationships
        3.1.1.1 Nominal fire curves
        3.1.1.2 Equivalent time
        3.1.1.3 Parametric temperature–time curves
        3.1.1.4 Zone models
        3.1.1.5 Heat exchange coefficients
    3.1.2 Eurocode localised fire, flame not impacting the ceiling
    3.1.3 Eurocode localised fire, flame impacting the ceiling
    3.1.4 CFD models in the Eurocode
    3.1.5 North American time–temperature relationships
3.2 Specific considerations
    3.2.1 Heat flux to protected steelwork
    3.2.2 Combining different models
3.3 Examples
    3.3.1 Localised fire
    3.3.2 Parametric fire – ventilation controlled
    3.3.3 Parametric fire – fuel controlled
4.1 General
4.2 Unprotected internal steelwork
    4.2.1 Principles
    4.2.2 Examples
        4.2.2.1 Rectangular hollow core section
        4.2.2.2 I-section exposed to fire on 4 sides and subjected to a nominal fire
        4.2.2.3 I-section exposed to fire on 3 sides
4.3 Internal steelwork insulated by fire protection material
    4.3.1 Principles
    4.3.2 Examples
        4.3.2.1 H section heated on four sides
        4.3.2.2 H section heated on three sides
4.4 Internal steelwork in a void protected by heat screens
4.5 External steelwork
    4.5.1 General principles
    4.5.2 Example
5.1 Level of analysis
    5.1.1 Principles
    5.1.2 Boundary conditions in a substructure or an element analysis
    5.1.3 Determining Efi,d,0
5.2 Different calculation models
    5.2.1 General principle
        5.2.1.1 Tabulated data
        5.2.1.2 Simple calculation models
        5.2.1.3 Advanced calculation models
    5.2.2 Relations between the calculation model and the part of the structure that is analysed
    5.2.3 Calculation methods in North America
5.3 Load, time or temperature domain
5.4 Mechanical properties of carbon steel
5.5 Classification of cross-sections
5.6 How to calculate Rfi,d,t?
    5.6.1 General principles
    5.6.2 Tension members
    5.6.3 Compression members with Class 1, 2 or 3 cross-sections
    5.6.4 Beams with Class 1, 2 or 3 cross-section
        5.6.4.1 Resistance in shear
        5.6.4.2 Resistance in bending
            5.6.4.2.1 Uniform temperature distribution
            5.6.4.2.2 Non-uniform temperature distribution
        5.6.4.3 Resistance to lateral torsional buckling
    5.6.5 Members with Class 1, 2 or 3 cross-sections, subject to combined bending and axial compression
    5.6.6 Members with Class 4 cross-sections
5.7 Design in the temperature domain
5.8 Design examples
    5.8.1 Member in tension
        5.8.1.1 Verification in the load domain
        5.8.1.2 Verification in the time domain
        5.8.1.3 Verification in the temperature domain
    5.8.2 Column under axial compression
        5.8.2.1 Fire resistance time of the column with unprotected cross-section
        5.8.2.2 Column protected with contour encasement of uniform thickness
    5.8.3 Fixed-fixed beam supporting a concrete slab
        5.8.3.1 Classification of the section, see Table 5.2
        5.8.3.2 Verification in the load domain
        5.8.3.3 Verification in the time domain
        5.8.3.4 Verification in the temperature domain
        5.8.3.5 Beam protected with hollow encasement
    5.8.4 Class 3 beam in lateral torsional buckling
CHAPTER 6 – JOINTS
6.1 General
6.2 Simplified procedure
6.3 Detailed analysis
    6.3.1 Temperature of joints in fire
    6.3.2 Design resistance of bolts and welds in fire
        6.3.2.1 Bolted joints in shear
        6.3.2.2 Bolted joints in tension
        6.3.2.3 Fillet welds
        6.3.2.4 Butt welds

7.1 General
7.2 Introduction
7.3 Thermal analysis
    7.3.1 General features
    7.3.2 Capabilities of the advanced thermal models
    7.3.3 Limitations of the advanced thermal models
    7.3.4 Discrepancies with the simple calculation models
7.4 Mechanical analysis
    7.4.1 General features
    7.4.2 Capabilities of the advanced mechanical models
    7.4.3 Limitations of the advanced mechanical models
    7.4.4 Discrepancies with the simple calculation models
CHAPTER 8 – DESIGN EXAMPLES
8.1 General
8.2 Continuous beam
8.3 Multi-storey moment resisting frame
8.4 Single storey industrial building
8.5 Storage building

I.1 Thermal properties of carbon steel
    I.1.1 Eurocode properties
        I.1.1.1 Thermal conductivity
        I.1.1.2 Specific heat
    I.1.2 Thermal properties of steel according to ASCE
        I.1.2.1 Thermal conductivity
        I.1.2.2 Specific heat
I.2 Thermal properties of fire protection materials
I.3 Temperatures in unprotected steel sections (Eurocode properties)
I.4 Temperatures in protected steel sections (Eurocode properties)

ANNEX II – MECHANICAL PROPERTIES OF CARBON STEELS
II.1 Eurocode properties
    II.1.1 Strength and deformation properties
    II.1.2 Thermal elongation
II.2 ASCE properties
    II.2.1 Stress strain relations for steel (Version 1)
    II.2.2 Stress strain relations for steel (Version 2)
    II.2.3 Coefficient of thermal expansion

Bibliography
Subject index
Foreword

This book is a major new contribution to the wider understanding of structural behaviour in fires. The art and science of designing structures for fire safety has grown dramatically in recent years,
accompanied by the development of sophisticated codes of practice such as the Eurocodes. The Eurocode documents have evolved over several decades and now represent the best international consensus on
design rules for structures exposed to fires. Codes alone do not provide enough information for designers, especially as they become more sophisticated and comprehensive. Most codes have been written
by a small army of dedicated experts, some of whom have been immersed in the project for many years with the responsibility to provide the correct rules, not always providing adequate guidance for
using those rules. Designers want to understand the basic concepts of the code and the philosophy of the code-writers, together with hands-on advice for using the code. Structural design for fire is
conceptually similar to structural design for normal temperature conditions, but often more difficult because of internal forces induced by thermal expansion, strength reduction due to elevated
temperatures, much larger deflections, and many other factors. Before making any design it is essential to establish clear objectives, and determine the severity of the design fire. This book shows
how these factors are taken into account, and gives guidance for all those wishing to use the Eurocodes for fire design of steel structures. Prof. Jean-Marc Franssen has been a pioneer in the field
of structural design for fire safety, with extensive involvement in codes, software development, research, teaching, and consulting. He is also well known for establishing the SiF Structures in Fire
international workshops. His co-authors are Dr Raul Zaharia, a leading European researcher in structural fire engineering, and Prof. Venkatesh Kodur from Michigan State University, one of the top researchers and teachers in structural fire design in North America. Together they have produced a book which will be extremely valuable to any design professionals or students wishing to use and
understand Eurocode 3, or to learn more about the design of steel structures exposed to fires. The fire sections of the Eurocodes are considered to be among the most advanced international codes of
practice on fire design of structures, and have attracted attention around the world. This book is an excellent introduction for readers from other regions who wish to become knowledgeable about the
philosophy, culture and details of the Eurocodes for structural fire design.

Professor Andy Buchanan
University of Canterbury, New Zealand
Preface

Fire represents one of the most severe conditions encountered during the lifetime of a structure and, therefore, the provision of appropriate fire safety measures for structural members is a major
safety requirement in building design. The basis for this requirement can be attributed to the fact that when other measures for containing the fire fail, structural integrity is the last line of
defence. The historical approach for evaluating fire resistance of structural members is through prescriptive-based methodologies. These methodologies have significant drawbacks and do not provide
rational fire designs. Therefore, in the last two decades there have been important research endeavours devoted to developing a better understanding of structural behaviour under fire conditions and to developing rational design approaches for evaluating the fire resistance of structures. This activity was particularly significant in Western Europe, where numerous research reports, Ph.D. theses and scientific papers were published. European technical committees were at the forefront of implementing some of the research findings into codes and standards to enable the application of rational fire
engineering principles in the design of structures. Among the first internationally recognised codes of practice are, for steel elements, the recommendations of the ECCS “European Convention for
Constructional Steelwork’’ (ECCS 1983) and, for concrete elements, the recommendations of the CEB/FIP “Comité Euro-International du béton / Fédération Internationale de la précontrainte’’ (CEB 1991).
The fire parts of the Eurocodes were first presented in Luxemburg in 1990. Over the next few years these Eurocode documents have been significantly updated by incorporating new or updated provisions
based on latest research findings reported from around the world. On similar lines, in the last few years, many countries have moved towards implementing rational fire design methodologies in codes
and standards. One such example is the recent introduction of rational fire design approach in the latest edition of American Institute of Steel Construction’s steel design manual. In addition, a
number of countries around the world are updating their codes and standards by introducing performance-based fire safety design provisions. A performance-based approach to fire safety often
facilitates innovative, cost-effective and rational designs. However, undertaking performance-based fire safety design requires advanced models, calculation methodologies, design manuals and trained personnel. The Eurocode documents, or recently updated codes and standards in other countries, are nevertheless far from being useful textbooks, lecture notes or guidance documents. While
these codes and standards provide specifications for undertaking
rational fire design, there is no detailed commentary or explanation for the various specifications or calculation methodologies. Added to this, fire design is rarely taught as part of regular
engineering curriculum and thus most engineers, architects and regulators may not be fully versed with the necessary background to easily understand the relevant clauses, or to make interpretations,
or to recognise the limits of application of various rules. In other words, unless one has some level of expertise in fire safety engineering, it is not easy to apply the current provisions in codes
and standards in most practical situations. Compounding this problem is the fact that there are only a handful of textbooks in the area of structural fire engineering. This book is aimed at
filling the current gaps in structural fire engineering by providing necessary background information for rational fire design of steel structures. It deals with various calculation methodologies for
fire design and analyses structural steel elements, assemblies and systems. The intent is to provide a basis for engineers with traditional backgrounds to evaluate the fire response of steel
structures at any level of complexity. Since the main aim of the book is to help facilitate rational fire design of steel structures, the book relies heavily on Eurocode 3 provisions, as well as
relevant fire provisions in American and other codes and standards. In this book the information relevant to fire design of steel structures is presented in a systematic way in eight chapters. Each chapter begins with an introduction to the concepts to be covered, followed by a detailed explanation of those concepts. The calculation methods as relevant to code provisions (in Europe, North
America or other continents) are discussed in detail. Worked examples relevant to calculation methodologies on simple structural elements are presented. For the case of complete structures guidance
on how analysis can be carried out is presented. Chapter 1 of the book is devoted to providing relevant background information to codes and standards and principles of fire resistance design. The
chapter discusses fire safety design philosophies and prescriptive and performance-based fire safety design issues. Chapter 2 deals with the basis of design and mechanical loads. The load
combinations to be considered for fire design of structures, as per European and North American codes and standards, are discussed. Chapter 3 discusses the detailed steps involved in establishing the
fire scenarios for various cases. Both Eurocode and North American temperature-time relationships are discussed. Procedures in this section allow the designer to establish the time–temperature relationships or heat flux evolutions under a specified design fire. Chapter 4 deals with the steps associated with establishing the temperature history in the steel structure resulting from the fire temperature. The various approaches for undertaking thermal analysis by simple calculation models are discussed. Chapter 5 presents the steps associated with establishing the mechanical response of a structure exposed to fire. The possibilities for analysis at different levels (member level, sub-structure level and global level) are discussed. Full details related to simple calculation methods for
undertaking strength analysis at member level are presented. Chapter 6 is devoted to fire resistance issues associated with design of joints. The steps associated with the fire resistance of a bolted
or a welded joint through simplified and detailed procedures are discussed. Chapter 7 deals with thermal and mechanical analysis through advanced calculation models. The procedures involved in sub-structure analysis or global structural analysis under fire exposure are fully discussed. Case studies are presented to illustrate
the detailed fire resistance analysis of various structures. Chapter 8 presents four design examples showing how a complex structure can be designed using the concept of element or sub-structure
analysis. The book concludes with two annexes which present some of the design information related to material properties and temperature profiles. Annex I focuses on thermal properties of structural steel and commonly used insulation materials and the resulting temperature profiles in steel. Annex II focuses on mechanical properties of structural steel. This textbook is a reference that allows
designers to go beyond current prescriptive approaches that generally do not yield a useful understanding of actual performance during a fire, into analyses that give realistic evaluation of
structural fire performance. The book is a compendium of essential information for determination of the effects of fire on steel structures. However, the book is not a substitute for the complete
text of Eurocode 3 or any other codes and standards. The book should help a reader not familiar with fire safety engineering to make relevant calculations for establishing the fire response of steel
structures. The target audience for this book is professionals in engineering or architecture, students or teachers in these disciplines, and building officials and regulators. A good knowledge of
mechanics of structures is essential when reading this book, while general background on the design philosophy related to building structures is an advantage. It is hoped that the book will enable
researchers, practitioners and students to develop greater insight into structural fire engineering, so that safer structures can be designed for fire conditions. If needed, an errata list will be
placed on www.structuresinfire.com.

Jean-Marc Franssen, Venkatesh Kodur, Raul Zaharia
[email protected] [email protected] [email protected]
Notations

Latin upper case letters

A        cross-sectional area of a member
Am       surface area of a member per unit length
Ap       appropriate area of fire protection material per unit length of the member
At       area of walls, ceiling and floor, including openings
Av       total area of vertical openings of all walls
D        characteristic length of the fire
Ea,θ     Young's modulus
Ed       design value of effects of actions at room temperature
Efi,d    design value of effects of actions in case of fire
Fd,fi    design value of the actions in case of fire
Fb,Rd    design bearing resistance of a bolt at room temperature
Fij      view factor
Ft,Rd    design tension resistance of a bolt at room temperature
Fv,Rd    design shear resistance of a bolt per shear plane at normal temperature
Fw,Rd    design strength of a fillet weld at normal temperature
Gd,fi    design values of the permanent actions in case of fire
Gk       characteristic value of the permanent action
H        vertical distance between the fire source and the ceiling
I        second moment of area of a cross-section
Lf       flame length
Lh       horizontal flame length
O        opening factor
Pd,fi    design values of the prestressing action in case of fire
Pk       characteristic value of the prestressing action
Q        rate of heat release of the fire
Qc       convective part of the rate of heat release
Q∗D      non-dimensional square root of the Froude number
Q∗H      non-dimensional square root of the Froude number
Qd,fi    design values of the variable actions in case of fire
Qk       characteristic value of the variable action
Rfi,d,t  design value of the resistance in case of fire
RHRf     rate of heat release density
V        volume of a member per unit length
VRd      shear resistance of the gross cross-section for normal temperature design
Wel      elastic modulus of a section
Wpl      plastic modulus of a section
Xd,fi    design value of the material properties in case of fire
Latin lower case letters

b        wall factor
c        specific heat
dp       thickness of the fire protection material
fp,θ     limit of proportionality
fy       yield strength
fy,θ     effective yield strength
ḣ        heat flux received by the structure at the level of the ceiling
ḣnet     net heat flux
heq      weighted average of opening heights on all walls
kb,θ     reduction factor for bolts
kE,θ     reduction factor for the Young's modulus
kp       parameter for protection material, see Eq. 4.9
kp,θ     reduction factor for the limit of proportionality
ksh      correction factor for the shadow effect
kw,θ     reduction factor for welds
ky,θ     reduction factor for the effective yield strength
lfl      buckling length
qt,d     design value of the fire load density
r        horizontal distance from the vertical axis of the fire to the point under the ceiling where the flux is calculated
t        time
t∗       modified time in the parametric fire model
tlim     shortest possible duration of the heating phase
tmax     duration of the heating phase
y        ratio between two distances
z        vertical position of the virtual source
z0       position of the virtual origin
Greek upper case letters

βM       equivalent uniform moment factor
Γ        factor in the parametric fire model
χ        buckling coefficient
χLT      coefficient for lateral torsional buckling
ψ0       coefficient for combination value of a variable action, taking into account the reduced probability of simultaneous occurrence of the most unfavourable values of several independent actions
ψ1       coefficient for frequent value of a variable action, generally representing the value which is exceeded with a frequency of 0.05, or 300 times per year
ψ2       coefficient for quasi-permanent value of a variable action, generally representing the value which is exceeded with a frequency of 0.50, or the average value over a period of time
Greek lower case letters

α        imperfection factor, see Eq. 5.14
αc       coefficient of heat transfer by convection
γG       partial factor for the permanent actions
γP       partial factor for the prestressing action
γQ       partial factor for the variable actions
ε        factor for local buckling, see Eqs. 5.5 and 5.7
εm       surface emissivity of a member
κ1       adaptation factor for non-uniform temperature in a cross-section
κ2       adaptation factor for non-uniform temperature along a beam
ηfi      conversion factor between the effects of actions at room temperature and the effects of actions in case of fire
θg       gas temperature in the compartment or near a steel member
θm,t     surface temperature of a member at time t
λ        thermal conductivity
λ̄        non-dimensional slenderness
ρ        density
µ0       degree of utilisation in the fire situation
σ        Stefan–Boltzmann constant
Subscripts

0        independent action, virtual origin
1        frequent action
2        quasi-permanent action
A        accidental
a        steel
b        box
c        convection
d        design
el       elastic
f        flame
fi       fire
G        permanent
g        gas
h        horizontal
k        characteristic
lim      limit
m        member
max      maximum
P        prestressing
p        protection material, proportionality
pl       plastic
req      required
sh       shadow
t        total
v        vertical
Author profiles
Jean-Marc Franssen is professor at the University of Liege in Belgium, from where he graduated in 1982. He has specialised his research career in the behaviour of structures subjected to fire, which was the subject of his Ph.D. thesis (Franssen, 1987) and his “thèse d'agrégation de l'enseignement supérieur” (Franssen, 1997). During the last 20 years, he has been involved in numerous research
programs on the fire behaviour of steel and composite steel-concrete structures, many of them with the support of the ECSC “European Coal and Steel Community”. The steel industry, namely the Luxemburg company ARBED, now in the ARCELOR-Mittal group, played a prominent role in several of these projects, with CTICM from France, TNO from the Netherlands and LABEIN from Spain as regular
partners. At the University of Liege, he holds the chair of Fire Safety Engineering. He was a member of the Project Team for transforming the fire section of Eurocode 3 from an ENV to an EN, he
played a key role in writing the Belgium National Application Documents to the fire parts of the Eurocodes on steel, concrete and composite structures, and has been an active member of two Technical
Groups chaired by Prof. U. Schneider within RILEM, “Réunion International des Laboratoires d'Essais sur les Matériaux”, namely “TC 129-MHT: Test Methods for Mechanical Properties of Concrete at High
Temperatures’’ and “TC HTC Mechanical Concrete Properties at High Temperature – Modelling and Applications’’. He initiated the international “Structures in Fire’’ workshops held in Copenhagen,
Christchurch, Ottawa, Aveiro and Singapore from 2000 to 2008. Having been involved in the “Natural Fire Safety Concept’’ series of research projects, his expertise goes beyond structural behaviour to
include modelling of fire growth and severity. He has acted as the Belgian National Technical Contact for the section of Eurocode 1 on fire actions.

Venkatesh Kodur is a Professor in the Department
of Civil & Environmental Engineering and also serves as Director of the Center on Structural Fire Safety and Diagnostics at the Michigan State University (MSU). Before moving to MSU, he worked as a
senior Research Officer at the National Research Council of Canada (NRCC), where he carried out research in structural fire safety field. He received his M.Sc. and Ph.D. from Queen’s University,
Canada in 1988 and 1992, respectively. He currently teaches undergraduate and graduate courses, which include structural fire engineering. He directs the research of 1 PDF, 7 PhD, 7 MS, and several
undergraduate students. Dr. Kodur’s research has focused on the evaluation of fire resistance of structural systems through large scale fire experiments and numerical modelling; characterization of
the materials under high temperature (constitutive modeling); performance based fire
safety design of structures; and non-linear design and analysis of structural systems. He has collaborated closely with various industries, funding agencies, and international organisations and
developed simplified design approaches for evaluating fire resistance, and innovative and cost-effective solutions for enhancing fire-resistance of structural systems. Many of these design approaches
and fire resistance solutions have been incorporated into various codes and standards. He has published over 175 peer-reviewed papers in international journals and conferences in structural and fire
resistance areas and has given numerous invited key-note presentations. Dr. Kodur is a professional engineer, Fellow of ASCE and Fellow of ACI and member of SFPE and CSCE. He is also an Associate
Editor of Journal of Structural Engineering, Chairman of ASCE-SFPE Standards Committee on Structural Design for Fire Conditions, Chairman of ACI-TMS Committee 216 on Fire Protection and a member of
EPSRC (UK) College of Reviewers. He has won many awards including AISC Faculty Fellowship Award for innovation in structural steel design and construction (2007), NRCC outstanding achievement award
(2003) and NATO award for collaborative research. Dr. Kodur was part of the FEMA/ASCE Building Performance Assessment Team that studied the collapse of WTC buildings as a result of September 11
incidents.

Raul Zaharia is Associate Professor at the Politehnica University of Timisoara, Romania. He graduated from the same university in 1993 and completed his Ph.D. thesis in the field of steel
structures. In 2000, he was awarded a one year postdoctoral grant at the University of Liege by the Services of the Prime Minister of Belgium for Scientific, Technical and Cultural Affairs, for
research works in the field of fire design. During this period, he studied the behaviour of high-rise steel rack structures in fire and started the collaboration with Jean Marc Franssen in the field
of fire design. Other experiences abroad include periods spent at CTICM France, ELSA Italy and City University of London for a total of three years. His experience in the field of steel structures
and fire design involves research, reflected by more than 80 scientific papers and research reports, but also advanced fire calculation for some buildings built in Bucharest, Romania. At the
University of Timisoara, he initiated the master course “Fire design of civil engineering structures”. He participated in the translation of the fire parts of EN1993 for steel structures, EN1994 for steel-concrete structures and EN1999 for aluminium structures into Romanian for ASRO (Romanian Association for Standardisation) and was part of the team which elaborated the Romanian National Annexes of these documents. He is a member of the ECCS TC3 Committee “Fire Design” as an expert from Romania.
Chapter 1 – Introduction

1.1 Fire safety design

Fire represents one of the most severe environmental hazards to which buildings and built infrastructure are subjected, and thus fires account for significant personal, capital
and production loss in most countries of the world each year. Therefore, the provision of appropriate measures for protecting life and property is the prime objective of fire safety design in
buildings. These fire safety objectives can be achieved through different strategies. During the design process, and in the initial stages of a fire, the first aim is to confine the fire inside the
compartment so that it does not spread to other parts of the building. However, if the fire becomes large despite preventive measures, then the aim of design is to ensure that the building remains structurally stable for a period of time long enough to evacuate the occupants and for the firefighters to contain the fire. It is the job of the designer to ensure the effectiveness of the chosen measures in preventing fires from spreading and destroying the building. Measures to retard fire growth include the use of low fire hazard materials, the use of fire protection, the use of sprinklers, and provisions to facilitate fire department operations. In addition to these measures, there should be careful control over the combustible materials that are brought into a building on a regular basis
as part of the function of the structure, e.g. warehouses for fuels, residences, storage areas, etc. To protect people, measures should be taken against the hazards of the spread of fire and its
combustion products. The design of the building should ensure safety of the people in case of fire by providing adequate exit avenues and time, preventing the spread of smoke and hot gases, and
ensuring the integrity of the structure for a reasonable time period under fire. The closest measures related to building design are those related to fire confinement. These measures include ensuring
sufficient time of structural integrity and stability for evacuation of people and extinguishing the fire, and providing fire barriers capable of delaying or preventing the spread of fire from one
room to another.
1.2 Codes and standards

1.2.1 General

Methods, experiences, and related measures for achieving maximum safety against fires are usually found in national codes and standards. The given methods
and measures in these codes and standards affect the design strategies for maximizing the safety of the structure against fires. Codes are usually more general than standards, and they include in their framework references to standards. Model codes consist of comprehensive building regulations suitable for adoption as law by municipalities, and they establish a basic pattern for building codes throughout the country. Codes, when adopted by states and municipalities, are intended to become regulations through legislation. Generally, building codes specify minimum requirements for design and
construction of buildings and structures. These minimum requirements are established to protect health and safety of the public and generally represent a compromise between optimum safety and
economic feasibility. Structural design, fire protection, means of egress, light, sanitation, and interior finish are general features covered in the building codes. Standards are one step below the codes and are considered to be a set of conditions or requirements to be met by a material, product, process, or procedure. Also, standards give detailed methods of testing to determine the physical, functional, or performance characteristics of materials or products. Usually, standards are referenced in building codes, thus keeping the building codes to a workable size. Also, standards are used
by specification writers in the design stage of a building to provide guidelines for bidders and contractors. Standards referenced in building codes can be generally classified into materials
standards, engineering practice standards and testing standards. Materials standards generally establish minimum requirements of quality as measured by composition, mechanical properties, dimensions
and uniformity of product. They include methods of sampling, handling, storage and testing for verification of such quality. Engineering practice standards include basic design procedures, engineering formulas and specified provisions intended to provide satisfactory engineering performance. Testing standards describe procedures for testing the quality and performance of materials or assemblies. They include procedures for testing and measuring characteristics such as strength, stability, durability, combustibility and fire resistance.

1.2.2 Fire safety codes
The fire safety requirements for buildings are generally contained in national building codes or fire codes. Current practice for fire safety design of structures in most countries is principally
based on the provisions of locally adopted codes that are usually based upon the model building codes. The codes specify minimum required fire endurance times (or fire endurance ratings) for building
elements and accepted methods for determining their fire endurance ratings. Building codes, based on the prescribed provisions, can be classified into prescriptive codes and performance codes. In
the prescriptive codes, full details about what materials can be used, the maximum or minimum size of building, and how components should be assembled are specified. In the performance codes the
objectives to be met, and the criteria to be followed to achieve the set-out objectives are listed. Therefore, freedom is allowed for the designer and builder for selecting materials and methods of
construction as long as it can be shown that the performance criteria can be met. Performance codes still include a fair amount of specification-type requirements,
but special attention is given for provisions allowing alternate methods and materials if proven adequate. The prescriptive approach to fire safety engineering is simple to implement and enforce, and
given the lack of documented structural failure while in use, has been deemed satisfactory in meeting the codes’ stated intent, which is “…to establish the minimum requirements to safeguard the
public health, safety and general welfare through structural strength, means of egress facilities, stability, sanitation, adequate light and ventilation, energy conservation, and safety to life and
property from fire and other hazards attributed to the built environment.’’ However, there is also a growing recognition that the current prescriptive, component-based method only provides a relative
comparison of how similar building elements performed under a standard fire exposure, and does not provide information about the actual performance (i.e. load carrying capacity) of a component or
assembly in a real fire environment (SFPE, 2000). Thus, for a certain class of buildings such as high-rises or other important structures, which, due to the longer evacuation time or the significance
of the buildings, may be required to survive beyond the maximum code required fire endurance time without structural collapse, a performance-based fire resistance approach may provide a more rational
method for achieving the necessary fire resistance that is more consistent with the needed level of protection. A performance based fire resistance approach considering the evolution of the
building’s structural capacity as it undergoes realistic (non-standard) fire exposures is thus a desirable alternative fire resistance design method for those structures.

1.2.3 North American codes and standards
In the United States there are two main building codes: the International Building Code (IBC, 2006) and the NFPA Building and Construction Code (NFPA, 2003). In Canada, the National Building Code of Canada (NBCC) contains the requirements for building design. Both ICC and NFPA codes specify fire resistance ratings in a prescriptive environment. However, the latest edition of the NBCC (2005) contains objective-based provisions for fire safety design. Designers have the freedom to select materials and assemblies to meet these requirements using the methods specified in referenced
standards. Moves towards performance based codes are being taken slowly in US codes (ICC 2003, ICC 2003c, NFPA 2003). NFPA (2003) contains some specific provisions for undertaking performance based
fire safety design. ICC, NFPA and NBCC codes reference nationally accepted standards which contain permitted methods for determining fire endurance ratings for various material types. This includes
standards for fire testing, standards for undertaking structural (and fire) design of steel structures and general standards or manuals for fire resistance design. The loads to be considered in
design of buildings are specified in ASCE-07 (2005). This standard contains the different loads and load combinations that are to be included in computing design forces on a structure. The load
requirements in Canada are part of National Building Code of Canada (2005). In US the specifications for testing structural elements under fire exposure is to be in accordance with procedure and
criteria set forth in ASTM E 119 (2005) or NFPA 251(2006) or UL 263 (2003). All these standards have similar specifications and are considered equivalent in most situations. In Canada, the fire test
provision for structural members is specified in ULC/CSA S101 standard
Designing Steel Structures for Fire Safety
(2004). Many of the provisions in these standards are similar to those of ISO 834 (2002). In the US, the AISC (American Institute of Steel Construction) steel construction manual (AISC, 2005) is the principal reference that contains documentation for the design of steel structures. The recent edition of the AISC Manual contains both LRFD and ASD specifications for the design of steel structures. A general discussion on overall fire design is provided in this manual. However, there is very limited information on the fire design provisions for steel structures. The AISC Manual gives some information
on thermal and mechanical properties at elevated temperatures, which will allow the design of single members for fire exposure. A recent report by AISC (Rudy et al. 2003) gives a survey of existing
codes and standards, plus a lot of background information on fire testing, analysis and design methods for steel structures. In addition to the AISC manual and guides, ASCE/SFPE 29 contains a number
of analytical methods for determining equivalent fire resistance ratings for steel structural members. Most of these analytical methods have been developed based on the results of standard fire
resistance tests carried out on steel structural assemblies under standard fire exposure. Another source for fire resistance calculation methods in US is the SFPE Handbook of Fire Protection
Engineering (SFPE, 2002) which has a chapter on steel design that gives an overview of performance of steel structures in fire, but this does not give sufficient information for the advanced
calculation methods as in Eurocode 3 (2003). SFPE (2002) also gives a chapter on high-temperature material properties of steel and insulation materials at elevated temperature. The National Building Code of Canada (NBCC 2005) contains a number of analytical methods and tabulated data for determining fire resistance ratings for steel structural members.

1.2.4 European codes: the Eurocodes
Provisions relating to design and analysis of various structural systems used in buildings are detailed in Eurocodes. The structural Eurocodes form a set of documents for determining the actions and
for calculating the stability of building constructions.

• EN 1990 defines the general rules governing the ultimate limit state design, which is the basic philosophy of the Eurocodes.
• EN 1991 gives the design values of the actions.
• Eurocodes 2 to 6 and Eurocode 9 deal with the design of structures made of different materials: EN 1992 is for concrete structures, EN 1993 is for steel structures, EN 1994 is for composite steel-concrete structures, EN 1995 is for timber structures, EN 1996 is for masonry structures and EN 1999 is for aluminium structures. Finally, EN 1997 is especially dedicated to geotechnical design while EN 1998 covers earthquake resistance.
Each Eurocode is designated by a number in the CEN classification, starting from 1990 for the basis of design to 1999 for aluminium alloy structures. It has to be noted that these numbers have nothing to do with a date of publication. It is just fortuitous that these numbers look like years that coincide with the period when these documents were published.
In the period before the Eurocodes were adopted by CEN and became EN documents, they were simply referred to as, for example, Eurocode 1 or Eurocode 5. It is quite fortunate that the last digit of
the numbers in the CEN designation is the same as the number of the corresponding Eurocode. For example, EN 1993 is Eurocode 3.
1.3 Design for fire resistance

Current fire protection strategies for buildings often incorporate a combination of active and passive fire protection systems. Active measures, such as fire alarms and
detection systems or sprinklers, require either human intervention or automatic activation and help control fire spread and its effect as needed at the time of the fire. Passive fire protection
measures are built into the structural system by choice of building materials, dimensions of building components, compartmentation, and fire protection materials, and control fire spread and its
effect by providing sufficient fire resistance to prevent loss of structural stability within a prescribed time period, which is based on the building’s occupancy and fire safety objectives.
Materials and construction assemblies that provide fire resistance, measured in terms of fire endurance time, are commonly referred to as fire resistance-rated-construction or fire-resistive
materials and construction in the model building codes. Recent advances in fire science and mathematical modeling have led to the development of rational approaches for evaluating fire resistance. Therefore it is possible to design for the required fire resistance of structural members.

1.3.1 Fire resistance requirements
Fire resistance of a building component or assembly can be defined as its ability to withstand exposure to fire without loss of load bearing function, or to act as a barrier against spread of fire, or
both. Fire resistance is expressed as the length of time that a construction can withstand exposure to a standard fire without losing its load bearing strength or fire separating function. This
time is a measure of the fire performance of the structure and is termed the “fire resistance’’ of that structure, and it is widely used in most building codes and material standards. In North
American standards the term ‘fire endurance’ is often used to describe both load bearing and fire separation function for structural assemblies. The required fire resistance ratings for building
components are specified in building codes. Fire resistance ratings denote the required fire resistance rounded off to the nearest hour or half hour. These fire resistance requirements are a function
of several factors such as fire loads, building occupancy, height, and area. In reality, fire resistance is a function of many other factors that are not considered in the building codes, such as the
properties of the material of the wall enclosing the fire, dimensions of the openings, and heat lost to the surroundings. A major difference between the standard fire temperature curve and an actual
fire temperature curve is that the standard fire curve keeps on increasing with time, whereas the actual fire usually decreases after reaching a certain maximum. The standard fire curves in most
countries are very close to that of the ISO 834 standard. The use of actual fire temperature curves would give more accurate information on the fire performance of the construction. However, the
current practice still uses the standard fire temperature curve to express fire resistance. All the provision and ratings in North
American codes and standards are based on exposure to the standard fire. Therefore, there is a large amount of information on the standard fire resistance of numerous building components and
assemblies.

1.3.2 Fire resistance assessment
A very common method to assess fire resistance is through testing specimens, such as beams, columns and walls, or assemblies under fire. Generally, fire resistance has been established through
laboratory tests in accordance with procedures specified in standards such as ISO 834 or ASTM E119. These test methods are used to evaluate the standard fire resistance of walls, beams, columns, floor,
and roof assemblies. In addition to ISO and ASTM procedures, other organizations also publish fire test procedures which are virtually identical to those developed by the ISO and ASTM, such as the
NFPA, Underwriters Laboratories, Inc. (UL), Underwriters Laboratories of Canada (ULC), and the Standards Council of Canada. In all of these procedures, fire resistance is expressed in terms of
exposure to a standard fire. Generally, there are three criteria in a standard test method: load-bearing capacity, integrity, and, for fire barriers, the temperature rise on the unexposed side.
Most countries around the world rely on large scale fire resistance tests to assess the fire performance of building materials and structural elements. The time temperature curve used in fire
resistance tests is called the standard fire. Full size tests are preferred over small scale tests because they allow the method of construction to be assessed, including the effects of thermal
expansion, shrinkage, local damage and deformation under load. Advancement in theoretical prediction of fire resistance has been rapid in recent years. In many cases, the fire resistance of building
components can also be determined through calculations. Calculation methods are less expensive and less time-consuming than fire resistance tests, which are usually carried out on large scale
test specimens. In recent years the calculation methods are slowly being incorporated into various codes and standards. North American Codes are slow in incorporating calculation methodologies and
only contain simple equations based on prescriptive approaches. There is very little guidance on sophisticated analysis or design from first principles. However, Eurocodes are much more progressive
in adopting calculation methods for evaluating fire resistance. The fire design provisions in Eurocodes are well received by the engineering community and are often referenced in National Codes and
Standards worldwide. An example of this is the recent edition of the AISC Steel design manual (2005), which contains a reference to Eurocode 3 for fire resistance provisions.

1.3.3 Eurocodes
Except for the Eurocodes dealing with the bases of design, geotechnical design and earthquake resistance, each Eurocode is made of several parts, including part 1-1 that covers the general rules for
the design at ambient temperature and part 1-2 that covers the design in the fire situation. Table 1.1 presents a list of the different Eurocodes and Figure 1.1, from Gulvanessian et al. 2002, gives
a synoptic view of these documents and how these different codes relate to each other.
Table 1.1 Classification of the Eurocodes

Subject                               Ambient conditions   Fire conditions
Basis of design                       EN 1990              –
Actions                               EN 1991-1-1          EN 1991-1-2
Concrete structures                   EN 1992-1-1          EN 1992-1-2
Steel structures                      EN 1993-1-1          EN 1993-1-2
Composite steel-concrete structures   EN 1994-1-1          EN 1994-1-2
Timber structures                     EN 1995-1-1          EN 1995-1-2
Masonry structures                    EN 1996-1-1          EN 1996-1-2
Geotechnical design                   EN 1997              –
Earthquake resistance                 EN 1998              –
Aluminium alloy structures            EN 1999-1-1          EN 1999-1-2
Fig. 1.1 Synoptic view of the different Eurocodes: EN 1990 (structural safety, serviceability and durability) and EN 1991 (actions on structures) feed the design and detailing codes EN 1992 to EN 1996 and EN 1999, while EN 1997 and EN 1998 cover geotechnical and seismic design.
Because this book is on structures subjected to fire, any reference to a Eurocode without any further qualification will imply that the fire part of the corresponding Eurocode is referred to. If
required, the distinction will be made between the cold Eurocode, i.e. part 1-1, and the fire Eurocode, i.e. part 1-2. Historically speaking, two different stages of development have to be recognised
in the CEN designation: the stage when the Eurocodes had the status of provisory documents, called the ENV stage, and the stage when they became final documents, called the EN stage. When no
reference is made to the particular version of a Eurocode, either ENV or EN, it will simply be called “the Eurocode’’ or “Eurocode 3’’ in this book, sometimes noted as “EC3’’. When it is necessary to
mark the distinction between the provisory and the final stage, the terms “ENV’’ or “EN’’ will be used, with reference to the preliminary drafts if required, “prEN’’ or “prENV’’.

1.3.4 Scope of Eurocode 3 – Fire part
Eurocode 3 does not deal with the insulation or the integrity criteria of separating elements (criteria E and I). If a separating wall is made of steel sandwich panels, for
example, it is nowadays not possible to calculate or to simulate its behaviour under fire. This is because the behaviour of such a wall involves several complex phenomena that cannot be predicted or
modelled at present such as, for example, the opening of joints between adjacent panels, the shrinkage of insulating materials, the movements of the insulating material that create some gaps
and air layers between the material and the steel panel, the high thermal deformations of the steel sheets, the local influence of fastening elements, etc. Such separating elements have to be tested
experimentally. Should new developments be undertaken in the direction of modelling separating walls based on steel elements, the thermal and mechanical properties of steel given in the Eurocode
could serve as a starting point. Eurocode 3 deals essentially with the load bearing capacity of steel elements and structures, i.e. the mechanical resistance (criterion R). It gives information that
allows calculating whether or how long a steel structure is able to withstand the applied loads during a fire. The design is thus performed in the ultimate limit state (see Chapter 2). There is
strictly speaking, no deformation criterion explicitly mentioned in the Eurocode such as, for example, a limit equal to 1/30 of the span of a simply supported beam as is found in different standards for experimental testing. Deformation control is nevertheless mentioned in the basis of design. Deformation criteria should be applied in two cases:

1. When the protection means, for example the thermal insulation protecting the steel member, may lose its efficiency in case of excessive deformations.
2. When separating elements, for example a separating wall, supported by or located under the steel member may suffer from excessive deformations of this member.
The above criterion can be exempted in the following two cases:

1. When the efficiency of the means of protection, either a thermal insulation applied on the section or a false ceiling under a beam, has been evaluated using the test procedures specified in EN 13381-1, EN 13381-2 or EN 13381-4 as appropriate. The rationale for this exception is that these test procedures comprise at least one test on a loaded member; the effects of eventual deformations of the member are thus implicitly taken into account in the equivalent thermal properties of the insulating material or in the shielding effect provided by the false ceiling.
2. When the separating element has to fulfil requirements according to a nominal fire exposure. The rationale for this exception is that a nominal fire is an arbitrary fire exposure that allows comparing different construction systems with each other. It is by no means a representation of what could occur to the system in a real fire. It would then not be consistent to try to estimate deformations of the structural elements and to compare them with deformation criteria if the starting point of the design is purely arbitrary. According to an expression recommended by Buchanan, there is a need for a consistent level of crudeness.
Advanced calculation models (see Chapter 7) automatically provide the deformations of the structure. These deformations can be compared with any defined deformation criteria. It has to be noted,
however, that the Eurocode does not provide the deformation criteria that have to be used. The designer has to concur with
the authority having jurisdiction on the limiting criteria to be applied for evaluating failure. It should also be checked that failure will not occur from excessive deformations causing one of the
member of the structure falling from its supports. Finally, the deformations at the ultimate limit state implied by the calculation method should be limited to ensure that compatibility is maintained
between all parts of the structure. Simple calculation models, on the other hand, do not yield the deformation of the structure at the ultimate limit state. It is thus not possible to comply with the
requirement given in the Eurocode on deformation criteria when using simple calculation models. In fact, for a Class 1 or Class 2 beam in bending, for example (see Section 5.5), plastic theory shows that the plastic moment in a section can be obtained only for an infinitely small radius of curvature, i.e. for an infinite deformation. In practice, deformation criteria are normally ignored when using simple calculation models. If a particular situation leads the designer to believe that real attention should be paid to the deformation, a practical way to limit the deformations is to use the 0.2 percent proof strength instead of the effective yield strength (see Section 5.6.6).
1.4 General layout of this book

The information that is necessary to perform the fire design of a structure made of a particular material, say a steel structure, is:

(a) The basis of design, stated in EN 1990.
(b) The mechanical actions, i.e. the forces acting on the structure in the fire situation. Some information is found in EN 1991-1-2, but explicit reference is also made to EN 1991-1-1, which is therefore also necessary.
(c) The thermal actions, i.e. the fire and the heat flux induced in the elements by the fire. The information is in EN 1991-1-2.
(d) The rules for determining the temperatures in the structure during the course of the fire. They are given in the fire part of the relevant material Eurocode, e.g. EN 1993-1-2 for a steel structure.
(e) The rules for determining the structural stability. They are given in the fire part of the relevant material Eurocode, e.g. EN 1993-1-2 for a steel structure, but reference is often made to the cold part of the same material Eurocode, EN 1993-1-1 for a steel structure.
This layout is valid in general but some exceptions do exist: the structural stability of timber elements, for example, does not necessarily require the determination of the temperatures in the
element and point (d) is not required. The same holds if the fire resistance is determined from tabulated data, for concrete elements for example. The rest of this book is organised according to the following layout, illustrated by Figure 1.2:

Chapter 2 deals with basis of design and mechanical loads.
Chapter 3 deals with thermal response from the fire.
Chapter 4 deals with thermal analysis by simple calculation models.
Chapter 5 deals with mechanical analysis by simple calculation models.
Chapter 6 deals with the design of joints.
Fig. 1.2 General layout of an analysis: determine the mechanical loads and load combinations (Chapter 2) and the fire scenarios, i.e. time–temperature curves or heat flux evolutions (Chapter 3); then, for each fire scenario, calculate the temperatures in the structure (Chapter 4 or 7) and, for each load combination, calculate the fire resistance (Chapters 5 or 7, and 6); the analysis ends when all load combinations and all fire scenarios have been considered.
Chapter 7 deals with thermal and mechanical analysis by advanced calculation models. Chapter 8 gives four design examples showing how a complex structure can be designed using the concept of element
or substructure analysis.
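The analysis flow of Fig. 1.2 is essentially two nested loops. A minimal sketch of that control flow in Python (the function names and the pass/fail interface are our own invention, not part of the Eurocodes):

```python
def fire_design_analysis(fire_scenarios, load_combinations,
                         thermal_analysis, structural_analysis):
    """Nested-loop layout of Fig. 1.2: for every fire scenario compute the
    temperatures in the structure (Chapter 4 or 7), then check every load
    combination (Chapters 5 or 7, and 6). The design is acceptable only if
    every combination survives every scenario."""
    for scenario in fire_scenarios:
        temperatures = thermal_analysis(scenario)
        for combination in load_combinations:
            if not structural_analysis(temperatures, combination):
                return False  # required fire resistance not achieved
    return True
```

Here `thermal_analysis` and `structural_analysis` stand in for the simple or advanced calculation models developed in the later chapters.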
Chapter 2
Mechanical Loading
2.1 Fundamental principles

2.1.1 Eurocodes load provisions
The design philosophy of the Eurocodes is based on the concept of limit states, i.e. states beyond which the structure no longer satisfies the design performance requirements. The Eurocodes treat the
fire exposure as an accidental situation and this requires verification only against the ultimate limit state (as opposed to the serviceability limit state). The ultimate limit state is associated
with structural collapse or other similar forms of structural failure such as loss of equilibrium, failure by excessive deformation, formation of a mechanism, rupture or loss of stability. In the
semi probabilistic approach, the design against the ultimate limit state is based on the comparison between the resistance of the structure calculated with the design values of material properties,
on one hand, and the effects of actions calculated with the design values of actions, on the other hand. This is represented as:

Rfi,d,t (Xd,fi) ≥ Efi,d (Ffi,d)

where:
Rfi,d,t is the design value of the resistance in case of fire,
Xd,fi is the design value of the material properties in case of fire,
Efi,d is the design value of the effects of actions in case of fire,
Ffi,d is the design value of the actions in case of fire.
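The verification itself is a single comparison between two design values. A trivial sketch, assuming both values have already been computed (the function name and the sample numbers are ours, for illustration only):

```python
def verify_fire_uls(r_fi_d_t, e_fi_d):
    """Ultimate limit state check in the fire situation: the design
    resistance Rfi,d,t must not be smaller than the design effect of
    actions Efi,d."""
    return r_fi_d_t >= e_fi_d

# e.g. a member with 120 kNm moment resistance at time t under a
# 95 kNm design moment (invented values):
print(verify_fire_uls(120.0, 95.0))  # True
```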
The resistance and the effects of actions are both based on characteristic values of geometrical data, usually the dimensions specified in the design, for cross section sizes for example. Geometrical
imperfections such as bar out of straightness or frame initial inclinations are represented by design values. The design values of the material properties, Xd,fi , are described for each material in
the relevant material Eurocode, for example in Eurocode 3 for steel structures. These material Eurocodes also describe how the resistance, Rfi,d,t , based on these material properties, is calculated.
Eurocode 1 describes how the design values of actions, Ffi,d , are calculated.
The partial factor method considers that design values are derived from representative, or characteristic, values multiplied by scalar factors. The general equations are:

Gfi,d = γG Gk                                for the permanent actions
Qfi,d = γQ Qk, γQ ψ0 Qk, ψ1 Qk or ψ2 Qk      for the variable actions
Pfi,d = γP Pk                                for the prestressing actions

where:
Gk, Qk, Pk are the characteristic values of the permanent, variable and prestressing actions,
Gfi,d, Qfi,d, Pfi,d are the design values of these actions in case of fire,
γG, γQ, γP are the partial factors for these actions,
ψ0 is the coefficient for the combination value of a variable action, taking into account the reduced probability of simultaneous occurrence of the most unfavourable values of several independent actions,
ψ1 is the coefficient for the frequent value of a variable action, generally representing the value that is exceeded with a frequency of 0.05, or 300 times per year,
ψ2 is the coefficient for the quasi-permanent value of a variable action, generally representing the value that is exceeded with a frequency of 0.50, or the average value over a period of time.
Different actions generally occur simultaneously on the structure. In an accidental situation, they have to be combined as follows:

• design values of the permanent actions;
• design value of the accidental action;
• frequent value of the dominant variable action;
• quasi-permanent values of the other variable actions.
When it is not obvious which one amongst the variable actions is the dominant one, each variable action should be considered in turn as the dominant action, which leads to as many
different combinations to be considered. In case of fire, which is an accidental design situation, and if the variability of the permanent action is small (applicable in most cases), the following
symbolic equations, Equation 2.5a or Equation 2.5b, hold:

Efi,d = Gk + Pk + ψ1,1 Qk,1 + Σi>1 ψ2,i Qk,i     (2.5a)

Efi,d = Gk + Pk + Σi≥1 ψ2,i Qk,i                 (2.5b)
It may be noticed that the partial factors for permanent, prestressing and variable actions are equal to 1.0 in the accidental situation.
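Since all partial factors are 1.0 in the accidental situation, Equations 2.5a and 2.5b reduce to simple weighted sums. A sketch (the function names and the sample loads are invented for illustration; the ψ value used is the office figure from Table 2.1):

```python
def e_fi_d_frequent(g_k, q_k_dominant, psi_1_dominant, others):
    """Eq. 2.5a: characteristic permanent action + frequent value of the
    dominant variable action + quasi-permanent values of the others.
    `others` is a list of (q_k_i, psi_2_i) pairs."""
    return g_k + psi_1_dominant * q_k_dominant + sum(
        psi_2 * q_k for q_k, psi_2 in others)

def e_fi_d_quasi_permanent(g_k, variables):
    """Eq. 2.5b: quasi-permanent values for all variable actions,
    given as (q_k_i, psi_2_i) pairs."""
    return g_k + sum(psi_2 * q_k for q_k, psi_2 in variables)

# Office beam, live load dominant: G_k = 10, Q_k = 5 (kN/m, invented),
# psi_1 = 0.5 for category B offices:
print(e_fi_d_frequent(10.0, 5.0, 0.5, []))  # 12.5
```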
Mechanical Loading
Table 2.1 Load combination factors ψ for buildings

Action                                          ψ1    ψ2
Imposed load in buildings
  category A: domestic, residential             0.5   0.3
  category B: offices                           0.5   0.3
  category C: congregation areas                0.7   0.6
  category D: shopping                          0.7   0.6
  category E: storage                           0.9   0.8
Traffic loads in buildings
  category F: vehicle weight ≤ 30 kN            0.7   0.6
  category G: 30 kN < vehicle weight < 160 kN   0.5   0.3
  category H: roofs                             0.0   0.0
Snow loads
  for sites located at altitude H ≤ 1000 m      0.2   0.0
  for sites located at altitude H > 1000 m      0.5   0.2
Wind loads                                      0.2   0.0
Table 2.1 given here is table A1-1 of prEN 1990 and gives the relevant ψ factors for the fire situation in buildings. The choice whether the frequent value (Eq. 2.5a) or the quasi-permanent value
(Eq. 2.5b) has to be used for the dominant variable action is a nationally determined parameter. In this book, Equation 2.5a will normally be used because it leads to the most complete load
combinations and is thus more illustrative for the examples. In fact, Equation 2.5a was the only one mentioned in ENV 1991-1-2. Equation 2.5b appeared in prEN 1991-1-2 and, in EN 1991-1-2, the use of
the quasi-permanent value is recommended for variable actions. The motivation to change from the frequent to the quasi-permanent value when the ENV was changed into prEN was that this is the solution
used for earthquake, which is also an accidental action, just as the fire. Why should the fire be treated differently? The authors of this book think that this argument can be accepted, except for
the wind action. Indeed, the coefficient for the quasi-permanent action ψ2 for wind is 0 and if wind is taken with a 0 value even when it is the dominant variable action, there will be no
verification at all under horizontal forces for a structure submitted to the fire. In case of an earthquake, horizontal forces from the accidental action are of course always present and the wind
effect may not be of significant importance anyway. In fact, not only the choice between ψ1 and ψ2 is a nationally determined parameter but also the values of these factors. Values different from those
presented in Table 2.1 can be adopted in different countries. The design value of the accidental action that was mentioned above does not appear in Equation 2.5 because, in case of fire, the fire
action is not of the same form as the other actions. It does not consist of some N or some N/m2 that could be added to the dead weight or to the wind load. The fire action consists of indirect
effects of actions induced in the structure by differential and/or restrained thermal expansion. Whether and how these effects have to be taken into account is discussed in the different material Eurocodes.
2.1.2 American provisions for fire design

US model codes use ASCE-07 (ASCE, 2005) as the reference standard for the various loads to be considered in the design of buildings. ASCE/
SEI 07 contains detailed specifications for evaluating the loads under various actions. This standard utilizes the well accepted principle that the likely loads that occur at the time of a fire are
much lower than the maximum design loads specified for room temperature conditions. This is especially true for members which have been designed for load combinations including wind, snow or
earthquake. For this reason, different design loads and load combinations are used. It is generally assumed that there is no explosion or other structural damage associated with the fire. Loads on
members could be much higher if some members are removed or distressed. In ASCE-07 fire is considered as an extraordinary event. A statement in Section C2.5 of the standard states “For checking the
capacity of a structure or structural element to withstand the effect of an extraordinary event, the following load combination should be used’’. Accordingly, the design load combination for fire (Uf) is given as:

Uf = 1.2 Dn + 0.5 Ln
where Dn and Ln are the design levels of dead and live load respectively, from the standard. The loads computed as per this provision generally work out to be lower than the maximum design loads on the structure, especially for members sized for deflection control or architectural reasons. In Canada, the load provisions are specified in the National Building Code of Canada (NBCC 2005), and the provisions for loads under fire conditions are similar to those in ASCE-07.
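The reduction implied by the fire combination can be illustrated numerically. A minimal sketch in Python; the comparison combination 1.2Dn + 1.6Ln and the numerical loads are our own illustrative assumptions, not taken from the text:

```python
def fire_load_combination(dead: float, live: float) -> float:
    """ASCE-07 extraordinary-event (fire) combination: Uf = 1.2*Dn + 0.5*Ln."""
    return 1.2 * dead + 0.5 * live

def ambient_gravity_combination(dead: float, live: float) -> float:
    """A common ambient gravity combination, 1.2*Dn + 1.6*Ln, for comparison."""
    return 1.2 * dead + 1.6 * live

# Illustrative floor member: Dn = 3.0 kN/m2, Ln = 2.4 kN/m2
uf = fire_load_combination(3.0, 2.4)        # 1.2*3.0 + 0.5*2.4 = 4.8 kN/m2
ud = ambient_gravity_combination(3.0, 2.4)  # 1.2*3.0 + 1.6*2.4 = 7.44 kN/m2
print(round(uf / ud, 2))                    # fire load is roughly 2/3 of the ambient design load
```

For live-load-dominated members the fire combination is typically well below the ambient design load, which is the point made in the text.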
2.2 Examples
2.2.1 Office building
What are the relevant load combinations for an office building that is not subjected to traffic loads and has no prestressed concrete element (H < 1000 m)? If Equation 2.5a is used, the appropriate values of ψ from Table 2.1 yield the following combinations for the applied loads:
• Live load is the dominant variable action: Efi,d = dead weight + 0.5 × live load
• Snow is the dominant variable action: Efi,d = dead weight + 0.2 × snow load + 0.3 × live load
• Wind is the dominant variable action: Efi,d = dead weight + 0.2 × wind load + 0.3 × live load
Mechanical Loading
If Equation 2.5b is used, only one combination has to be considered: Efi,d = dead weight + 0.3 × live load
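The combinations above can be written as small helper functions. A minimal sketch in Python; the function names and the numerical loads are ours, while the ψ factors are those of the example:

```python
def efi_d_live_dominant(g: float, q_live: float) -> float:
    """Eq. 2.5a with live load dominant: G + 0.5 * live."""
    return g + 0.5 * q_live

def efi_d_snow_dominant(g: float, q_snow: float, q_live: float) -> float:
    """Eq. 2.5a with snow dominant: G + 0.2 * snow + 0.3 * live."""
    return g + 0.2 * q_snow + 0.3 * q_live

def efi_d_wind_dominant(g: float, q_wind: float, q_live: float) -> float:
    """Eq. 2.5a with wind dominant: G + 0.2 * wind + 0.3 * live."""
    return g + 0.2 * q_wind + 0.3 * q_live

def efi_d_eq_2_5b(g: float, q_live: float) -> float:
    """Eq. 2.5b: a single combination, G + 0.3 * live."""
    return g + 0.3 * q_live

# Illustrative loads in kN/m2; the governing Eq. 2.5a case is the envelope:
g, live, snow, wind = 5.0, 3.0, 1.0, 0.8
governing = max(efi_d_live_dominant(g, live),
                efi_d_snow_dominant(g, snow, live),
                efi_d_wind_dominant(g, wind, live))
```

Taking the maximum over the candidate dominant actions reproduces the practice, noted below, of checking each variable action in turn as the dominant one.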
The wind load may take different patterns and values, depending on the direction of the wind and whether the wind induces a positive or a negative pressure inside the building. This can increase
significantly the amount of calculations but, especially in sway frames or for the bracing system of a laterally supported structure, this cannot be avoided.
2.2.2 Beam for a shopping centre
What is the design load on a beam that is part of a floor in a shopping centre? A beam supporting a floor in this type of building is designed using Equation 2.10 (from Eq. 2.5a) because neither wind
nor snow can affect such a beam.
Efi,d = dead weight + 0.7 × live load  (2.10)
2.2.3 Beam in a roof
What is the design load supported by a beam that is part of a roof (H > 1000 m)? A beam supporting the roof of a building is designed using Equation 2.11 if snow is the dominant variable action and Equation 2.12 if wind is the dominant variable action.
Efi,d = dead weight + 0.5 × snow load  (2.11)
Efi,d = dead weight + 0.2 × wind load + 0.2 × snow load  (2.12)
2.3 Specific considerations
2.3.1 Simultaneous occurrence
Clause 4.2.2 (1) of EN 1991-1-2 says that “Simultaneous occurrence with other independent accidental actions needs not be considered’’. The important word is independent. A fire and a tornado can be
considered as independent and they will not be combined. An earthquake, on the other hand, frequently gives rise to numerous building fires. In this case, the actions are not really independent.
Whether the combination or, better, the succession of these two events has to be considered should be decided on a case-by-case basis in consultation with the authorities or the contractor. This
could also be the case for a fire resulting from a terrorist action such as a bombing, or impact resulting from a vehicle hitting the building in an accident. Designing a structure that would be able
to sustain successively two of these accidental actions is of course not without an influence on the cost of the structure. In this book, it is considered that the fire is the only accidental action.
2.3.2 Dead weight
The dead weight must be considered in all loading scenarios. It is important that all components of the dead weight are considered. For example, in a residential building,
it would not be a significant approximation to neglect the lighting devices in the verification of the concrete floors. On the other hand, in a lightweight steel industrial building, the dead weight of venting ducts suspended from the beams of the roof, or of the supporting beams of a crane, can form an important part of the dead weight. This is also the case for equipment or reservoirs located on and supported by the roof.
2.3.3 Upper floor in an open car park
In an open car park with access for vehicles on the upper floor, it does not seem realistic to combine the traffic load and the snow load. This floor is calculated under the two different hypotheses:
1. With the traffic load. It is then not considered as the roof of the building but as a floor.
2. With the snow load and no traffic.
The snow load on the upper floor and the traffic loads on the other floors can of course be combined.
2.3.4 Industrial cranes
For moving lifting cranes in industrial buildings, the dead weight of the longitudinal supporting beams spanning from one frame to the others has to be considered, as stated previously. This is also
the case for the transverse beams on which the motor is rolling. The weight of the motor is supported equally by these two transverse beams. The motor may be located near the column of the frame with
the highest compression force. The longitudinal position of the crane is chosen in such a way that it maximises the effects on one of the frames, see Figure 2.1. It is current practice not to consider
any load suspended to the crane because, even if there was one at the beginning of the fire, the cable is a rather thin element that is likely to heat up faster than any part of the building
structure, with the consequence that the load might rapidly come down to the ground. Operational instructions can also be given to the personnel operating the crane to lay down any load on the floor before evacuation of the building in case of fire. Whether this can be safely relied upon can be a subject of discussion. In special cases, it might be desirable to gather additional information on the
statistical distribution in time of the value of this load and to deduce a design value in case of fire. It has to be noted that EN 1991-1-2 states explicitly under clause 4.2.1 (5) that “Loads
resulting from industrial operations are generally not taken into account’’. This can also be seen as a justification for not taking any load supported by the crane into account.
2.3.5 Indirect fire actions
Indirect fire actions are defined in Eurocode 1 as internal forces and moments caused by thermal expansion. Further in the text, in Clause 4.1 (1) of EC1, it is recognised that they result from
imposed and constrained expansions and deformations.
Fig. 2.1 Typical crane in an industrial building (plan view with the longitudinal beams, the transverse beams and the motor)
They shall be considered apart from those cases where they:
(a) may be recognized a priori to be either negligible or favourable;
(b) are accounted for by conservative support and boundary conditions and/or conservatively specified fire safety requirements.
Some engineering judgement is thus necessary to decide in each particular case whether at least one of these conditions is met. A particular case is quoted under Clause 4.1 (4): “. . . when fire
safety requirements refer to members under standard fire conditions’’. This is the case, for example, if the requirement is expressed as 60 minutes fire resistance to the standard fire for the
columns and 30 minutes for the beams. The motivation behind this permission is probably to be found in the fact that, historically, requirements made on members and based on the standard fire were
linked to a verification by an experimental fire test in which no indirect actions were present. If a calculation model is used to verify such an arbitrary requirement, this is with the objective to
obtain a result similar to the result that a fire test would have given, but at a lower cost and much faster. The objective is not to obtain a representation of the real fire behaviour of the
structure. The calculation model must therefore represent as closely as possible the conditions of the test and, hence, no indirect action is taken into account. If the requirement refers to the entire structure as a whole, or if the requirement refers to anything else than the standard fire, a fire model for example, it does not necessarily mean that the indirect actions automatically have to be taken into account, but authorisation not to do so is not automatic. If indirect fire actions need not be considered, either because of the permission stated in Clause 4.1 (4) or because it is
considered that one of the conditions a) and b) given previously is met, then the effects of actions are constant throughout the fire exposure and may thus be determined at time t = 0.
The question of indirect fire actions will be discussed further in Chapter 5, when the analysis of substructures is introduced.
2.3.6 Simplified rule
EN 1991-1-2 states in Clause 4.2.1 (1)P that “Actions shall be considered as for normal temperature design, if they are likely to act in the fire situation’’. In other words, actions acting in the
fire situation should be considered in the design of the structure subjected to the fire. It goes without saying, one might think. Yet, this sound principle is contradicted by a simplified rule.
According to clause 4.3.2 (2), in the cases where indirect fire actions need not be explicitly considered, “effects of actions may be deduced from those determined in normal temperature design’’ by a
multiplication factor ηfi, according to Equation 2.13.
Efi,d,t = ηfi Ed  (2.13)
where:
Ed is the design value of the relevant effects of actions from the fundamental combination according to EN 1991-1-1,
ηfi is a reduction factor, with ηfi = (Gk + ψfi Qk,1)/(γG Gk + γQ,1 Qk,1) and ψfi = ψ1,1 or ψ2,1 depending on the choice made for the nationally determined parameter, see § 2.1.
The reduction factor ηfi is smaller than 1.0 and stems from the reduction of the design loads when going
from the room temperature situation to the fire situation. The idea seems to be interesting if the effects of actions at room temperature have been determined in a complex structure by a manual
method, the “method of Cross’’ for example. It is in this case considered as a benefit to be allowed to simply multiply these results by a scalar factor rather than doing one or several additional
analyses for the fire situation. The authors of this book nevertheless do not recommend the utilisation of this “simplified’’ rule but rather advocate determining the effects of action in case of
fire with the basic Equation 2.5. This is because:
1. The simplified rule very seldom reduces the amount of calculations.
• If the structure is very simple, it is as fast, or even faster, to calculate the effects of actions in case of fire as to calculate the reduction factor ηfi and then multiply the effects of actions at room temperature by this factor.
• If the structure is complex, it is nowadays common practice to determine the effects of actions by means of a numerical tool, say a finite element or a displacement based program. In this case, it is a small task to analyse the structure for a couple of additional load combinations, namely the load combinations in case of fire. The most important task is the discretisation of the structure. If, on the other hand, it is decided to apply the simplified rule, it is not always a simple task to determine which one of the load combinations at room temperature is the fundamental one. Equation 2.13 then has to be applied several times with different actions considered as the dominant load condition.
Fig. 2.2 Characteristic loads on a portal frame (2 kN/m permanent load on the 20 m beam; 2.5 kN/m wind load on the 10 m columns)
2. The simplified rule can lead to results that are not statically correct. Let us consider an element in which the permanent load induces an axial force and the variable load, for example the wind,
induces a bending moment. Any load combination at room temperature will induce effects of actions in the element that will be of the following type:
Ed = (γG Nk; γQ Mk), for example Ed = (1.35 Nk; 1.50 Mk)
The effects of actions in case of fire have, according to the general Equation 2.5, the following expression:
Efi,d = (Nk; ψ1,1 Mk), for example Efi,d = (Nk; 0.20 Mk)
It is obvious that any multiplication of Ed by a scalar factor will lead to a result which is different from Efi,d. As an example, let us consider the simple plane frame shown in Figure 2.2, subjected to a permanent vertical load of 2 kN/m on the beam and a wind load of 2.5 kN/m on the columns. The values indicated for the loads are characteristic values. The dead weight of the columns
is supposed to be negligible. The characteristic values of the effects of actions at the base of the columns are easily calculated:
(Nk; Mk) = (2 kN/m × 20 m / 2; 2.5 kN/m × 10 m × 5 m) = (20 kN; 125 kNm)
The design values of the effects of actions are determined for the design at ambient temperature:
(Nd; Md) = (1.35 × 20 kN; 1.50 × 125 kNm) = (27 kN; 187.5 kNm)
Application of Equation 2.5 directly yields the effects of actions in case of fire as:
(Nfi,d; Mfi,d) = (1.00 × 20 kN; 0.20 × 125 kNm) = (20 kN; 25 kNm)
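These numbers are easy to reproduce with a few lines of Python; the variable names are ours, all input values are those of the example:

```python
# Portal frame of Figure 2.2: characteristic loads from the text.
g, span = 2.0, 20.0      # permanent load [kN/m] on the beam, beam span [m]
w, height = 2.5, 10.0    # wind load [kN/m] on the columns, column height [m]

n_k = g * span / 2                  # characteristic axial force at a column base: 20 kN
m_k = w * height * height / 2       # characteristic bending moment: 125 kNm

n_d, m_d = 1.35 * n_k, 1.50 * m_k   # ambient design values: (27 kN, 187.5 kNm)
n_fi, m_fi = 1.00 * n_k, 0.20 * m_k # fire values from Eq. 2.5: (20 kN, 25 kNm)

# "Simplified" rule, Eq. 2.13: total Gk = 40 kN, total wind Qk = 50 kN (both columns)
eta_fi = (1.00 * 40 + 0.2 * 50) / (1.35 * 40 + 1.5 * 50)   # 50/129 ≈ 0.388
n_simp, m_simp = eta_fi * n_d, eta_fi * m_d                 # ≈ (10.5 kN, 72.7 kNm)
```

Comparing `(n_fi, m_fi)` with `(n_simp, m_simp)` shows the distortion discussed below: the scaled axial force is about half the correct value while the scaled moment is almost three times too large.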
Fig. 2.3 Different effects of actions (axial force N versus bending moment M: Ek, Ed, the correct Efi,d, the simplified Efi,d, the further simplified Efi,d (2), and the proportionality rule)
However, the application of the “simplified’’ rule leads to:
ηfi = (1.00 × 40 kN + 0.2 × 50 kN)/(1.35 × 40 kN + 1.5 × 50 kN) = 50/129 = 0.388
(Nfi,d; Mfi,d) = (0.388 × 27 kN; 0.388 × 187.5 kNm) = (10.5 kN; 72.7 kNm)
It can be seen that the application of the simplified rule requires more effort than the application of the exact rule and that it yields an axial force that is only half of the correct
value, whereas the bending moment is almost three times the correct value! This comparison is illustrated in Figure 2.3. It can also be calculated that the eccentricity of the load in the correct
evaluation is 25 kNm/20 kN = 1.25 m, whereas the value derived from the simplified rule is calculated as 72.7 kNm/10.5 kN = 6.92 m. The consequence might not be negligible, especially in members that
have a highly non symmetrical M-N interaction diagram, for example in reinforced concrete, prestressed concrete or composite steel-concrete members. As a further simplification to the already
simplified rule, an arbitrary value of ηfi = 0.65 may be used, in which case it is indeed trivial to determine the effects of actions in the fire situation. Whether this approach can still be
considered as a performance-based design is highly questionable. The point corresponding to this further simplification is noted as “Efi,d simplified(2)’’ on Figure 2.3. From here on in this book,
the basic rule according to Equation 2.5 will systematically be used.
Chapter 3
Thermal Action
3.1 Fundamental principles
The calculation approaches to model the thermal action produced by a fire on a structure are described in Eurocode 1 Part 2. Different representations of the effects of fire are given: temperature–time relationships, zone models or localised models. Some considerations are given hereafter on various aspects related to the calculation of temperatures in steel sections depending on the type of representation of the fire. The fire severity to be used for design depends on the legislative environment and on the design philosophy. In a prescriptive code, the design
fire severity is usually prescribed by the code with little or no room for discussion. In a performance based code, the design fire is usually recommended to be a complete burnout, or in some cases a
shorter time of fire exposure which only allows for escape, rescue, or firefighting (Buchanan 2001). The equivalent time of a complete burnout is the time of exposure to the standard test fire that
would result in an equivalent impact on the structure.
3.1.1 Eurocode temperature–time relationships
In case of a fully developed fire, the action of the fire is most often represented by a temperature–time curve, i.e. an equation describing the evolution with time of the unique temperature that is
supposed to represent the environment in which the structure is located.
3.1.1.1 Nominal fire curves
This equation can be one of the nominal curves given in the Eurocode. They are:
1. The standard curve (sometimes called the standard ISO 834 curve, given in prEN 13501-2).
θg = 20 + 345 log10(8t + 1)  (3.1)
This curve is used as a model for representing a fully developed fire in a compartment.
2. The external fire curve.
θg = 20 + 660(1 − 0.686 e^(−0.32t) − 0.313 e^(−3.8t))  (3.2)
This curve is intended for the outside of separating external walls which are exposed to the external plume of a fire coming either from the inside of the respective fire compartment, or from a compartment situated below or adjacent to the respective external wall. This curve is not to be used for the design of external steel structures, for which a specific model exists, see Section 4.4.
3. The hydrocarbon curve.
θg = 20 + 1080(1 − 0.325 e^(−0.167t) − 0.675 e^(−2.5t))  (3.3)
This curve is used for representing the effects of a hydrocarbon fire. In these equations:
θg is the gas temperature in the compartment (Eq. 3.1 and 3.3) or near the steel member (Eq. 3.2), in °C,
t is the time, in minutes.
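The three nominal curves are straightforward to implement. A short Python transcription of Eq. 3.1 to 3.3; the function names are ours:

```python
import math

def standard_curve(t: float) -> float:
    """ISO 834 standard curve, Eq. 3.1 (t in minutes, result in °C)."""
    return 20 + 345 * math.log10(8 * t + 1)

def external_curve(t: float) -> float:
    """External fire curve, Eq. 3.2."""
    return 20 + 660 * (1 - 0.686 * math.exp(-0.32 * t) - 0.313 * math.exp(-3.8 * t))

def hydrocarbon_curve(t: float) -> float:
    """Hydrocarbon curve, Eq. 3.3."""
    return 20 + 1080 * (1 - 0.325 * math.exp(-0.167 * t) - 0.675 * math.exp(-2.5 * t))

# Standard curve after 30 and 60 minutes of exposure:
print(round(standard_curve(30)), round(standard_curve(60)))  # 842 945
```

The 842 °C and 945 °C values at 30 and 60 minutes are the familiar reference points of the standard curve; the external curve levels off near 680 °C while the hydrocarbon curve tends towards 1100 °C.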
These nominal fire curves are illustrated in Figure 3.1.
3.1.1.2 Equivalent time
Annex F of Eurocode 1 contains a method yielding an equivalent time of fire exposure that brings the user back to the utilisation of the standard temperature–time curve. The method is based on three parameters representing three physical quantities, namely the design fire load, the quantity and types of openings, and the thermal properties of the walls. A fairly simple equation gives, as a function of these three parameters, the duration of the standard fire that would have the same effect on the structure as a real fire that could occur in the relevant conditions. Equivalent time methods
Fig. 3.1 Three different nominal fire curves as specified in the Eurocodes (temperature in °C versus time in min: hydrocarbon, standard and external curves)
are nowadays considered somewhat outdated, and other more refined models exist that allow representing the influence of the conditions on the severity of a real fire, see below.
3.1.1.3 Parametric temperature–time curves
The temperature representing the fire can also be given by the parametric temperature–time curve model from Annex A of Eurocode 1. This annex presents all equations required to calculate the temperature–time curve based on the value of the parameters that describe the particular situation. The model is valid for fire compartments up to 500 m2 of floor area and a maximum height of 4 metres, without openings in the roof. The same equations are presented here in a somewhat more logical way, i.e. in the order in which they have to be used. The input data are:
• Thermal properties of the walls, ceiling and floor, namely thermal conductivity λ in W/mK, specific heat c in J/kgK and density ρ in kg/m3.
• Geometric quantities such as the total area of walls, ceiling and floor including openings At in m2, the total area of vertical openings of all walls Av in m2 and the weighted average of opening heights on all walls heq in m.
• Design value of the fire load density qt,d in MJ/m2, related to At.
• Fire growth rate: slow, medium or fast.
The successive steps are:
Evaluate the wall factor b according to Equation 3.4 (somewhat more complex equations are given for enclosure surfaces made of several layers of different materials or enclosure surfaces made of different materials).
b = √(cρλ)  (3.4)
Evaluate the opening factor O according to Equation 3.5.
O = Av √heq / At  (3.5)
where heq is the weighted average of the window heights, heq = Σ Avi hi / Σ Avi
Evaluate the factor Γ according to Equation 3.6 (Γ factors higher than 1 will yield a heating phase of the parametric curve that is hotter than the ISO curve and vice versa).
Γ = [(O/0.04)/(b/1160)]²  (3.6)
Determine the shortest possible duration of the heating phase tlim in hours, depending on the fire growth rate:
tlim = 25 minutes, i.e. 5/12 hour, for a slow fire growth rate;
tlim = 20 minutes, i.e. 1/3 hour, for a medium fire growth rate;
tlim = 15 minutes, i.e. 1/4 hour, for a fast fire growth rate.
Evaluate the duration of the heating phase tmax in hours according to Equation 3.7.
tmax = 0.2 × 10⁻³ qt,d / O  (3.7)
If tmax > tlim then the fire is ventilation controlled and:
• The temperature during the heating phase, i.e. until t = tmax, is given by Equation 3.8, with t* = t · Γ.
θg = 20 + 1325(1 − 0.324 e^(−0.2t*) − 0.204 e^(−1.7t*) − 0.472 e^(−19t*))  (3.8)
• The temperature during the cooling down phase is given by Equation 3.9, with t*max = tmax · Γ and θmax the temperature reached at the end of the heating phase.
θg = θmax − 625(t* − t*max)  for t*max ≤ 0.5  (3.9a)
θg = θmax − 250(3 − t*max)(t* − t*max)  for 0.5 < t*max < 2.0  (3.9b)
θg = θmax − 250(t* − t*max)  for 2.0 ≤ t*max  (3.9c)
If tmax ≤ tlim then the fire is fuel controlled and:
• Evaluate the limiting opening factor Olim according to Equation 3.10.
Olim = 0.1 × 10⁻³ qt,d / tlim  (3.10)
• Evaluate Γlim according to Equation 3.11, i.e. Equation 3.6 with Olim instead of O.
Γlim = [(Olim/0.04)/(b/1160)]²  (3.11)
If O > 0.04 and qt,d < 75 and b < 1160, Γlim obtained from Equation 3.11 has to be multiplied by the factor k given by Equation 3.12.
k = 1 + ((O − 0.04)/0.04) · ((qt,d − 75)/75) · ((1160 − b)/1160)  (3.12)
• The temperature during the heating phase, i.e. until t = tlim, is given by Equation 3.8 in which t* = t · Γlim.
• The temperature during the cooling down phase is given by Equation 3.13, where t*lim = tlim · Γlim.
θg = θmax − 625(t* − t*lim)  for t*max ≤ 0.5  (3.13a)
θg = θmax − 250(3 − t*max)(t* − t*lim)  for 0.5 < t*max < 2.0  (3.13b)
θg = θmax − 250(t* − t*lim)  for 2.0 ≤ t*max  (3.13c)
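The successive steps of the parametric model can be sketched in Python. This is a simplified sketch under the assumptions noted in the docstring, not a design tool; the names are ours:

```python
import math

def parametric_curve(t, q_td, O, b, t_lim):
    """Gas temperature [°C] at time t [h] for the parametric fire of EN 1991-1-2
    Annex A. Sketch only: no validity-range checks, the large-opening factor k
    is omitted, and the fuel-controlled cooling phase is simplified."""
    gamma = ((O / 0.04) / (b / 1160)) ** 2        # factor Gamma
    t_max = max(0.2e-3 * q_td / O, t_lim)
    if 0.2e-3 * q_td / O > t_lim:                 # ventilation controlled
        g = gamma
    else:                                         # fuel controlled
        O_lim = 0.1e-3 * q_td / t_lim
        g = ((O_lim / 0.04) / (b / 1160)) ** 2    # Gamma_lim

    def heating(ts):                              # heating-phase curve, ts = t*
        return 20 + 1325 * (1 - 0.324 * math.exp(-0.2 * ts)
                            - 0.204 * math.exp(-1.7 * ts)
                            - 0.472 * math.exp(-19 * ts))

    if t <= t_max:
        return heating(t * g)
    theta_max = heating(t_max * g)                # temperature at end of heating
    ts, ts_max = t * g, t_max * g                 # cooling-phase branches
    if ts_max <= 0.5:
        return theta_max - 625 * (ts - ts_max)
    if ts_max < 2.0:
        return theta_max - 250 * (3 - ts_max) * (ts - ts_max)
    return theta_max - 250 * (ts - ts_max)

# Ventilation-controlled example: q_td = 400 MJ/m2, O = 0.08 m^0.5, b = 1160,
# medium growth rate. Peak of the heating phase at t = tmax = 0.5 h:
peak = parametric_curve(0.5, 400, 0.08, 1160, 20 / 60)   # ≈ 1048 °C
```

With O = 0.08 and b = 1160 the factor Γ equals 4, so the heating phase runs hotter than the ISO curve, as noted in the description of Equation 3.6.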
This parametric fire model described in Annex A of EN 1991-1-2 has its roots in the parametric model that was present in ENV 1991-1-2, which was based on work done in Sweden by Petersson, Thelandersson and Magnusson and later reformulated by Wickström. In addition, Franssen (2000) introduced three modifications to the ENV model, namely:
1. The first one deals with the equations that allow calculating the wall factor b for walls with several layers made of different materials, as is the case, for example, for a brick wall covered by a layer of plaster. The equation has been improved in order to take into account in a more precise manner the amount of energy introduced in layered walls when they are heated.
2. The concept of a minimum duration of the heating phase, tlim, has been introduced, which marks the transition from a fuel to an air controlled situation. The idea is that, no matter how big the openings are and no matter how small the fire load is, it nevertheless requires a certain amount of time, namely tlim, to burn any piece of furniture. A typical example could be the burning of one single car in an open car park.
3. The factor k defined by Equation 3.12 has been introduced to take into account the effects of large openings that, in a fuel controlled situation, vent the compartment and thus limit the rise in temperatures. This coefficient has been calibrated in order to obtain the best fit between the model and a set of 48 experimental fire tests.
In the background document of EN 1991-1-2, ProfilARBED 2001, two additional modifications have been introduced. Firstly, the coefficients of Equations 3.7 and 3.10 have been given different values, namely 0.001 and 0.002, whereas a single value of 0.0013 had been proposed by Franssen (2000). Secondly, the notion that tlim depends on the fire growth rate has been introduced. These modifications were introduced because they improved the fit between the model and the set of available experimental results. The first one nevertheless has the consequence that the model is no longer continuous; a hiatus now exists between the temperature–time curves in the fuel controlled regime and those in the air controlled regime.
3.1.1.4 Zone models
One-zone models and two-zone models also produce as a result a temperature that represents the temperature of the gas in the compartment, for the one-zone models, or in each zone, for the two-zone models. These models are based on the hypothesis that the temperature is uniform in the compartment (one-zone models) or in each of the upper and lower zones that exist in the compartment (two-zone models). Physical
quantities such as the properties of the walls and the openings are not lumped into one single parameter as is the case for the parametric fire models but, on the contrary, each opening can be
represented individually and each wall can be represented with its own thermal properties. The temperature evolution is not prescribed by a predetermined equation as in the parametric fire models but
results from the integration on time of the differential equations expressing the equilibrium of mass and of energy in the zones. Application of such models thus requires the utilisation of numerical
computer software, see for example the software OZone developed at the university of Liege (Cadorin and Franssen 2003, and Cadorin et al. 2003).
3.1.1.5 Heat exchange coefficients
For all these situations where the action of the fire around the structural member is represented by a unique temperature, Eurocode 1 gives the equation that has to be used in order to calculate at any time t the net heat flux reaching a steel member. Taking into account the fact that the emissivity of the fire and the configuration factor may be taken as equal to 1, and the fact that the radiation temperature may be taken as the gas temperature, the net heat flux can be calculated by Equation 3.14. It shows that the net flux is made of a convection term and a radiation term.
ḣnet = αc (θg,t − θm,t) + εm σ (θ⁴g,t − θ⁴m,t)  (3.14)
where:
αc is the coefficient of heat transfer by convection,
θg,t is the temperature of the gas around the member (in K),
θm,t is the surface temperature of the member (in K),
εm is the surface emissivity of the member,
σ is the Stephan Boltzmann constant (= 5.67 × 10⁻⁸ W/m2K4).
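Equation 3.14 translates directly into code. A minimal sketch in Python; the names are ours, and temperatures are entered in °C and converted to K for the radiation term:

```python
SIGMA = 5.67e-8  # Stephan-Boltzmann constant [W/m2K4]

def net_heat_flux(theta_g, theta_m, alpha_c=25.0, eps_m=0.7):
    """Net heat flux [W/m2] per Eq. 3.14. The defaults (alpha_c = 25 W/m2K for a
    surface exposed to the standard curve, eps_m = 0.7 for carbon steel) are the
    values quoted in the text."""
    convection = alpha_c * (theta_g - theta_m)
    radiation = eps_m * SIGMA * ((theta_g + 273.15) ** 4 - (theta_m + 273.15) ** 4)
    return convection + radiation

# Gas at 600 °C, steel surface still at 20 °C:
flux = net_heat_flux(600.0, 20.0)   # roughly 37 kW/m2
```

At this temperature difference the radiation term already dominates the convection term, which is why the T⁴ term cannot be neglected in fire calculations.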
The surface emissivity is taken as 0.7 for carbon steel, 0.4 for stainless steel and 0.8 for other materials for which the related design parts of EN 1992 to EN 1996 and EN 1999 give no specific value, e.g. concrete. The value to be used for the coefficient of convection (αc) depends on the fire curve that is considered and on the surface conditions, either on a surface exposed to the fire or on the unexposed side, for example the top surface of a concrete slab heated from underneath. Table 3.1 gives the recommended values of αc for different surface conditions.
3.1.2 Eurocode localised fire, flame not impacting the ceiling
In case of a localised fire, the flame length Lf in meters is calculated according to Equation 3.15, known in the literature as the Heskestad flame height correlation, Heskestad 1983.
Lf = 0.0148 Q^0.4 − 1.02 D  (3.15)
Table 3.1 Coefficient of convection αc (W/m2K) for different surface conditions (values as recommended in EN 1991-1-2):
Unexposed side of separating elements, possibility 1 (radiation considered separately): 4
Unexposed side of separating elements, possibility 2 (radiation implicitly contained): 9
Surface exposed to the fire, standard curve or external fire curve: 25
Surface exposed to the fire, hydrocarbon curve: 50
Surface exposed to the fire, parametric fire, zone fire models or external members: 35
where:
Q is the rate of heat release of the fire [W],
D is the characteristic length of the fire, for example its diameter for a circular shape [m].
It has to be noted that, for low values of the heat release and large values of the characteristic length, this equation can yield negative flame heights. This is of course not physically possible.
It simply indicates that a single flaming area of diameter D has broken down into several smaller separate zones and the equation should be applied individually for each zone (Heskestad 1995). If the
flame length from the fire source does not reach the ceiling, only the temperature evolution along the flame length is given in Eurocode 1. The position of the virtual origin, z0 in meters, is first calculated according to Equation 3.16. The values of z0 are negative because the virtual origin is lower than the fire source.
z0 = 0.00524 Q^0.4 − 1.02 D  (3.16)
The evolution of the temperature in the plume in °C along the vertical axis of the flame is given by Equation 3.17.
θ = 20 + 0.25 Qc^(2/3) (z − z0)^(−5/3) ≤ 900  (3.17)
where Qc is the convective part of the rate of heat release Q, with Qc = 0.8Q by default. It is then the task of the user to make his own hypotheses to calculate the heat flux emitted by the flame
that reaches the surface of the structural member. In order to estimate this flux, hypotheses have to be made on the shape of the flame, for example cylindrical, and on the temperature distribution
in the horizontal plane of the flame, for example constant temperature. The view factors from different surfaces of the flame to the member can then be evaluated according to Annex G of Eurocode 1
and the flux finally calculated. Although this is not explicitly stated in the Eurocode, Annex B on thermal actions for external members could be of good guidance here, for example for estimating the
emissivity of the flame. Figure 3.2 shows the flame length (Lf ) as a function of the diameter (D) of a cylindrical fire source for different rate of heat release densities RHRf . It shows that for
densities of 250 kW/m2 , recommended for example in dwellings, hospital and hotel rooms, offices, classrooms, but also shopping centres and public spaces in transport buildings, the flame height
never exceeds 2.07 meters (obtained for D = 8 m) and thus never reaches the ceiling. The temperature in the plume at the level of the ceiling will thus always be lower than the temperature at the tip
of the flame, i.e. lower than 520°C. This is clearly not realistic and this model should not be applied for such a low heat release rate density. For densities of 500 kW/m2, recommended in libraries and theatres (cinemas), longer flame lengths are calculated and the flame may eventually reach the ceiling, in which case the model described below in Section 3.1.3 can be applied.
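The claims in this paragraph, a flame height of about 2.07 m for D = 8 m at 250 kW/m2 and a flame-tip temperature of about 520 °C, can be checked with a direct transcription of Equations 3.15 to 3.17; the function names are ours:

```python
import math

def flame_length(Q, D):
    """Heskestad flame length Lf [m], Eq. 3.15 (Q in W, D in m)."""
    return 0.0148 * Q ** 0.4 - 1.02 * D

def virtual_origin(Q, D):
    """Position of the virtual origin z0 [m], Eq. 3.16 (negative: below the source)."""
    return 0.00524 * Q ** 0.4 - 1.02 * D

def plume_temperature(z, Q, D, conv_fraction=0.8):
    """Plume centreline temperature [°C] at height z [m], Eq. 3.17, capped at
    900 °C. Qc = 0.8 Q by default, as stated in the text."""
    Qc = conv_fraction * Q
    return min(900.0, 20 + 0.25 * Qc ** (2 / 3) * (z - virtual_origin(Q, D)) ** (-5 / 3))

# RHRf = 250 kW/m2 on a circular source with D = 8 m:
D = 8.0
Q = 250e3 * math.pi * D ** 2 / 4     # rate of heat release [W]
Lf = flame_length(Q, D)              # ≈ 2.07 m, as stated above
tip = plume_temperature(Lf, Q, D)    # ≈ 520 °C at the flame tip
```

Evaluating the plume temperature at z = Lf indeed returns approximately 520 °C regardless of Q and D, which is the constant flame-tip temperature mentioned in the note on Figure 3.2.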
Fig. 3.2 Flame length as a function of the diameter of the fire, for RHRf = 250, 500, 1000 and 2000 kW/m2
Note: The curves in Figure 3.2 are not continued beyond the limits of application mentioned in Eurocode 1 for the models specified in Sections 3.1.2 and 3.1.3, i.e. D = 10 m and Q = 50 MW. In fact,
as long as the fire is not severe enough to produce flames that impact the ceiling, the threat posed by the fire to the structure supporting the ceiling is not very severe. It can indeed be
calculated that with this model the temperature at the tip of the flame is always equal to 520◦ C. Consequently, the evaluation of the effect of the local fire on the structure of the ceiling at this
stage is usually not performed and this effect is neglected. Cases when consideration should be given to the situation of a localised fire not impacting the ceiling comprise, for example, the heat
flux from a fire to a column that is engulfed in the fire in a compartment with a high floor to ceiling distance. It can be assumed that the column is located at the centre line of the flame and
Equation 3.17 can then be used to estimate the temperature of the flame surrounding the column. Figure 3.3 shows the evolution of the temperature in the centreline of the flame as a function of the
height, for different diameters of the cylindrical fire source, considering a rate of heat release density RHRf of 500 kW/m2. The horizontal line at the level of 520°C corresponds to the tip of the flame.
3.1.3 Eurocode localised fire, flame impacting the ceiling
In case of a localised fire with the flame tip impinging on the ceiling, the total heat flux received by the structure at the level of the ceiling is given as a function of geometrical parameters and
of the size and rate of heat release of the fire. The model given in the Eurocode is based on experimental tests made in Japan (Hasemi et al. 1984, Ptchelintsev et al. 1995, Hasemi et al. 1995,
Fig. 3.3 Evolution of the temperature for RHRf = 500 kW/m2 (centreline temperature in °C versus height in m, for diameters D from 1 to 10 m)
Fig. 3.4 Localised fire, flame impacting the ceiling (flame axis, horizontal flame length LH, radial distance r, rate of heat release Q, fire diameter D)
Wakamatsu et al. 1996). These tests were small scale tests, made in steady state conditions. The flame from a gas burner was impinging on a panel placed above the burner. The air flow was undisturbed in the sense that no vertical panels were placed laterally to simulate the effect of possible compartment walls, see Figure 3.4. The original equations of Hasemi were slightly modified within the European
research “Development of design rules for steel structures subjected to natural fires in large compartments’’ (Schleich et al. 1999). It has been demonstrated (Franssen et al. 1998, and Schleich et
al. 1999) that this model yields acceptable results in transient situations within real compartments for heat release rates up to 50 MW. Experimental validations of Hasemi’s model on full scale tests
can also be found in Wakamatsu et al. (2002).
Designing Steel Structures for Fire Safety
The non-dimensional square root of the Froude number Q∗D is calculated according to Equation 3.18.

Q∗D = Q/(1.11 × 10⁶ D^2.5)    (3.18)
This number is big for fires in which the velocity of the gas is high compared to the effects of buoyancy, such as in jet flames or gas burners. If the velocity of the premixed gas in the burner is
high enough, the length of the flame is virtually independent from the direction of the flame relative to the direction of gravity. This number is small for fires in which the velocity is low
compared to the effects of buoyancy such as in pool fires. The vertical position of the virtual source z in meters is calculated according to Equation 3.19, first proposed by Hasemi and Togunaga,
1984. Contrary to Equation 3.16, which is used to calculate basically the same physical quantity, Equation 3.19 has been arranged in such a way as to yield positive values when the virtual source is
located under the fire source.

z = 2.4 D (Q∗D^(2/5) − Q∗D^(2/3))    when Q∗D < 1.0    (3.19)
z = 2.4 D (1.0 − Q∗D^(2/5))    when Q∗D ≥ 1.0
Another non-dimensional square root of the Froude number, Q∗H, is calculated on the basis of the vertical distance H between the fire source and the ceiling according to Equation 3.20.

Q∗H = Q/(1.11 × 10⁶ H^2.5)    (3.20)
The length of the flame, H + Lh, is calculated according to Equation 3.21, with Lh as the horizontal flame length.

H + Lh = 2.9 H (Q∗H)^0.33    (3.21)
The non-dimensional ratio y is calculated according to Equation 3.22. This is the ratio between the distance from the virtual source to the point along the ceiling where the flux is calculated, on the one hand, and the distance from the virtual source to the tip of the flame, on the other hand.

y = (z + H + r)/(z + H + Lh)    (3.22)
where r is the horizontal distance from the vertical axis of the fire to the point under the ceiling where the flux is calculated.
The heat flux received by the structure at the level of the ceiling, h˙ in W/m², is given by Equation 3.23.

h˙ = 100 000    when y ≤ 0.30
h˙ = 136 300 − 121 000 y    when 0.30 < y < 1.0    (3.23)
h˙ = 15 000 y^(−3.7)    when 1.0 ≤ y
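Equation 3.23 is piecewise in y; a minimal Python sketch (the function name is ours, not from the Eurocode):

```python
# Sketch of Equation 3.23: heat flux (W/m2) received at the level of the
# ceiling as a function of the non-dimensional ratio y of Equation 3.22.
def ceiling_flux(y):
    if y <= 0.30:
        return 100_000.0
    if y < 1.0:
        return 136_300.0 - 121_000.0 * y
    return 15_000.0 * y ** -3.7
```

Note the small step at y = 1.0 (15 300 versus 15 000 W/m²) inherent to the expressions as given.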
The net heat flux is the difference between the flux received by the member and the heat energy lost by the member to the environment by convection and radiation, see Equation 3.24.

h˙net = h˙ − αc (θm,t − 293) − εm σ (θm,t⁴ − 293⁴)    (3.24)
In case of several separate localised fire sources, each source is supposed to generate a heat flux calculated according to Equation 3.23 and the contributions are added, but the sum of all
contributions should not exceed 100 kW/m2 . No experimental evidence can justify this approximation, but it is believed to be on the conservative side because, in reality, the ceiling jets from each
fire source cannot be added and may even counteract each other. For a structural element that is located between two sources, for example, it may happen that the mass flows coming from each source
more or less annihilate each other. Although this is not physically correct, this model is sometimes used in order to evaluate the heat flux from a localised fire to horizontal beams that are not
located directly under the ceiling but at some distance below the ceiling. This is the case, for example, for the lower members of a steel truss supporting the ceiling. The model is used simply
replacing the vertical distance between the fire source and the ceiling by the vertical distance between the fire source and the member. It is generally accepted that this utilisation of the model
yields results that are on the conservative side. Because only the heat flux at the level of the ceiling is given, this model cannot be used as such for evaluating the effect of the fire on a column.
Users must rely on the literature for evaluating the effects on the columns of a localised fire that impacts the ceiling, see for example Kamikawa et al. 2001.

3.1.4 CFD models in the Eurocode
Eurocode 1 allows the utilisation of CFD (Computational Fluid Dynamics) models. Although prEN 1991-1-2 states under Clause 3.3.2 (2) that a method is given in Annex D for the calculation of thermal
actions in case of CFD models, this annex simply gives general principles that form the base of the method and must be respected when establishing a software that allows application of this method in
order to estimate the temperature field in the compartment. No guidance is provided on the manner to deduce the heat flux on the surface of the structural elements from the temperatures calculated in
the compartment by the CFD model. In fact, this topic is still nowadays a subject of ongoing research activities and it is probably premature to lay out recommendations in a Code. The Eurocode opens the door for application of CFD models in fire safety engineering but, at the moment, this is not yet standard practice and can be done only by very experienced users. This is probably why the
Eurocode has foreseen that the
Fig. 3.5 Comparison between ISO 834 and ASTM E119 fire curves
National Annex in each country may specify the procedure for calculating the heating conditions from advanced fire models.

3.1.5 North American time–temperature relationships
North American standards rely on large scale fire resistance tests to assess the fire performance of building materials and structural elements. The time temperature curve used in fire resistance
tests is called the standard fire. Full size tests are preferred over small scale tests because they allow the method of construction to be assessed, including the effects of thermal expansion,
shrinkage, local damage and deformation under load. In the US, standard fire resistance tests are carried out as per the specifications in ASTM E119 (ASTM 2002), NFPA 251 (NFPA 1999) and UL 263 (UL 2003).
The standard time temperature curves from ASTM E119 and ISO 834 are compared in Figure 3.5. They are seen to be rather similar. All other international fire resistance test standards specify similar
time temperature curves. The development of a standard for characterizing fire exposure scenarios is currently underway in the United States. SFPE has recently published an engineering guide (SFPE 2004)
which can be used to develop design fire scenarios for various compartment types. Other national standards include British Standard BS 476 Parts 20-23 (BSI 1987), Canadian Standard CAN/ULC-S101-04
(ULC 2004) and Australian Standard AS 1530 Part 4 (SAA 1990). The ASTM E119 curve is defined by a number of discrete points, which are shown in Table 3.2, along with the corresponding ISO 834
temperatures. Several equations approximating the ASTM E119 curve are given by Lie (1995), the simplest of which gives the temperature T (°C) as

T = 750 [1 − e^(−3.79553 √th)] + 170.41 √th + T0    (3.25)

where th is the time (hours) and T0 is the initial temperature (°C).
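As a quick sketch, Equation 3.25 can be evaluated as follows (taking T0 = 20 °C by default is our assumption; the text does not fix it):

```python
import math

# Sketch of Equation 3.25 (Lie 1995): approximation of the ASTM E119
# standard fire curve. t_hours is the time in hours, T0 the initial
# temperature in deg C (default 20 is an assumption of this sketch).
def astm_e119_temp(t_hours, T0=20.0):
    s = math.sqrt(t_hours)
    return 750.0 * (1.0 - math.exp(-3.79553 * s)) + 170.41 * s + T0
```

At one hour this returns roughly 920 °C, close to the tabulated ASTM E119 value.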
Table 3.2 ASTM E119 and ISO 834 time–temperature curves
Time (min) | ASTM E119 Temperature (°C) | ISO 834 Temperature (°C)
3.2 Specific considerations

3.2.1 Heat flux to protected steelwork
The procedures that have to be used to calculate the temperature in the steel members will be given in Chapter 4. For undertaking such temperature calculations, one particular question that has to be
addressed is the nature of the boundary conditions, especially for protected sections. Indeed, for unprotected sections, the heat flux introduced into the section appears directly in the equation used for
evaluating the temperature in the section, see Equation 4.1. This flux is easily calculated according to Equation 3.14 if the fire is represented by a temperature-time curve. In the case of a
localised fire impinging the ceiling (see Section 3.1.3), the effect of the fire is directly given as a net heat flux (see Equation 3.24), and this can also be used directly for calculating the steel
temperature. For localised fires not impinging the ceiling (see Section 3.1.2), it is also possible to calculate the net heat flux, for example from Equation 3.14 if the fire is surrounding a column
and the temperature of the hot gases is given by Equation 3.17. For protected steelworks on the other hand, the equation used for calculating the temperature is based on the gas temperature, see
Equation 4.7. This equation can be used as such in the cases where the fire is represented by a gas temperature, see sections 3.1.1 and 3.1.2. Equation 4.7 cannot yet be applied directly if the
effect of the fire is given as an impinging flux, see section 3.1.3. A procedure has to be established to transform the impinging heat flux into an equivalent gas temperature. This procedure is based
on the assumption (also made in Eq. 4.7) that the surface temperature of the protection, θm,t , is equal to the gas temperature, θg,t . If the fire is represented by a temperature–time curve and the
boundary condition is taken as expressed by Equation 3.14, then the net heat flux is obviously equal to 0 when θm,t = θg,t. If the same condition h˙net = 0 is imposed in Equation 3.24, it is then possible to derive the equivalent temperature of the gas leading to the situation where the flux received by the surface is exactly equal to the flux reemitted by that surface. This equivalent temperature of the gas is given by the solution of Equation 3.26.

h˙ = αc (θg,t − 293) + εm σ (θg,t⁴ − 293⁴)    (3.26)
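Equation 3.26 has no closed-form solution for θg,t, but its left-hand side is monotonic in θg,t, so it can be inverted numerically. A bisection sketch, with temperatures in kelvin and a helper name of our own:

```python
# Invert Equation 3.26 by bisection: equivalent gas temperature (K) from an
# impinging flux h (W/m2). alpha_c = 35 W/m2K and eps_m = 0.8 follow the
# values used in the text for Figure 3.6.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m2K4

def equivalent_gas_temp(h, alpha_c=35.0, eps_m=0.8):
    def residual(theta):
        return alpha_c * (theta - 293.0) + eps_m * SIGMA * (theta**4 - 293.0**4) - h
    lo, hi = 293.0, 2500.0
    for _ in range(100):          # bisection on a monotonic function
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

At h = 100 kW/m² this returns about 1120 K, i.e. the 847 °C ceiling mentioned below.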
Fig. 3.6 Evolution of the equivalent gas temperature as a function of the impinging flux
Figure 3.6 shows the relation between the equivalent gas temperature and the heat flux given by the local model if the values αc = 35 W/m2 K and εm = 0.8 are introduced in Equation 3.26. It has to be
underlined that, because the impinging heat flux given by the local model has a maximum value of 100 kW/m2 , whatever the geometrical conditions and whatever the power released by the fire, the
equivalent gas temperature cannot be higher than 847◦ C and, hence, the steel temperature of a steel member heated by this local model cannot reach temperatures higher than 847◦ C. The structure of
Equation 3.26 also shows that higher values of coefficients of heat transfer, αc and εm , lead to lower values of the steel temperature because, for a given impinging flux, the reemitted flux will be
greater!

3.2.2 Combining different models
When the fire is localised, a 2-zone model will give as a result, amongst other things, the evolution of the temperature in the hot zone. This temperature must be regarded as the average temperature
that one could observe or measure in the hot zone during a test or a real fire. In addition to that, it must be considered that a more severe thermal attack is imposed on structural elements that are
located in the near vicinity of the fire source. For example, if one car is burning in a car park, stratification will indeed be observed and a 2-zone model allows predicting the temperature in the
hot zone, at least at a reasonable distance from the burning car. Just above the car, the situation is certainly very different, with a much more severe thermal attack by direct radiation from the flames
produced by the burning car. This effect is evaluated by the localised fire model described in Annex C of EN 1991-1-2, see for example Section 3.1.3 here above. The Eurocode recommends that the
combination of the results obtained by the 2-zone model and by the localised fire model may be considered, in order to get a more accurate temperature distribution along the members. According to the
Eurocode, the
combination simply consists of taking, at each location and at any time, the maximum of the effects given by the two models. It is the opinion of the authors of this book that the combination should
generally be made. If, for example, the fire source is located under a simply supported truss girder at, say, ¼ of the span, it is not possible to predetermine which member of the truss will fail
first, either a bar just above the fire source, possibly loaded to a lower level but submitted to a more severe attack from the fire, or a bar at mid span of the truss, presumably loaded to a higher
level but heated by the average temperature of the hot zone, i.e. by a less severe thermal attack. When flash-over occurs and the situation in the compartment turns into a one-zone situation, there
is no localised fire source anymore and, thus, no need to combine. As a consequence, it may happen that a structural member located in the vicinity of the source is submitted to a less severe thermal
attack immediately after the flash-over than immediately before the flash-over. Usually, because the flash-over is accompanied by a sudden increase in the rate of heat release of the fire, the
duration of this apparent bias is very short and its effect can hardly be seen on the temperature curve in the steel member.
3.3 Examples

3.3.1 Localised fire
A car is burning with a heat release rate of 5 MW in a car park with a floor to ceiling distance of 2.80 m. What is the flux received by a steel element located under the ceiling at a horizontal distance of 5 meters from the centre of the car?

Characteristic length of the fire source: the fire source is assumed to be rectangular with a surface area equal to 10 m² (2 × 5), which means an equivalent diameter D of 3.57 m.
Length of the flame: Lf = 0.0148 (5 × 10⁶)^0.4 − 1.02 × 3.57 = 3.44 m. The flame is impinging on the ceiling.
Non-dimensional Froude number: Q∗D = 5 × 10⁶/(1.11 × 10⁶ × 3.57^2.5) = 0.187
Position of the virtual source: z = 2.4 × 3.57 (0.187^0.4 − 0.187^0.67) = 1.58 m
Non-dimensional Froude number: Q∗H = 5 × 10⁶/(1.11 × 10⁶ × 2.30^2.5) = 0.561. Note that the fire source is supposed to be 0.50 m above the ground, hence the vertical distance from the fire source to the ceiling, H, is equal to 2.30 m.
Length of the flame: H + Lh = 2.9 × 2.30 × 0.561^0.33 = 5.50 m
Non-dimensional ratio: y = (1.58 + 2.30 + 5.00)/(1.58 + 5.50) = 8.88/7.08 = 1.25
Impinging flux: h˙ = 15 000/1.25^3.7 = 6491 W/m² (the value 6491 follows from the ratio y = 8.88/7.08 before rounding). According to Figure 3.6, the temperature of a steel member submitted to such a flux cannot reach a temperature higher than 167°C.
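The example can be reproduced numerically; the script below is a sketch with variable names of our own, and small deviations from the hand calculation (about 6530 W/m² here versus 6491 W/m² in the text) come from rounding of intermediate values in the hand calculation.

```python
import math

# Reproduces Example 3.3.1 step by step using Equations 3.18 to 3.23.
Q = 5.0e6                            # rate of heat release, W
D = math.sqrt(4 * 10.0 / math.pi)    # equivalent diameter of a 10 m2 source, m
H = 2.30                             # vertical distance, fire source to ceiling, m
r = 5.00                             # horizontal distance to the point studied, m

Lf = 0.0148 * Q**0.4 - 1.02 * D      # flame length formula used in the example
QD = Q / (1.11e6 * D**2.5)           # Eq. 3.18
z = 2.4 * D * (QD**0.4 - QD**(2/3))  # Eq. 3.19 (branch for QD < 1)
QH = Q / (1.11e6 * H**2.5)           # Eq. 3.20
H_Lh = 2.9 * H * QH**0.33            # Eq. 3.21: H + Lh
y = (z + H + r) / (z + H_Lh)         # Eq. 3.22
flux = 15_000.0 * y**-3.7            # Eq. 3.23, here y > 1
```

Since Lf exceeds H, the script confirms that the flame is indeed impinging on the ceiling.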
3.3.2 Parametric fire – ventilation controlled
A fire compartment is rectangular in plan, of size 3 m by 6 m. The floor to ceiling distance is 2.5 m. The design fire load is 750 MJ/m² and the rate of fire growth in the compartment is "slow''. The compartment partitions are made of normal weight concrete, c = 1100 J/kgK, ρ = 2300 kg/m³, λ = 1.2 W/mK. One window, 2 meters wide and 1 meter high, and one door, 1 meter wide and 2.1 meters high, open in the walls. Calculate the parametric fire curve.

Wall factor b: b = (1100 J/kgK × 2300 kg/m³ × 1.2 W/mK)^0.5 = 1742 J/m²s^0.5K
Total area of vertical openings: Av = 2 m × 1 m + 1 m × 2.1 m = 4.1 m²
Total area of enclosures: At = 2 (3 m × 6 m + 3 m × 2.5 m + 6 m × 2.5 m) = 81 m²
Weighted average of window heights: heq = (2 m² × 1 m + 2.1 m² × 2.1 m)/4.1 m² = 1.56 m
Opening factor: O = 4.1 × 1.56^0.5/81 = 0.0633 m^0.5
Factor Γ = (0.0633/0.04)²/(1742/1160)² = 1.11
Fire load density: qt,d = 750 × 18/81 = 167 MJ/m²
Shortest possible duration of the heating phase: tlim = 5/12 = 0.417 h (25 min.)
Duration of the heating phase: tmax = 0.2 × 10⁻³ × 167/0.0633 = 0.528 h (31 min. 41 s)
The fire is ventilation controlled because tlim < tmax.
Temperature during the heating phase can be computed using Equation 3.8. For example:
• At t = 30 min., i.e. t = 0.5 h, t∗ = 1.11 × 0.5 = 0.555, θg = 856°C.
• At t = tmax, t∗max = 1.11 × 0.528 = 0.586, θmax = 863°C.
Fig. 3.7 Temperature–time curve according to the parametric model (ventilation controlled and fuel controlled curves)
Temperature during the cooling phase can be computed using Equation 3.9b. For example:
• At t = 1.0 h, t∗ = 1.11, θg = 863 − 250 (3 − 0.586)(1.11 − 0.586) = 547°C.
• At t = 1.787 h, t∗ = 1.984, θg = 863 − 250 (3 − 0.586)(1.984 − 0.586) = 20°C.
The complete temperature–time curve is plotted as a continuous line on Figure 3.7.

3.3.3 Parametric fire – fuel controlled
How does the temperature–time curve get modified if the width of the window is increased to 3.4 meters? The parameters and factors that are modified are:

Total area of vertical openings: Av = 3.4 m × 1 m + 1 m × 2.1 m = 5.5 m²
Weighted average of window heights: heq = (3.4 m² × 1 m + 2.1 m² × 2.1 m)/5.5 m² = 1.42 m
Opening factor: O = 5.5 × 1.42^0.5/81 = 0.0809 m^0.5
Factor Γ = (0.0809/0.04)²/(1742/1160)² = 1.814
Duration of the heating phase: tmax = 0.2 × 10⁻³ × 167/0.0809 = 0.413 h (24 min. 46 s), t∗max = 1.814 × 0.413 = 0.749
The fire is fuel controlled because tmax ≤ tlim.
Modified opening factor: Olim = 0.1 × 10⁻³ × 167/0.417 = 0.04005
Modified factor Γlim = (0.04005/0.04)²/(1742/1160)² = 0.444
The temperature during the heating phase can be computed using Equation 3.8 in which t∗ = Γlim t. For example:
• At t = 20 min., i.e. t = 0.333 h, t∗ = 0.444 × 0.333 = 0.148, θg = 680°C.
• At t = tlim, t∗ = 0.444 × 0.417 = 0.185, θmax = 715°C.
The temperature during the cooling phase can be computed using Equation 3.13b. For example:
• At t = 1.0 h, t∗ = 1.814 × 1.0 = 1.814, θg = 715 − 250 (3 − 0.749)(1.814 − 1.814 × 0.417) = 120°C.
• At t = 1.1 h, t∗ = 1.814 × 1.10 = 1.991, θg = 715 − 250 (3 − 0.749)(1.991 − 1.814 × 0.417) = 20°C.
The complete temperature–time curve is now plotted on Figure 3.7 as a dotted line.
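The two branches used in these examples can be sketched in Python. We assume here that Equation 3.8 is the standard EN 1991-1-2 parametric heating expression θg = 20 + 1325(1 − 0.324 e^(−0.2t∗) − 0.204 e^(−1.7t∗) − 0.472 e^(−19t∗)), which reproduces the numbers of the ventilation controlled example; the cooling branch is the linear form used above for 0.5 < t∗max < 2. Function names are ours.

```python
import math

# Heating phase (Eq. 3.8, EN 1991-1-2 parametric expression assumed) and
# the linear cooling branch used in the examples for 0.5 < t*max < 2.
def heating_temp(t_star):
    """Gas temperature (deg C); t_star is the expanded time in hours."""
    return 20.0 + 1325.0 * (1.0 - 0.324 * math.exp(-0.2 * t_star)
                                - 0.204 * math.exp(-1.7 * t_star)
                                - 0.472 * math.exp(-19.0 * t_star))

def cooling_temp(t_star, t_star_max, theta_max):
    """Linear cooling branch, valid while the result stays above ambient."""
    return theta_max - 250.0 * (3.0 - t_star_max) * (t_star - t_star_max)
```

For the ventilation controlled example, heating_temp(0.555) gives about 856 °C and cooling_temp(1.11, 0.586, 863) about 547 °C, matching the values above.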
Chapter 4
Temperature in Steel Sections
4.1 General

There are three main steps in the fire resistance analysis of steel structures. The first step is determining the fire temperature resulting from a given fire exposure scenario. This can
be carried out using the methodologies described in Chapter 3. The second step is to establish the temperature history in the steel structure, resulting from fire temperature. The second step forms
the basis for undertaking the structural analysis which is the third step of fire resistance calculation. The accuracy of calculations in the second stage is critical for obtaining realistic fire
resistance predictions. In fact in many cases an estimate of fire resistance can be obtained based on predicted steel temperatures alone through the use of critical temperature failure criterion.
Establishing temperature history in the steel structure typically involves heat transfer analysis. For this simple or advanced calculation models described in later sections can be used. The detailed
steps for heat transfer calculations are contained in various manuals and guides. The Eurocode provides different approaches for evaluating temperatures in steel sections. The complexity of these methods, discussed later in the chapter, varies with the type of analysis and is related to the accuracy of the predicted temperatures. The AISC Steel Design Manual (AISC 2005) does not contain equations for such calculations. However, the ASCE manual (ASCE 1992) details the procedure for temperature calculations under various exposure types (2, 3 or 4 sided) and boundary conditions. One of the most important inputs for the heat transfer analysis is the high temperature thermal properties of steel and insulation materials. The high temperature properties of steel as per North American and Eurocode practices are
listed in Annex I. In addition, the room temperature thermal properties of common fire protection material are provided in Annex I. For simple calculation methods room temperature thermal properties
may be sufficient to estimate the temperatures in steel structures.
4.2 Unprotected internal steelwork

4.2.1 Principles
If the temperature distribution in the cross section is supposed to be uniform, the temperature increase during a time increment is given by Equation 4.1.

Δθa,t = ksh (Am/V)/(ca ρa) h˙net Δt    (4.1)

where:
Δθa,t is the steel temperature increase from time t to time t + Δt,
ksh is the correction factor for the shadow effect, see below,
Am is the surface area of the member per unit length,
V is the volume of the member per unit length,
ca is the specific heat of steel,
ρa is the unit mass of steel,
h˙net is the design value of the net heat flux per unit area,
Δt is the time interval.
Equation 4.1 is better understood if transformed into the form of Equation 4.2, which shows that it is just the expression of the conservation of energy between the quantity that penetrates into the section and the quantity used to modify the temperature, and hence the enthalpy, of the section.

h˙net ksh Am Δt = Δθa,t ca ρa V    (4.2)
In Equation 4.1, the ratio between the surface area of the member and the volume of the member, Am /V, is the parameter characterising the cross section of the member that governs its heating. It is
referred to in Eurocode 3 as the section factor. The higher the value of this factor, the thinner the section and, hence, the faster the heating of the section. Figure 4.1 hereafter, taken from
Eurocode 3, shows how this parameter is calculated for different section configurations. In fact, the term “section factor’’ is not meaningful because it contains no information about the physical
characteristic that this factor represents. This parameter is sometimes referred to as “the massivity factor’’ which indicates at least what this factor is about but the problem nevertheless remains
that this quantity is the highest for the most slender and less massive sections. In this text, the term "thermal massivity'' will be introduced to designate the inverse of the massivity factor. This
thermal massivity indicates what physical effect it is related to, with the advantage that this quantity presents the highest values for the stockiest sections. Table 4.1 indicates how each factor is
related to the thickness t of a steel plate, either if this steel plate is used in an open section or if it is the wall of a steel tube. The specific heat of steel ca that is present in Equation 4.1
is given as a simple function of the steel temperature in EN 1993-1-2, see Equation I.1 in Annex I. h˙ net is calculated as mentioned in Chapter 3. ksh , the correction factor for the shadow effect,
stems from the fact that, at least in a furnace test, the steel section is mainly heated by the radiation that originates from the walls of the furnace and from the flames of the burners. In that
case, there cannot be more energy reaching the surface of the member than the energy travelling through the smallest box surrounding the section (Wickström 2001). This can be seen from Figure 4.2
that shows the difference between the surface perimeter (full line) and the box perimeter (dotted line) for an I-section and for an angle section. Strictly speaking, the correction taking the shadow
effect into account should apply only to the radiative part of the heat flux whereas, because it directly multiplies the totality of the heat flux in Equation 4.1, it applies also to the convective
part of the flux.
Fig. 4.1 Section factor in unprotected steel sections:
– open section exposed to fire on all sides: Am/V = perimeter/cross-section area
– open section exposed to fire on three sides: Am/V = surface exposed to fire/cross-section area
– hollow section (or welded box section of uniform thickness t) exposed to fire on all sides, with t ≪ b: Am/V ≈ 1/t
– I-section flange exposed to fire on three sides, with t ≪ b: Am/V ≈ 1/tf
– I-section with box reinforcement, exposed to fire on all sides: Am/V = box perimeter/cross-section area
Table 4.1 Section factor and thermal massivity values for different sections

Term | Section factor (Massivity factor) | Thermal massivity
Equation | Am/V | V/Am
Unit | m⁻¹ | m
Value for an open section | 2/t | t/2
Value for a tube | 1/t | t
Fig. 4.2 Surface area and box area
This approximation is justified by the fact that, for temperatures normally encountered in a fire, radiation is the dominant heat transfer mode to the section. Consequently, ksh equals unity for
cross sections with a convex shape, e.g. rectangular and circular hollow sections, in which the box area equals the surface area. Generally speaking, ksh is given by Equation 4.3.

ksh = [Am/V]b/[Am/V] = Am,b/Am    (4.3)
For the specific case of I-sections under nominal fire action, for example the ISO 834 or the hydrocarbon temperature–time curves, ksh is given by Equation 4.4.

ksh = 0.9 [Am/V]b/[Am/V] = 0.9 Am,b/Am    (4.4)
Practically speaking, it is easy to use Equation 4.5 instead of Equation 4.1.

Δθa,t = (A∗m/V)/(ca ρa) h˙net Δt    (4.5)
where A∗m /V is either based on Am , Am,b or 0.9 Am,b depending on the situation. This simplifies, for example, the utilisation of graphical design aids. As such, Equation 4.5 (or Eq. 4.1) does not
directly yield the steel temperature at a specific time; it has to be integrated over time. A simple algorithm such as the one given
below can be used to compute steel temperatures. This algorithm is explicit, which means that the temperature increase during a time step is calculated as a function of the values of all variables at
the beginning of the time step. In order to ensure stability of the integration process, such an algorithm must be used with rather small time steps, not bigger than 5 seconds according to EN
1993-1-2. Other more refined algorithms can be written based on an implicit integration that allows using bigger time steps but, with modern computers, it is not a problem to use short time steps and
the time required and the precision obtained with the explicit algorithm are normally satisfactory.

Data: Rho=7850; time=0; dtime=2; Tsteel=20; TimePrint=60
Data: h=25; eps=0.7
Read AmV_effective, FinalTime
Print AmV_effective
Do while (time < FinalTime)
  Call Sub_Csteel(Tsteel, Csteel)
  Call Sub_FireTemp(time, Tfire)
  Call Sub_Flux(h, eps, Tsteel, Tfire, hnet)
  Tsteel = Tsteel + (AmV_effective*hnet*dtime)/(Csteel*Rho)
  time = time + dtime
  If ( modulo(time, TimePrint) = 0 ) Print time, Tfire, Tsteel
EndDo

where:
Sub_Csteel is a subroutine or a function that returns the value of the specific heat of steel as a function of the steel temperature, see Equation I.2 in Annex I.
Sub_FireTemp is a subroutine that returns the value of the gas temperature as a function of time, see Equations 3.1, 3.2 and 3.3 for nominal fires.
Sub_Flux is a subroutine that returns the value of the design value of the net heat flux as a function of the coefficient of convection, the emissivity, the gas temperature and the steel temperature, see Equation 3.14.
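The explicit algorithm can be transcribed in Python. The sketch below makes two simplifying assumptions of our own: a constant specific heat of 600 J/kgK instead of the temperature dependent law of Annex I, and the ISO 834 curve θg = 20 + 345 log10(8t + 1), with t in minutes, as the fire temperature.

```python
import math

# Explicit (Euler) integration of Equation 4.5 for an unprotected section.
# Constant specific heat and ISO 834 fire are simplifying assumptions.
RHO, SIGMA = 7850.0, 5.67e-8

def iso834(t_sec):
    """ISO 834 gas temperature (deg C), time in seconds."""
    return 20.0 + 345.0 * math.log10(8.0 * t_sec / 60.0 + 1.0)

def net_flux(h, eps, t_steel, t_fire):
    """Net heat flux (W/m2), convection plus radiation, Eq. 3.14 form."""
    return (h * (t_fire - t_steel)
            + eps * SIGMA * ((t_fire + 273.0)**4 - (t_steel + 273.0)**4))

def steel_temp(amv_eff, final_time, dt=2.0, h=25.0, eps=0.7, c_a=600.0):
    """Steel temperature (deg C) after final_time seconds of ISO 834 fire."""
    t_steel, time = 20.0, 0.0
    while time < final_time:
        hnet = net_flux(h, eps, t_steel, iso834(time))
        t_steel += amv_eff * hnet * dt / (c_a * RHO)
        time += dt
    return t_steel
```

For an effective section factor of 200 m⁻¹ this gives a steel temperature well above 700 °C after 30 minutes, consistent with the remark about Figure I.4 below.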
For a defined fire, it is quite convenient to do the integration of Equation 4.5 once for various values of the effective section factor A∗m /V and to build design aids in the form of tables or
graphs. For example, Table I.1 and Figures I.3 and I.4 presented in Annex I have been built for the ISO 834 standard fire. The explicit integration scheme has been used with a time step of 1 second.
The S shape of the curves that can be observed on Figure I.4 for temperatures around 735◦ C results from the peak in the specific heat of steel for this temperature, see Equation I.2. This Figure
shows that, except for very massive sections, the steel temperature is higher than 700◦ C after 30 minutes. The temperatures obtained after 60 minutes are so high that it is impossible for an
unprotected steel structure to have a fire resistance of one hour under standard fire exposure. Figure I.4 shows the evolution of the temperature obtained after a given time as a function of the
massivity factor. It has been suggested in some textbooks that a good
means for obtaining higher fire resistance times would be to select steel sections with lower massivity factors, because the temperature increase is slower in sections that are more massive. This Figure
shows that, for fire resistance times of 20 minutes and more, the steel temperature is hardly decreased if the massivity factor cannot be reduced below 200 m−1 . A significant reduction of the
temperature requires a reduction of the massivity factor to values lower than 100 m−1 . Experience shows that, in reality, it is always more efficient to select a section that has higher mechanical
properties, a higher plastic modulus for example in a section under bending, than to increase the thermal massivity.
4.2.2.1 Rectangular hollow core section

What is the effective section factor of a 200 × 300 × 10 rectangular hollow core section exposed to the fire on four sides? In this convex section, the box perimeter is equal to the surface perimeter and the section factor is based on the surface perimeter.
Am,b = 2 (200 + 300) = 1000 mm = Am
V = 200 × 300 − 180 × 280 = 9600 mm²
A∗m/V = 0.104 mm⁻¹ = 104 m⁻¹
Note: V/A∗m = 9.62 mm ≈ t

4.2.2.2 I-section exposed to fire on 4 sides and subjected to a nominal fire

What is the effective section factor of a HE 200 A section heated on four sides and subjected to a nominal fire? The effective section factor of this concave section has to be based on the box value of the perimeter and, because it is subjected to the nominal fire, the factor 0.9 has to be taken into account.
Am,b = 2 (h + b) = 2 (0.190 + 0.200) = 0.780 m
V = 53.8 × 10⁻⁴ m² (from catalogues)
A∗m/V = 0.9 × 0.780/53.8 × 10⁻⁴ = 130 m⁻¹
Note: V/A∗m = 7.66 mm

4.2.2.3 I-section exposed to fire on 3 sides

What is the effective section factor of an IPE 300 section exposed on three sides with a concrete slab on the top of the beam (any fire)? The effective section factor of this concave section has to be based on the box value of the perimeter of the exposed part of the section.
Am,b = 2h + b = 2 × 0.300 + 0.150 = 0.750 m
V = 53.8 × 10⁻⁴ m² (from catalogues)
A∗m/V = 0.750/53.8 × 10⁻⁴ = 139 m⁻¹
Note: V/A∗m = 7.17 mm
The top surface of the steel section has not been taken into account for the evaluation of the boxed perimeter. This is to represent the fact that this surface is not in contact with the fire and that no heat transfer with the hot gas exists there. In
fact, there will be a heat transfer on the top surface but in the direction from the steel section to the concrete slab and this transfer delays somewhat the temperature increase in the section. It
will be explained in Chapter 5 how the fact that this heat sink effect is not taken into account in the thermal analysis is compensated for by a correction factor in the mechanical analysis.
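The three examples above can be checked with a few lines of Python (dimensions in meters; 53.8 cm² is the catalogue cross-section area of both the HE 200 A and the IPE 300):

```python
# Check of the three effective section factor examples; the 0.9 factor
# applies to I-sections under nominal fire action (Eq. 4.4).

# 4.2.2.1: 200 x 300 x 10 rectangular hollow section, convex -> Am,b = Am
A_m = 2 * (0.200 + 0.300)                 # box = surface perimeter, m
V = 0.200 * 0.300 - 0.180 * 0.280         # cross-section area, m2
sf_rhs = A_m / V

# 4.2.2.2: HE 200 A, four sides, nominal fire -> 0.9 x box value
sf_he200a = 0.9 * 2 * (0.190 + 0.200) / 53.8e-4

# 4.2.2.3: IPE 300, three sides (slab on top) -> box value, top face omitted
sf_ipe300 = (2 * 0.300 + 0.150) / 53.8e-4
```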
4.3 Internal steelwork insulated by fire protection material

4.3.1 Principles
If the temperature distribution in the cross section is supposed to be uniform, the temperature increase during a time increment is given by Equation 4.6.

Δθa,t = [λp Ap/V/(dp ca ρa)] [(θg,t − θa,t)/(1 + φ/3)] Δt − (e^(φ/10) − 1) Δθg,t    (4.6)

with φ = (cp ρp/ca ρa) dp Ap/V

where:
λp is the thermal conductivity of the fire protection material,
Ap/V is the section factor for steel members insulated by fire protection material,
Ap is the appropriate area of fire protection material per unit length of the member,
V is the volume of the member per unit length,
θg,t is the ambient gas temperature at time t,
θa,t is the steel temperature at time t,
dp is the thickness of the fire protection material,
ca is the temperature dependent specific heat of steel,
ρa is the unit mass of steel,
Δt is the time interval,
Δθg,t is the increase of ambient gas temperature during the time interval Δt,
cp is the temperature independent specific heat of the fire protection material,
ρp is the unit mass of the fire protection material.
The above equation is derived from the formulation of Wickström (1985) where the governing partial differential equation of the heat transfer inside the insulation layer
Designing Steel Structures for Fire Safety
Fig. 4.3 Temperature in protected steelwork – schematic representation (gas at θg,t, insulation layer of conductivity λp, steel section of volume V)
was solved. Some simplifications of the solution of this 1D equation lead to the exponential correction factor. Strictly speaking, the approximation of the exact solution is valid for small values of
the factor φ. This factor should normally not be higher than 1.5 but this limitation has not been specified in the Eurocode. A comprehensive discussion on various simple equations derived for
evaluating the temperature increase in thermally protected steel members may be found in Wang (2004). The design value of the net heat flux, and hence the coefficients for boundary conditions, do not
appear in Equation 4.6 because the hypothesis behind this equation is that the surface temperature of the thermal insulation is equal to the gas temperature. The thermal resistance between the gas
and the surface of the insulation is neglected. It is assumed that the temperature increase in the section is driven by the difference in temperature between the surface of the insulation, i.e. the
gas temperature, and the steel profile, with only the thickness of the insulation providing a thermal resistance to conduction, see Figure 4.3. Figure 4.4 shows how the section factor for steel
members insulated by fire protection material is calculated under different configurations. Like Equations 4.1 and 4.5, Equation 4.6 has to be integrated over time in order to obtain the evolution of the temperature in the steel section as a function of time. EN 1993-1-2 recommends that the value of the time step Δt should not be taken as more than 30 seconds, a value deemed to ensure convergence even with an explicit method. Higher values could probably be taken into account with implicit methods, but the benefit in terms of CPU time on modern computers would be marginal anyway; it is thus better to adhere to time steps not exceeding 30 seconds. Figure 4.4, taken from Eurocode 3, shows that the section factors for sections insulated by a hollow encasement are based on the dimensions of the
section, h and b, even if the encasement does not touch the section and, in that case, the surface that radiates energy to the steel section is the inside surface of the encasement. This
approximation has been made in order to avoid the introduction of the distance between the section and the encasement as a new parameter in the design process.
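As a rough sketch, the explicit time integration of Equation 4.6 under the ISO 834 standard fire might look as follows in Python. The constant steel specific heat used here is a simplification (EN 1993-1-2 gives a temperature dependent value), and all numerical argument values are placeholders, not recommendations:

```python
import math

def iso834(t):
    """ISO 834 standard fire curve; t in seconds, gas temperature in degC."""
    return 20.0 + 345.0 * math.log10(8.0 * t / 60.0 + 1.0)

def protected_steel_temperature(Ap_V, d_p, lam_p, c_p, rho_p,
                                t_end=3600.0, dt=5.0,
                                c_a=600.0, rho_a=7850.0):
    """Explicit integration of Eq. 4.6 for an insulated section.
    c_a is taken constant here as a simplification."""
    theta_a = 20.0          # steel temperature, degC
    theta_g_prev = 20.0     # gas temperature at previous step
    t = 0.0
    while t < t_end:
        t += dt
        theta_g = iso834(t)
        d_theta_g = theta_g - theta_g_prev
        phi = (c_p * rho_p / (c_a * rho_a)) * d_p * Ap_V
        d_theta_a = (lam_p * Ap_V / (d_p * c_a * rho_a)
                     * (theta_g - theta_a) / (1.0 + phi / 3.0) * dt
                     - (math.exp(phi / 10.0) - 1.0) * d_theta_g)
        # No cooling of the steel while the gas is heating up (see text)
        if d_theta_g > 0.0 and d_theta_a < 0.0:
            d_theta_a = 0.0
        theta_a += d_theta_a
        theta_g_prev = theta_g
    return theta_a
```

For example, a section with Ap/V = 139 m⁻¹ protected by a 20 mm board of assumed conductivity 0.2 W/mK stays well below the gas temperature after one hour of standard fire.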
Fig. 4.4 Section factor, Ap/V, in protected steel sections: contour encasement of uniform thickness (Ap/V = steel perimeter / steel cross-section area); hollow encasement of uniform thickness¹; contour encasement of uniform thickness, exposed to fire on three sides; hollow encasement of uniform thickness, exposed to fire on three sides¹.
¹ The clearance dimensions c1 and c2 should not normally exceed h/4.
This would complicate the design process especially when using design aids such as graphs and tables. The thermal properties of the insulating material that appear in Equation 4.6 must have been
determined experimentally according to ENV 13381-4 (2002). According to this standard, several short unloaded specimens as well as a limited number of loaded specimens, with various massivity factors
and various protection thicknesses, are submitted to the standard fire. The thermal conductivity of the insulating material is back-calculated from the recorded steel temperatures using the inverse of Equation 4.6. The unit mass and the constant specific heat must be provided by the manufacturer of the product (if the specific heat is unknown, a value of 1000 J/kgK is assumed).
It is important to note that the thermal properties of the insulation determined according to ENV 13381-4 are directly applicable only to “I’’ or “H’’ type sections. Some corrections may be required
if the product is to be applied on other section types such as “U’’ or “T’’ sections or rectangular and circular hollow sections. For reactive protection materials such as intumescent paint for
example, additional tests may even be required if the product has to be applied on hollow sections. If the thermal conductivity is considered as temperature dependent in the analysis of the results,
a horizontal plateau at 100◦ C can be introduced in the time integration of Equation 4.6 when calculating the temperature evolution in the protected steel section, in order to take the evaporation of
moisture into account. The duration of this plateau is a function of the thickness of the insulation. The evaporation of moisture can also be conservatively neglected in order to simplify the
process. There is also a possibility to consider the thermal conductivity as constant in the analysis of the results. In that case, the effect of the evaporation of moisture is implicitly taken into
account in the effective thermal conductivity that is derived (but this conductivity is now a function of the thickness of the insulation and of the maximum steel temperature). A serious error that must be avoided is to use the values of the thermal properties derived at ambient temperature, typically for applications such as thermal insulation in buildings. This would lead to unsafe results in
the fire situation because the thermal conductivity, for example, has a tendency to increase with increasing temperature in most insulating materials. Generally speaking, and especially for reactive
protections such as intumescent paints, the thermal conductivity is a function of the thickness of the protection. What is very important is that the hypothesis made for the thermal conductivity of
the insulating material when integrating Equation 4.6 to calculate the evolution of the steel temperature is consistent with the hypothesis made when analysing the experimental results for deriving
this thermal conductivity. The following algorithm shows a very simple example of an explicit integration scheme. It is consistent with the hypothesis made in the determination of the thermal
conductivity that the temperature of the protection is equal to the average between the steel temperature and the gas temperature. During the early stage of the fire, it may occur that the
temperature increase in steel calculated by Equation 4.6 turns out to be negative. This will be the case especially for protection materials that have a high thermal capacity. In that case, the
temperature increase in steel has to be set to 0 and the integration process continued in the next time steps. If the gas temperature is decreasing, a negative variation of the steel temperature can
of course be accepted.

  Data: Rhoa = 7850; time = 0; dtime = 10; Ta = 20; TimePrint = 60
  Data: Tfire_previous = 20
  Read ApV, dp, cp, Rhop, FinalTime
  Print ApV, dp, cp, Rhop
  Do while (time < FinalTime)
    Call Sub_Csteel(Ta, Ca)
    Phi = cp * Rhop * dp * ApV / (Ca * Rhoa)
    Call Sub_FireTemp(time, Tfire)
    dTfire = Tfire - Tfire_previous
    Tp = (Ta + Tfire) / 2
    Call Sub_Lambdap(Tp, Lp, dp)
    dTa = Lp * ApV * dtime * (Tfire - Ta) / (dp * Ca * Rhoa * (1 + Phi/3))
    dTa = dTa - (exp(Phi/10) - 1) * dTfire
    If (dTfire > 0) and (dTa < 0) then dTa = 0

… 1 and is replaced by the value of 1. The emissivity of the flame at the window (εf) is taken as 1.0
Fig. 4.7 Plan view of the region near the window (window width wt = 3 m)
Fig. 4.8 Elevation of the region near the window
Axis length from the window to the point where the temperature analysis is made, see point A on Figure 4.8: Lx = 0.707 m. The flame temperature at point A is given by:

Tz = T0 + (Tw − T0)(1 − 0.4725 Lx wt/Q) = 865°C

The emissivity of the flames is taken as 1 − e^(−0.3 df) = 0.259, where df is the flame thickness = 1 m.
The convective heat transfer coefficient is given by:

αc = 4.67 (1/deq)^0.4 (Q/Av)^0.6 = 13 W/m²K

From Annex B of Eurocode 3. As the column is engulfed in flames, its average temperature Tm is determined as the solution of the following heat balance equation:

σ Tm⁴ + α Tm = Iz + If + α Tz    (4.9)

where:
σ is the Stefan-Boltzmann constant, 5.67 × 10−8 W/m²K⁴,
α is the convective heat transfer coefficient, 13 W/m²K,
Tz is the flame temperature, 865°C (1138 K),
Iz is the radiative heat flux from the flame,
If is the radiative heat flux from the window.
Radiation from the flame:

Iz = [(Iz,1 + Iz,2) d1 + (Iz,3 + Iz,4) d2] / [2 (d1 + d2)]

d1, width of the member perpendicular to the window = 0.15 m.
d2, width of the member parallel to the window = 0.15 m.
εz,1, emissivity of the flame on side 1 of the member, i.e. perpendicular to the window, = 1 − e^(−0.3×1.425) = 0.348
=> Iz,1 = 0.348 × 5.67 × 10−8 × 1138⁴ = 33.09 kW/m²
=> Iz,2 = Iz,1 = 33.09 kW/m²
εz,4, emissivity of the flame on side 4 of the member, i.e. opposite to the window, = 1 − e^(−0.3×0.35) = 0.100
=> Iz,4 = 0.100 × 5.67 × 10−8 × 1138⁴ = 9.48 kW/m²
εz,3 = εz,4 => Iz,3 = 0.100 × 5.67 × 10−8 × 1279⁴ = 15.17 kW/m²
Iz = (2 × 33.09 + 9.48 + 15.17)/4 = 22.71 kW/m²
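The emissivity and flux arithmetic above can be checked with two small helpers; this is only a sketch using the coefficient and flame-thickness relation quoted in the example:

```python
import math

def flame_emissivity(d_f):
    """Emissivity of a flame of thickness d_f (m): eps = 1 - exp(-0.3 * d_f)."""
    return 1.0 - math.exp(-0.3 * d_f)

def radiative_flux(eps, T):
    """Radiative heat flux eps * sigma * T^4 (W/m^2), temperature T in kelvin."""
    sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2K^4
    return eps * sigma * T**4

# Side 1 of the member in the worked example:
eps_1 = flame_emissivity(1.425)        # ~0.348
I_z1 = radiative_flux(eps_1, 1138.0)   # ~33.1 kW/m^2
```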
Radiation from the opening:

If = φf εf (1 − αz) σ Tf⁴

φf = [(ϕf,1 + ϕf,2) d1 + ϕf,3 d2] / [2 (d1 + d2)]

ϕf,3, view factor from the window to side 3 of the member, i.e. the side of the member facing the window, = 2 × 0.239 = 0.478 according to equation G.2 of Eurocode 1, with the window divided in 2 zones of 1.5 × 1.5 m² at a distance of 0.35 m from the member.
ϕf,2, view factor from the window to side 2 of the member, i.e. one side perpendicular to the plane of the window, = 0.158 according to equation G.3 of Eurocode 1, with w = 1.425 m, s = 0.5 m and h = 1.5 m.
ϕf,1 = ϕf,2, owing to symmetry.

φf = (2 × 0.158 + 0.478)/4 = 0.199

εf = 1.0
αz = (εz,1 + εz,2 + εz,3)/3 = 0.265
Tf = 857°C (1130 K)
If = 0.199 × (1 − 0.265) × 5.67 × 10−8 × 1130⁴ = 13.55 kW/m²

Incident heat flux: 22 710 + 13 550 + 13 × 1138 = 51 054 W/m². This flux is the
quantity that appears on the right hand side of Equation 4.9. It is straightforward to solve this equation. This yields: Tm = 912 K (639◦ C) This temperature is the temperature that will be
established in the member in the steady state situation. No indication is given by this method concerning the time that will elapse before this situation exists. Of course, it would be easy to write
the transient equivalent of Equation 4.9 and to integrate this resulting equation over time, which would yield the evolution of the temperature in the steel member as a function of time.
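A sketch of how Equation 4.9 can be solved numerically, here by simple bisection with the flux values from this example (the function name is invented for illustration):

```python
def solve_heat_balance(I_z, I_f, alpha, T_z, sigma=5.67e-8):
    """Solve sigma*Tm**4 + alpha*Tm = I_z + I_f + alpha*T_z for Tm (kelvin)
    by bisection; the left-hand side is monotonically increasing in Tm."""
    rhs = I_z + I_f + alpha * T_z
    lo, hi = 273.0, 2000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if sigma * mid**4 + alpha * mid < rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Fluxes in W/m^2 and temperatures in kelvin, as in the worked example above
T_m = solve_heat_balance(22.71e3, 13.55e3, 13.0, 1138.0)  # ~912 K (639 degC)
```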
Chapter 5
Mechanical Analysis
5.1 Level of analysis

5.1.1 Principles
The response of a structure exposed to fire can be analysed at different levels. It is the responsibility of the designer to select the level of analysis. The three possibilities are:

Global structural analysis – If the structure is rather simple or, in case of a complex structure, if a sufficiently sophisticated tool is available for the analysis, it is possible to consider the entire structure as a whole and to analyse it as a single object.

Member analysis – At the opposite extreme, the structure can be seen as an assembly of members, here defined as load bearing elements limited in their dimensions either by a support with the foundation or by a joint with other elements. Typically, the word "member'' designates a beam, a column, a floor, etc. It is possible to analyse the structure as a collection of individual elements, the fire resistance of the structure being taken as the shortest fire resistance of all the members.

Substructure analysis – This is the intermediate solution between the two limit cases above; any part of the structure that is bigger than an element is a substructure.

It has to be noted that the same choice is in fact also
made for the design at room temperature:

• A structure can be entirely represented (discretised) as a single object and the effects of actions determined in this object, usually by a computer analysis program.
• Yet, very large structures, such as the Eiffel Tower in Paris, have been designed as an assembly of elements, the resistance of each of them being verified individually.
• In an industrial hall made of parallel one storey, one bay portal frames with purlins spanning from frame to frame, a usual procedure would be to:
– design the purlins as individual elements,
– design the frames as separate substructures, i.e. one representative frame is considered individually (no 3D interaction with the other frames) but analysed as a whole and not as the addition of two columns and one beam, which means, for instance, that moment redistribution is considered within the frame,
– design the bracing system also as a substructure, for example as a statically determinate truss girder.
The problem is somewhat more complex in case of fire because of indirect fire actions, i.e. the variations of axial forces, shear forces and bending moments resulting from restraint to the thermal
movements. In a global structural analysis, all indirect fire actions developing in the structure during the course of the fire must be considered. In a substructure analysis, the conditions in terms
of supports and/or actions acting at the boundary of the substructure are evaluated at time t = 0, i.e. at the beginning of the fire, and are considered to remain constant during the entire duration
of the fire. Indirect fire actions can nevertheless develop within the substructure. In a member analysis, boundary conditions are also fixed to the value that they have at the beginning of the fire,
but no indirect fire action is taken into account in the member, except those resulting from thermal gradients. In fact, the only cases where the effects of thermal gradients have been recognised to
have significant effects on the fire resistance of simple members are the cantilever or simply supported walls or columns submitted to the fire on one side only. In these cases, the important lateral
deflections induced by the thermal gradient may generate significant additional bending moments due to second order effects. This can lead to premature failure, either by yielding of the material at
the base of the column, by general buckling of the column, or even by loss of equilibrium of the foundation. This is clearly a case where the effects of thermal gradients have to be taken into
account, even in a member analysis. It should be noted that significant thermal expansion will in fact be present in the structure and this should be accounted for in the analysis through the
discretisation of the structure into elements and/or substructures in such a way that these hypotheses on the constant boundary conditions are reasonable and correspond at least to a good
approximation of the real situation. Designing, for example, as a simple element a beam of an underground car park that is very severely restrained against thermal elongation by the surrounding ground, while neglecting the increase of axial compression force that will certainly arise in reality, would not represent a sound approximation of the actual boundary conditions present during fire exposure.

5.1.2 Boundary conditions in a substructure or an element analysis
No precise recommendation is given in the Eurocode concerning the way to define the boundary conditions at the separation between an element or a substructure and the rest of the structure. The
following procedure is recommended by the authors for selecting the boundary conditions in a substructure or an element. It is here explained for a substructure, but the same would hold for an
element.

1. The effects of action in the whole structure must be determined at time t = 0 under the load combination in case of fire that is under consideration.
2. The limits of the substructure have to be chosen. The choice is made with the contradictory objectives that the substructure becomes as simple as possible but, at the same time, the hypothesis of constant boundary conditions during the fire must represent a good approximation of the real situation, with respect to the thermal expansion that exists in reality. The choice of the limits of the substructure is of course highly dependent on the location of the fire. Engineering judgement is necessary.
3. All the supports of the structure that belong to the substructure have to be taken into account as supports of the substructure.
4. All the external mechanical loads that are applied on the substructure in case of fire have to be taken into account as acting on the substructure.
5. For each degree of freedom existing at the boundary between the substructure and the rest of the structure, an appropriate choice has to be made in order to represent the situation as properly as possible. The two possibilities are: (a) the displacement (or the rotation) with respect to this degree of freedom is fixed, or (b) the force (or the bending moment) deduced from the analysis of the total structure computed in step 1 is applied. These two possibilities are exclusive because it is not possible to impose simultaneously the displacement and the corresponding force at a degree of freedom. Whatever the choice, these restrictions on the displacements and these forces applied at the boundaries remain constant during the fire.

A new structural analysis is then performed at room temperature on the substructure or the element that has been defined, and it yields the effects of actions that have to be taken into account in the substructure or in the element. In a substructure analysis, the indirect fire actions that could develop within the substructure have to be taken into account, whereas this is not the case for an element analysis.
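Schematically, and purely as an illustration (the data structure and names below are invented for this sketch, not part of any analysis software), the per-degree-of-freedom choice described in the procedure could be encoded as:

```python
from dataclasses import dataclass

@dataclass
class BoundaryCondition:
    dof: str      # e.g. "node12.rotation_z" (hypothetical naming)
    kind: str     # "fixed" (option a) or "applied" (option b)
    value: float  # imposed displacement, or force/moment from the t = 0 analysis

def substructure_boundary(boundary_dofs, effects_t0, fixed_dofs):
    """For every DOF at the substructure boundary, pick exactly one of the two
    exclusive options: (a) fix the displacement/rotation, or (b) apply the
    force/moment found in the global analysis at t = 0. Either way, the
    value is kept constant during the whole fire."""
    conditions = []
    for dof in boundary_dofs:
        if dof in fixed_dofs:
            conditions.append(BoundaryCondition(dof, "fixed", 0.0))
        else:
            conditions.append(BoundaryCondition(dof, "applied", effects_t0[dof]))
    return conditions
```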
This procedure allows finding one's way through the most complex cases but, as the examples below will demonstrate, some of the steps are trivial or omitted in simpler cases. Two examples that illustrate this procedure are given in Chapter 8, first for a continuous beam, and then for a multi-storey framed structure.

5.1.3 Determining Efi,d,0
It has been mentioned in Section 5.1.2 that the effects of actions at time t = 0, noted Efi,d,0 , have to be determined in order to perform a member or a substructure analysis. No indication is given
in the Eurocodes concerning the analysis method that has to be used to determine these effects of action. In practice, this is normally done by an elastic analysis because it is reasonable to assume
that the structure will exhibit very little if any plasticity under the design loads in case of fire. Indeed, if the situation prevailing at the beginning of the fire is compared to the situation
that has been taken into account for the design of the structure under normal conditions, the design values of the mechanical loads as well as the partial safety factor dividing the resistance of the
material are lower. A steel structure that has been designed in order to sustain in normal conditions a design load equal to 1.35G + 1.50Q with a resistance of fy /γM,1 = fy /1.15, for example, will
exhibit very little plasticity at the beginning of the fire if the load is only 1.00G + 0.50Q and the full resistance fy /γM,fi = fy can be mobilised.
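The comparison between the two load combinations quoted above can be illustrated with a tiny helper; this is only a sketch using the partial factors given in the text:

```python
def fire_to_ambient_load_ratio(G, Q, psi_fi=0.5):
    """Ratio of the fire design load (1.00*G + psi_fi*Q) to the ambient design
    load (1.35*G + 1.50*Q), showing why little plasticity is expected at t = 0."""
    return (1.00 * G + psi_fi * Q) / (1.35 * G + 1.50 * Q)

# Example: equal permanent and variable loads
ratio = fire_to_ambient_load_ratio(100.0, 100.0)  # ~0.53
```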
Because the effects of actions are determined at time t = 0, the stiffness of the material at room temperature is of course taken into account. If the structure is simple, the analysis is trivial
but, if the structure is complex, it is possible to use one of the numerous numerical tools developed for the analysis of structures at ambient temperature.
5.2 Different calculation models

5.2.1 General principle
Three different calculation models can be applied for the determination of the fire resistance of a structure or an element. They differ very much in their complexity, but also in their field of
application and in what they can offer. It is important, before a choice is made, that these differences are clearly identified. These calculation models are discussed hereafter, from the simplest to
the most complex one.

5.2.1.1 Tabulated data

Tabulated data directly give the fire resistance time as a function of a limited set of simple parameters, e.g. the concrete cover on the
reinforcing bars in a reinforced concrete section or the thickness of insulation in a steel section, the load level, or the dimensions of the section. Such a model is thus normally easy to use.
Tabulated data are not based on equilibrium equations, but result from the empirical observation of either experimental test results or results of calculations made by more refined models. The
tabulated data methods aim at representing these results with the best possible fit. The name “tabulated’’ for this group of calculation models comes from the fact that the results are usually
presented in the form of multi-entry tables. It has to be emphasized that some methods, even if they are presented in the form of analytical equations, belong in fact to the group of the tabulated
data calculation models if they are not based on equilibrium conditions but, on the contrary, simply represent a best fit correlation with results obtained in another way. The main limitations of tabulated data are:

• Tabulated data exist only for simple elements at present. Theoretically speaking, nothing speaks against establishing tabulated data for more complex structures, for example for single storey one bay frames, but the effort needed to establish these tables would be high, and the number of input parameters would probably increase to a point that much of the simplicity of the method would be lost.
• Such tabulated data have been established so far only for the standard fire curve, namely the ISO fire curve or its equivalent. It would in fact be totally impossible, even for the simplest elements, to build tabulated data encompassing all the possible natural fire curves that could exist, simply because the number of these curves is infinite. It has yet to be mentioned that some recent developments have been made in order to establish tabulated data
in case of particular parametric fire curves, namely those recommended in Annex A of Eurocode 1. In this particular case, it is possible to establish tabulated data in which the duration of the ISO
fire that is usually present in the traditional tabulated data has been replaced by other factors, such as the fire load and the opening factor of the compartment for example. Such tables could, for
example, allow verifying that a steel element with a defined thermal massivity, and a defined load level, can survive the parametric fire provided that the fire load does not exceed a certain value.
Whereas tabulated data are extensively used for concrete and composite steel-concrete structures, no tabulated data are presented in Eurocode 3, probably because the simple calculation model is of rather simple application. In the past, a nomogram had been published by the ECCS (1983) for unprotected elements and elements protected by a lightweight insulating material. It related graphically the fire resistance time under the standard fire to the thermal massivity of the section, the load level and, if relevant, the amount of thermal protection.

5.2.1.2 Simple calculation models
Simple calculation models must be simple enough to be applied in everyday practice without using sophisticated numerical software. They must be based on equilibrium equations. The ability of the
element or the structure to sustain the applied loads is verified taking into account the elevation of the temperature in the material. Usually, the simple calculation models for steel elements are
the direct extrapolation of models used for normal design at room temperature, in which the yield strength and Young’s modulus of steel have been adapted in order to reflect the decrease induced by
the increase of the temperature in steel. Some modifications may be introduced in the model in order to take into account certain aspects specific to the fire situation. Contrary to tabulated data, simple calculation models are applicable to any temperature-time fire curve, provided that the adequate material properties are known. It is for example essential to know whether any property
determined during first heating is reversible during the cooling phase that will occur in a natural fire curve. Attention must also be paid to verify that the heating or cooling rate in the material
belongs to the range for which the material properties have been determined. Note: It has to be noted that Eurocode 3 gives no indication about the properties of steel during or after cooling.
Information has to be taken from the literature, for example Kirby et al. 1986. The main field of application of simple calculation models is the element analysis, although some simple substructures
could theoretically also be analysed.

5.2.1.3 Advanced calculation models

Advanced calculation models are sophisticated computer models that aim at representing the situation as closely as possible to the scenario that exists in the real structure. Such models must be based on acknowledged and recognised principles of structural mechanics.
Table 5.1 Relation between calculation models and division of the structure

                             Tabulated data   Simple calculation model   Advanced calculation model
Member analysis                   ++                    ++                          +
Substructure analysis             −                     +                           ++
Global structural analysis        −−                    −                           ++
It has to be emphasized that programming a simple calculation method in a computer in order to facilitate its utilisation does not make it an advanced calculation model. Advanced calculation models are applicable with any temperature-time fire curve provided that the appropriate material properties are known. They can be used for the analysis of entire structures because they take indirect fire actions into account. More information about advanced calculation models is presented in Chapter 7.

5.2.2 Relations between the calculation model and the part of the structure that is analysed
A confusion is often made between the three calculation models, namely tabulated data, simple calculation models and advanced models on one hand (see Section 5.2.1), and the three levels of division
of the structure, namely the element analysis, the substructure analysis and the structure analysis on the other hand (see Section 5.1.1). Although these are two different aspects of the question,
there are of course some clear links between these two aspects. Table 5.1 which illustrates the relation between calculation models and division of structure may clarify the situation. The table
shows that:

• Tabulated data are to be used mainly with simple elements. Although one could imagine that such tabulated data be developed for simple substructures, this has actually not yet been done. Complete structures cannot be analysed by means of tabulated data.
• Simple calculation models can certainly be used for simple elements and, to some extent, for simple substructures. The analysis of complete structures can normally not be undertaken with simple calculation models.
• Advanced calculation models are the tool of choice for the analysis of complete structures or, if the time for the analysis has to be reduced, for substructures. They can also be used for simple elements but, in many cases, the simplicity, greater availability and user friendliness of simple calculation models will lead to the decision to use these simple models when possible.
Calculation methods in North America
The most widely used approach for evaluating fire resistance in North America is based on manufacturer listings or on the use of prescriptive, simple calculation methods. ASCE/SFPE 29 contains a number of simplified equations for determining the fire resistance of steel structural members. These empirical equations are derived from the results of standard fire resistance tests carried out on steel structural assemblies under standard fire exposure. These empirical methods often utilize factors such as W/D ratios, where W is defined as
the weight per unit length of the steel member (in lbs), and D is defined as the inside perimeter of the fire protection (in inches), for defining fire resistance. The methods are based on the
presumption that the rate of temperature rise in a structural steel member depends on its weight and the surface area exposed to heat. For calculating the fire resistance of tubular column sections, A/P factors are used, where A is the section area (in square inches) and P is the section perimeter (in inches). Specific values of W/D and A/P ratios for various steel sections and configurations
(three or four side exposure types) are listed in tables of the AISC Manual (AISC 2005) and in Appendix A of the AISC Steel Design Guide 19 (Ruddy et al. 2003). The rational approach for evaluating the fire resistance of steel structural members is contained in the ASCE manual of fire protection (Lie 1992) and the SFPE Handbook of Fire Protection Engineering (SFPE, 2002). However, these sources have only limited information and do not contain details of the advanced calculation methods. The AISC manual of construction contains some discussion on rational fire design principles and refers to Eurocode 3 (2003) for calculation methodologies. Thus, it is possible for designers in North America and other parts of the world to apply Eurocode methodologies for evaluating fire resistance and gain
acceptance from regulatory officials. In such scenarios, the relevant high temperature properties for structural steel and insulation should be used. The high temperature material properties for
structural steel and room temperature properties of insulation are listed in Annex I and Annex II. In evaluating fire resistance, the critical temperature of steel is often used to define the failure of a steel structural member. The critical temperature is defined as the temperature at which steel loses 50% of its room temperature strength (often yield strength). The rationale for using the critical temperature to define failure is based on the premise that the loading on the structure, under fire conditions, is about 50% of its full capacity. In North America the critical temperature limits commonly used are 538°C for steel columns and 593°C for steel beams (similar to the temperature acceptance criteria adopted in ASTM E119), regardless of the loads applied to the structural members. This is in contrast to European practice, see Section 5.7, where critical temperatures for steel members are specified depending on the so-called applied load level, or load ratio, i.e. critical temperatures are dependent on both the type of the structural member and the load level. The critical temperatures, however, are independent of time, and also independent of the shape or size of the steel section.
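The North American critical-temperature criterion described above can be expressed as a one-line check; the limits are the ones quoted in the text, while the function itself is only an illustration:

```python
# Commonly used North American critical temperature limits (degC)
CRITICAL_TEMPERATURE = {"column": 538.0, "beam": 593.0}

def fails_critical_temperature(member_type, steel_temp):
    """True if the steel temperature reaches the critical limit for the member
    type, irrespective of the applied load (unlike the European load-ratio
    based approach)."""
    return steel_temp >= CRITICAL_TEMPERATURE[member_type]
```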
5.3 Load, time or temperature domain

The stability analysis can be performed through different approaches mentioned in the Eurocode: namely in the time domain, in the load domain and, in some cases,
in the temperature domain. These possibilities are illustrated on Figure 5.1 and Figure 5.2 for a simple case in which the applied load, in fact the effect of action Efi,d , is constant during the
fire and the element is characterised by a single temperature, θstructure . Figure 5.1 refers to the case of a nominal fire in which the fire temperature, θfire , is continuously increasing. The
temperatures in the structure, θstructure, will therefore also be continuously increasing as a function of time and, although this will not
Fig. 5.1 Load, time or temperature domain for a nominal fire (axes: load/resistance and temperature versus time; curves for ϑfire, ϑstructure and Rfi,d,t)

Fig. 5.2 Load, time or temperature domain for a natural fire (axes: load/resistance and temperature versus time; curves for ϑfire, ϑstructure, ϑcr and Rfi,d,t)
be demonstrated theoretically, it will be assumed that this induces a continuously decreasing load bearing capacity, Rfi,d,t . The situation is different in the case of a natural fire in which the
fire temperature has an increasing phase systematically followed by a cooling down phase, see Figure 5.2. The temperature in the structure will follow a similar evolution, although with a time
Mechanical Analysis
delay. For steel structures, the load bearing capacity of the structure that could be calculated at different moments in time produces a pattern as shown on Figure 5.2, with a first phase where the
load bearing capacity decreases as a function of time, and a second phase when the structure recovers its load bearing capacity, mainly because steel recovers its strength, either totally or
partially, when cooling down to ambient temperature. In each case, treq noted on the Figures is the required fire resistance time of the structure. The situation at the beginning of the fire is
represented by point A on the Figures and, if the analysis is performed by the advanced calculation model, the method, i.e. the software normally, will usually track the evolution of the situation of
the structure until point B when failure occurs (most computer software indeed perform a transient step by step analysis). This means that the curve showing the evolution of the load bearing capacity
is not known to the designer. If, on the contrary, the analysis is performed by the simple calculation model, there are different manners to verify the stability, normally referring to one of the
points of this curve. The three verification possibilities are:

1. In the time domain. It has to be verified that the time of failure tfailure is higher than the required fire resistance time treq. This is expressed by Equation 5.1 and corresponds to verification 1, satisfied on Figure 5.1 but not satisfied on Figure 5.2.

tfailure ≥ treq   (5.1)
2. In the load domain. At the required time in the fire treq, it is verified that the resistance of the structure Rfi,d,t is still higher than the effect of actions Efi,d. This is expressed by Equation 5.2 and corresponds to verification 2 on Figures 5.1 and 5.2.

Rfi,d,t ≥ Efi,d   at t = treq   (5.2)
This verification is proposed as the standard method in Eurocode 3. It can be shown that, in the case of a fire with no decreasing phase, the fact that Equation 5.2 is satisfied guarantees that Equation 5.1 is also satisfied, see Figure 5.1. On the other hand, in the case of a fire with a cooling down phase, it can happen at some stage that Equation 5.2 is satisfied whereas Equation 5.1 is not, see Figure 5.2.

3. In the temperature domain. At the required fire resistance time treq, it has to be verified that the temperature of the structure ϑstructure is still lower than the critical temperature ϑcr, i.e. the temperature that leads to failure. This is expressed by Equation 5.3 and corresponds to verification 3 on Figures 5.1 and 5.2.

ϑ ≤ ϑcr   at t = treq   (5.3)
This verification is a particular case of the verification in the load domain, only possible when the stability of the structure is depending on a single temperature,
which is the case in steel elements under uniform temperature distribution. It can also happen for natural fires that Equation 5.3 is satisfied whereas Equation 5.1 is not. It appears thus from the
above discussion that, in the case of a natural fire, a single verification made in the load or in the temperature domain is not sufficient if the required fire resistance time is higher than the
time of minimum load bearing capacity, i.e., usually, the time of maximum temperature. It is possible to make the verification in the load domain several times at different times of fire until, after
an iterative process, the time is found where the resistance of the structure is equal to the applied load. This time is, by definition, the failure time and it can be compared to the required fire
resistance time. It is yet simpler and possible to make one single verification if Rd,fi in Equation 5.2 or θ in Equation 5.3 are not taken systematically at t = treq , but at the time of the maximum
steel temperature. The rest of this chapter is presented on the basis of verification in the load domain. This type of verification has several advantages:

1. It is easy to use: because the verification is made at a given time, the steel temperature, and hence the material properties, are known and can be used for the evaluation of the load bearing capacity.
2. It is applicable to any type of effect of actions whereas, as will be explained in Section 5.7, verification in the temperature domain is possible only in a limited number of cases.
3. It produces a safety factor that is similar to the one that engineers and designers have been using for years at room temperature, namely the ratio between the applied load and the failure load. On the other hand, verification in the temperature domain yields a safety factor in degrees centigrade that does not provide much in terms of practical consequences. A verification in the time domain may be even more confusing because, with the tendency of standard fire curves to level off at a nearly constant temperature after a certain period of time, it can give the false impression of a very high level of safety: the calculated time of failure is significantly longer than the required fire resistance time simply because the temperature of the structure changes very slowly, whereas a small variation in the applied load or in the heating regime would decrease the fire resistance time dramatically, bringing it close to the required resistance time.
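The verifications in the time and load domains can be sketched numerically. The resistance history below is an invented, monotonically decreasing curve standing in for Rfi,d,t under a nominal fire, not a real member calculation:

```python
# Sketch of verification in the time and load domains for a nominal fire
# (illustrative numbers only, not a real member calculation).

def t_failure(resistance, e_fi_d, times):
    """Return the first time at which resistance drops below the load effect."""
    for t in times:
        if resistance(t) < e_fi_d:
            return t
    return float("inf")

# Invented load bearing capacity, kN, decreasing with time (minutes)
resistance = lambda t: 1000.0 - 8.0 * t
e_fi_d = 400.0          # constant effect of actions in the fire situation, kN
t_req = 60.0            # required fire resistance time, minutes

times = [i * 0.5 for i in range(0, 481)]   # 0 to 240 min in 0.5 min steps

# Verification 1, time domain: t_failure >= t_req   (Eq. 5.1)
ok_time = t_failure(resistance, e_fi_d, times) >= t_req

# Verification 2, load domain: R_fi,d,t >= E_fi,d at t = t_req   (Eq. 5.2)
ok_load = resistance(t_req) >= e_fi_d

print(ok_time, ok_load)   # for a monotonically decreasing R the two checks agree
```

For a natural fire, where the resistance curve recovers after the cooling phase, the two checks can disagree, which is exactly the point made above.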
5.4 Mechanical properties of carbon steel

For ambient design of building members at the ultimate limit state, carbon steel is usually idealised either as a rigid-plastic material, for evaluating
plastic bending capacity of sections for example, or as an elastic-perfectly plastic material, for instability problems such as buckling for example. At elevated temperatures, the shape of the
stress-strain diagram is modified. The model recommended by EN 1993-1-2 is an elastic-elliptic-perfectly plastic model, plus a linear descending branch introduced at large strains when this material
is used in advanced calculation models. The first part of the stress-strain relationship is schematically represented by the continuous curve O-A-B on Figure 5.3.
Fig. 5.3 Stress–strain relationship (schematic) for steel (axes: stress versus strain; the initial slope is Ea,θ)
The stress–strain relationship at elevated temperature is thus characterised by 3 parameters:

• The limit of proportionality fp,θ
• The effective yield strength fy,θ
• The Young's modulus Ea,θ

Note: the strain for reaching the effective yield strength, point B on the Figure, is fixed at 2%. The Eurocode contains a table that gives the evolution of these properties normalized by the relevant property at room temperature, namely:

• kp,θ = fp,θ / fy
• ky,θ = fy,θ / fy
• kE,θ = Ea,θ / E

This table is reproduced in Annex II of this book.
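Intermediate temperatures are handled by linear interpolation in that table. A minimal sketch follows; the anchor values are transcribed from the EN 1993-1-2 reduction-factor table for ky,θ and kE,θ and should be checked against Annex II before use:

```python
# Linear interpolation in the Eurocode table of reduction factors.
# Anchor values transcribed from the EN 1993-1-2 table (reproduced in
# Annex II); verify them against the code text before relying on them.

K_Y = {20: 1.00, 100: 1.00, 200: 1.00, 300: 1.00, 400: 1.00,
       500: 0.78, 600: 0.47, 700: 0.23, 800: 0.11}
K_E = {20: 1.00, 100: 1.00, 200: 0.90, 300: 0.80, 400: 0.70,
       500: 0.60, 600: 0.31, 700: 0.13, 800: 0.09}

def interp(table, theta):
    """Linear interpolation between the tabulated temperatures (°C)."""
    pts = sorted(table)
    if theta <= pts[0]:
        return table[pts[0]]
    for lo, hi in zip(pts, pts[1:]):
        if theta <= hi:
            w = (theta - lo) / (hi - lo)
            return table[lo] + w * (table[hi] - table[lo])
    return table[pts[-1]]

fy = 355.0                          # room temperature yield strength, N/mm²
theta = 550.0                       # uniform steel temperature, °C
fy_theta = interp(K_Y, theta) * fy  # effective yield strength f_y,θ
print(round(fy_theta, 1))
```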
5.5 Classification of cross-sections

Stocky steel members are able to support a significant degree of rotation and/or compression without any local deformation and to develop the full plastic
capacity of the section. Steel members made of thin plates, on the contrary, suffer from severe local deformations, possibly at load levels that are below the elastic capacity of the section. In the
philosophy of the Eurocodes, steel sections are sorted into 4 different classes with respect to their susceptibility to local buckling.

1. Class 1 sections are the stockiest sections. They are able to develop the full plastic capacity and this capacity is maintained for very large deformations. The ductility is sufficient to allow a redistribution of the bending moments along the length of the members by formation of plastic hinges during a loading to failure.
2. Class 2 sections are also able to develop the full plastic capacity, but this capacity cannot be maintained for large deformations. Plastic redistribution along the members is not possible with such a section.
3. Class 3 sections are able to develop the full elastic capacity but cannot reach the plastic capacity.
4. Class 4 sections are the thinnest sections. In these sections, local buckling occurs for load levels that are below the full elastic capacity of the section.
At room temperature, the classification of a section depends on different parameters such as:

• The geometric properties of the section, via the slenderness of the plates that form the section. The more slender the plates, the higher the classification.
• The type of effect of action. Whereas the whole web is in compression under axial loading of the section, only half of it is in compression under pure bending, and the susceptibility to local buckling is reduced in the latter case.
• Material properties. In an elastic-perfectly plastic material:
  – If the Young's modulus is kept constant, a higher yield strength means that the section has to be submitted to larger deformations before it develops the full plastic capacity. Sections with high yield strength are thus more prone to local buckling.
  – On the other hand, if the yield strength is kept constant, a lower Young's modulus means that the section has to be submitted to larger deformations before it develops the full plastic capacity. Sections with low Young's modulus are thus more prone to local buckling.
In fact, the parameter that drives the classification of the section with regard to the material properties is the square root of the ratio between these two material properties, see Equation 5.4.

√(E/fy)   (5.4)

Because, at room temperature, the Young's modulus of steel can be regarded as a constant, the parameter that appears in the application rules is in fact the parameter ε given by Equation 5.5. Local buckling is most likely to occur for low values of this parameter.

ε = √(235/fy)   (5.5)
where fy is the yield strength of steel at room temperature, in N/mm². The classification of a cross-section is made according to the highest class of its compression parts. Table 5.2 summarizes the
limits of the width-to-thickness ratios (slenderness) for Class 1, 2 and 3, in case of internal compression parts (webs) and outstand flanges. Complete information can be found in EN 1993-1-1. For
slenderness greater than the Class 3 limits, the cross-section should be taken as Class 4.
Table 5.2 Maximum slenderness for compression parts of cross-section

Class   Web in compression   Web in bending   Outstand flange in compression
1       ≤33ε                 ≤72ε             ≤9ε
2       ≤38ε                 ≤83ε             ≤10ε
3       ≤42ε                 ≤124ε            ≤14ε
Fig. 5.4 Material property influencing local buckling (the ratio √(kE,θ/ky,θ) plotted against temperature in °C)
At elevated temperature, the Young's modulus as well as the yield strength are modified. The values at room temperature are multiplied by kE,θ and ky,θ respectively to give the values at elevated
temperatures. If the material would remain elastic-perfectly plastic at elevated temperature, the parameter of Equation 5.4 would be transformed as indicated by Equation 5.6.
√(Eθ/fy,θ) = √(kE,θ E / (ky,θ fy)) = √(kE,θ/ky,θ) · √(E/fy)   (5.6)
The coefficients that describe the evolution of the Young's modulus and the yield strength, namely kE,θ and ky,θ, follow two different functions of the temperature. The ratio kE,θ/ky,θ is thus also a function of the temperature, see Figure 5.4. In Eurocode 3, the constant value of 0.85 has been considered for simplicity as an approximation of the function √(kE,θ/ky,θ). Figure 5.4 shows that, for the practical temperature range from 500 to 800°C, this constant value is more or less an average
value between all the possible values that can be calculated, namely 0.75 at 700°C and 0.90 at 800°C. It has to be kept in mind that steel at elevated temperatures is not an elastic-perfectly
plastic material and that the considerations based only on the Young’s modulus and on the yield strength are only indicative. The advantage of a constant value as opposed to a temperature dependent
classification is that it prevents the following situation: for an infinitely small temperature increase in the range from 400 to 500°C or from 700 to 900°C, it could happen that a section has its
classification improved, from Class 3 to Class 2 for example, and the steel member would, as a consequence, have its load bearing capacity increased by a temperature increase. This unrealistic result
that would be created by the stepwise classification of the sections does not occur with a classification based on parameter ε that is not temperature dependent. Finally, the classification of the
sections in the fire situation is made according to the same rules as at ambient temperature but using the parameter defined by Equation 5.7 instead of Equation 5.5.

ε = 0.85 √(235/fy)   (5.7)

This means
that the whole classification process has to be made again in the fire situation, theoretically for each load combination because the classification depends on the effects of actions. The situation
is complicated enough for the simple calculation models where the effects of actions are evaluated only once at time t = 0 but, for advanced calculation models, the classification should
theoretically be done at every time step during the fire because of indirect fire actions. In practice, some level of approximation must be tolerated. Generally, each section will be classified once
and for all in the fire situation depending on its most relevant load resisting mode. A beam will be classified as if acting in pure bending and a member that is essentially axially loaded will be
classified as if acting in pure compression. It should be noted that, in moment resisting frames, a significant degree of bending is present, even in the columns.
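The practical effect of the 0.85 factor in Equation 5.7 is that a plate can move to a higher class in the fire situation than at room temperature. A small sketch; the c/t value is hypothetical:

```python
# Effect of the fire-situation parameter of Eq. 5.7 on classification:
# the same flange can move to a higher class when eps is reduced by 0.85.

def eps_room(fy):
    return (235.0 / fy) ** 0.5          # Eq. 5.5

def eps_fire(fy):
    return 0.85 * (235.0 / fy) ** 0.5   # Eq. 5.7

def flange_class(c_over_t, eps):
    """Outstand flange limits of Table 5.2, else Class 4."""
    for cls, mult in [(1, 9), (2, 10), (3, 14)]:
        if c_over_t <= mult * eps:
            return cls
    return 4

fy = 235.0
c_over_t = 8.0   # hypothetical outstand flange slenderness

print(flange_class(c_over_t, eps_room(fy)))  # 8.0 <= 9 * 1.00 : Class 1
print(flange_class(c_over_t, eps_fire(fy)))  # 8.0 > 9 * 0.85 = 7.65 : Class 2
```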
5.6 How to calculate Rfi,d,t?

5.6.1 General principles
Generally speaking, the procedures used to calculate the design resistance of a steel member for the fire design situation are based on the same methods and equations as the ones used for the normal
temperature situation, but modifying the mechanical properties of steel in order to take the temperature increase into account. This modification can be straightforward in the usual hypothesis of a
uniform temperature in the section, somewhat more complex in case of a non-uniform temperature distribution. This procedure is applicable only because the material model proposed by Eurocode 3 at
elevated temperature does not contain an explicit creep term. Creep is deemed to be implicitly included in the stress-strain relationship. As a consequence, the temperature leading to failure does
not depend on the time required to reach this temperature and, hence, the thermal analysis and the mechanical analysis can be performed separately and in any order. For example, it is possible to
determine first the critical temperature
of a defined structure and then to choose the amount of thermal protection needed for this temperature not to be attained before a certain amount of time. This is possible only because the critical
temperature is the same, whether it is reached within 20 minutes or within 2 hours. As a limit, Eurocode 3 states that this is valid provided that the heating rate in steel is between 2 and 50°C/min, which is normally the case in building structures subjected to fire. In fact, the procedures used to calculate Rfi,d,t diverge in some aspects from the procedure used at room temperature
to calculate Rd. This is the case namely:

(a) for the evaluation of the buckling length of continuous columns in braced frames,
(b) for the buckling and lateral torsional buckling curves,
(c) for the M-N interaction equations,
(d) for the classification of the sections, and
(e) for non-uniform temperature distribution in beams.
The differences with the design at room temperature will be mentioned and discussed in the text wherever required. In fact, the differences from (b) to (d) can be traced down to the shape of the
stress-strain diagram that is different at elevated temperature from the shape of the diagram that was considered when the design equations were established for room temperature conditions, see
Figure 5.3. If the same equations are used at elevated temperature as at room temperature, simply replacing fy and E by the corresponding values at the elevated temperature, it is as if the material was
following the path O-C-B (in a design equation based only on fy ) or the path O-D-B (in a design equation based on fy and E) instead of the real path O-A-B. Some adaptations to the design equations
established for room temperature conditions may thus be necessary when used at elevated temperatures. The Eurocode proposes detailed equations for different types of effects of actions. These
equations are presented and discussed in the next sections.

5.6.2 Tension members

In case of a uniform temperature, the equation proposed by Eurocode 3 for the design resistance of a tension member is Equation 5.8.

Nfi,θ,Rd = ky,θ NRd [γM,0 / γM,fi]   (5.8)

where:
ky,θ is the reduction factor giving the effective yield strength of steel at temperature θa reached at time t,
NRd is the plastic design resistance of the cross-section for normal temperature design, according to EN 1993-1-1,
γM,0 is the partial safety factor for the resistance of the cross-section at normal temperature,
γM,fi is the partial safety factor for the relevant material property, for the fire situation.

Note: the recommended value for γM,0 and γM,fi is 1.00, but different values may be defined in the National Annex.
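Equation 5.8 can be sketched numerically. The section area and the reduction factor below are illustrative values, not taken from any profile catalogue:

```python
# Numerical sketch of Eq. 5.8 for a tension member at uniform temperature.
# Area and reduction factor are illustrative, not from a section table.

A = 5380.0          # cross-sectional area, mm² (hypothetical profile)
fy = 355.0          # room temperature yield strength, N/mm²
ky_theta = 0.47     # reduction factor at about 600 °C (check against Annex II)
gamma_M0 = 1.0      # partial factor at normal temperature (recommended value)
gamma_Mfi = 1.0     # partial factor in the fire situation (recommended value)

N_Rd = A * fy / gamma_M0                           # plastic resistance, EN 1993-1-1
N_fi_Rd = ky_theta * N_Rd * gamma_M0 / gamma_Mfi   # Eq. 5.8
N_fi_Rd_direct = A * ky_theta * fy / gamma_Mfi     # Eq. 5.9, same result

print(round(N_fi_Rd / 1000.0, 1), "kN")
```

With the recommended partial factors of 1.00, Equations 5.8 and 5.9 coincide, which the last two lines make explicit.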
It is just as easy to use directly Equation 5.9, which is physically more meaningful.

Nfi,θ,Rd = A ky,θ [fy / γM,fi]   (5.9)
where A is the cross-sectional area of the member. It has to be realised that the utilisation of Equation 5.8 or Equation 5.9 means that the member must exhibit a stress-related strain of 2% for the
full plastic load in tension to be mobilised. Added to a thermal strain in the order of magnitude of 1%, this means that the total elongation of the bar is near 3% at failure. Eurocode 3, in 4.2.1
(5) and in Annex D, specifies that net-section failure at fastener holes need not be considered, provided that there is a fastener in each hole because, according to the Eurocode, the steel
temperature is lower at connections due to the presence of additional material. It has however been shown that this hypothesis is not safe in general (Franssen 2002). This is especially the case in
protected members where the temperature is more likely to be nearly uniform or, in any case, after certain duration of a standard fire where the gas temperature tends to level off to a nearly
constant level which, also, has a tendency to create a uniform situation in the steel structure. If the temperature distribution is uniform, there is no beneficial effect of added thermal massivity
that can compensate for the reduction in net section. The design resistance at time t of a tension member with a non-uniform temperature distribution across the cross-section may be determined by
Equation 5.10 or Equation 5.11, with the latter equation leading to a conservative approximation.

Nfi,t,Rd = Σi Ai ky,θ,i [fy / γM,fi]   (5.10)

where the subscript i refers to an elemental area of the cross-section in which the temperature is considered as uniform.

Nfi,t,Rd = A ky,θmax [fy / γM,fi]   (5.11)
where θmax is the maximum temperature in the section at time t. Application of Equation 5.10 makes sense only if the temperature distribution is symmetrically distributed in the section. If not, the
mechanical centre of gravity of the section is moved by the non-symmetrical variation of the yield strength and the section is submitted to tension and bending (for which there are no specific
provisions in the Eurocode). In the case of a non symmetrical temperature distribution, it is preferable to accept an approximation and to use Equation 5.9 (uniform temperature distribution) or
Equation 5.11 (maximum temperature in the section).

5.6.3 Compression members with Class 1, 2 or 3 cross-sections
This section is related to members that are submitted to axial compression; a separate section is indeed dedicated to members subject to combined bending and compression. The following illustrates a
scenario where the design in the fire situation differs somewhat from the design at normal temperature. The two differences are related to the evaluation of the buckling length and to the buckling
curve that is used.
If the column is a continuous member that extends through several floors of a braced building and if each storey forms a separate fire compartment, then the buckling length of a column exposed to
fire in an intermediate storey may be taken as lfl = 0.5 L and in the top storey as lfl = 0.7 L where L is the system length in the storey that is under fire. The reason for considering these reduced
lengths is that the stiffness of the column in the fire compartment decreases as its temperature increases, whereas the adjacent parts of the column that are located in the floors above or below
remain at normal temperature and keep a constant stiffness. As a consequence, the adjacent parts become relatively stiffer and provide a significantly higher degree of restraint with respect to
rotation. Therefore, the boundary conditions of the heated part of the column tend toward the condition of rotationally fixed supports, leading to the value of 0.5 L (fixed-fixed supports) or 0.7 L
(fixed-pinned supports). Although this is not explicitly mentioned in the Eurocode, it can be deduced that the buckling length of the column at the first floor should be equal to 0.5 L or 0.7 L,
depending on the boundary condition at the base of the column. Numerical software that have been established for the analysis of structures at room temperature will not recognise this effect if the
method utilised for calculating the slenderness of the members is based on the underlying hypothesis that the Young’s modulus of the material is the same in every bar. An adaptation may be required
if such software are used for analysing the structure in the fire situation. The buckling curve of hot rolled sections subjected to fire has been studied in an ECSC research project (Schleich et al.
1988) and the results of this work have been incorporated in the Eurocode. The main results of this research work can also be found in Talamona et al. (1997) and in Franssen et al. (1998). The
proposed equations have a form that is very similar to those proposed at normal temperature; the main differences are:

1. There is no longer a set of different buckling curves depending on the shape and dimensions of the cross-section or on the buckling axis, as is the case at room temperature.
2. The buckling curve now depends on the yield strength at room temperature, as was the case in some preliminary drafts of Eurocode 3 – Part 1, although this distinction had not been maintained in the final draft of the Eurocode for room temperature.
The successive steps to be followed to determine the design buckling resistance Nb,fi,t,Rd of a member in compression with a uniform temperature θa are:

1. Determine the non-dimensional slenderness λ based on material properties at room temperature, but using the buckling length in the fire situation as explained above, see Equation 5.12.

λ = [lfl / √(I/A)] / [π √(E/fy)]   (5.12)

where:
I is the second moment of area of the cross-section,
A is the area of the cross-section.

2. Determine the non-dimensional slenderness for the temperature θa, λθ, according to Equation 5.13.

λθ = λ √(ky,θ / kE,θ)   (5.13)

The term that multiplies the non-dimensional slenderness in Equation 5.13 is the inverse of the term that is present in Equation 5.6 and plotted on Figure 5.4. This reflects the fact that, because the Young's modulus decreases with temperature faster than the yield strength, the non-dimensional slenderness is higher at elevated temperatures, except for temperatures beyond 870°C, i.e. at temperatures that are not practically relevant.

3. Determine the imperfection factor of the utilised steel according to Equation 5.14.

α = 0.65 √(235/fy)   (5.14)

where fy is the yield strength in N/mm².

4. Determine the coefficient ϕθ according to Equation 5.15.

ϕθ = 0.5 (1 + α λθ + λθ²)   (5.15)

5. Determine the buckling coefficient according to Equation 5.16.

χfi = 1 / [ϕθ + √(ϕθ² − λθ²)]   (5.16)

6. Determine the buckling resistance according to Equation 5.17.

Nb,fi,θ,Rd = χfi A ky,θ fy / γM,fi   (5.17)
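The six steps above translate directly into a short function. The section properties and reduction factors in the example call are illustrative only; ky,θ and kE,θ would normally come from the Annex II table:

```python
import math

# The six-step buckling procedure (Eq. 5.12 to 5.17) collected into one
# function.  ky_theta and kE_theta must be supplied by the caller; the
# numbers in the example call are illustrative, not from a catalogue.

def buckling_resistance_fire(A, I, l_fl, fy, E, ky_theta, kE_theta,
                             gamma_Mfi=1.0):
    """Design buckling resistance N_b,fi,θ,Rd in N."""
    lam = (l_fl / math.sqrt(I / A)) / (math.pi * math.sqrt(E / fy))  # Eq. 5.12
    lam_theta = lam * math.sqrt(ky_theta / kE_theta)                 # Eq. 5.13
    alpha = 0.65 * math.sqrt(235.0 / fy)                             # Eq. 5.14
    phi = 0.5 * (1.0 + alpha * lam_theta + lam_theta ** 2)           # Eq. 5.15
    chi = 1.0 / (phi + math.sqrt(phi ** 2 - lam_theta ** 2))         # Eq. 5.16
    return chi * A * ky_theta * fy / gamma_Mfi                       # Eq. 5.17

# Hypothetical column, buckling length 0.5 L = 1750 mm, about 600 °C
N = buckling_resistance_fire(A=5380.0, I=9.25e6, l_fl=1750.0,
                             fy=235.0, E=210000.0,
                             ky_theta=0.47, kE_theta=0.31)
print(round(N / 1000.0, 1), "kN")
```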
Generally speaking, the above procedure has to be repeated twice, once for each buckling plane. In fact, it is sufficient to duplicate the first step and to pursue steps 2 to 6 for the plane that has
the highest non-dimensional slenderness. If the temperature distribution is non-uniform, the design fire resistance may be calculated according to the same procedure but on the basis of the maximum
steel temperature. Yet, this is only admitted when designing for a nominal fire exposure. It has indeed been shown (Anderberg 2002) that the lateral displacements that may be created by the non-uniform
temperature can have a negative effect that outweighs the beneficial effect created by some parts of the section being colder than the maximum temperature. This is especially the case for slender
members. Attention must be paid particularly to cantilevered columns, as encountered in fire resistant walls with no lateral
support at the top. Thus, if the design is based on a realistic fire exposure, a similar degree of sophistication should be exercised in the mechanical analysis and these effects must be taken into
account in a quantitative manner (utilising an advanced calculation model). This is clearly a case when, according to 2.4.2 (4) of Eurocode, “the effects of thermal deformations resulting from
thermal gradients across the cross-section need to be considered". If, on the other hand, the fire exposure is represented by a nominal fire curve with its inherent arbitrary character, then the Eurocode also permits an approximation in the mechanical analysis, using the simple design equations based on the maximum temperature. Although this is not explicitly mentioned in the
Eurocode, it has to be recognised that a non-uniform temperature distribution that is symmetric in the section, for example the web of an I profile being hotter than the two flanges, does not produce
any lateral displacement, and it could be admitted in that case to use the simple design method, even if the fire exposure is not represented by a nominal curve. The restriction should apply to those non-symmetric temperature distributions that create lateral displacements, for example one of the flanges being colder than the other one. Because the non-dimensional slenderness in the fire situation
λθ depends on the temperature, an iterative procedure appears if the critical temperature corresponding to a given applied load has to be determined (verification in the temperature or in the time
domain, see Section 5.3). Convergence is usually very fast and one single iteration is usually sufficient if, for the first determined temperature, Equation 5.13 is approximated by Equation 5.18.

λθ = 1.2 λ   (5.18)
Application of the above equation leads to a first approximation of the critical temperature. The whole process can be repeated once with the exact Equation 5.13 being now used instead of Equation
5.18, with this first temperature being used to determine the non-dimensional slenderness. It will be observed that the second determined value for the temperature is not that much different from the
first one, and the iteration process need not be continued. The research that formed the basis of the proposed equation dealt mainly with hot rolled I sections. It seemed quite logical to extend
the results to welded I sections, probably because the influence of residual stresses that trigger a different behaviour at room temperature is less pronounced at elevated temperatures. For sections
with a totally different shape on the other hand, like for example angles or circular and rectangular hollow sections, utilisation of the proposed equation is indeed an extrapolation of the generated
results to shapes that have not been considered in the study. This is the only alternative until further studies are undertaken on these types of section. Numerical analysis tools that
have been established for the analysis of columns at room temperature would thus not produce the appropriate result even if the appropriate buckling length is introduced, because the relative
slenderness is now a function of the temperature and because a buckling curve that is specific to the fire situation must be used.
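The one-iteration procedure for the critical temperature described above can be sketched as follows. The reduction-factor functions are coarse hypothetical fits, not the Eurocode table values, and all numbers are invented for illustration:

```python
import math

# One-iteration determination of the critical temperature, using the
# Eq. 5.18 shortcut lam_theta = 1.2 * lam for the first pass.  ky and kE
# below are coarse hypothetical fits, NOT the Eurocode table values.

def ky(theta):   # hypothetical linear fit to the yield strength reduction
    return max(0.0, min(1.0, 1.0 - (theta - 400.0) / 450.0))

def kE(theta):   # hypothetical linear fit to the stiffness reduction
    return max(0.0, min(1.0, 1.0 - (theta - 100.0) / 800.0))

def chi_fi(lam_theta, fy):
    alpha = 0.65 * math.sqrt(235.0 / fy)                  # Eq. 5.14
    phi = 0.5 * (1.0 + alpha * lam_theta + lam_theta ** 2)
    return 1.0 / (phi + math.sqrt(phi ** 2 - lam_theta ** 2))

def critical_temperature(lam_theta, load, A, fy):
    """Scan for the temperature at which the buckling resistance drops
    to the applied load, for a fixed non-dimensional slenderness."""
    theta = 400.0
    while theta < 850.0 and chi_fi(lam_theta, fy) * A * ky(theta) * fy > load:
        theta += 1.0
    return theta

A, fy, lam, load = 5380.0, 235.0, 0.45, 300e3

theta_1 = critical_temperature(1.2 * lam, load, A, fy)    # first pass, Eq. 5.18
lam_theta = lam * math.sqrt(ky(theta_1) / kE(theta_1))    # refine with Eq. 5.13
theta_2 = critical_temperature(lam_theta, load, A, fy)    # second pass
print(theta_1, theta_2)   # the two estimates are close
```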
5.6.4 Beams with Class 1, 2 or 3 cross-sections
It has first to be recognised that EN1993-1-2 does not give any definition of what a beam is. Because a separate section is dedicated to members under combined compression and bending, it can be
concluded that a beam is a member under simple bending. Contrary to what is mentioned in the heading of this section, the Eurocode does not propose any method to design a beam; it simply gives
a method to determine the resistance of a section in bending or in shear. The methods presented here by the authors are direct extrapolations of the methods used at room temperature.

5.6.4.1 Resistance in shear

The design shear resistance should be determined from Equation 5.19.

Vfi,t,Rd = ky,θ,web VRd [γM,0 / γM,fi]   (5.19)

where:
θweb is the average temperature in the web of the section,
ky,θ,web is the reduction factor for the yield strength of steel at the average temperature of the web,
VRd is the shear resistance of the gross cross-section for normal temperature design according to EN 1993-1-1.
The fact that the average temperature of the web is mentioned here does not imply that the hypothesis of a uniform temperature is not admitted. Either a non-uniform distribution is considered, in
which case the average temperature in the web is naturally considered for the shear resistance, or a uniform distribution is considered, in which case the average temperature in the web is equal to
the uniform temperature in the section. The method for designing a beam in shear is to verify that Equation 5.20 is respected in any section of the beam.

Vfi,Ed ≤ Vfi,t,Rd   (5.20)
where Vfi,Ed is the shear force in the section in the fire design situation. The shear force has to be determined by an elastic analysis of the effects of action in the beam if the cross-section is
classified as Class 2 or Class 3, and by a plastic analysis in case of a Class 1 cross-section. Indeed, in the latter case, the redistribution of plastic bending moment produced by the formation of
plastic hinges also leads to a modification of the reaction forces and hence of the shear forces in the beam. The shear force has also to be determined in order to take into account its effect on the
bending resistance, see Sections 5.6.4.2 and 5.8.3.

5.6.4.2 Resistance in bending

5.6.4.2.1 Uniform temperature distribution

The design moment resistance of a section with a uniform temperature is given by Equation 5.21 proposed in the Eurocode or, equivalently, by Equation 5.22.

Mfi,θ,Rd = ky,θ [γM,0 / γM,fi] MRd   (5.21)
Mfi,θ,Rd = ky,θ [fy / γM,fi] W   (5.22)

where:
MRd is the plastic or elastic (depending on the section classification) moment resistance of the gross cross-section for normal temperature design, allowing for the effects of shear if necessary, according to EN 1993-1-1,
W is the plastic modulus of the section Wpl for a Class 1 or Class 2 section, or the elastic modulus of the section Wel for a Class 3 section.

Note: Equation
5.21 and the comment that MRd has to be reduced for the effects of shear according to EN 1993-1-1 may lead to the conclusion that the ratio at room temperature VEd /Vpl,Rd has to be considered in the
reduction. In fact, Equation 5.22 shows that it is more consistent to consider the ratio at elevated temperature Vfi,Ed /Vfi,t,Rd . The proposed method for designing a beam depends on the class of the section.
• If the cross-section is a Class 3 section, it has to be verified that the elastically determined bending moment in the fire design situation does not exceed the elastic design moment resistance in any section of the beam.
• If the cross-section is a Class 2 section, it has to be verified that the elastically determined bending moment in the fire design situation does not exceed the plastic design moment resistance in any section of the beam. In other words, the formation of one single plastic hinge is allowed, in the section where Mfi,Ed is equal to Mfi,θ,Rd (plastic value), but no redistribution of bending moments is admitted.
• If the cross-section is a Class 1 section, a redistribution of bending moments may occur and lead to a plastic mechanism where the bending moment in the plastic hinges is determined by Equation 5.21 or 5.22 (plastic value). The resistance of the beam is the same as for a Class 2 section in a statically determinate beam, but is increased in a statically indeterminate beam.
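Equation 5.22 translates directly into code. Below is a minimal sketch with a hypothetical function name; the numeric values (a Class 1 section in S235 with Wpl = 628.4 × 10³ mm³, roughly an IPE 300, at a uniform temperature of 600 °C, for which Table 3.1 of EN 1993-1-2 gives ky,θ = 0.47) are illustrative only.

```python
# A minimal sketch of Equation 5.22 (hypothetical function name and units).
def moment_resistance_fi(k_y_theta, f_y, W, gamma_M_fi=1.0):
    """M_fi,theta,Rd = k_y,theta * (f_y / gamma_M,fi) * W  (Eq. 5.22).

    W is W_pl for a Class 1 or 2 section, W_el for a Class 3 section.
    With f_y in N/mm2 and W in mm3, the result is in N.mm.
    """
    return k_y_theta * (f_y / gamma_M_fi) * W

# Illustrative: Class 1 section, S235, W_pl = 628.4e3 mm3 (roughly an
# IPE 300), uniform temperature 600 C, k_y,theta = 0.47 from Table 3.1
M_fi = moment_resistance_fi(0.47, 235.0, 628.4e3)  # about 69.4e6 N.mm
```

With γM,fi = 1.0 the resistance at 600 °C is simply 47% of the cold plastic resistance, which is a convenient sanity check.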
5.6.4.2.2 Non-uniform temperature distribution

If the section is a Class 1 or a Class 2 section, it is possible to determine the plastic design resistance of the section (Mfi,t,Rd ) taking into account the value of the yield strength in each part of the section. The Eurocode specifies the following equation to compute Mfi,t,Rd .
Mfi,t,Rd = Σi Ai zi ky,θ,i fy,i /γM,fi (5.23)
where: Ai is the cross-sectional area of an elemental part of the cross-section with a temperature θi , and zi is the distance from the plastic neutral axis to the centroid of the elemental area Ai .
The position of the plastic neutral axis changes continuously during the course of the fire and has to be determined at the relevant moment in time. This position can be determined from the fact that
the plastic resistance on one side of the axis is equal to the plastic resistance on the other side of the axis.
Designing Steel Structures for Fire Safety
There is a sentence in Clause 4.2.3.3 (2) of EN 1993-1-2 that says, about Equation 5.23, that fy must be “taken as positive on the compression side of the plastic neutral axis and negative on the
tension side’’. This sentence is without any merit and must be ignored. It would only make sense to take the yield strength as negative on the tension side if the distance zi would also be taken as
negative on the tension side, which is physically not correct because a distance is always positive. It makes sense to count some elemental areas as positive and some others as negative in Equation
5.24 if it is used in order to determine the position of the plastic neutral axis (this equation is valid if the yield strength is uniform in the section).
Σi (±Ai ) ky,θ,i = 0 (5.24)
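The neutral-axis search behind Equations 5.23 and 5.24 can be sketched numerically: discretise the section into elemental areas, bisect on the force balance, then sum the moment contributions. This is an illustrative strip model, not a Eurocode procedure; the function name and the assumption of a single nominal fy with per-strip ky,θ,i factors are choices of this sketch.

```python
# Illustrative sketch of the neutral-axis search (Eq. 5.24) and the plastic
# moment resistance (Eq. 5.23) for a section with non-uniform temperature.
def plastic_na_and_moment(strips, f_y, gamma_M_fi=1.0):
    """strips: list of (A_i [mm2], y_i [mm], k_y,theta,i) elemental areas.

    The plastic neutral axis is found by bisection on the force balance of
    Eq. 5.24 (areas above the axis count +, areas below count -), then
    Eq. 5.23 is evaluated with z_i = |y_i - y_na|, distances kept positive.
    Thin strips are used so that no strip straddles the axis; the required
    subdivision of e.g. an I-section web is obtained by discretising finely.
    """
    lo = min(y for _, y, _ in strips)
    hi = max(y for _, y, _ in strips)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        balance = sum((1.0 if y >= mid else -1.0) * A * k
                      for A, y, k in strips)
        if balance > 0.0:
            lo = mid   # too much capacity above the trial axis: raise it
        else:
            hi = mid
    y_na = 0.5 * (lo + hi)
    M = sum(A * abs(y - y_na) * k * f_y / gamma_M_fi for A, y, k in strips)
    return y_na, M
```

For a uniform temperature the result collapses to ky,θ fy Wpl, which gives a quick sanity check of the discretisation.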
More important is to recognise that if an elemental area comprises the neutral axis, it must then be divided into two sub-areas, one above and one below the neutral axis. This is the case,
for example, in the web of an I section or in the webs of a rectangular hollow structural section. There is in the Eurocode no equation similar to Equation 5.23 for Class 3 sections. Nothing, however, theoretically prevents determining the location of the elastic neutral axis, according to Equation 5.25 for example.
Σi Ai (yi − y) kE,θ,i = 0 (5.25)
where: yi is the co-ordinate, in an arbitrary reference axis, of the centroid of the elemental area Ai ; y is the co-ordinate, in the same reference axis, of the elastic neutral axis of the section; kE,θ,i is the reduction factor of the Young's modulus of steel at temperature θi .
The elastic stiffness in bending could then be determined according to Equation 5.26.
EIel,t = Σi Ai zi² kE,θ,i E (5.26)
where zi would now be the distance from the elastic neutral axis. The stress should then be checked against the yield strength in each area according to Equation 5.27.
(MEd,fi zi /EIel,t ) kE,θ,i E ≤ ky,θ,i fy (5.27)
Yet, the procedure described by Equations 5.25 to 5.27 is not proposed in the Eurocode for Class 3 sections. The simpler procedure described hereafter is proposed in EN 1993-1-2.
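Although not in the Eurocode, the elastic procedure of Equations 5.25 to 5.27 is short enough to sketch: Equation 5.25 reduces to a stiffness-weighted centroid, Equation 5.26 to a stiffness-weighted second moment of area, and Equation 5.27 to an elastic stress check in each elemental area. The strip representation, the function name and the value taken for E are assumptions of this sketch.

```python
# Illustrative sketch of Equations 5.25-5.27 (not a Eurocode procedure).
E_STEEL = 210_000.0  # N/mm2, assumed nominal Young's modulus

def elastic_check(strips, M_Ed_fi, f_y):
    """strips: list of (A_i [mm2], y_i [mm], k_E,theta,i, k_y,theta,i).

    Returns (y_na, EI_el,t, ok), where ok is True if the elastic stress
    check of Eq. 5.27 holds in every elemental area.
    """
    # Eq. 5.25 rearranged: the elastic neutral axis is the
    # stiffness-weighted centroid of the elemental areas
    y_na = (sum(A * y * kE for A, y, kE, _ in strips)
            / sum(A * kE for A, y, kE, _ in strips))
    # Eq. 5.26: EI_el,t = Sum A_i z_i^2 k_E,theta,i E
    EI = sum(A * (y - y_na) ** 2 * kE * E_STEEL for A, y, kE, _ in strips)
    # Eq. 5.27: elastic stress in each area vs. reduced yield strength
    ok = all(
        M_Ed_fi * abs(y - y_na) * kE * E_STEEL / EI <= kY * f_y
        for A, y, kE, kY in strips
    )
    return y_na, EI, ok
```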
In a member with a non-uniform temperature distribution, the design bending moment resistance of a cross-section may be determined from Equation 5.28 for Class 1 or Class 2 sections (in which case this is an alternative to Equation 5.23) or from Equation 5.29 for Class 3 sections (in which case this is the standard procedure).
Mfi,t,Rd = ky,θ [fy /γM,fi ] Wpl /(κ1 κ2 ) (5.28)
Mfi,t,Rd = ky,θ,max [fy /γM,fi ] Wel /(κ1 κ2 ) (5.29)
where: θ in Equation 5.28 is, for Class 1 and 2 sections, a uniform steel temperature in a section that is not thermally influenced by the supports; θmax in Equation 5.29 is, for Class 3 sections, the maximum steel temperature reached at time t; κ1 is an adaptation factor for non-uniform temperature in the cross-section; κ2 is an adaptation factor for non-uniform temperature along the beam.
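A hedged sketch of Equations 5.28 and 5.29, with the adaptation factors dividing the uniform-temperature resistance; the function name and all numeric inputs are illustrative assumptions, not Eurocode data.

```python
# Sketch of Equations 5.28/5.29: uniform-temperature bending resistance
# divided by the adaptation factors kappa1 and kappa2.
def M_fi_t_Rd(k_y_theta, f_y, W, kappa1, kappa2, gamma_M_fi=1.0):
    """Eq. 5.28 (W = W_pl, Class 1/2) or Eq. 5.29 (W = W_el, Class 3,
    with k_y,theta evaluated at the maximum steel temperature)."""
    return k_y_theta * (f_y / gamma_M_fi) * W / (kappa1 * kappa2)

# Illustrative: unprotected beam carrying a concrete slab (kappa1 = 0.70),
# at an intermediate support of a statically indeterminate beam
# (kappa2 = 0.85) versus in the span (kappa2 = 1.0)
M_support = M_fi_t_Rd(0.47, 235.0, 628.4e3, kappa1=0.70, kappa2=0.85)
M_span = M_fi_t_Rd(0.47, 235.0, 628.4e3, kappa1=0.70, kappa2=1.0)
```

Since the κ factors divide the resistance and are at most 1.0, they can only increase Mfi,t,Rd, which is why the limitation to MRd discussed below deserves attention.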
The following factors should be given due consideration while using Equations 5.28 and 5.29.
• Equations 5.28 and 5.29 have been developed for the simplest case, in which there is no reduction of the bending resistance due to the effects of shear. The effect of shear on bending moments has to be taken into account if necessary.
• Equations 5.28 and 5.29 take into account the fact that the temperature in steel members may be colder (at least in the heating phase, for real fires) in the zones
near the support than in the zones that are far away from the supports, at mid span for example. This is because the material that physically constitutes the support of a beam may shield the beam
locally from the fire exposure and may act as heat sink. This is the case, for example, if the support is a masonry or a concrete wall. It has indeed been observed after real fires or in experimental
tests that the plastic hinge leading to failure was displaced from the support toward the centre of the span by a distance ranging from 20 to 100 cm. Yet, the real temperature distribution near the
supports cannot be determined precisely, and a simple method only allows determination of the temperature in the central zones of the beam. Because the structural analysis will be based on this
undisturbed temperature distribution, a correction factor has been introduced, namely the factor κ2 . The Eurocode says that this factor must be given the value of 0.85 at the supports of statically
indeterminate beams and 1.0 in all other cases. The probable reason is that the Eurocode considers a statically determinate beam as a beam simply supported on two end supports. In that case, even
if the temperature may be colder near the supports, this has no effect on the fire resistance because the bending moment at supports is close to zero. In a continuous beam, on the other hand, the
cold effect may be significant if it occurs in the intermediate
supports where the bending moment has the highest values. Theoretically speaking, the same effect could also exist at the single support of a cantilever beam which is also a statically determinate
beam, but the Eurocode does not allow taking the effect into account in that case, perhaps because the formation of a single plastic hinge in a statically indeterminate beam does not lead
immediately to failure whereas one single hinge leads to failure in a statically determinate beam and more caution has to be exercised. By analogy, the authors of this book recommend that the effect
not be taken into account in the most exterior supports of a continuous beam with a cantilever part because one single hinge at that location leads to the immediate failure of the cantilever. The
authors also believe that the permission to use the value of κ2 = 0.85 is not automatically granted at every support of every statically indeterminate beam; the designer must be convinced that the
temperature at that location is really lower than in the central parts of the beam, and the rationale for this should be justified. Such an effect, for example, would certainly not be encountered if
the intermediate supports of the beam are steel tension rods or axially loaded steel columns with a thermal massivity that is smaller than the thermal massivity of the beam, which means that the
temperature might actually be somewhat higher at the supports than in the span. It has to be noted that this effect of colder zones near the supports is not systematically taken into account if the
analysis of a continuous steel beam is performed by the advanced calculation model. Indeed, taking it into account correctly would require a 3D thermal modelling of the zone near the supports. It
would be possible to introduce a small length of the beam near the support that is shielded from the effects of the fire, but this would be an arbitrarily introduced approximation. This effect is thus generally not considered by the advanced calculation model, with the consequence that the simple model can yield results that are on the unsafe side compared to the more advanced model.
• Equations 5.28
and 5.29 take into account the fact that the temperatures in the section of a beam that supports a concrete slab are somewhat lower than the temperatures that are calculated by the simple method.
Indeed, the simple method allows taking into account the fact that the upper side of the top flange is not submitted to the fire, by a simple modification of the thermal massivity. What is not taken
into account by the simple model is the heat sink effect, the fact that some heat is transferred from the top flange of the section to the concrete deck, which delays the temperature increase in the
steel section. In order to take this beneficial effect into account, the factor κ1 is given the value of 0.70 for unprotected beams and 0.85 for protected beams exposed on three sides, with a
composite or concrete slab on side four. Although this is not explicitly mentioned in Eurocode 3, the authors believe that, in order to be consistent with Eurocode 4, a steel beam supporting a
composite slab should be considered as heated on 3 sides only if the area of the upper flange that is covered with the corrugated steel profile of the composite slab is at least equal to 90% of the
whole area of the upper flange; if not, the steel section should be considered as heated on four sides and the value of the factor κ1 kept to 1.0.
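The κ values scattered through the discussion above can be collected into a small selection helper. Only the rules explicitly stated in the text are encoded (κ1 = 1.0 for four-sided exposure; 0.70 unprotected and 0.85 protected for three-sided exposure with a slab on side four; κ2 = 0.85 at the supports of statically indeterminate beams, 1.0 otherwise); the function names are inventions of this sketch, and the caveats about justifying κ2 at a given support still apply.

```python
# Selection of the adaptation factors, as stated in the text (sketch only).
def kappa1(exposed_sides, protected=False):
    """Adaptation factor for non-uniform temperature in the cross-section."""
    if exposed_sides == 4:
        return 1.0
    if exposed_sides == 3:  # composite or concrete slab on side four
        return 0.85 if protected else 0.70
    raise ValueError("only 3- and 4-sided exposure covered in this sketch")

def kappa2(at_support_of_statically_indeterminate_beam):
    """Adaptation factor for non-uniform temperature along the beam."""
    return 0.85 if at_support_of_statically_indeterminate_beam else 1.0
```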
Precise evaluation of the plastic capacity of a steel section covered with a concrete slab, for example by means of an advanced calculation model taking the heat sink effect into account, fails by
far to show such a huge increase compared to the plastic capacity that can be calculated when the heat sink is not taken into account. Burgess et al. (1991) have shown that this beneficial effect is
about 7% in a symmetrical section. In fact, the value of 0.70 for unprotected sections (i.e. an increase of (1 − 0.7)/0.7 = 43%) has been considered by the draft team of EN 1993-1-2 on the basis of
results from an experimental test series performed in the U.K. (Wainman & Kirby 1988). In fact, the test reports mention that “thin gauge steel reinforcing tang’’ had been welded on the top flange of
the sections and cast into the concrete of the supported slabs and may have thus produced some level of composite action. The beneficial effect accounting for the heat sink effect was reduced by a
factor of 2 in the Eurocode, from 0.70 to 0.85, for protected beams in order to reflect the fact that the advanced numerical calculations show an even less pronounced effect in that case. As long as
the temperature at mid span does not reach 560◦ C, the reduction factor of the effective yield strength ky,θ is higher than 0.595. Strict application of Equation 5.28 or 5.29, in which κ1 κ2 = 0.70 ×
0.85 = 0.595, would then yield a resistance to bending at the support that is higher at elevated temperatures than at room temperature! It may be wise to limit Mfi,t,Rd to MRd . The temperature that
has to be taken into account to evaluate the bending resistance of the section is not exactly the same in Equation 5.28 and in Equation 5.29. – For Class 1 and Class 2 sections, see Equation 5.28,
the temperature is the uniform temperature calculated in the central part of the beam, far away from the effect of the supports. – For Class 3 sections, see Equation 5.29, the temperature is “the
maximum steel temperature reached at time t’’, and it is not straightforward to know which temperature exactly has to be taken into account. It seems obvious to take also this temperature in a
section that is not thermally influenced by the supports because: firstly, it is not possible to calculate the temperature at the supports; secondly, there would be no reason to use a κ2 factor if
the temperature would be taken in the region of the supports; thirdly, the definition of this temperature in Clause 4.2.3.4 (2) is finished by a comment that reads “see 3’’, which might be understood
as “see 4.2.3.3’’, i.e. far away from the supports as for Class 1 and Class 2 sections. Where in the section is the maximum temperature? The opinion of the authors is that the designer may consider
the temperature in the section as uniform if the section is exposed on four sides, in which case only the factor κ2 would be considered (κ1 = 1.0). If the beam supports a concrete slab, it is not
easily determined a priori where the maximum temperature is, but it seems certain that this is not in the top flange. One may argue that the maximum temperature is in the web because this is the
thinnest plate of the section, but one could also argue that, on the contrary, the web could feel the cooling influence from the top flange, especially if the section is not very deep. In fact, the
most practical way is to calculate the temperature as if the section was exposed on four sides, i.e. on the basis of
Table 5.3 Parameters for beam design

Section Class | Exposed on all four sides | Exposed on three sides with slab on side four
1, 2 | κ1 = 1.0; θa computed considering A/V for four sides | κ1 = 0.7; θa computed considering A/V for three sides
3 | κ1 = 1.0; θa,max computed considering A/V for four sides | κ1 = 0.7; θa,max computed considering A/V for four sides
the massivity factor of the section exposed on four sides. This is in fact the temperature that would be calculated if the influence of the concrete deck was totally ignored. This temperature may
also be considered to correspond, more or less, to the average temperature in the lower half of the section. Table 5.3 summarizes the above considerations and presents the parameters to be considered
for the beam design, as a function of the cross-section class and exposure. The proposed method for designing a beam in the case of a non-uniform temperature distribution is the same as the one proposed in Section 5.6.4.2.1 for a uniform temperature distribution. The bending capacity of the section still depends on the class of the section. The difference is that the plastic capacity may not be the same at the supports and in the spans because of the κ2 correction factor taking into account the effect of the supports.

5.6.4.3 Resistance to lateral torsional buckling

The design lateral torsional buckling resistance moment of a beam (Mb,fi,t,Rd ) should be determined according to Equation 5.30.
Mb,fi,t,Rd = χLT,fi Wy ky,θ,com fy /γM,fi (5.30)
where: Wy is the plastic modulus of the section (Wpl,y ) for Class 1 or 2 sections, or the elastic modulus of the section (Wel,y ) for Class 3 sections; ky,θ,com is the reduction factor for the yield strength of steel at the maximum temperature in the compression flange; χLT,fi is the coefficient for lateral torsional buckling, calculated from Equation 5.31.
χLT,fi = 1/[ϕLT,θ,com + √(ϕLT,θ,com² − λLT,θ,com²)] (5.31)
with
ϕLT,θ,com = 0.5 (1 + α λLT,θ,com + λLT,θ,com²)
α = 0.65 √(235/fy )
λLT,θ,com = λLT √(ky,θ,com /kE,θ,com )
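Equation 5.31 and its auxiliary expressions can be sketched as a single function; the function name is an invention of this sketch, and the reduction factors used in the test values (ky,θ = 0.47 and kE,θ = 0.31 at 600 °C, from Table 3.1 of EN 1993-1-2) are illustrative.

```python
from math import sqrt

# Sketch of the lateral torsional buckling reduction factor in fire
# (Eq. 5.31 with its auxiliary expressions). lambda_LT is the
# room-temperature non-dimensional slenderness of the beam.
def chi_LT_fi(lambda_LT, k_y_theta_com, k_E_theta_com, f_y):
    lam = lambda_LT * sqrt(k_y_theta_com / k_E_theta_com)  # slenderness at theta
    alpha = 0.65 * sqrt(235.0 / f_y)                       # imperfection factor
    phi = 0.5 * (1.0 + alpha * lam + lam ** 2)
    return 1.0 / (phi + sqrt(phi ** 2 - lam ** 2))         # Eq. 5.31
```

Note that, unlike the cold buckling curves, this fire curve has no plateau: any non-zero slenderness gives χLT,fi < 1.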
The temperature considered for evaluating ky,θ and λLT,θ can conservatively be taken as the uniform temperature in Class 1 and 2 sections and as the maximum temperature for Class 3 sections. The
authors recommend calculating this temperature according to Table 5.3. When designing a beam for lateral torsional buckling, the following equation is to be satisfied:
Mfi,Ed,max ≤ Mb,fi,t,Rd
where Mfi,Ed,max is the maximum bending moment on the beam between two lateral restraints, in the fire design situation.
Note: EN 1993-1-2 does not take into account the moment
distribution between lateral restraints of the beam in the computation of Mb,fi,t,Rd , which means that a uniform distribution of the maximum moment along the beam is considered, normally leading to
conservative results. On the contrary, this aspect is considered in EN 1993-1-1 through a factor f that increases the resisting moment, computed as a function of the shape of the bending moment diagram. By means of numerical investigations, Vila Real et al. (2004, 2005) and Lopes et al. (2004) showed that the shape of the bending moment diagram along the beam is also important in case of fire, and proposed a formula for the factor f to be applied in the fire design situation, as is the case at room temperature.

5.6.5 Members with Class 1, 2 or 3 cross-sections, subject to combined bending and axial compression
This section is related to members that are submitted to axial compression and to bending. This is a very common situation, for example, in moment resisting frames. The design resistance of such a
member subjected to combined bending and axial compression and a uniform temperature is verified according to Equations 5.34 and 5.35.
Nfi,Ed /(χmin,fi A ky,θ fy /γM,fi ) + ky My,fi,Ed /(Wy ky,θ fy /γM,fi ) + kz Mz,fi,Ed /(Wz ky,θ fy /γM,fi ) ≤ 1 (5.34)
Nfi,Ed /(χz,fi A ky,θ fy /γM,fi ) + kLT My,fi,Ed /(χLT,fi Wy ky,θ fy /γM,fi ) + kz Mz,fi,Ed /(Wz ky,θ fy /γM,fi ) ≤ 1 (5.35)
where: Wy , Wz are the plastic moduli Wpl,y , Wpl,z for Class 1 and 2 sections, or the elastic moduli Wel,y , Wel,z for Class 3 sections; χmin,fi is the smaller of χy,fi and χz,fi , these being calculated from Equation 5.16; χLT,fi is calculated from Equation 5.31.
kLT = 1 − µLT Nfi,Ed /(χz,fi A ky,θ fy /γM,fi ) (5.36)
with µLT = 0.15 λz,θ βM,LT − 0.15 ≤ 0.9 (5.37)
ky = 1 − µy Nfi,Ed /(χy,fi A ky,θ fy /γM,fi ) (5.38)
with µy = (1.2 βM,y − 3) λy,θ + 0.44 βM,y − 0.29 ≤ 0.8 (5.39)
kz = 1 − µz Nfi,Ed /(χz,fi A ky,θ fy /γM,fi ) (5.40)
with µz = (2 βM,z − 5) λz,θ + 0.44 βM,z − 0.29 ≤ 0.8 and λz,θ ≤ 1.1 (5.41)
βM , the equivalent uniform moment factor, is defined in Figure 5.6, taken from Eurocode 3. The following points should be kept in mind in the application of the above equations.
• Some confusion may arise from the fact that, in Equations 5.34 to 5.41, the subscripts y and z normally refer to the two main axes of the section, except in ky,θ where y refers to fy .
• It must be mentioned that, in some drafts of prEN 1993-1-2, Equations 5.38 and 5.40 are not correctly reproduced. An editorial error led to the erroneous suppression of the factors µy in 5.38 and µz in 5.40.
• The shape of the proposed equations is similar to the shape of the equations that were present for room temperature in ENV 1993-1-1 and, hence, is different from the shape of the equations now proposed for room temperature in EN 1993-1-1. The reasons behind this inconsistency are discussed hereafter and illustrated in Figure 5.5.
In fact, the research work carried out in the early 1990s for deriving buckling curves and M-N interaction equations for steel sections subjected to fire took as its basis the Eurocode provisions existing at that time, i.e. ENV 1993-1-1. The equations proposed for elevated temperatures in ENV 1993-1-2 (1995) are thus similar to the ENV equations at room temperature published at that time.
When the draft team of EN 1993-1-2 was at work, its members were aware of the fact that, at the same time, the draft team of EN 1993-1-1 was working on the interaction equations at room
Fig. 5.5 Evolution of the different Eurocodes. [Diagram: for room temperature design, ENV 1993-1-1 (1992) evolved, through research work (1998–2002), into EN 1993-1-1 (2003); for fire situation design, research work (1992–1995) led to ENV 1993-1-2 (1995), then further research work (1996–2002) led to EN 1993-1-2 (2005); a possible future harmonisation between the two branches is indicated.]
temperature, with the aim of proposing an improved model for ambient conditions. However, the draft team decided to adhere, for the fire situation, to the equations proposed in ENV 1993-1-2 and to take them directly on board in EN 1993-1-2, with a new improvement related to lateral torsional buckling. The two main reasons were:
(a) the equations of ENV 1993-1-2 had been validated and calibrated in the fire situation, whereas the new equations introduced in EN 1993-1-1 had never been validated in the fire situation;
(b) two different proposals were under consideration by the draft team of EN 1993-1-1 and it was impossible for the members of EN 1993-1-2 to know which one would finally be chosen for the ambient situation.
As a consequence, in the current version of Eurocode 3, there is no similarity between the M-N interaction curves at room temperature and in the fire situation. Recently published work seems to indicate
that it could be possible to restore the similarity and that the interaction curves that are now present in the cold Eurocode could be adapted for the fire situation, taking into account the main
results of the scientific research made in the 1990s for the fire situation (Vila Real et al. 2003).
• As already mentioned in Section 5.6.3, the research that is the basis of the proposed equations dealt mainly with hot rolled I sections. For sections with a totally different shape, like for example
angles or circular and rectangular hollow sections, utilisation of the proposed equation is indeed an extrapolation of the obtained results to shapes that have not been covered in the original
studies. If the section has no weak axis, lateral torsional buckling is not possible and Equation 5.35 does not apply; only Equation 5.34 has to be used. Equations 5.34 and 5.35 in fact check the
stability of a member, but the resistance of the section is not checked. It has yet to be mentioned that the proposed equations have been derived on the base of a comprehensive set of numerical
simulations by the advanced calculation model. In these finite element analyses, the resistance of the sections was automatically taken into account and, if extensive yielding at one extremity of the
member occurred in case, for example, of a bi-triangular bending
Fig. 5.6 Equivalent uniform moment factor βM (taken from Eurocode 3):
– end moments M1 , ψM1 with −1 ≤ ψ ≤ 1: βM,ψ = 1.8 − 0.7ψ ;
– moments due to in-plane lateral loads: βM,Q = 1.3 for a uniformly distributed load, βM,Q = 1.4 for a concentrated load;
– moments due to in-plane lateral loads plus end moments: βM = βM,ψ + (MQ /∆M)(βM,Q − βM,ψ ), where MQ = |max M| due to lateral load only, ∆M = |max M| for a moment diagram without change of sign, and ∆M = |max M| + |min M| for a moment diagram with change of sign.
moment diagram, this was automatically reflected by a loss of equilibrium. It can thus be inferred that these equations for the stability of the member in fact include verification of equilibrium in the sections. It has to be noted that Eurocode 3 is not fully consistent with regard to consideration of the moment distribution in the element: while a uniform distribution of the maximum moment is considered in the verification of the beam resistance to lateral torsional buckling, see Section 5.6.4.3, the shape of the bending moment diagram is considered in the case of an element subjected to combined bending and axial compression.
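For strong-axis bending plus compression, the first interaction equation together with its µ and k factors can be sketched as follows. The function names, the omission of the weak-axis and lateral-torsional terms, and all numeric inputs in the usage comments are assumptions of this sketch.

```python
# Sketch of the M-N interaction check (strong-axis bending only): the k
# amplification factor, the mu factor for strong-axis buckling, and the
# left-hand side of the first interaction equation.
def k_factor(mu, N_fi_Ed, chi_fi, A, k_y_theta, f_y, gamma_M_fi=1.0):
    return 1.0 - mu * N_fi_Ed / (chi_fi * A * k_y_theta * f_y / gamma_M_fi)

def mu_y(beta_M_y, lam_y_theta):
    # mu_y = (1.2 beta_M,y - 3) lambda_y,theta + 0.44 beta_M,y - 0.29 <= 0.8
    return min((1.2 * beta_M_y - 3.0) * lam_y_theta
               + 0.44 * beta_M_y - 0.29, 0.8)

def utilisation(N_fi_Ed, My_fi_Ed, chi_min_fi, A, Wy, k_y_theta, f_y,
                beta_M_y, lam_y_theta, chi_y_fi, gamma_M_fi=1.0):
    """Left-hand side of the interaction check; must not exceed 1.0."""
    ky = k_factor(mu_y(beta_M_y, lam_y_theta), N_fi_Ed,
                  chi_y_fi, A, k_y_theta, f_y, gamma_M_fi)
    return (N_fi_Ed / (chi_min_fi * A * k_y_theta * f_y / gamma_M_fi)
            + ky * My_fi_Ed / (Wy * k_y_theta * f_y / gamma_M_fi))
```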
According to the original research work (Talamona 2005; Schleich et al. 1998), the correct expressions for µy and µz are:
µy = (2βM,y − 5) λy,θ + 0.44βM,y + 0.29 ≤ 0.8 with λy ≤ 1.1
and
µz = (1.2βM,z − 3) λz,θ + 0.71βM,z − 0.29 ≤ 0.8

5.6.6 Members with Class 4 cross-sections
Tension members can of course develop the full plastic capacity of the section because there is no possibility of local buckling of a plate in tension. These members are designed according to the
provisions of Section 5.6.2. For members submitted to any other effect of action, it may be assumed that the load bearing function of the member is ensured as long as the steel temperature in all
sections does not exceed a critical temperature, the recommended value of which is 350◦ C (to be chosen as a nationally determined parameter). This is indeed a very simple design method, but a very
restrictive one. In fact, Class 4 sections are those that have the thinnest plates and, hence, those that exhibit the fastest heating rates. Except when effective thermal protection is applied, these
sections attain the critical temperature in a very short time and usually fail to meet the required fire resistance time. A first possibility to go beyond this critical temperature failure criterion could
be to accept an elastic design based on the appropriate reductions of the yield strength of steel, provided that all parts of the section are shown to be in tension. Such a stress distribution would
in fact be in equilibrium for members submitted to tension and to a small amount of bending. In other words, the eccentricity of the applied axial force is small. For all other situations, Eurocode 3
allows a more precise determination of the fire resistance time explained in Annex E. In fact, the text of Eurocode 3 says “For further information see Annex E’’. Annex E gives more than just
information about the concept of critical temperature. It gives in fact a simple calculation model to be used for Class 4 cross-sections. This model is based on three basic concepts (Ranby 1998),
namely:
1. The same equations as those presented in Section 5.6.3 for members in compression, in Section 5.6.4 for beams and in Section 5.6.5 for combined bending and compression are used. In these equations, the area is replaced by the effective area and the section modulus by the effective section modulus in order to take local buckling into account.
2. These effective properties are based on the material properties
of steel at 20◦ C. This can lead to a curious situation. Indeed, the classification is more severe in the fire situation than at room temperature, see section 5.5 and Equation 5.7, and it could occur
that a cross-section is classified as Class 4 in the fire situation whereas it was classified as Class 3 at room temperature. In that case, the effective properties are equal to the basic properties
(no reduction) because the effective
properties are based on the properties at room temperature, i.e. for a Class 3 section.
3. In these equations, the design strength of steel should be taken as the 0.2 percent proof strength, for the
resistance to compression, to shear, as well as to tension. The evolution of the 0.2 percent proof strength is given as a function of temperature in Annex E. It can be seen that the reduction is nearly the same as the reduction exhibited by the Young's modulus.
5.7 Design in the temperature domain

Eurocode 3 dedicates a separate section, namely Section 4.2.4, to design in the temperature domain. The basic idea is to obtain the critical temperature directly from the load level or, as it is called in the Eurocode, the degree of utilisation. For all members in tension and for all members with a Class 1, 2 or 3 section, the degree of utilisation is obtained from Equation 5.42.
µ0 = Efi,d /Rfi,d,0 (5.42)
where: Efi,d is the design effect of actions in case of fire; Rfi,d,0 is the design resistance of the member in the fire situation, i.e. determined with the equations mentioned here in Section 5.6, but at time t = 0, i.e. for a temperature of 20◦ C.
The critical temperature is then given by Equation 5.43 as a function of the degree of utilisation.
θa,cr = 39.19 ln[1/(0.9674 µ0^3.833 ) − 1] + 482 (5.43)
In fact, working directly in the temperature domain with Equations 5.42 and 5.43 is only valid if the design resistance in case of fire Rfi,d,t is directly proportional to fy (θ), see Equation 5.44.
Rfi,d,t = m fy (θ) (5.44)
with m a constant. Indeed, in that case, the basic design equation can be transformed as follows:
Efi,d ≤ Rfi,d,t = m fy (θ) = m ky,θ fy = Rfi,d,0 ky,θ
ky,θ ≥ Efi,d /Rfi,d,0 = µ0
θcr = ky,θ⁻¹ (µ0 )
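The temperature-domain route of Equations 5.42 and 5.43 is easy to sketch. The function names are inventions of this sketch; the coefficients are those of the critical-temperature formula of EN 1993-1-2, and the example utilisation of 0.60 is illustrative.

```python
from math import log

# Sketch of the temperature-domain design route: degree of utilisation
# (Eq. 5.42) feeding the critical-temperature formula (Eq. 5.43). Only
# valid when the resistance is directly proportional to f_y(theta).
def degree_of_utilisation(E_fi_d, R_fi_d_0):
    return E_fi_d / R_fi_d_0  # Eq. 5.42

def theta_a_cr(mu_0):
    """Eq. 5.43; the Eurocode restricts it to mu_0 >= 0.013."""
    return 39.19 * log(1.0 / (0.9674 * mu_0 ** 3.833) - 1.0) + 482.0

# e.g. a member loaded to 60% of its fire resistance at time t = 0:
print(round(theta_a_cr(0.60)))  # 554 (degrees C)
```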
Fig. 5.7 Comparison between the inverse of the yield strength reduction factor ky,θ and Equation 5.43 (critical temperature, in °C, as a function of the degree of utilisation µ0 )
This shows that Equation 5.43, which gives the critical temperature as a function of the degree of utilisation, must be the inverse of the function that gives the reduction of the effective yield strength as a function of the temperature. Figure 5.7 shows the comparison between the inverse of ky,θ as taken from Table 3.1 of the Eurocode and Equation 5.43. It shows that the two curves are very close in the domain of application mentioned in the Eurocode, i.e. for degrees of utilisation not below 0.013, i.e. for temperatures not higher than 1135◦ C. A design should thus yield very similar results, be it carried out in the load domain or in the temperature domain. It is not forbidden, instead of Equation 5.43, to use exactly the inverse of the function ky,θ when working in the temperature domain, i.e. to interpolate in Table 3.1 of the Eurocode in order to determine θcr as a function of µ0 , although the heading of the table gives θ as a function of ky,θ . In that case, the
design in the temperature domain will give exactly the same result as a design in the load domain. Similar, or exactly equal results will be obtained only if the load bearing capacity in the fire
situation is strictly proportional to the effective yield strength, see Equation 5.44. If, on the contrary, the evolution of the Young's modulus also plays a role in the evolution of Rfi,d,t , it is then no longer possible to apply Equation 5.43 directly. The critical temperature in such a scenario can be determined only by successive verifications in the load domain. This is the case, for
example for columns under buckling, for members under combined bending and compression, for members under lateral torsional buckling and for Class 4 sections. This is explained in Clause 4.2.4 (2) of
the Eurocode with a statement: “Except when considering deformation criteria or when stability phenomena have to be taken into account’’. This should be interpreted as “Except when the Young’s
modulus plays a role in the fire resistance’’. As can be seen from the list given above, application of Equation 5.43 is restricted to a rather limited
number of cases such as, for example, for bending in Class 1, 2 or 3 sections and for members in tension. Another situation that does not allow direct application of Equation 5.43 to find directly
the critical temperature is when the bending resistance at the supports of statically indeterminate beams has to be reduced for the effects of shear, see Section 5.8.3. Although it is not very clear
from the text, the note under Table 4.1 in the Eurocode, “The national annex may give default values for critical temperatures’’, is mainly introduced to allow member states to introduce some values
of critical temperatures for various situations, for example 540◦ C in beams and 520◦ C when buckling may play a critical role. The designer would then have the possibility not to perform any
structural analysis at all in the fire situation provided that enough thermal insulation is applied in order to limit the temperature increase at the required fire resistance time to be below the
prescribed critical temperature. This option of fixed critical temperatures is quite convenient for the designer. Apart from the fact that theoretical validation for the prescribed critical
temperatures is rather weak, this option usually yields a quite uneconomic design because it amounts to neglecting any favourable effect that may arise from a lower degree of utilisation.
5.8 Design examples
For the examples proposed in this section, a default value of 1.0, as proposed in the Eurocode, has been adopted for the partial safety factor γM,fi.
5.8.1 Member in tension
A member with a circular section, diameter D = 250 mm, thickness d = 5 mm, yield strength fy = 355 N/mm2 , is subjected to an axial tension load in the fire situation Ed,fi = 100 kN. The required
fire resistance time is treq = 30 minutes. Perform the verification in the load domain, in the time domain and in the temperature domain.
5.8.1.1 Verification in the load domain
For
the required fire resistance time treq = 30 minutes, the design resistance of the tension member must be higher than the design load, Equation 5.2.
Perimeter of the section: Am = πD = π × 0.25 = 0.785 m
Area of the section: V = π(D² − (D − 2d)²)/4 = π(0.25² − 0.24²)/4 = 0.003848 m²
Massivity factor: Am/V = 0.785/0.003848 = 204 m−1. Note: this value is very close to 1/d = 200 m−1.
Interpolation in Table I.1 (see Annex I) between 200 and 400 m−1 yields a temperature of 828 °C after 30 min. Interpolation in Table II.2 (see Annex II) between 800 and 900 °C yields ky,θ = 0.096 at 828 °C.
fy,θ = ky,θ fy = 0.096 × 355 = 34.08 N/mm²
Mechanical Analysis
Nfi,θ,Rd = Vfy,θ = 3848 mm2 × 34.08 N/mm2 = 131 kN > Nfi,Ed (100 kN) The safety margin in terms of applied load is (131 − 100)/100 = 31%
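The load-domain verification above can be reproduced with a short script (a sketch: the 828 °C steel temperature and the reduction factor ky,θ = 0.096 are taken from the annex tables of the worked example, not recomputed from the Eurocode heating equations):

```python
import math

# Geometry of the circular hollow section (units: m, mm, N)
D, d = 0.250, 0.005            # outer diameter and wall thickness [m]
fy = 355.0                     # yield strength [N/mm^2]
N_fi_Ed = 100e3                # design axial load in fire [N]

Am = math.pi * D                               # exposed perimeter [m]
V = math.pi * (D**2 - (D - 2*d)**2) / 4        # cross-section area [m^2]
section_factor = Am / V                        # massivity factor [1/m]

# Values interpolated from the annex tables in the worked example
ky_theta = 0.096               # yield-strength reduction factor at 828 deg C

A_mm2 = V * 1e6                # cross-section area in mm^2
N_fi_theta_Rd = A_mm2 * ky_theta * fy          # tension resistance in fire [N]
margin = (N_fi_theta_Rd - N_fi_Ed) / N_fi_Ed   # safety margin on the load

print(round(section_factor), round(N_fi_theta_Rd / 1e3), round(margin * 100))
```

Running this reproduces the 204 m−1 section factor, the 131 kN resistance and the 31% margin of the example.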
5.8.1.2 Verification in the time domain
The fire resistance time tfi,d of the tension member must be higher than the required fire resistance. It is reached when Nfi,θ,Rd = Nfi,Ed
Failure occurs when Nfi,θ,Rd = V fy,θ = 3848 × fy,θ = 100 kN => fy,θ = 25.99 N/mm²
ky,θ = fy,θ/fy = 25.99/355 = 0.0732. Interpolation in Table II.2 (see Annex II) between 0.06 and 0.11 yields a failure temperature of 874 °C for a reduction factor of 0.0732.
Table I.1 (see Annex I) gives a time of 39 minutes to reach a temperature of 874 °C in a section with a massivity factor of 200 m−1 (≈ 204 m−1). Note: see Section 5.8.1.1 for the calculation of the massivity factor.
The safety margin in terms of time is 9 minutes.
5.8.1.3 Verification in the temperature domain
Rfi,d,0 = V fy = 3848 × 355 = 1366 kN
µ0 = Efi,d/Rfi,d,0 = 100/1366 = 0.0732
θcr = 39.19 ln[1/(0.9674 µ0^3.833) − 1] + 482 = 39.19 ln[1/(0.9674 × 0.0732^3.833) − 1] + 482 = 876 °C
For the required fire resistance time treq = 30 minutes and for a massivity factor Am/V = 204 m−1, θd = 828 °C, see Section 5.8.1.1. The safety margin in terms of temperature is 48 °C.
Notes:
1.
It can be observed that the degree of utilization µ0 used in the temperature domain, 0.0732, is in fact equal to the reduction factor for the yield strength ky,θ corresponding to the failure time tfi,d determined in the time domain.
2. The critical temperature θcr = 876 °C determined in the temperature domain differs by less than 0.3% from the temperature θd = 874 °C corresponding to the failure of the element determined in the time domain. The reason for this difference is explained in Section 5.7, see Figure 5.7.
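The critical-temperature equation used in Section 5.8.1.3 is easy to script (a minimal sketch):

```python
import math

def critical_temperature(mu0: float) -> float:
    """Critical steel temperature [deg C] as a function of the degree of
    utilisation mu0 (the Eurocode 3 closed-form expression used above)."""
    return 39.19 * math.log(1.0 / (0.9674 * mu0**3.833) - 1.0) + 482.0

# Degree of utilisation of the tension member in Section 5.8.1.3
print(round(critical_temperature(0.0732)))   # about 876 deg C
```

As expected, a higher degree of utilisation gives a lower critical temperature.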
Fig. 5.8 Section of the column under axial compression
5.8.2 Column under axial compression
A simply supported column with a welded I cross-section (see Figure 5.8) and a length of 2.90 m is made of steel with fy = 235 N/mm². The column is supporting a design load of 410 kN in the fire situation and is exposed to fire on all sides.
1. What is the fire resistance time of the column to the ISO 834 standard fire, considering buckling over the strong axis?
2. What would be the thickness of a contour encasement protection having a constant thermal conductivity λp = 0.12 W/mK, for a required fire resistance time of 60 minutes?
5.8.2.1 Fire resistance time of the column with unprotected cross-section
Classification of the section, see Table 5.2:
ε = 0.85 √(235/fy) = 0.85 √(235/235) = 0.85
Flanges: c/tf = 76/9 = 8.44 < 10ε = 8.5 => The flanges are of Class 2.
Web: c/tw = 139/5 = 27.8 < 33ε = 28.1 => The web is of Class 1.
=> The column cross-section is of Class 2.
The failure of the column occurs at the time tfi,d for which the design resistance of the column is equal to the design axial force in the fire situation.
Slenderness at room temperature: λ = lf/iz = 290/7.23 = 40.11
Eulerian slenderness: λE = π √(E/fy) = π √(2.1 × 10⁵/235) = 93.91
Non-dimensional slenderness at room temperature:
λ̄ = λ/λE = 40.11/93.91 = 0.427
The non-dimensional slenderness at elevated temperature is a function of the temperature at failure, which is the unknown in this problem. In a first iteration it will be approximated by:
λ̄θ = 1.2 λ̄ = 1.2 × 0.427 = 0.512
Imperfection factor: α = 0.65 √(235/fy) = 0.65 √(235/235) = 0.65
φθ = 0.5(1 + α λ̄θ + λ̄θ²) = 0.5(1 + 0.65 × 0.512 + 0.512²) = 0.797
χfi = 1/(φθ + √(φθ² − λ̄θ²)) = 1/(0.797 + √(0.797² − 0.512²)) = 0.710
Design equation: Nb,fi,t,Rd = Nfi,Ed, or χfi ky,θ A fy/γM,fi = 410 kN => ky,θ = 410 000/(0.710 × 3705 × 235) = 0.663
Interpolation in Table II.2 (see Annex II) yields a critical temperature of 538 °C for this value of the reduction factor of the effective yield strength. The same table gives a reduction factor for the Young's modulus kE,θ = 0.490 for this temperature.
Second iteration
λ̄θ = λ̄ √(ky,θ/kE,θ) = 0.427 √(0.663/0.490) = 0.497
φθ = 0.5(1 + 0.65 × 0.497 + 0.497²) = 0.785
χfi = 1/(0.785 + √(0.785² − 0.497²)) = 0.718
=> ky,θ = 410 000/(0.718 × 3705 × 235) = 0.656
Critical temperature θa = 540 °C, see Table II.2
kE,θ = 0.484, see Table II.2
Third iteration
λ̄θ = λ̄ √(ky,θ/kE,θ) = 0.427 √(0.656/0.484) = 0.497
=> The
iteration process has converged in two iterations and the critical temperature of 540◦ C has been obtained.
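One step of this iterative buckling verification can be written as a small helper (a sketch: the interpolation of ky,θ and kE,θ from Table II.2, which drives the iteration, is left out):

```python
import math

def chi_fi(lam_theta: float, fy: float = 235.0) -> float:
    """Buckling reduction factor in fire for the non-dimensional slenderness
    lam_theta, using the fire-design buckling curve of the example."""
    alpha = 0.65 * math.sqrt(235.0 / fy)          # imperfection factor
    phi = 0.5 * (1.0 + alpha * lam_theta + lam_theta**2)
    return 1.0 / (phi + math.sqrt(phi**2 - lam_theta**2))

# First iteration of the example: lam_theta = 1.2 * 0.427 = 0.512
chi = chi_fi(0.512)
ky_required = 410e3 / (chi * 3705.0 * 235.0)      # from N_b,fi,t,Rd = N_fi,Ed
print(round(chi, 3), round(ky_required, 3))
```

This reproduces χfi = 0.710 and the required ky,θ = 0.663 of the first iteration.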
For I sections under nominal fire, and considering the shadow effect:
A*m/V = ksh Am/V = 0.9 Am,b/V = 0.9 × 2(h + b)/A = 0.9 × 2(0.165 + 0.165)/(37.05 × 10−4) = 160 m−1
Double interpolation in Table I.1 (see Annex I) yields:
For A*m/V = 100 m−1, 540 °C is reached after 14.17 min.
For A*m/V = 200 m−1, 540 °C is reached after 9.77 min.
=> For A*m/V = 160 m−1, 540 °C is reached after 11.53 min.
5.8.2.2 Column protected with contour encasement of uniform thickness
The steel temperature corresponding to the failure of the column is θd = 540 °
C. From Table I.2 for protected sections, at tfi,req = 60 min., this temperature is obtained for a protected section with kp = 1323 W/m³K.
For protection with contour encasement:
[Ap/V] = steel perimeter/cross-section area = [2b + 2h + 2(b − tw)]/A = [2 × 0.165 + 2 × 0.165 + 2(0.165 − 0.005)]/(37.05 × 10−4) = 265 m−1
kp = [Ap/V][λp/dp] = 1323 W/m³K => dp,min = 265 × 0.12/1323 = 0.024 m (24 mm)
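The required protection thickness follows directly from the tabulated kp value (a sketch using the numbers of the example):

```python
# Contour encasement of the welded I column (Section 5.8.2.2)
b = h = 0.165            # flange width and section depth [m]
tw = 0.005               # web thickness [m]
A = 37.05e-4             # cross-section area [m^2]
lambda_p = 0.12          # thermal conductivity of the protection [W/mK]
kp = 1323.0              # required [Ap/V][lambda_p/dp] from Table I.2 [W/m^3K]

Ap_over_V = (2*b + 2*h + 2*(b - tw)) / A    # steel perimeter / area [1/m]
dp_min = Ap_over_V * lambda_p / kp          # minimum protection thickness [m]
print(round(Ap_over_V), round(dp_min * 1000))  # ~265 1/m, ~24 mm
```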
5.8.3 Fixed-fixed beam supporting a concrete slab
A HE160A beam of 4 m length with fixed end rotations is supporting a concrete slab. The beam is fabricated from steel of fy = 355 N/mm² and has a design load in the fire situation of 9500 N/m. The
requested fire resistance for the beam is 30 minutes.
1. Perform the verification in the load domain, in the time domain and in the temperature domain.
2. What would be the failure time for the steel beam protected by a hollow encasement of 12 mm thick plates with λp = 0.15 W/mK?
Properties of section HE160A: h = 152 mm, b = 160 mm, tf = 9 mm, tw = 6 mm, r = 15 mm, A = 38.77 cm², Wpl = 245.1 cm³
5.8.3.1 Classification of the section, see Table 5.2
ε = 0.85 √(235/fy) = 0.85 √(235/355) = 0.692
Flanges: c/tf = (b/2 − tw/2 − r)/tf = (160/2 − 6/2 − 15)/9 = 6.89 < 10ε = 6.92
The flanges are of Class 2
Web: c/tw = (h − 2tf − 2r)/tw = (152 − 2 × 9 − 2 × 15)/6 = 17.33 < 72ε = 49.8
The web is of Class 1.
=> The beam cross-section is of Class 2.
The method for designing the beam is thus to limit the elastically determined bending moment to the plastic bending moment in the sections, see Section 5.6.4.2.
Note: For a Class 1 section, a plastic design of the beam would be performed and the basic design equation in bending would be Mfi,t,Rd,support + Mfi,t,Rd,span = pfi,d L²/8
5.8.3.2 Verification in the load domain
For I sections under nominal fire, exposed on three sides and
considering the shadow effect:
A*m/V = ksh Am/V = 0.9 Am,b/V = 0.9(2h + b)/A = 0.9(2 × 0.152 + 0.16)/(38.77 × 10−4) = 108 m−1
For A*m/V = 108 m−1 and tfi,req = 30 min., Table I.1 gives θa = 772 °C. For this temperature, Table II.2 gives a reduction factor for the effective yield strength ky,θ = 0.144.
At the supports
Verification of the design shear resistance:
Vfi,d = 9500 × 4/2 = 19 000 N
(A*m/V)web = 2/tw = 2/0.006 = 333 m−1
For tfi,req = 30 min. and A*m/V = 333 m−1, Table I.1 gives θweb = 834 °C => ky,θ,web = 0.093, see Table II.2.
VRd = Av fy/(√3 γM0) = 1321 × 355/(√3 × 1.00) = 270 750 N
with Av = A − 2btf + (tw + 2r)tf = 3877 − 2 × 160 × 9 + (6 + 2 × 15) × 9 = 1321 mm²
Vfi,t,Rd = ky,θ,web VRd [γM0/γM,fi] = 0.093 × 270 750 × [1.00/1.00] = 25 180 N > Vfi,d = 19 000 N
The safety margin is 33%.
A reduction of Mfi,t,Rd allowing for the effects of shear is necessary if Vfi,d > 0.5 Vfi,t,Rd, which is the case here:
ρ = (2Vfi,d/Vfi,t,Rd − 1)² = (2 × 19.00/25.18 − 1)² = 0.259
Verification of the bending resistance:
Mfi,Ed = pL²/12 = 9.500 × 4²/12 = 12.67 kNm
Mfi,t,Rd = ky,θ fy [Wpl − ρ(hw tw)²/(4tw)]/(k1 k2)
where: k1 = 0.7 for an unprotected beam exposed on three sides, with a concrete slab on side four; k2 = 0.85 at the supports of the statically indeterminate beam.
Mfi,t,Rd = 0.144 × 355 × [245 100 − 0.259 × (104 × 6)²/(4 × 6)]/(0.70 × 0.85) = 20.70 kNm > 12.67 kNm
=> The design moment resistance of the beam at the supports, at time tfi,req = 30 min., is higher than the design moment for the fire situation. The safety margin is 63%.
At mid span
Mfi,Ed = pL²/24 = 9.500 × 4²/24 = 6.33 kNm
Mfi,t,Rd = ky,θ fy Wpl/(k1 k2) = 0.144 × 355 × 245 100/(0.70 × 1.00) = 17.90 kNm > 6.33 kNm
where: k1 = 0.7 for an unprotected beam exposed on three sides, with a concrete slab on side four; k2 = 1.00 if not at the supports.
=> The design moment resistance at mid span, at time tfi,req = 30 min., is higher than the design moment for the fire situation. The safety margin is 183%.
5.8.3.3 Verification in the time domain
The critical section is obviously at the supports. It is, however, not possible
to derive directly the failure time at this section because of the interaction between shear and bending. The situation is also complicated by the fact that different massivity factors have to be
considered for the section on one hand and for the web on the other hand. Different possibilities allow determining the fire resistance time.
1. Verification in the load domain can be performed iteratively at various times, until exact equivalence is found between the applied load and the load bearing capacity. In this case, for example, one could check the resistance after 40 minutes of fire and see how the safety margins change for shear and bending at the supports and bending at mid span. Linear interpolation or extrapolation between the values obtained at 30 and 40 minutes should yield the time for which all safety margins are greater than or equal to 1.00. If needed, a subsequent verification in the load domain for this new time should confirm the result or give a third point for interpolation. Utilization of computer tools, such as spreadsheet software for example, is highly recommended.
2. It is possible to iterate between the bending and shear resistance at the support. The resistance time could be calculated from the bending resistance at the support, considering the same reduction from shear effects as the one considered at 30 minutes. The shear resistance would then be checked for this new time and the new reduction of the bending resistance would then be evaluated, allowing a new fire resistance time in bending to be determined. The process would be repeated
until convergence. It has to be noted that, because of the second order power of the shear resistance that is taken into account in the reduction of bending resistance, this procedure is not guaranteed to converge.
5.8.3.4 Verification in the temperature domain
Interaction between shear and bending also makes it impossible to determine the critical temperature directly from the utilization factor. Iteration procedures similar to those mentioned above should be applied.
5.8.3.5 Beam protected with hollow encasement
For hollow encasement protection on three sides, with a concrete floor on the fourth side, the section factor is:
Ap/V = (2h + b)/A = (2 × 0.152 + 0.160)/(38.77 × 10−4) = 120 m−1
[Ap/V][λp/dp] = 120 × 0.15/0.012 = 1 500 W/m³K
For this section protected by a hollow encasement, it is assumed that the temperature of the web is equal to the uniform temperature in the section.
Iteration 1
Let us assume that ky,θ = 0.088
Vfi,t,Rd = ky,θ,web VRd [γM0/γM,fi] = 0.088 × 270 750 = 23.83 kN > Vfi,d = 19 kN
=> ρ = (2 × 19.00/23.83 − 1)² = 0.354
Bending at the support:
Mfi,t,Rd = 0.088 × 355 × [245 100 − 0.354 × (104 × 6)²/(4 × 6)]/(0.70 × 0.85) = 12.57 kNm < 12.67 kNm
Iteration 2
ky,θ = 0.089
Vfi,t,Rd = 0.089 × 270 750 = 24.10 kN > Vfi,d = 19 kN
=> ρ = (2 × 19.00/24.10 − 1)² = 0.333
Bending at the support:
Mfi,t,Rd = 0.089 × 355 × [245 100 − 0.333 × (104 × 6)²/(4 × 6)]/(0.70 × 0.85) = 12.73 kNm > 12.67 kNm
The value of 0.089 for ky,θ satisfies the load bearing requirement. For this value, Table II.2 gives a failure temperature of 842 °C.
Interpolations in Table I.3:
For kp = 1200 W/m³K, 842 °C is reached after 164 min.
For kp = 2000 W/m³K, 842 °C is reached after 116 min.
=> For kp = 1500 W/m³K, 842 °C is reached after 146 min.
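The fixed-point iteration of Section 5.8.3.5, in which an assumed ky,θ gives the shear reduction ρ, which in turn reduces the bending resistance at the support, can be sketched as a small search (a sketch: all dimensions and resistances are taken from the worked example, and the mapping from ky,θ to a failure time via the annex tables is left out):

```python
def moment_resistance(ky, fy=355.0, Wpl=245_100.0, hw=104.0, tw=6.0,
                      V_Rd=270_750.0, V_Ed=19_000.0, k1=0.70, k2=0.85):
    """Bending resistance at the support [kNm], reduced for shear when
    V_Ed > 0.5 * V_fi_t_Rd (dimensions in mm, forces in N)."""
    V_fi_t_Rd = ky * V_Rd
    rho = 0.0
    if V_Ed > 0.5 * V_fi_t_Rd:
        rho = (2.0 * V_Ed / V_fi_t_Rd - 1.0) ** 2
    return ky * fy * (Wpl - rho * (hw * tw) ** 2 / (4.0 * tw)) / (k1 * k2) / 1e6

# Find the smallest ky (in steps of 0.001) whose resistance exceeds 12.67 kNm
ky = 0.080
while moment_resistance(ky) < 12.67:
    ky += 0.001
print(round(ky, 3))   # 0.089, as found in iteration 2
```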
5.8.4 Class 3 beam in lateral torsional buckling
A simply supported beam of span of 6 m and of section HE180A, fy = 355 N/mm2 , is supporting a secondary beam at mid span. The secondary beam induces in the main beam a concentrated load of 20 kN in
fire situation. What is the temperature at failure of the main beam? Properties of section HEA180: h = 171 mm, b = 180 mm, tf = 9.5 mm, tw = 6 mm, r = 15 mm, Wel,y = 293.6 cm3 , Iz = 924.6 cm4 , Iw =
60 210 cm⁶, It = 14.8 cm⁴
Classification of the section, see Table 5.2:
ε = 0.85 √(235/fy) = 0.85 √(235/355) = 0.692
Flanges: c/tf = (b/2 − tw/2 − r)/tf = (180/2 − 6/2 − 15)/9.5 = 7.57 < 14ε = 9.69
The flanges are of Class 3.
Web: c/tw = (h − 2tf − 2r)/tw = (171 − 2 × 9.5 − 2 × 15)/6 = 20.33 < 72ε = 49.8
The web is of Class 1.
=> The beam cross-section is of Class 3.
Mfi,Ed = qd,fi L/4 = 20 × 6/4 = 30 kNm
Failure is reached when: Mb,fi,t,Rd = χLT,fi Wel,y
ky,θ,com fy /γM,fi = Mfi,Ed = 30 kNm From which: ky,θ,com = Mfi,Ed γM,fi / (χLT,fi Wel,y fy ) The critical elastic moment of the beam will be needed for checking stability with regard to lateral
torsional buckling. For a simply supported beam with an I section, the critical elastic moment is computed according to Annex F of ENV 1993-1-1:
Mcr = C1 (π²EIz/L²) [√(Iw/Iz + L²GIt/(π²EIz) + (C2 yg)²) − C2 yg] = 210.9 kNm
with: L = 6/2 = 3 m, considering that the secondary beam provides lateral support at mid span; yg = 0.5h = 85.5 mm; C1 = 1.365, C2 = 0.553 (for a concentrated load at mid span).
Non-dimensional slenderness at room temperature:
λ̄LT = √(Wel,y fy/Mcr) = √(293.6 × 355/210 900) = 0.703
Iteration 1
In the first iteration, the non-dimensional slenderness at elevated temperature will be estimated by:
λ̄LT,θ,com = 1.2 λ̄LT = 1.2 × 0.703 = 0.844
α = 0.65 √(235/fy) = 0.529
ΦLT,θ,com = 0.5[1 + α λ̄LT,θ,com + (λ̄LT,θ,com)²] = 0.5[1 + 0.529 × 0.844 + 0.844²] = 1.079
χLT,fi = 1/(ΦLT,θ,com + √((ΦLT,θ,com)² − (λ̄LT,θ,com)²)) = 1/(1.079 + √(1.079² − 0.844²)) = 0.571
=> ky,θ,com = Mfi,Ed γM,fi/(χLT,fi Wel,y fy) = 30 × 10⁶/(0.571 × 293 600 × 355) = 0.504, with the corresponding temperature θa,com = 589 °C, see Table II.2.
Iteration 2
In the second iteration, χLT,fi is computed considering the temperature determined in the first iteration. For θa,com = 589 °C, kE,θ,com = 0.340, see Table II.2.
λ̄LT,θ,com = λ̄LT √(ky,θ,com/kE,θ,com) = 0.703 √(0.504/0.340) = 0.856
ΦLT,θ,com = 0.5[1 + 0.529 × 0.856 + 0.856²] = 1.093
χLT,fi = 1/(1.093 + √(1.093² − 0.856²)) = 0.564
=> ky,θ,com = 30 × 10⁶/(0.564 × 293 600 × 355) = 0.510, with the corresponding temperature θa,com = 587 °C, see Table II.2.
Because the difference between the temperatures calculated in the two
iterations is less than 0.4%, the iteration process may stop. The temperature of the compression flange, corresponding to the failure of the beam is θa,com = 587◦ C.
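The critical elastic moment used above can be checked numerically (a sketch: the shear modulus is assumed as G = E/2.6 ≈ 80 770 N/mm², which is consistent with the 210.9 kNm result; C1, C2, yg and the section properties are those of the example):

```python
import math

# HE180A main beam, laterally supported at mid span (units: N and mm)
E = 210_000.0
G = E / 2.6                 # shear modulus (assumed)
Iz = 924.6e4                # weak-axis second moment of area [mm^4]
Iw = 60_210e6               # warping constant [mm^6]
It = 14.8e4                 # torsion constant [mm^4]
L = 3000.0                  # length between lateral supports [mm]
C1, C2 = 1.365, 0.553       # moment-distribution coefficients
yg = 85.5                   # load application point above the shear centre [mm]

N_cr = math.pi**2 * E * Iz / L**2          # pi^2 E Iz / L^2
Mcr = C1 * N_cr * (math.sqrt(Iw / Iz + G * It / N_cr + (C2 * yg)**2) - C2 * yg)
print(round(Mcr / 1e6, 1))  # about 210.9 kNm
```

Note that L²GIt/(π²EIz) in the printed formula is simply G·It/N_cr in the code.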
Chapter 6
6.1 General
Joints, also sometimes referred to as connections, play an important role in a structural frame since they transfer forces from one member to another. The performance of connections is highly important under extreme loading conditions, since the critical regions of a steel structure are in a plastic state and also because of the greater need for redistribution of forces from one critical
region to another critical region. In most fire situations, it is likely that some parts of the structure in the non-fire exposed regions have much greater capacity than the fire exposed regions and
thus the fire resistance of the structure is highly dependent on the extent of redistribution (through connections) that occurs from highly stressed regions to less stressed regions. Thus the
performance of connections is crucial for the stability of structural systems in buildings when they are exposed to fire. This aspect, of the importance of connections, has been observed and
documented (FEMA 2002) in the damaged WTC buildings around Ground Zero, as a result of fires initiated after the collapse of the twin towers. In modern steel-framed buildings, connections between various
members may be either of bolted or welded construction, or a combination of these types. Most codes and standards require steel connections to be provided with some level of fire protection. However,
many codes do not explicitly state fire resistance requirements for connections. Further, in establishing fire ratings through prescriptive approaches, connections are generally not included as part
of the assembly tested in traditional fire-resistance tests. Furthermore, most modelling efforts assume that the pre-fire characteristics of a connection are preserved during the fire exposure. US
codes generally give little guidance on the fire design of connections. No explicit provisions are specified in US codes and standards. A closer look at the overall fire resistance provisions clearly
implies that the connections should be protected to the same level of fire resistance as that of the connecting members. In Europe, it was also commonly assumed since the 1970s that there is no need to
take special provisions for the connections as long as they are protected at least in the same manner as the adjacent members that they connect (ECCS, 1983). This implies that, if none of the
connected members is protected, then there is no need to protect the joint. This concept was based on the idea that the thermal massivity of the joint should be higher than the massivity of the
members because of the presence of additional mass in the connection zone, either from end plates, fin plates, web cleats or stiffeners. It was also based on the observation of numerous unprotected
steel structures that completely
collapsed in severe fire, and where the steel beams were severely distorted, but rarely detached from the columns. The perception has evolved during the last decade because of the appearance of the
conceptual design of multi-storey buildings where the columns are protected while the beams and the joints are not. The demand on the connections in such systems is of course much higher. This is
especially the case when axial restraint induces axial forces in the beams and thus in the connections. Compression forces first develop in the beams due to restraint against thermal expansion and
tension forces may develop in a later stage when significant vertical deflections in the beams transform the beams from elements in bending to elements in tension, more like cables. More tension can
even develop in the beams, and thus in the joints, in the cooling phase of a natural fire. It has also been observed that not only the resistance to the varying forces, but also the ductility of
connections is very important in order to accommodate the large rotations linked to the large displacements that develop when the beams act in a catenary mode. A good review of recent research
performed on the behaviour of joints in the fire situation may be found in Al-Jabri et al (2008). However, in Eurocode 3, design rules for both bolted and welded connections in the fire situation are
only specified through the introduction of strength reduction factors. Moreover, a simplified temperature distribution for joints in the fire situation is also proposed, which may be used in strength
analysis. This approach, though only primitive in nature, is rational and one step ahead of the “connection provisions’’ in other Codes of practice. This Chapter is devoted to highlighting the
Eurocode methodology for the design of connections.
6.2 Simplified procedure
Eurocode 3 states in the main text that the fire resistance of a bolted or a welded joint may be assumed to be sufficient, provided that the three following conditions are satisfied:
• The joint has at least the same fire protection as all connected members. In particular, this means that it is not necessary to verify the joints in an unprotected steel structure if the other conditions, especially the one regarding the utilisation of the joint, are fulfilled.
• The utilisation of the joint is equal to or less than the highest value of utilisation of any of the connected members.
• The resistance of the joint at room temperature satisfies the recommendations given in EN 1993-1-8. This condition that the design must comply with the rules at ambient temperature is not specific to joints, as it is applicable for every part of the steel structure.
The justification usually given for this recommendation is that due to the additional material in the joints and also to the shadow effects created by the connected members, the temperatures are
lower in the joints than those within the adjacent members. Another explanation for the lower temperatures in joints is linked to the geometry of the fire compartment, that results in lower
temperatures at the corners, where the joints are usually located. This idea of lower temperatures at the joints is a similar concept to the one leading to the utilisation of the reduction factor k2
that is considered at the supports of statically indeterminate beams, see Section 5.5.4.2.2. But if this is generally
true for the beam-column connections of frame structures, it is certainly not the case for a continuity joint in the middle of the lower chord of a roof truss, for example. For the same reason, the
Eurocode states that when verifying connection resistance, the net-section failure at fastener holes need not be considered, provided that there is a fastener in each hole. However, simple numerical
simulations show that this provision of not considering net-section failure is not generally valid (Franssen and Brauwers, 2002). In fact, if the standard temperature-time curve is used, there
is a tendency for the temperatures in steel to level off after a certain amount of time and it requires a very significant difference in thermal massivity in order to produce a noticeable temperature
difference, see Figure I.4. This is especially true in the case of long fire durations and for thermally protected members, where the benefit yielded by a somewhat lower temperature in the section
with the fastener may well be totally overwhelmed by the reduction of net section. Therefore, designers are well advised to perform this verification when the resistance of the net-section is
critical for the stability of the structure.
The Eurocode does not define, in the section on the design of joints, the utilisation factor for the application of the second condition. It has to be assumed that the same definition has to be used as the one mentioned in the section dealing with the design in the temperature domain, see Equation 5.42, given here again for clarity:
µ0 = Efi,d/Rfi,d,0
This equation implies that, in order to be allowed not to calculate the fire resistance of the joint, the fire resistance of the joint at time t = 0 has to be calculated first! It is then no surprise that, as a further simplification of the method, it is stated that the comparison of the level of utilisation within the joints and joined members may be performed at room temperature. This means that Equation 6.1 will be used for the utilisation of the joints and of the adjacent members.
µ = Ed/Rd (6.1)
Due care must be taken when applying this method to one particular situation. Let us assume that the starting point is a design at room temperature, in which the joints and the connected elements
were normally proportioned. If all connected elements satisfy the fire resistance requirement, the recommendation above implies that the joints also meet this requirement. If, on the contrary, the
fire resistance time of one element, a beam for example, is not sufficient and instead of applying a fire protection it is preferred to use a stronger section for this element, it is not to be
expected that the resistance of the joint will increase in the same proportion. Consequently, if the elements are overdesigned for fire resistance purposes, the adjacent joints must be overdesigned
accordingly. The new value of utilisation for the overdesigned elements must be the reference when the resistance of the joint is revisited. As an alternative to the above mentioned method, the fire
resistance of a joint may be determined using the method given in Annex D of Eurocode 3. This method is presented and discussed in the following section.
6.3 Detailed analysis
When assessing the fire resistance of a joint, the temperature distribution of the joint components should be evaluated and then the resistance of each component at high temperature should be determined accordingly.
6.3.1 Temperature of joints in fire
As an alternative to the above simplified method, the informative Annex D in the Eurocode states that, in order to compute the fire resistance of the components, the temperature of a joint may be assessed using the local thermal massivity of the parts forming that joint as shown, for example, in ECCS
(2001). In a more simplified way, a uniformly distributed temperature may be assessed within the joint; this temperature may be calculated using the minimum value of the thermal massivity of the steel
members connected to the joint. A more refined procedure is given for beam-to-column and beam-to-beam joints, where the beams are supporting any type of concrete floor. In this case, the temperature
distribution in the joint may be obtained from the temperature of the bottom flange at mid span. The following formulas are recommended for computing the temperature of joint components.
If D ≤ 400 mm:
θh = 0.88 θ0 [1 − 0.3(h/D)]
If D > 400 mm:
θh = 0.88 θ0 for h ≤ D/2
θh = 0.88 θ0 [1 + 0.2(1 − 2h/D)] for h > D/2
where:
θh is the temperature at height h in the steel beam,
θ0 is the bottom flange temperature of the beam remote from the joint, i.e. based on the thermal massivity of the section heated on 4 sides, see Table 5.3,
h is the height of the component being considered above the bottom of the beam,
D is the depth of the steel section.
6.3.2
Design resistance of bolts and welds in fire
Verification is based on the strength as determined at room temperature, multiplied by the reduction factors for the strength of bolts and welds given in Table 6.1. These factors are based on the research by Kirby (1995) and Latham and Kirby (1990).
6.3.2.1 Bolted joints in shear
The shear design resistance of an individual bolt in fire should be determined from Equation 6.4:
Fv,t,Rd = Fv,Rd kb,θ (γM2/γM,fi)
Table 6.1 Strength reduction factors for bolts and welds at various temperatures

θa [°C]   kb,θ    kw,θ
20        1.000   1.000
100       0.968   1.000
150       0.952   1.000
200       0.935   1.000
300       0.903   1.000
400       0.775   0.876
500       0.550   0.627
600       0.220   0.378
700       0.100   0.130
800       0.067   0.074
900       0.033   0.018
1000      0.000   0.000
where:
Fv,Rd is the design shear resistance of the bolt per shear plane calculated at normal temperature, assuming that the shear plane passes through the threads of the bolt (from Table 3.4 of EN 1993-1-8),
kb,θ is the reduction factor determined for the appropriate bolt temperature given in Table 6.1,
γM2 is the partial safety factor at normal temperature,
γM,fi is the partial safety factor for fire conditions.
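Equation 6.4 together with Table 6.1 can be scripted with a simple linear interpolation (a sketch: the table values are those of Table 6.1; γM2 = 1.25 is assumed as a usual room-temperature value, and Fv,Rd = 60 kN is an arbitrary illustrative resistance):

```python
# Strength reduction factor for bolts, kb_theta, from Table 6.1
TABLE_KB = [(20, 1.000), (100, 0.968), (150, 0.952), (200, 0.935),
            (300, 0.903), (400, 0.775), (500, 0.550), (600, 0.220),
            (700, 0.100), (800, 0.067), (900, 0.033), (1000, 0.000)]

def kb_theta(theta: float) -> float:
    """Linear interpolation of the bolt strength reduction factor."""
    if theta <= TABLE_KB[0][0]:
        return TABLE_KB[0][1]
    for (t0, k0), (t1, k1) in zip(TABLE_KB, TABLE_KB[1:]):
        if theta <= t1:
            return k0 + (k1 - k0) * (theta - t0) / (t1 - t0)
    return 0.0

def Fv_t_Rd(Fv_Rd: float, theta: float, gamma_M2=1.25, gamma_M_fi=1.0) -> float:
    """Shear design resistance of a bolt in fire (Equation 6.4)."""
    return Fv_Rd * kb_theta(theta) * gamma_M2 / gamma_M_fi

print(round(kb_theta(500.0), 3), round(Fv_t_Rd(60e3, 500.0)))
```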
In the computation of Fv,Rd , it must be noted that, irrespective of whether the shear plane passes through the threaded or unthreaded portion of the bolt, the shear area of the bolt in case of fire
must be considered as the tensile stress area of the bolt As , i.e. assuming that the shear plane passes through the threads of the bolt. The design bearing resistance of an individual bolt in fire
should be determined from Equation 6.5:
Fb,t,Rd = Fb,Rd kb,θ (γM2/γM,fi)
where Fb,Rd is the design bearing resistance at room temperature, determined from Table 3.4 of EN 1993-1-8. The slip resistant type joints are not efficient in case of fire and therefore should be
considered as having slipped in fire. Consequently, the resistance of a single bolt should be verified as for a usual bolt in shear with the formulas above.
6.3.2.2 Bolted joints in tension
The design tension resistance of an individual bolt in fire should be determined from Equation 6.6:
Ften,t,Rd = Ft,Rd kb,θ (γM2/γM,fi)
where Ft,Rd is the design tension resistance of the bolt at room temperature, determined from Table 3.4 of EN 1993-1-8.
6.3.2.3 Fillet welds
The design resistance per unit length of a fillet weld in fire should be determined from Equation 6.7:
Fw,t,Rd = Fw,Rd kw,θ (γM2/γM,fi)
where: kw,θ is obtained from Table 6.1 for the appropriate weld temperature, Fw,Rd is the design strength of the fillet weld at normal temperature, determined from Clause 4.5.3 of EN 1993-1-8.
6.3.2.4 But t w el d s For normal temperatures, the design strength of a full penetration butt weld should be taken as equal to the design resistance of the weakest connected parts, provided that the
conditions in 4.7.1.(1)/EN1993-1-8 are fulfilled. The fire design strength of a full penetration butt weld is computed using the design strength at normal temperature, corrected by the following
reduction factors: • •
For temperatures up to 700◦ C, the reduction factors for structural steel in fire situation. For temperatures above 700◦ C, the reduction factors kw,θ given in Table 6.1.
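The joint temperature profile recommended in Section 6.3.1 for beams supporting a concrete floor can be sketched as a small function (a sketch; the numerical inputs in the usage lines are illustrative):

```python
def joint_temperature(h: float, D: float, theta0: float) -> float:
    """Temperature [deg C] at height h [mm] above the bottom of a beam of
    depth D [mm] supporting a concrete slab, from the bottom-flange
    temperature theta0 remote from the joint (profile of Section 6.3.1)."""
    if D <= 400.0:
        return 0.88 * theta0 * (1.0 - 0.3 * h / D)
    if h <= D / 2.0:
        return 0.88 * theta0
    return 0.88 * theta0 * (1.0 + 0.2 * (1.0 - 2.0 * h / D))

# Shallow beam (D = 300 mm): linear decrease from 0.88*theta0 at the bottom
print(round(joint_temperature(0.0, 300.0, 800.0)))    # 704
print(round(joint_temperature(300.0, 300.0, 800.0)))  # 493
```

For deep beams (D > 400 mm) the lower half stays at 0.88 θ0 and only the upper half, shielded by the slab, is cooler.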
Chapter 7
Advanced Calculation Models
7.1 General
Much of the discussion presented in the previous chapters is focused on the application of simple calculation approaches for tracing the fire response of structural members. These simple calculation techniques, though one step above the prescriptive approaches, may not provide the realistic fire response of a structural system, since a number of factors cannot be accounted for in a simplified analysis. In lieu of these simple calculation methods, advanced calculation models can come in handy for predicting the realistic fire response of structural systems. These advanced
calculation models are generally based on finite element techniques and can account for geometric and material nonlinearity, in addition to various high temperature effects, such as creep and
transient strains, and restraint effects. The accuracy of the predictions is often dependent on the level of the complexity of the model adopted for the analysis, discretization utilized in the
analysis, and on the accuracy of the input (material) data. In addition, the application of these advanced calculation models requires significant expertise, time and computational effort. As
discussed in Chapter 2, advanced calculated models could be applied to a single member, an assembly, or the entire building frame. The US codes do not specifically provide requirements or guidance on
the use advanced calculated models. No specific guidelines are provided in North American codes such as IBC and NBCC, on the analysis procedure, or high temperature properties material models or
design fires. However, the recent edition of AISC design manual (AISC 2005) describes the requirement for advanced methods of structural analysis for performance-based fire resistant design. It also
recommends the use of Eurocode material properties for carrying out advanced analysis. Thus it is possible in North America to get regulatory approvals by undertake fire resistance analysis using the
advanced calculation models. Eurocodes are more pro-active in facilitating the use of advanced calculation models for fire resistance design and analysis. These codes provide detailed procedure to be
used for analysis, constitutive models for high temperature material properties, as well as various parametric (design) fire curves. The details of these provisions are discussed in this Chapter.
7.2 Introduction

Advanced calculation models are defined in Eurocode 3 as “design methods in which engineering principles are applied in a realistic manner to specific applications’’. This definition is quite vague, to say the least. The Eurocode provides additional information in Section 4.3 that deals specifically with advanced calculation models. For example, it is stated that “advanced models
should be based on fundamental physical behaviour’’. This means that, for example, equations derived from best fit with experimental results cannot be labelled as advanced models. The Eurocode states
that advanced calculation models should include separate models for the determination of temperature in the structure, on one hand, and the mechanical behaviour of the structure, on the other hand.
This is indeed the procedure adopted in most advanced calculation models nowadays; most models are made of two separate sub-models. In fact, there is no reason why a model that has as sole objective
the determination of temperatures in structures subjected to fire would not deserve the title of advanced model. The same could be said of a mechanical model that has to rely on other means for the
determination of the temperature distribution. On the other hand, there is no reason why a model that would perform a fully coupled thermo-mechanical analysis could not deserve the title of advanced
model. For some problems such as, for example, spalling in concrete, the integrated models are the only possible hope to achieve any result. To the knowledge of the authors, such coupled models have
not been used for steel structures so far, but developers should not be forbidden to try. Too much importance must probably not be paid to the restrictive character of this word “should’’ that is
present in the Eurocode. Eurocode states that advanced models may be used with any heating curve, provided that the material properties are known for the relevant temperature range. In fact, the
sentence should be written or, at least, understood the other way around: advanced models should not be used with heating curves for which the material properties are not known for the relevant
temperature range. The heating rate of steel that must be comprised between 2 and 50 K/min for the Eurocode material model to be valid is one point on which the Eurocode draws the attention. There is
yet another important point on which the Eurocode gives no guideline for the use of advanced models, namely the cooling phase that is present in parametric or zone fire models. No recommendation is
given about the mechanical properties of steel during the cooling phase. Therefore designers have to make their own judgement and decision about the reversible character of, for example, the yield
strength of steel or its thermal elongation. Advanced calculation methods may be used with any type of cross-section. Here again, there is no reason why a model dedicated to a particular type of
sections should not deserve the name of advanced model, provided that all other requirements related to fundamental physical behaviour be fulfilled. The only factor is that such a model would have a
field of application limited to this type of section. Practically speaking, application of the advanced calculation model requires utilization of sophisticated non-linear numerical computer software.
The finite element method appears to be the method of choice for thermal as well as for mechanical analyses but finite differences have also been used in thermal analyses. Finite volumes and boundary
elements could also be considered, although they have rarely been used in structural fire engineering applications until now.
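As a toy illustration of the finite-difference option mentioned above, the sketch below integrates one-dimensional transient conduction through a protection layer whose exposed face sees the ISO 834 gas curve through a convective boundary condition. The material data (λ, ρ, cp), the convection coefficient, and the adiabatic back face are illustrative assumptions, and radiation at the exposed face is neglected; a real thermal analysis would use temperature-dependent properties and the full boundary conditions of Eurocode 1.

```python
import math

# Toy 1-D explicit finite-difference model of transient conduction
# through a protection layer. The exposed face is heated by the ISO 834
# gas curve via convection only; material data and the convection
# coefficient are illustrative assumptions, radiation is neglected.

def iso834(t):
    """Gas temperature [C] at time t [s] on the standard curve."""
    return 20.0 + 345.0 * math.log10(8.0 * t / 60.0 + 1.0)

def solve(thickness=0.02, n=20, t_end=1800.0,
          lam=0.12, rho=300.0, cp=1200.0, h=25.0):
    dx = thickness / n
    alpha = lam / (rho * cp)              # thermal diffusivity [m2/s]
    dt = 0.25 * dx * dx / alpha           # below the explicit stability limit
    T = [20.0] * (n + 1)                  # uniform initial temperature field
    t = 0.0
    while t < t_end:
        Tn = T[:]
        # half-cell energy balance at the exposed node
        Tn[0] = T[0] + 2.0 * dt / (rho * cp * dx) * (
            h * (iso834(t) - T[0]) + lam * (T[1] - T[0]) / dx)
        for i in range(1, n):             # interior nodes
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i-1] - 2.0*T[i] + T[i+1])
        Tn[n] = Tn[n-1]                   # adiabatic unexposed face
        T, t = Tn, t + dt
    return T

T = solve()
print(round(T[0], 1), round(T[-1], 1))    # exposed vs unexposed face, 30 min
```

Even this crude sketch reproduces the qualitative behaviour discussed in this chapter: a strong thermal gradient through the protection, with the exposed face lagging behind the gas temperature.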
More information about advanced calculation models is given in the subsections of the Eurocode dedicated first to the thermal analysis and, then, to the mechanical analysis. These provisions are
mentioned and discussed in the following sections.
7.3 Thermal analysis

7.3.1 General features
Eurocode states that advanced models for the thermal response should be based on the acknowledged principles and assumptions of the theory of heat transfer. This sentence is nothing more than a
particularization of the concept of “fundamental physical behaviour’’ to the case of temperature calculations. Advanced models should consider the variation of the thermal properties of the material
with temperature. The fact that the term “material’’ is singular, plus the comment at the end of the sentence “see section 3’’, i.e. “see the section on material properties of steel’’, seem to
indicate that only thermal properties of steel must be taken as temperature dependent. Thermal properties of insulating materials, on the contrary, may then be taken as temperature dependent or not,
depending on the outcome of the experimental test results made to determine these properties. The Eurocode specifies that the effects of non-uniform thermal exposure and of heat transfer to adjacent building
components may be included where appropriate. Non-uniform thermal exposure is indeed an essential feature of the design if any localized fire model is used. Heat transfer to adjacent building components may be
considered if the cooling effect that would result is expected to have a significant influence on the mechanical behaviour, see Section 7.3.4. According to Eurocode provisions, the influence of
moisture content and moisture migration within the fire protection material may conservatively be neglected. In fact, in most advanced models practically used, moisture is taken into account
implicitly by a suitable modification of the apparent specific heat. Some models take into account the energy of evaporation explicitly. Moisture movements are rarely modelled. It has to be
understood that, even if moisture movement in the insulation may have a slight influence on the temperature elevation of steel around and below 100°C, these effects are completely diluted and damped when steel reaches the temperature levels usually associated with failure, typically several hundreds of degrees centigrade. The most important point, for a designer using such an advanced calculation
model, is to understand the capabilities offered by the model, as well as the limitations inherent to the model. They can differ from one model to the other. The following sections nevertheless
present some aspects that are thought to be rather general.

7.3.2 Capabilities of the advanced thermal models
The main advantage of a thermal advanced model is its ability to determine the nonuniform temperature distribution in sections or members. There is thus no need to make the hypothesis that the
temperature is uniform, to calculate local thermal massivity in different parts of a member, or to wonder where the maximum temperature in the section is. Each point in the member has its own
temperature. Figure 7.1 shows the isotherms calculated in a steel hot rolled HE400M section after 10 minutes of ISO fire. There is a difference of some 120◦ C between the coldest part of the section,
namely the
[Figure: SAFIR/Diamond temperature plot of the HE400M section at t = 600 s; isotherms from 104°C to 327°C]

Fig. 7.1 Isotherms in a steel section resulting from thermal analysis
junction between the web and the flanges, and the hottest part of the section, namely the centre of the web. Heat sink effects from concrete slabs supported by a steel beam can be considered in detail.
Figure 7.2 shows the isotherms as calculated in a steel section supporting a concrete slab on the upper flange. Note that the scale of grey in the figure has been chosen from 410 to 910◦ C in order
to highlight the differences in the steel section and in the lower part of the slab. This is the reason why the upper part of the slab appears as very black. A great advantage of the finite element
method is the total versatility of the geometries that can be considered. Nearly all sections or members encountered in civil engineering applications can be represented by linear triangles or
quadrangles, for 2D analyses, or by volume elements with 6 or 8 nodes for 3D analyses. The same software can thus allow considering H or I sections, rectangular or circular hollow steel sections,
angles, or complex 3D joints, with or without thermal protection, either as a contour or as a hollow encasement. The finite difference method is in this respect somewhat more restrictive in the sense
that it is more adapted to structured meshes and, therefore, to regular and repeatable shapes. As mentioned above, 3D analyses can be performed in order to calculate the temperature distribution in
complex assemblies, although at a higher price in terms of computing time and, even more, in terms of the time required for the discretisation. Figure 7.3 shows the discretisation of the structure that has been created in order to analyse the temperature distribution in a beam-to-column joint with a concrete slab. The column is protected by concrete located between the flanges. Only a quarter of the assembly is modelled owing to the presence of two vertical planes of symmetry.
[Figure: SAFIR/Diamond temperature plot of the HE400M section with slab at t = 3600 s; isotherms from 10°C to 931°C]

Fig. 7.2 Isotherms in a steel section supporting a concrete slab resulting from thermal analysis

[Figure: SAFIR/Diamond solids plot of the joint discretisation, 9458 nodes, 6054 elements]

Fig. 7.3 Discretisation of beam-column joint with a concrete slab for thermal analysis
The quantity of data provided by such an analysis is enormous. It is as if a thermal sensor was present at every node of the numerical model and there are easily several hundreds of them in a 2D
analysis and several thousands of them in a 3D analysis. This allows, with the aid of now commonly available graphic interface software, plotting isotherms on the structure at every time interval
during the fire, something that would be unfeasible from the result of a limited number of experimental measurements. The quality of the results is also remarkable. Bearing in mind all hypotheses and
limitations mentioned otherwise, it has to be recognised that these virtual thermal sensors normally do not malfunction during the course of the fire, and do not produce
results that suddenly diverge and make no sense compared to the measurements of the other sensors, whereas this happens regularly during experimental tests. This is the case of course if the software
has been used appropriately and this depends very much on the expertise of the user, but this expertise can be gained or learnt. The location of each node is also perfectly known without any
uncertainty, which is not always the case in experiments.

7.3.3 Limitations of the advanced thermal models
Amazingly enough, one of the major limitations that designers are facing in order to perform a non-linear thermal analysis is perhaps the lack of reliable information on the thermal properties of
insulating materials. Not all companies producing these types of products have dedicated the necessary resources required to perform the experimental test programs yielding the high temperature thermal
properties used in numerical analyses. Most of them simply rely on graphical design aids that are a summary of the test results, in the form, for example, of required product thickness as a function
of the thermal massivity of the section and, sometimes, the critical temperature. It has to be mentioned that this limitation also applies to simple calculation models. Another important limitation
is that, with nearly all software currently available, the geometry of the structure in which the temperatures will be determined is fixed by the user and it will not vary during the analysis.
Spalling of concrete, for example, is normally not predicted nor taken into account by such software. The falling off of gypsum plaster boards from steel studs is also modelled or taken into account
with great difficulty. The increase in thickness of intumescent insulation products is also normally not represented explicitly. There are some possibilities to consider these effects in an
approximate manner, depending on the expertise of the user and on the capabilities of the software. For example, it is possible to model an intumescent product by a constant thickness layer of a
product with “equivalent’’ thermal properties. In the software SAFIR developed at the University of Liege (Franssen 2003), it is possible to restart a thermal analysis at a defined time considering
that one or several layers of finite elements have suddenly disappeared from the model. This can be useful in certain cases, but it has to be understood that the amount of material to suppress as
well as the time to do it must be decided by the user. The limitation of a fixed geometry, as well as the limitation about the knowledge of thermal properties of insulating materials, are not
relevant in unprotected steel structures. When different elements are in contact with each other, it is usual to assume a perfect thermal contact, which is an approximation. This can be the case
between the steel of a hollow section and the concrete that is inside, or between a steel beam and the concrete slab that it may support. This can also be the case between different steel components
in a joint. If appropriate information is available about the contact resistance, it is of course possible to model it by a very thin layer of elements with appropriate thermal properties (Renaud
2003), but this is not commonly done. Boundary conditions are easily taken into account when the fire is represented by a temperature-time curve representing the condition of the gases surrounding
the structure, as is the case, for example, with nominal fires, with parametric fires or zone models. It is also possible to take into account a prescribed impinging heat flux as is the
case, for example for the localized fire model of Eurocode 1 although, in this case, the variability of the flux in space makes the procedure somehow more complex. The complexity is yet of a higher
order of magnitude if such a model for determining the temperature in the structure has to be interfaced with CFD software, “Computational Fluid Dynamics’’. Such complex models indeed describe the
situation in the fire compartment by an enormous amount of information such as the temperature in virtually every point in the compartment, plus radiation intensities at every point from every
direction. However, how this information is transferred to the thermal model for determining the temperatures in the structure is still a topic of research at present (Welsch et al., 2008).

7.3.4 Discrepancies with the simple calculation models
It has to be accepted that some discrepancies would exist between the results provided by an advanced and a simple calculation model because of the approximations that have to be introduced in the
latter one. Normal philosophy, when introducing approximations or simplifications, is that they are on the conservative side. Yet, if an advanced calculation model is used for calculating the
temperature distribution in a concave unprotected steel section and if the nominal boundary conditions prescribed in Eurocode 1 are applied as such on the whole perimeter area of the section, the
temperatures found by the advanced model will be higher than the temperature found with the simple model. This goes against the usually accepted principle that a simplified model should be on the
conservative side compared to a more advanced model. Although this is not specifically mentioned in the Eurocode, the authors recommend that one of the following procedures be used when using the
advanced calculation model in order to introduce the concept of the shadow effect in the model.

1. The simplest procedure is to multiply the value of the coefficient of convection and the emissivity of steel by the value of ksh on the whole perimeter of the section. This will ensure the same boundary conditions for the advanced model as for the simple model.
2. A more refined procedure is to calculate in each concave part of the section the view factors between, on one hand, each of the surfaces of the real section where the energy arrives and, on the other hand, the surface of the box contour through which this energy passes. The coefficient of convection (αc) and the emissivity of the material (εm) are then multiplied by the relevant view factor depending on their position. Figure 7.4 shows an example of the multiplicative coefficients that could be found in a hypothetical I section.
Each view factor between the internal surface of the flanges and the web, on one hand, and the surface of the box contour around the flanges, on the other hand, is calculated according to the rule of
Hottel. This rule, valid in a 2D situation, is given by Equation 7.1 with the symbols defined in Figure 7.5.

Fij = (AD + BC − AC − BD) / (2 AB)    (7.1)
Except in very re-entrant concave sections, the first procedure should yield results that are very close to those yielded by the most sophisticated second procedure.
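Equation 7.1 lends itself to a direct implementation. The sketch below assumes that surface i is described by the coordinates of its two end points A and B, and surface j by C and D; as a check, it recovers the classical result √2 − 1 for two directly facing strips of unit width at unit distance.

```python
import math

# Direct evaluation of Hottel's crossed-strings rule, Eq. 7.1, under
# the assumption that surface i runs from point A to point B and
# surface j from point C to point D (2-D coordinates).

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def view_factor(a, b, c, d):
    """F_ij = (AD + BC - AC - BD) / (2 AB): crossed strings minus
    uncrossed strings, over twice the width of the emitting surface."""
    return (dist(a, d) + dist(b, c) - dist(a, c) - dist(b, d)) / (2.0 * dist(a, b))

# Two directly facing strips of unit width at unit distance: the rule
# recovers the classical result sqrt(2) - 1 for infinite parallel strips.
print(view_factor((0, 0), (1, 0), (0, 1), (1, 1)))
```

The resulting factor, by which αc and εm are multiplied in the refined procedure, is always between 0 and 1, so the boundary conditions on the concave parts of the section are never amplified, only attenuated.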
[Figure: example multiplicative coefficients (0.42 and 1.0) on an I section of 300 mm width]

Fig. 7.4 Coefficients for multiplying the boundary conditions in an I section

[Figure: two surfaces, i with end points A and B, and j, for the crossed-strings construction]

Fig. 7.5 The rule of Hottel
7.4 Mechanical analysis

7.4.1 General features
Advanced calculation models for the mechanical response should be based on the acknowledged principles and assumptions of the theory of structural mechanics. This sentence is a particularization of
the concept of “fundamental physical behaviour’’ to the case of structural analyses. The changes in mechanical properties of steel with temperature should be taken into account, obviously.
The effects of thermal expansion should also be taken into account. This is done at the level of the constitutive model used in
analyses software to represent the behaviour of the material. A component accounting for thermal strain has to be explicitly introduced, which is standard practice. The combined effects of mechanical
actions, thermal actions and geometrical imperfections have to be taken into account. If the structural analysis is performed for the structure under design load in case of fire and the effects of
temperatures on the material properties are taken into account, the first two effects are automatically combined. Geometrical imperfections have to be introduced. It is logical to introduce the same
imperfections as for ambient temperature design because they represent the initial state
of the structure, independent of the load case or of the action, normal action or accidental action. In fact, geometrical imperfections of members are not so critical if the member is submitted to
bending moments or to thermal gradients across the depth of the section because these effects will generate transverse displacements that are of an order of magnitude higher than initial
imperfections. For example, the Eurocode states that a sinusoidal imperfection of maximum 1/1000 of the length of the bar should be applied when not specified otherwise by the relevant product
standard. But this provision is only for a steel isolated vertical member. It is likely that the meaning of “isolated vertical member’’ should be understood as “simply supported member submitted to
axial loading’’. If the member is also subjected to bending moments, these will induce first order lateral displacements which are probably of an order of magnitude higher than initial imperfections.
Geometrical non-linear effects have to be taken into account. This can be understood as a requirement that large displacements must be taken into account explicitly. Indeed, these displacements are
usually very large in the fire situation. They may generate second order effects that lead to failure, as in a cantilever fire wall heated on one side for example. On the contrary, large
displacements may allow the structure to find another load path for supporting the loads, as is the case, for example, in members that are mainly submitted to bending at room temperature and may
develop a catenary effect in the fire situation. Taking large displacements into account implicitly, for example by a buckling coefficient in a member under compression, is not an advanced method but a simple calculation method. It is for example well known that, in some conditions, although unprotected steel beams may lose nearly all their load bearing capacity in bending because of the high
temperatures that they experience, the concrete floor that they support may be sufficient to withstand the applied loads. This is due to the tensile membrane forces that develop in the steel mesh
present in the slab. This effect is not presented in detail in this book because, although its existence has been shown by the steel industry, it has more to do with the design of a concrete slab or
a composite steel-concrete assembly. The effects of non-linear material properties have to be taken into account including, states the Eurocode, “the unfavourable effects of loading and unloading on
the structural stiffness’’. The material model proposed by the Eurocode for steel is indeed highly non-linear. It is important, in the opinion of the authors, that the full stress-strain relationship
be represented, including the descending branch. If not, excessively ductile behaviour could be predicted, linked to very high displacements. The effects of loading and unloading on the structural
stiffness mean that the material model is not elastic. For steel, a plasticity model is normally used. On Figure 7.6, for example, if the material has experienced a loading from O to A on the virgin
curve for a defined temperature and the strain thereof decreases, the unloading will follow the path A-B and not the reversible path A-O. This schematic example is shown here for a constant
temperature. Somewhat more complex algorithmic considerations have to be taken into account and implemented if the temperature as well as the strain and the stress change simultaneously, see Franssen
(1990) for increasing temperature, and El-Rimawi et al. (1996) for the generalisation to the cooling phase. Whether these effects are favourable or unfavourable is not certain. Anyway, they
have to be implemented in the constitutive model.
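The loading/unloading behaviour described here can be mimicked by a minimal one-dimensional elastic-perfectly-plastic model at constant temperature. The modulus and yield stress below are illustrative numbers, not Eurocode values, and hardening and the descending branch are deliberately omitted; the point is only that, once the trial stress exceeds the yield stress, the plastic strain is updated so that a subsequent strain reversal follows the elastic path A-B rather than the virgin curve back to O.

```python
# Minimal 1-D elastic-perfectly-plastic model at constant temperature,
# mimicking the loading/unloading paths of Figure 7.6. The modulus and
# yield stress are illustrative numbers, not Eurocode values; hardening
# and the descending branch are deliberately omitted.

E, fy = 100_000.0, 200.0     # MPa (illustrative)
eps_p = 0.0                  # plastic strain, the state variable

def stress(eps):
    """Stress for a given total strain; the plastic strain is updated
    so that unloading follows the elastic path A-B, not the virgin
    curve back to O."""
    global eps_p
    s = E * (eps - eps_p)                          # elastic trial stress
    if abs(s) > fy:                                # yielding: plastic flow
        eps_p += (abs(s) - fy) / E * (1.0 if s > 0 else -1.0)
        s = fy if s > 0 else -fy
    return s

# load elastically, push beyond yield, then reduce the strain again:
path = [round(stress(e), 6) for e in (0.001, 0.004, 0.002)]
print(path)   # the stress is capped at fy, and unloading is elastic
```

After the third station the stress returns to roughly zero although the strain is still 0.002: the residual plastic strain is exactly the memory effect that a step-by-step transient analysis must carry along.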
[Figure: stress-strain diagram with loading path O-A on the virgin curve and elastic unloading path A-B]

Fig. 7.6 Material plasticity model illustrating loading and unloading phases
The previous requirement on unloading has as a consequence the fact that a step by step or a transient analysis of the structure should be performed, with the time progressively increasing. In fact,
because of thermal effects and large displacements, strain reversals and unloading in the material occur continuously in a structure subjected to fire. A model that would consider the structure with
the temperature distribution that is relevant at the required fire resistance time and would determine the load bearing capacity at that temperature would not be able to take the effects of unloading
into account and would thus not comply with the requirement of the Eurocode. If the material model recommended in the Eurocode is taken into account, effects of transient thermal creep need not be
given explicit consideration. In fact, transient thermal creep, i.e. the additional deformation that occurs during first heating under load, is a feature known essentially for concrete. Steel is more
prone to exhibit steady state creep, at least at elevated temperatures and load levels. This creep is supposed to be implicitly incorporated in the proposed stress-strain relationship and an explicit
creep term is thus not required in the constitutive model. Because numerical structural models can predict a very ductile behaviour of the member, it is not sufficient to let the computer make a run
and simply note the fire resistance time. The deformations of the structure that have been calculated by the model have to be checked by the designer. It may happen, for example, that a simply
supported steel beam exhibits at high temperatures a very large horizontal displacement of the end that is free, to a point that the beam would fall from its support in a real case. If the material
model has an infinitely long plateau without a descending branch, it may even occur that the beam is totally folded by 180◦ in its centre, with one end coming to the other one and the load supported
in pure tension. The displacements have also to be checked in order to verify that compatibility is maintained between all parts of the structure. The beam of a portal frame, for example, may not
fall down below the level of the ground, something that would cause no problem to a computer program. Any potential failure mode that is not covered by the model should be eliminated by appropriate
means. Failure in shear or in local buckling are mentioned as example of such failure mode. This is correct only if a Bernoulli beam finite element is used. A model based on shell finite elements,
for example, is perfectly able to track failure modes by shear and/or local buckling, but it has to be recognised that the utilisation of shell elements for modelling entire structures is not yet
standard practice. The biggest
danger when using a Bernoulli beam element is to bypass the verification of the class of the section because such an element treats all sections as Class 1, however thin the plates of the section
are. Yet, contrary to a commonly heard opinion, hot rolled H and I sections are not systematically Class 1 or 2, especially in the fire situation. What constitutes an appropriate means to eliminate other failure modes is not explained by the Eurocode. Shear can for example be checked separately by the simple calculation model and, if interaction between bending and shear has to be considered, it is possible to introduce a reduced yield strength for the elements near the supports where this is the case. The failure of a structure that comprises Class 2 elements should be specified by the user to
occur when the first plastic hinge has developed; any further calculation that produces a redistribution of the bending moments between the sections along the elements should not be considered.
7.4.2 Capabilities of advanced mechanical models
The most important features of the advanced model are, first, the fact that indirect effects of actions are taken into consideration and, secondly, the fact that large displacements are taken into
account exactly. Thermal expansion of steel indeed generates various effects: a variation of the effects of actions due to restraint in statically indeterminate structures; a variation of stresses in
the section due to temperature differences, see Figure 7.1, that exists even in statically determinate structures; large displacements that also modify the effects of actions because of P-δ effects.
Whereas simple calculation models can take these effects into account only in an approximate manner, or not at all for some of them, the advanced model considers these effects exactly and continuously
during the course of the fire. The bending moments, shear forces and axial forces vary continuously during the fire depending on the evolution of thermal expansion and of stiffness of the members and
depending on the evolution of large displacements. Figure 7.7, for example, shows the bending moment distribution in a multi-storey steel frame. The right hand extremity of each beam is fixed
horizontally by a shear resisting concrete core that is not represented here. On the left is the moment distribution at time t = 0 and on the right is the moment distribution after 30 minutes of fire
developing at the second floor. The axial compression force developing in the beam at the second floor, because of the restraint provided by the bending stiffness of the columns, and the tension
force that develops in the beam at the third floor because of equilibrium reasons, highly modify the bending moment distribution during the course of the fire as illustrated in the right hand part in
the Figure. Figure 7.8 shows the evolution of forces and displacements in a quarter of a concrete slab supported on steel beams at the edges (Lim 2003). The deformed shape of the slab is shown before
the fire and at failure on the left hand side of the Figure. The right hand side shows how the membrane force distribution is modified because of the large deflections. A compression ring develops at
failure near the edges of the slab, allowing tension to develop in the centre of the slab. It is only when this membrane action mechanism is taken into account that the stability of the slab can be
explained; any analysis that considers only bending as a possible load path fails to explain the high fire resistance times computed numerically and observed in experimental fire tests.
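The first of the indirect effects listed above, the force generated by restrained thermal expansion, can be illustrated with a fully restrained bar, for which the suppressed free expansion translates into an axial force N = E(T)·A·α·ΔT. The linear decay of the modulus with temperature and all numeric values below are crude assumptions for the sketch, and this elastic estimate ignores the yielding that would cap the stress in a real member.

```python
# Sketch of an indirect effect of action: the axial force generated in
# a fully restrained steel bar by uniform heating, N = E(T) * A * alpha * dT.
# The linear decay of the modulus and all numeric values are crude
# assumptions for illustration; the elastic estimate also ignores the
# yielding that would cap the stress in a real member.

A = 0.005          # cross-section area, m2
alpha = 12e-6      # thermal expansion coefficient, 1/K
E20 = 210e9        # Young's modulus at 20 C, Pa

def E(T):
    """Crude linear decay of the modulus to zero at 1200 C (assumption)."""
    return E20 * max(0.0, 1.0 - (T - 20.0) / 1180.0)

def restrained_force(T):
    """Axial force [N] needed to suppress the free thermal expansion."""
    return E(T) * A * alpha * (T - 20.0)

for T in (100.0, 300.0, 500.0):
    print(T, round(restrained_force(T) / 1e3), "kN")
```

Even at modest temperatures the elastic restraint force reaches hundreds of kilonewtons for this small section, which is why the bending moment and axial force distributions in Figure 7.7 change so drastically during the fire.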
[Figure: bending moment diagrams at t = 0 and t = 30 min, scale 5.0 E 05 Nm]

Fig. 7.7 Bending moment distribution in a multi-storey steel frame. From Garlock & Quiel (2007)
Theoretically speaking, the physical dimensions of the structure are not an issue in a numerical simulation, whereas experimental facilities are limited to dimensions in the order of a few meters.
Larger and larger steel structures have been analysed in recent years, the destruction of the twin towers and of building 7 at the World Trade Center in New York City in 2001 being the most
spectacular motivation in a common trend that existed anyway. Figure 7.9 shows the numerical model of a composite steel-concrete bridge with a span of 176 meters. This bridge suffered from a severe
localized fire from a leaking gas pipe that it supported and collapsed into the water canal underneath. Only with an advanced model is it possible to analyse 3D structures. This is normally not feasible with simple calculation models, because of the excessive complexity of the problem, nor with experimental testing, because of the size and complexity of the support and loading systems that would be required. Figure 7.10 shows a steel building failing partially because of a localized fire. The quantity and the quality of the results obtained from a numerical
analysis is higher by several order of magnitude than the results provided by simple calculation models or by experimental tests. It is as if several displacement and rotation transducers were
present at every node of the mechanical model and there can easily be several hundreds of them. It is, on the contrary, not so easy to measure correctly the displacements of a structure located in a
furnace and subjected to very high temperatures. Quantities such as strains, stresses or bending moments at virtually any point in
Advanced Calculation Models
(Figure panels: displacements and membrane forces at t = 0 and at t = 187 min.)
Fig. 7.8 Evolution of displacements and forces in ¼ of a concrete slab. Courtesy Dr L. Lim, Univ. of Canterbury
the structure are easily retrieved from a numerical analysis whereas they are not accessible from a test and are only approximated by the simple calculation models. The information that can be gained
on the behaviour and the failure mode of the structure is extraordinary compared to the information gained from other methods that, very often, simply yield a fire resistance time.
7.4.3 Limitations of the advanced mechanical models
The size of the structure is, and probably will remain for a long time, one of the limits encountered in the step by step large displacement non-linear analyses that have to be performed in the fire
situation. The towers of the World Trade Center are again the example that may come to mind first; others are the structure of the Piper Alpha oil platform that was destroyed by a fire following an
explosion in the North Sea in 1988. A steel rack structure in a typical storage building analysed by two of the authors (Zaharia & Franssen 2002) contained an estimated number of 200 000 individual
bars. The limit lies in the capabilities of the computer as well as in the resources, in terms of
Fig. 7.9 Composite steel-concrete bridge under fire
Fig. 7.10 Failure of a 3D steel building due to localised fire
time and money, that can be allocated for discretising the structure, performing the calculations and analysing the results. Because of the former limit, analysing only a part of the entire structure, see Section 5.1, is very often necessary even when using an advanced calculation model. The forces or restraints to displacements to be applied at the boundaries between the substructure that is analysed and the rest of the structure have to be chosen by the designer. No computer software will ever be able to substitute for the judgement of the designer. It has to be realized that the choice that is made can have a significant influence on the failure mode, certainly, and on the fire resistance time in most cases.
Fig. 7.11 Steel stud wall in a fire analysis
A limit encountered by every numerical software that performs a step by step analysis of the structure is the frequent occurrence of false, premature numerical failures. This problem can be traced to the inability of the software to cope with local but temporary failures that may occur in single elements or parts of the structure, whereas they do not endanger the load bearing capacity of the complete structure as a whole. Franssen and Gens (2004) showed that performing a dynamic analysis of the structure is a simple and effective way toward solving this problem. It has to
be realized that the Bernoulli beam finite element, certainly the workhorse of structural fire modelling during the last decade, has inherent limits that make it impossible to detect some failure
modes such as, for example, local buckling and shear failure. Figure 7.11 shows a steel stud wall to illustrate how local buckling can be tackled by using shell finite elements. Of course, although
using shell elements is feasible for the analysis of a single member or of a detail, it is unthinkable for the analysis of a complete structure of significant size, at least for the time being.
Figure 7.12 shows the deformed shape of a cellular steel beam at failure. The length of the beam is short in this example, because this corresponds to a real specimen that has been tested
experimentally. Note that the displacements have not been amplified in the drawing. Although the beam and the loading are normally symmetrical, there is a localization of the distortion near one end
at failure. Such unsymmetrical failure
Fig. 7.12 Failure of cellular steel beam in fire
Fig. 7.13 Composite steel-concrete car park exposed to fire
modes are commonly observed in experimental tests. These very large displacements and the non-symmetrical failure mode could be obtained only with a dynamic analysis of the problem. Beam and shell
finite elements can of course be combined, as is the case for the car park presented on Figure 7.13. The steel columns and beams have been represented by beam elements whereas the concrete slab has
been represented by shell elements.
Discrepancies with the simple calculation models
It has been explained that two correction factors, κ1 and κ2 , are used in the simple calculation model for taking into account the effects of non-uniform temperature distribution, see Section
5.6.4.2.2. It would theoretically be possible, in a numerical analysis, to make a full 3D thermal analysis of the beam, including any protection and the cooling effect induced at the supports. This is, however, by far too complex an analysis for real applications, and the effect is usually neglected. It has to be realized that, as a result, the results of an analysis by the advanced model may be less favourable than the results of an analysis of the same beam by the simple model. It is possible and easy to take the effect of colder temperatures at the supports into account in the advanced model in an approximate manner: a certain length of the elements on each side of the support may be given a less severe thermal exposure and, hence, a less severe temperature increase. What length has to be given to the affected zone, and what decrease in severity of the thermal attack has to be chosen in order to achieve an increase of the load capacity up to 1/0.85 = 1.18, is not known. As already stated in Section 5.6.4.2.2, the increase to 1/0.70 = 1.43 that is provided by the simple model for unprotected sections supporting a concrete slab will by no means be provided by the advanced model, except in the case of unsymmetrical sections, those whose top flange is much weaker than the lower flange.
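The arithmetic of these adaptation factors can be sketched as follows. This is an illustrative fragment, not code from the book; the factor values are the ones quoted in the text for the simple calculation model.

```python
# Sketch of the kappa adaptation applied in the simple model (Section 5.6.4.2.2).
# Values from the text: kappa1 = 0.70 for an unprotected beam supporting a
# concrete slab, 0.85 for a protected one; kappa2 accounts for the supports of
# a statically indeterminate beam. Function and variable names are illustrative.

def adapted_moment_resistance(m_fi_theta_rd: float,
                              kappa1: float = 0.70,
                              kappa2: float = 1.0) -> float:
    """Design moment resistance in fire, increased by the adaptation factors
    for non-uniform temperature (kappa1) and colder supports (kappa2)."""
    return m_fi_theta_rd / (kappa1 * kappa2)

# An unprotected section under a concrete slab gains 1/0.70 = 1.43 on bending
# resistance in the simple model; the advanced model, with its explicit
# temperature field, will generally not reproduce this gain.
gain_unprotected = adapted_moment_resistance(1.0, kappa1=0.70)
gain_protected = adapted_moment_resistance(1.0, kappa1=0.85)
print(round(gain_unprotected, 2), round(gain_protected, 2))  # 1.43 1.18
```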
Chapter 8
Design Examples
8.1 General This chapter presents four examples showing how a complete structure can be analysed in the fire situation using the concept of sub-structure analysis or element analysis. The first example
shows how a single span is extracted from a continuous beam allowing a member analysis by the simple calculation model. The discussion illustrates the concept explained in Section 5.1.2. The second
example illustrates the same concept for extracting a sub-structure from a moment resisting frame allowing analysis by the advanced model of the sub-structure instead of the complete structure. The
third example presents the steps to be followed when designing a hypothetical industrial building, using the simple method for the verification of the elements. The last example presents a case study
of a particular building: a high-rise rack steel structure supporting the building envelope. Due to the complexity of the structure, sub-structures have also to be defined, those being analyzed by an
advanced model. The discussion here encompasses also the decisions that were taken for representing the fire exposure.
8.2 Continuous beam The procedure described in Section 5.1.2 is applied here for a continuous beam that is not restrained against axial expansion, see Figure 8.1. The main steps to be followed in the analysis are as follows:
Step 1: Determine the effects of action in the whole beam under the design load combination for the fire situation. In this particular case, this step might be omitted as will be seen later in step 5.
Step 2: Each span will be successively analysed as an element. This is because, on one hand, the structural analysis of a single span beam is a trivial problem with very well known solutions, whatever the boundary conditions, and, on the other hand, the plastic theory indicates that the ultimate load of a span is not affected by the loads applied on the other spans. Here, the load is constant, but the same theory says that the fire resistance of a span is not influenced by the loads applied on the other spans. The next steps here deal with the analysis of the central span.
Fig. 8.1 Continuous beam exposed to fire
Step 3: The vertical restraints provided by the supports at both ends of the span are taken into account for the element because these are supports of the structure.
Step 4: The loads applied on this central span are taken into account.
Step 5: Degrees of freedom at the boundary. (a) For the horizontal degrees of freedom, one is fixed, in order to prevent a rigid body movement, and on the other one the displacement is not fixed but the axial force determined during the structural analysis of the total beam is applied, in this case 0. This way, no axial restraint is created in the element, which corresponds to the situation in the structure. (b) The rotational degrees of freedom are fixed (this would not be the case for the exterior supports of the first and last span). The fact that they are fixed will allow the development of the plastic hinges at the supports, similarly to the failure mode that develops when this span fails in the structure. Of course, plastic hinges can develop only in Class 1 and Class 2 sections.
Step 6: The effects of actions, here bending moments and shear forces, can be determined in this element, a one span beam with fixed end rotations. No indirect fire action is considered in an element analysis.
Step 7: The resistance and stability of this span of the beam is then verified according to the appropriate equations from Section 5.6.4 or 5.6.6, taking into account whether the beam is laterally restrained or not, whether the temperature in the section and along the beam is uniform or not, and whether the section is a Class 1, 2, 3 or 4 section in the fire situation.
The procedure has to be repeated for every span, every load case and every fire scenario.
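Steps 6 and 7 can be illustrated with a hedged sketch for the simplest case: a Class 1 span with fixed end rotations under uniform load, verified in the load domain. The numbers and the yield-strength reduction factor below are invented for illustration, not taken from the book.

```python
# Hedged sketch of the load-domain verification of the central span (Steps 6-7).
# By plastic theory, a Class 1 fixed-end span under uniform load collapses when
# plastic hinges form at both supports and at midspan, i.e. w_u = 16*Mp/L^2.
# In fire, Mp is reduced by the yield-strength reduction factor k_y_theta.
# All numerical values are illustrative assumptions.

def collapse_load_fire(m_p_20: float, k_y_theta: float, span: float) -> float:
    """Ultimate uniform load (kN/m) of a fixed-end span in the fire situation."""
    return 16.0 * m_p_20 * k_y_theta / span**2

m_p_20 = 200.0    # plastic moment at 20 C, kNm (assumed)
span = 8.0        # span length, m (assumed)
w_fi_d = 30.0     # design load in the fire situation, kN/m (assumed)

k = 0.6           # yield reduction at some steel temperature (assumed)
ok = collapse_load_fire(m_p_20, k, span) >= w_fi_d
print(ok)  # 16*200*0.6/64 = 30.0 kN/m, just sufficient -> True
```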
8.3 Multi-storey moment resisting frame The procedure described in Section 5.1.2 is applied here for a multi-storey moment resisting frame, see Figure 8.2. This example shows how a sub-structure is extracted from the frame and explains the considerations underlying the choice of the boundary conditions between the sub-structure and the rest of the frame. It is assumed that each floor of the building constitutes a separate fire compartment.
Step 1: Determine the effects of action in the total structure under the fire load combination that is being considered.
Fig. 8.2 Moment resisting frame exposed to fire
Step 2: The first decision, concerning the limits of the substructure, is here to limit the analysis to a plane frame. This is usually the case if the structure of the entire building is made of parallel similar frames with only secondary beams spanning from one frame to the other. In this plane frame, the columns directly exposed to the fire have to be part of the substructure, as well as the beam which is directly exposed to the action of the fire, i.e. the beam above the fire compartment. The beam and columns under the fire compartment may be excluded, because they will be represented by rigid boundary conditions, see next step. The columns above the fire compartment, on the other hand, will be included in the substructure because they offer less restraint to rotation at the top of the fire columns and their flexibility will influence the moment redistribution in the part of the substructure that is directly affected by the fire.
Note: in this case, and strictly speaking, there is an interaction between steps 1 and 2. The true decision process would probably be first to decide to make a 2D plane analysis (and this is a decision that belongs to step 2), then to make step 1, the analysis of the structure, i.e. the plane frame only.
Step 3: The substructure shown in the lower part of Figure 8.2 has no direct support to the foundations.
Step 4: The loads directly applied on the substructure are taken into account. In this case they consist mainly of the dead weight and of the loads applied on the beam above the fire compartment. Wind load could also be applied on the external columns if relevant.
Step 5: Degrees of freedom at the boundary. (a) All degrees of freedom out of the plane of the substructure are fixed at each point where a cut has been made to separate the substructure from the structure, i.e. certainly at all beam to column joints. No out of plane displacement or rotation can exist at these points. Out of plane displacements will probably also be fixed for the beam, owing to the presence of a floor slab. Whether out of plane displacements will be fixed for the columns depends on the local arrangements of the building. This will be the case, for example, if there are transverse masonry walls between the columns. In this case, the substructure will be considered as a 2D object in the structural analysis. If this is not the case, a 3D analysis may prove to be necessary, even if the substructure is plane, in order to be able to check the out of plane stability of the columns. If the frame is a sway frame, it would be more representative of the real behaviour not to fix the horizontal displacements at the top of the columns but to impose the same displacement for all the columns. (b) If the frame is a braced frame, it will be assumed for the analysis of this substructure that the efficiency of the bracing system is maintained for a duration that is longer than the fire resistance of the substructure. Three fixed horizontal restraints are thus introduced at the connection between the bracing system and the substructure, here at the bottom, at mid level and at the top of the right hand column. The fire resistance of the bracing system has to be checked in a subsequent analysis, considering the bracing system as another substructure. (c) At the bottom of the heated columns, • the vertical displacement is fixed. This reflects the fact that the true vertical displacements at these points are very similar for all the columns, • the horizontal displacement of all columns will be the same, which reflects the high axial stiffness of the lower beam. Because the bottom of the right hand column is fixed by the bracing system, all columns will be fixed, • the rotation is also fixed, owing to the presence at each joint of the lower beam and the lower column, which remain cold and thus much stiffer than the heated columns of the fire compartment. (d) At the top of each non-heated column, • the vertical displacement is free, because there is no reason to introduce an axial restraint at these points. The axial force resulting from the analysis of the entire structure at time t = 0 is applied here, • the horizontal displacement of all columns is the same, owing to the axial stiffness of the upper beam. Because the right hand column is fixed by the bracing system, all columns will be fixed, • the rotation is free and the bending moment derived from the analysis of the entire structure is applied. It is not necessary to determine the effects of actions at time t = 0, see step 7.
Step 6: Indirect fire actions have to be considered in the substructure, which implies that, practically, the only possible calculation method is the advanced calculation model, see Section 5.2.1 and Chapter 7. The effects of actions are thus evaluated continuously during the course of the fire and their value at time t = 0 is not relevant.
Step 7: The procedure is repeated for every load case and every fire scenario, i.e., in case of a nominal fire, for the fire being supposed to be located successively at each different floor.
8.4 Single storey industrial building The structure considered in this example is presented on Figure 8.3. The skin of the building is made of corrugated steel sheets. The diaphragm effect from the
sheeting is nevertheless not considered in the design of the building, at least not in the fire situation, because of the uncertainty on the behaviour of the connectors at elevated temperature. Seam fasteners, for example, may well be made of aluminium alloy blind rivets, and aluminium loses strength faster than steel under increasing temperatures. Steel self-drilling self-tapping screws used as sheet to perpendicular member fasteners may be associated with neoprene washers, and neoprene may soften under high temperatures. The sheets of the roof and of the walls are supported by continuous
steel purlins spanning from frame to frame. The transversal frames, with hinged bases, are linked by means of rafters at the level of knee and corner joints.
Fig. 8.3 Steel industrial building exposed to fire
Intermediate columns supporting the vertical loads as well as the wall purlins are placed in the two end frames. Cross-bracings are provided between the frames in the two end spans, in the lateral
walls and in the roof. In the transversal direction, cross-bracings are provided in the lateral parts of the end frames. The loads to be considered are:
• The permanent load, G
• The wind in the transversal direction, WT
• The wind in the longitudinal direction, WL
• The snow load, S
If the frequent value of the dominant variable action is taken into account, see Equation 2.5.a, the load combinations in case of fire are:
• Combination 1: Gk + ψ1 Sk + ψ2 WTk
• Combination 2: Gk + ψ1 Sk + ψ2 WLk
• Combination 3: Gk + ψ1 WTk + ψ2 Sk
• Combination 4: Gk + ψ1 WLk + ψ2 Sk
If the quasi-permanent value of the dominant variable action is taken into account, see Equation 2.5.b, the load combinations in case of fire are:
• Combination 1: Gk + ψ2 WTk + ψ2 Sk
• Combination 2: Gk + ψ2 WLk + ψ2 Sk
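As a sketch, the frequent combinations above can be evaluated numerically. The ψ values and characteristic loads below are illustrative placeholders, not values from the book; actual values come from the relevant code (see Table 2.1) and depend on the load category.

```python
# Sketch of the four "frequent" fire combinations (Equation 2.5.a):
# Gk + psi1 * Q_dominant + sum of psi2_i * Q_i for the accompanying actions.
# All psi factors and characteristic values are assumed, for illustration only.

def fire_combination(g_k, dominant, psi1, accompanying):
    """Gk + psi1*Q_dominant + sum(psi2_i * Q_i) over accompanying actions."""
    return g_k + psi1 * dominant + sum(psi2 * q for q, psi2 in accompanying)

g_k, s_k, w_t_k, w_l_k = 5.0, 2.0, 1.5, 1.0   # characteristic values (assumed)
psi1_snow, psi2_snow = 0.5, 0.2               # assumed
psi1_wind, psi2_wind = 0.2, 0.0               # assumed; psi2 for wind often 0

combos = [
    fire_combination(g_k, s_k, psi1_snow, [(w_t_k, psi2_wind)]),   # Combination 1
    fire_combination(g_k, s_k, psi1_snow, [(w_l_k, psi2_wind)]),   # Combination 2
    fire_combination(g_k, w_t_k, psi1_wind, [(s_k, psi2_snow)]),   # Combination 3
    fire_combination(g_k, w_l_k, psi1_wind, [(s_k, psi2_snow)]),   # Combination 4
]
print([round(c, 2) for c in combos])  # [6.0, 6.0, 5.7, 5.6]
```

With ψ2 = 0 for wind, combinations 1 and 2 coincide, which is why the quasi-permanent set above reduces to two combinations in practice.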
The effects of actions in the structural elements have to be determined in fire situation, at time t = 0. The analysis of the structure is performed in the same way as for the analysis at room
temperature, considering successively either the four or the two load combinations presented above. An appropriate fire scenario has to be chosen. Considering that an analysis by a CFD model will
probably not be performed for such a simple and relatively common structure, the following fire models may be considered:
• the standard ISO temperature-time curve,
• localised fire models,
• two zone models.
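The combined use of these models can be sketched as a switching criterion. The 500°C hot-zone threshold is the flashover criterion quoted in the Section 8.5 case study; the smoke-layer height condition is an illustrative placeholder for the kind of additional criterion a tool such as OZone applies, not its actual rule.

```python
# Minimal sketch of the two-zone -> one-zone switch during a fire simulation.
# The 500 C hot-zone temperature is the flashover threshold from Section 8.5;
# the interface-height criterion and its 0.8 factor are assumed placeholders.

def one_zone_required(hot_zone_temp_c: float,
                      hot_zone_height: float,
                      compartment_height: float) -> bool:
    """Return True once the compartment should be treated as a single zone."""
    flashover = hot_zone_temp_c >= 500.0
    smoke_filled = hot_zone_height >= 0.8 * compartment_height  # assumed limit
    return flashover or smoke_filled

print(one_zone_required(450.0, 2.0, 10.0))  # False: two-zone situation persists
print(one_zone_required(520.0, 2.0, 10.0))  # True: flashover -> one-zone model
```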
The parametric temperature-time curve may be applied if the dimensions of the fire compartment (considered as the entire volume of the building) do not exceed the limits of application of this
simplified fire model. Thermal actions from localised fires may be taken into account until the flash-over occurs. The occurrence of the flash-over may be determined by a simulation of the fire
development, using an advanced two-zone model. If for the required fire resistance time of the main structural elements the flash-over is still not produced, then it is sufficient to perform the
analysis of different elements under combined effects of the localised fire and of the hot zone of the two zone model. If the zone model indicates the
occurrence of flash-over, the analysis should be continued from that time and further in the one-zone condition, without any effect from the localised fire. Note: the software OZone developed at the
University of Liege, Cadorin and Franssen 2003 and Cadorin et al. 2003, has the ability to switch automatically from a two zone to a one zone configuration when certain criteria are met. It can be
downloaded for free from the web site of the university. If no appropriate software tool is available to determine a fire scenario with a two-zone advanced model, the standard ISO temperature-time curve is accepted by default. For this example, all further considerations concerning the analysis of the elements in case of fire are made assuming that the entire compartment is engulfed in fire and the temperature of the gases in the compartment follows the standard ISO temperature-time evolution. The normal and most usual situation is that the required fire resistance time
is known, since it is prescribed by the relevant fire authorities or in the local building codes. The analysis of each structural element can thus be performed in the load domain. The required fire
resistance of the frames is verified on the basis of the governing equations given in Chapter 5 considering each separate element, namely the columns, subjected to combined bending moment and axial
force, and the beam, subjected mainly to bending moment. The fire resistance of the frame is the lowest resistance of all the elements. As a consequence, a different required fire resistance for beams and columns, as prescribed in some regulations depending on the building occupancy, is indeed meaningless in the case of single storey frames like the one in the present example. Even if the required fire resistance for beams is less than that of the columns, the same fire resistance time will in fact be obtained for all elements, because collapse of the beam
produces the collapse of the entire frame. The calculation of the fire resistance of the frame, i.e. of its elements, columns and beam, is performed assuming that the stability of all other members
that are required for the stability of the frame is ensured. These are, for example, the bracings and the rafters. Of course, the fire resistance of these elements will have to be checked in a
subsequent stage of the analysis. Indeed, the columns may be considered as fixed in the longitudinal direction by the rafters at the corner connection only on the condition that the rafters and the
cross-bracings from longitudinal walls are still effective, at least until the corresponding required time resistance. Therefore, all the cross-bracings and rafters in the longitudinal walls must be
designed to withstand the loads in the fire situation resulting from the static analysis, during the same time as the columns. The buckling length of the columns to be considered in this case for the
longitudinal direction would be the same as for room temperature, i.e. the column height. As a simplification, the buckling length in the plane of the frame may also be calculated as for room
temperature. The beam may be considered laterally fixed between the corner and the knee connection of the transverse frame, provided that the conditions above are fulfilled and also the rafters at
knee joint and the bracing system in the roof are effective at the corresponding resistance time of the beam. Considering the lateral restraint of the beam against lateral-torsional buckling, this
could be provided for the upper compressed flange by the purlins. But the problem is that, generally, the roof is considered as a secondary system of the structure. It is thus not required to have
any fire resistance
time. The purlins can perhaps stabilise the beam if they are able to act as tension elements transmitting the stabilising effect to one of the end bracing systems. Two end bracing systems have to be
present if this mechanism is to be expected to work because the lateral instability of the beam may be in either direction. If this is not the case, and if the beam does not satisfy the required
resistance time, instead of reinforcing the purlins until they can work in compression, which could lead to important supplementary costs, additional rafters may be provided in the plane of the roof,
at mid span of the beam, in line with the front columns, as marked on Figure 8.3. The end frames have to be verified separately. It is indeed likely that the presence of the columns in the end gables
has been accounted for in the design at room temperature and that the members of these frames are somewhat weaker than those of the current frames. For the front columns in the end gables, the
buckling length may be considered equal to that determined at room temperature, i.e. equal to the length of the columns for both directions. Being part of the front transverse frame, the front
columns are subjected to combined axial force, received from the end transverse frame, and to bending moment from wind action. Their resistance time in fire conditions must be the same as that of the
columns and beams of the frames, considering that their collapse may produce the collapse of the end transverse frames. Due to the presence of the hinged columns in the end gables, which offer
supplementary support to the beams but no rigidity to the lateral loading, and because the end frames take less vertical loading than the main transverse frames, the lateral columns and the beams of
these end frames are weaker than those of the main transverse frames. Therefore, the vertical cross-bracings in the end gables will be designed in order to ensure for the front transverse frames at
least the same stiffness as that of the main transverse frames. Because no precise recommendation is given for limiting the displacements in case of fire (see Section 1.2), it may be considered that
the resistance time of these cross-bracings is not critical. However, if the resistance time of the cross-bracings in the front transverse frame is less than the required resistance time, a
supplementary verification must be performed for the front frames, considering that the cross-bracings are not present. It must be noted that, considering the flexibility of the front frame in this
case, second order effects may become important and should then be considered in the static analysis, which is not easy with the simple calculation model. The bracings that resist the longitudinal
wind loads have to be verified. Usually, only the diagonals that work in tension are considered, while those that are submitted to compression are neglected because their high slenderness induces
buckling failure. Diagonals that work in compression can be considered if they are appropriately sized. Finally, the rafters that stabilize the frames have also to be checked. The axial forces
required to mobilise the stabilising effect to the frame have to be evaluated. It is likely that the load combination with the longitudinal wind loads will be the most critical one for these
elements. The high number of elements to be verified, combined with the number of load combinations (consider that each wind load may in fact comprise various possibilities for the inside pressure),
leads to an enormous amount of calculations to be performed. It has also to be considered that the procedure described above is only applicable for verifying a structure with known dimensions and
properties. In the case that
the outcome of the fire verification is not satisfactory, a new verification has to be performed with new sections. This is why the advanced calculation models, although they require an additional
cost for acquiring the appropriate software and for learning to use it, offer enormous advantages. Most often, the advanced model would be used to analyse one frame as a 2D statically indeterminate
structure loaded in its plane but modelled by 3D beam elements in order to allow out of plane instability to develop. Other elements such as rafters and members of the bracing system would be
analysed by the simple calculation model because the mechanical behaviour of these elements is much simpler. It may be possible that in the future the whole building structure will be modelled
completely and analysed as one single object, as is more and more often performed at room temperature. The main difficulty in such a global analysis is the fact that a great number of members exhibit
instability because of the restraint forces induced by thermal expansion. A step by step algorithm relying on successive static analyses is not suitable for analysing such a complex structure. A
dynamic analysis is normally required, see Section 7.3.3.
8.5 Storage building This example presents a fire design case study for a high-rise storage rack system that supports the skin of the building. From the legal point of view, the classic rack systems
located within a building with its own independent structure are usually not required to have any fire resistance. These racks are considered as furniture. On the contrary, if the skin of the
building is directly applied on the rack system, the racks then become the structure of the building, and they must have the required fire resistance. The structure considered here (Zaharia &
Franssen, 2002) is a steel storage building of this type built by the Belgian company TRAVHYDRO. It has a floor surface of 9168 m2 and is 30 m high. There are 36 racks on the 160 m length of the
warehouse. Between the cross-aisle frames of the racks, horizontal elements are provided in order to maintain the distance between the rails for the wagons of the automatic pallet transport system,
as shown on Figure 8.4. In the down-aisle direction, one rack is 60 m long and is provided with 10 levels for pallet storage, see Figure 8.5. For this type of industrial building in which hardly
any person is present, and accounting for the existence of an automatic sprinkler extinguishing system, a fire resistance time of 15 minutes is required. Concerning the safety of the occupants of the
building, it must be emphasised that only two people are authorised to enter the building, for two hours, once a week. These people are trained for fire situations, know the building, and they may
evacuate in less than 10 minutes from the moment of fire discovery. A special requirement for this building was that, even if one rack of the structure collapses, because of a malfunction of the
sprinkler system for example, the entire building must not present a progressive collapse phenomenon. This requirement is for preserving the safety of fire fighters that may have entered the building
to fight the fire that engulfs one single rack. The analysis of the warehouse subjected to fire was performed with the SAFIR computer program.
Fig. 8.4 Cross-aisle direction (elevation) in a warehouse
Fig. 8.5 Down-aisle direction (elevation) in a warehouse
Design Examples
A mechanical analysis at elevated temperatures, considering the standard ISO fire in the entire compartment, was performed first and showed that the 15 minutes of fire resistance required for this type of building cannot be obtained. The reasons are:
• First, the low thermal massivity of the cold-formed profiles normally used to build rack systems leads to a rapid temperature increase in the members and, consequently, a rapid decrease of the load bearing capacity.
• Second, the combination factors for frequent or quasi-permanent variable actions for storage systems are much higher than for other types of buildings, typically in the order of 0.8 to 0.9, see Table 2.1. The structure is thus highly loaded in the fire situation.
Also playing a key role are the important indirect effects of action caused by thermal expansion in such building systems, which are often continuous over 100 meters and supposed to be uniformly heated when submitted to a nominal fire.
It is thus desirable to consider a more realistic approach for the fire scenario, taking into account the pre-flashover phase as well as different physical parameters, concerning the fire load and
the building itself. In order to determine the temperature-time curves for the thermal analysis of cross-section, using the combined two zone and one zone model, the computer program OZone developed
at the University of Liege was used. Depending on the fire spread area, on the height of the hot zone, or on the temperature in the hot zone, a transition from the two zone situation to a one zone situation, with a uniform temperature in the entire compartment, may occur. Flashover is assumed to occur when the temperature in the hot zone reaches 500◦ C. There are no large openings in the
storage building. There are no windows, only normal doors for personnel access. Smoke exhaust systems are provided in the ceiling, with a surface equal to 2% of the surface. The fire load density was
evaluated as 8400 MJ/m2 , taking into account the combustibles provided by the stored goods on the 25 m height of storage in the building, and the maximum rate of heat release density is 8620 kW/m2 ,
according to a previous study made by CTICM (Joyeux & Zhao, 1999). A very important parameter for the evolution of the temperature-time curve is the fire growth rate. Recent tests on rack fires made
at TNO in The Netherlands demonstrated that the fire growth rate in racks may be faster than “ultrafast’’, to which corresponds a time constant of 75 seconds in a t 2 model. For this building, a time
constant of 66 seconds was taken into account as the time to obtain 1 MW of heat release, which represents the fastest fire growth rate obtained in the experimental tests. On the basis of these
hypotheses, the temperature-time curve for the hot zone obtained with the two zone model is shown in Figure 8.6. The compartment is under a two zone situation until 17 minutes after the fire
beginning, when the interface of the two zones reaches less than 20% of the building height, so one of the criteria for one zone transition is fulfilled. The flashover is reached after 24 minutes.
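The t² fire growth model used above can be sketched numerically. This is a purely illustrative implementation, not part of the original study; the function name is ours, and the 66 s time constant is the value quoted in the text (the time to reach 1 MW).

```python
def hrr_t_squared(t, t_alpha=66.0):
    """Heat release rate [MW] of a t-squared growth fire.

    t_alpha is the time [s] needed to reach 1 MW; 66 s is the value
    retained for this warehouse, faster than the 75 s of the
    conventional "ultrafast" curve.
    """
    return (t / t_alpha) ** 2

# At t = t_alpha the fire reaches 1 MW by definition:
hrr_t_squared(66.0)   # → 1.0
hrr_t_squared(132.0)  # → 4.0
```

In a full calculation the heat release rate would in addition be capped by the maximum rate of heat release density (8620 kW/m² here) times the fire area; the sketch only covers the growth phase.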
After 10 minutes of fire, the thickness of the smoke layer is around 20 meters and the temperature in the lower zone is less than 30◦ C, so the smoke and the temperature should not impair the ability
of the occupants to evacuate. For the rack where the fire starts, the fire scenario supposes that a column and the adjoining rails are subjected to localised fire. The rest of the structure is under
Fig. 8.6 Temperature-time curve for the upper zone in the warehouse
Fig. 8.7 Collapse of one rack under natural fire
the influence of the two zone temperature distribution, as indicated schematically by Figure 8.7. Because it is totally unrealistic to model the whole structure in the mechanical analysis, a series
of uncoupled analyses is performed considering separately the behaviour in one plane, Figure 8.4, then in the other planes, Figure 8.5. Because of the high restraint to thermal expansion in this
highly indeterminate structure, the buckling of some elements within the bracing system of the rack occurs after less than 3 minutes of fire exposure, but this does not lead to the collapse of the
entire rack. The numerical analysis indicates that the global collapse of the rack under fire occurs after around 6 minutes, just after the collapse of the upright under localised fire. In the other
direction, and assuming that the fire started in the rack located in the middle of the building, it can be considered that the building is cut in two separate parts by the collapse of the rack under
fire at 6 minutes, as shown in Figure 8.8. After the collapse of the first rack, it can be considered that the fire spreads to the two adjoining
Fig. 8.8 Cross-aisle configuration after collapse of the first rack
frames, and they are under the influence of a localised fire. The working hypothesis is that the collapse of a rack produces the breaking of the horizontal elements between cross-aisle frames so it
does not produce the collapse of adjoining frames. Therefore, in order to ensure that the progressive collapse is avoided, it is necessary to design the horizontal elements connections to withstand
the compression and tension forces induced by the wind and by the movement of the wagons of the automatic transport system, but to have a low resistance to relative displacements between
frames in both vertical and horizontal directions. The proper design of these elements and their relevant connections that ensure the appropriate behaviour may be the biggest challenge for the fire
safety of this structure. These horizontal elements between the cross-aisle frames are the next elements that will suffer and fail from the fire after the collapse of the first rack, especially those
that are located in the upper zone, see Figure 8.8. Indeed, due to the continuous temperature increase in the upper zone, these elements exhibit thermal expansion, and the horizontal displacement that should be produced at the top of the racks is opposed by the stiffness of the lower and colder part of the racks. The compression forces in the horizontal elements increase until failure by buckling occurs, usually in the middle of the remaining structures, i.e. in the case shown in Figure 8.8 at ¼ and ¾ of the length of the hall. This buckling reduces the restraint forces in the
elements, for a while at least because further temperature increase will again induce increasing compression forces in the elements that have not yet failed. The analysis of the cross-aisle direction
of the building, see Figure 8.8 is thus performed, considering the expansion and the progressive suppression of the horizontal elements. The analysis using the two zone situation for the entire
building combined with the localised fire for the racks in the middle of the building shows that collapse occurs within about another 6 minutes in the two racks under localised fire. The localised
collapse of the racks in the middle of the building may continue in the same manner, without producing progressive collapse of the whole structure, on the condition that the horizontal elements
between adjacent racks break at the moment of collapse of one of the racks that they connect. The global collapse of the entire structure corresponds to the moment of collapse of all the other racks, those not under localised fire but under the influence of the increasing temperature in the two zones. The structure as originally proposed exhibited a global and progressive collapse before
flash-over occurs and this situation was judged as unacceptable because this
could be a great threat, or even a possible killer, for fire fighters who may have entered the building because the situation is still tenable. Some slight modifications have been
proposed, mainly in the bracing system, until it could be shown that the modified structure survives long enough to reach the time corresponding to the flashover phase. After this limit, it would be
difficult to obtain more structural resistance, taking into account the high level of temperatures in the compartment. It would also probably be totally irrelevant. The collapse of the first racks
under localised fire, after 6 or 12 minutes, does not present a danger because at this moment the persons who evacuate the building cannot be present in the area that is engulfed in fire. Considering
the fire fighting and rescue services, which may arrive in less than 15 minutes after the fire beginning, the main risk for those who may enter the building is the progressive collapse of the
structure. The present study tends to indicate that no progressive collapse has to be feared as long as the flash over does not occur. When and if this happens, the structure is certainly in great
danger of immediate collapse but, at this time, this is not anymore a real issue for the firemen, who should have evacuated the compartment. There is certainly an uncertainty attached to the results
presented because of the uncertainties linked to the data and to the fire model. Such a study can nevertheless provide very interesting and useful information. It shows that it is nearly impossible
to strengthen the structure until it presents a fire resistance of 15 minutes to the standard fire. Trying to reach this objective would cost enormous amount of money and would not necessarily
improve the fire safety in the building. Indeed, because any real fire would start as a localised fire, it may even occur that the additional stiffness induced in the system would be detrimental to the stability of the rack under fire. Analysis of the development of the temperatures likely to exist in case of a fire shows that there is a sufficient period of time available for evacuation before flash-over. This is because of the huge volume of the compartment, in which the energy produced by the fire is dissipated, as well as the presence of heat and smoke evacuation systems in the roof.
The failure of the racks under fire cannot be avoided, but can probably be accepted. The key issue is that this local failure does not lead to progressive collapse of the whole structure. The failure
mechanism has been identified and the role of the horizontal elements between the racks has been pointed out. Care should be taken in order to ensure an appropriate behaviour of these elements.
Simple and economic modifications of the structure have been proposed in order to extend the time before global collapse, the aim being that it does not occur before flashover makes the situation in the compartment untenable for any occupant possibly still in the building or for fire fighters who may have entered it.
Annex I
High Temperature Properties and Temperature Profiles
I.1 Thermal properties of carbon steel
The following sections describe the thermal properties of carbon steel. The thermal properties of stainless steel are given in Annex C of Eurocode 3 and in the Appendix of the ASCE manual (Lie, 1992). The AISC steel design manual lists the Eurocode thermal properties. There are slight differences between the Eurocode and the ASCE specified high temperature thermal properties.

I.1.1 Eurocode properties
I.1.1.1 Thermal conductivity
The thermal conductivity of carbon steel decreases with temperature in the manner described by Eq. I.1, see Figure I.1:

  λa = 54 − θa/30 ≥ 27.333 W/(m·K)    (I.1)

where θa is the steel temperature in ◦ C. Although this is not explicitly stated in Eurocode 3, the thermal conductivity of steel is generally assumed to be reversible during cooling, which means that it varies according to Eq. I.1 during heating from 20◦ C up to a temperature θa,max as well as during subsequent cooling back to 20◦ C.

I.1.1.2 Specific heat
The specific heat of carbon steel, in J/(kg·K), varies with temperature in the manner described by Eq. I.2, see Figure I.2 (plotted in kJ/(kg·K)):

  ca = 425 + 0.773 θa − 1.69 × 10⁻³ θa² + 2.22 × 10⁻⁶ θa³   for θa < 600◦ C
  ca = 666 + 13002/(738 − θa)                                for 600◦ C ≤ θa < 735◦ C
  ca = 545 + 17820/(θa − 731)                                for 735◦ C ≤ θa < 900◦ C
  ca = 650                                                   for 900◦ C ≤ θa    (I.2)
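Eqs. I.1 and I.2 can be evaluated directly; a minimal sketch (the function names are ours, not part of any standard):

```python
def steel_conductivity(theta):
    """Thermal conductivity of carbon steel, Eq. I.1 [W/(m.K)].

    Decreases linearly with temperature, with a floor of 27.333
    reached at 800 deg C and kept up to 1200 deg C.
    """
    return max(54.0 - theta / 30.0, 27.333)

def steel_specific_heat(theta):
    """Specific heat of carbon steel, Eq. I.2 [J/(kg.K)].

    The two hyperbolic branches produce the sharp peak around
    735 deg C caused by the crystallographic phase change.
    """
    if theta < 600.0:
        return 425.0 + 0.773 * theta - 1.69e-3 * theta**2 + 2.22e-6 * theta**3
    if theta < 735.0:
        return 666.0 + 13002.0 / (738.0 - theta)
    if theta < 900.0:
        return 545.0 + 17820.0 / (theta - 731.0)
    return 650.0
```

For example, `steel_specific_heat(20.0)` is about 440 J/(kg·K), while near the phase-change peak the value climbs above 2000 J/(kg·K).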
Fig. I.1 Thermal conductivity of carbon steel
Fig. I.2 Specific heat of carbon steel
The peak in the curve around 735◦ C is due to the crystallographic phase change of the material. This peak induces an S shape in the curves that show the evolution of the temperature of steel sections with time: the temperature increase slows down around 735◦ C, then accelerates again at higher temperatures.

I.1.2 Thermal properties of steel according to ASCE

I.1.2.1 Thermal conductivity
ASCE (Lie, 1992) provides empirical relationships for the high temperature thermal properties of steel. While the trends in thermal conductivity are similar to those of Eurocode, there are slight variations in the actual values at different temperature ranges:

  ks = −0.022T + 48 W/(m·◦ C)   for 0 ≤ T ≤ 900◦ C
  ks = 28.2 W/(m·◦ C)           for T > 900◦ C

where T is the temperature in the steel and ks is the thermal conductivity.
I.1.2.2 Specific heat
ASCE lists thermal capacity relationships for structural steel at various temperature ranges. The thermal capacity, defined as the product of specific heat and density, follows trends similar to the Eurocode specific heat values, but with slight variations in the actual values at different temperature ranges:

  ρs cs = (0.004T + 3.3) × 10⁶ J/(m³·◦ C)      for 0 ≤ T ≤ 650◦ C
  ρs cs = (0.068T − 38.3) × 10⁶ J/(m³·◦ C)     for 650◦ C < T ≤ 725◦ C
  ρs cs = (−0.086T + 73.35) × 10⁶ J/(m³·◦ C)   for 725◦ C < T ≤ 800◦ C
  ρs cs = 4.55 × 10⁶ J/(m³·◦ C)                for T > 800◦ C

where T is the temperature, ρs is the density and cs is the specific heat.
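The ASCE piecewise relationships above can be coded directly; a minimal sketch (the function names are ours):

```python
def ks_asce(T):
    """ASCE (Lie, 1992) thermal conductivity of steel [W/(m.degC)]."""
    return -0.022 * T + 48.0 if T <= 900.0 else 28.2

def rho_c_asce(T):
    """ASCE thermal capacity of steel (density x specific heat)
    [J/(m^3.degC)], piecewise over four temperature ranges."""
    if T <= 650.0:
        return (0.004 * T + 3.3) * 1e6
    if T <= 725.0:
        return (0.068 * T - 38.3) * 1e6
    if T <= 800.0:
        return (-0.086 * T + 73.35) * 1e6
    return 4.55e6
```

Note that the four branches of the thermal capacity join continuously (5.9 × 10⁶ at 650◦ C, 11.0 × 10⁶ at 725◦ C, 4.55 × 10⁶ at 800◦ C), which is a convenient check on the coefficients.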
I.2 Thermal properties of fire protection materials The room temperature thermal properties of commonly used fire protection materials are given in ECCS (1995) and are reproduced in Table I.1. The
data on high temperature properties for many of these materials are scarce. The general trends for some of these high temperature properties can be found in SFPE handbook [Kodur and Harmathy 2002].
Table I.1 Thermal properties of commonly used fire protection materials (ECCS 1995). For each material the table gives the unit mass ρp [kg/m³], the moisture content p [%], the thermal conductivity λp [W/(m·K)] and the specific heat cp [J/(kg·K)]. The materials covered are:
– sprays: mineral fibre, vermiculite cement, perlite;
– high-density sprays: vermiculite (or perlite) and cement, vermiculite (or perlite) and gypsum;
– boards: vermiculite (or perlite) and cement, fibre-silicate or fibre-calcium-silicate, fibre-cement, gypsum board;
– compressed fibre boards: fibre-silicate, mineral-wool, stone-wool;
– concrete, lightweight concrete, concrete bricks, bricks with holes, solid bricks.
The mineral fiber mixture combines fibers, mineral binders (usually Portland cement based), air and water. It is a limited-combustible material and a poor conductor of heat. Mineral fiber fire protection material is spray-applied with specifically designed equipment, which feeds the dry mixture of mineral fibers and various binding agents to a spray nozzle, where water is added to the mixture as it is sprayed on the surface to be protected. In its final cured form, the mineral fiber coating is usually lightweight, essentially non-combustible, chemically inert and a poor conductor of heat.
I.3 Temperatures in unprotected steel sections (Eurocode properties)

Table I.2 Steel temperature in unprotected sections subjected to the ISO fire, for section factors A∗m/V = 400, 200, 100, 60, 40 and 25 m⁻¹ (equivalently V/A∗m = 2.5, 5.0, 10.0, 16.7, 25.0 and 40.0 mm), at times from 0 to 45 min.
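Values such as those in Table I.2 come from a step-by-step lumped-capacity heating calculation. The sketch below is a simplified version with a constant specific heat and assumed boundary coefficients (convection coefficient 25 W/m²K and resultant emissivity 0.7, the usual EN 1991-1-2 / EN 1993-1-2 values); it reproduces the trends, not the exact tabulated values, which use the temperature-dependent specific heat of Eq. I.2.

```python
import math

SIGMA = 5.67e-8     # Stefan-Boltzmann constant [W/(m^2 K^4)]
ALPHA_C = 25.0      # convection coefficient [W/(m^2 K)] (assumed)
EPS_M = 0.7         # resultant emissivity (assumed)
RHO_A = 7850.0      # density of steel [kg/m^3]

def iso834(t_min):
    """Gas temperature [deg C] of the ISO 834 standard fire at t [min]."""
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

def unprotected_steel_temp(am_over_v, t_end_min, dt=1.0, c_a=600.0):
    """Temperature [deg C] of an unprotected steel section after
    t_end_min minutes of ISO fire, by explicit time stepping.

    am_over_v : section (massivity) factor Am*/V [1/m]
    dt        : time step [s]
    c_a       : specific heat, taken constant here for simplicity
    """
    theta, t = 20.0, 0.0
    while t < t_end_min * 60.0:
        t += dt
        theta_g = iso834(t / 60.0)
        # net heat flux to the section: convection + radiation
        h_net = ALPHA_C * (theta_g - theta) + EPS_M * SIGMA * (
            (theta_g + 273.0) ** 4 - (theta + 273.0) ** 4)
        theta += am_over_v / (c_a * RHO_A) * h_net * dt
    return theta
```

As expected, a massive section (low Am*/V) lags well behind the gas temperature, while a thin section (high Am*/V) follows it closely.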
Fig. I.3 Temperatures as a function of time for various massivity factors (A∗m/V = 400, 200, 100, 60, 40 and 25 m⁻¹, i.e. V/A∗m = 2.5, 5.0, 10.0, 16.7, 25.0 and 40.0 mm)
Fig. I.4 Temperatures as a function of the massivity factor for various times (5 to 30 min.)
I.4 Temperatures in protected steel sections (Eurocode properties)

Table I.3 Steel temperature in protected sections subjected to the ISO fire, for various values of the factor kp [W/(m³·K)], at times from 0 to 240 min.
Fig. I.5 Temperatures as a function of time for various values of the factor kp
Annex II
Mechanical Properties of Carbon Steels
II.1 Eurocode properties
The following sections describe the mechanical properties of carbon steel. The mechanical properties of stainless steel are given in Annex C of Eurocode 3. The AISC steel design manual also lists the Eurocode mechanical properties in its Appendix.

II.1.1 Strength and deformation properties
The stress-strain relationship for carbon steel at elevated temperatures is shown in Figure II.1. The following parameters define the shape of this characteristic:

  fy,θ  effective yield strength;
  fp,θ  proportional limit;
  Ea,θ  slope of the linear elastic range;
  εp,θ  strain at the proportional limit;
  εy,θ  yield strain;
  εt,θ  limiting strain for yield strength;
  εu,θ  ultimate strain.
Fig. II.1 Stress-strain relationship for carbon steel at elevated temperatures
Table II.1 Formulas to determine the stress and the tangent modulus

Strain range: ε ≤ εp,θ
  Stress:          σ = ε Ea,θ
  Tangent modulus: Ea,θ

Strain range: εp,θ < ε < εy,θ
  Stress:          σ = fp,θ − c + (b/a)[a² − (εy,θ − ε)²]^0.5
  Tangent modulus: b(εy,θ − ε) / {a[a² − (εy,θ − ε)²]^0.5}

Strain range: εy,θ ≤ ε ≤ εt,θ
  Stress:          σ = fy,θ
  Tangent modulus: 0

Strain range: εt,θ < ε < εu,θ
  Stress:          σ = fy,θ [1 − (ε − εt,θ)/(εu,θ − εt,θ)]
  Tangent modulus: –

Strain range: ε = εu,θ
  Stress:          σ = 0
  Tangent modulus: –

Parameters:
  εp,θ = fp,θ /Ea,θ    εy,θ = 0.02    εt,θ = 0.15    εu,θ = 0.20
  a² = (εy,θ − εp,θ )(εy,θ − εp,θ + c/Ea,θ )
  b² = c(εy,θ − εp,θ )Ea,θ + c²
  c = (fy,θ − fp,θ )² / [(εy,θ − εp,θ )Ea,θ − 2(fy,θ − fp,θ )]
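The formulas of Table II.1 can be implemented as a single function; a sketch (the function name and argument convention are ours — the caller supplies the already reduced properties fy,θ, fp,θ and Ea,θ for the temperature of interest):

```python
import math

def ec3_steel_stress(eps, f_y, f_p, E_a):
    """Stress in carbon steel at a given strain, per Table II.1.

    f_y, f_p, E_a are the effective yield strength, proportional limit
    and elastic modulus at the considered temperature.
    """
    eps_p = f_p / E_a                       # strain at proportional limit
    eps_y, eps_t, eps_u = 0.02, 0.15, 0.20
    c = (f_y - f_p) ** 2 / ((eps_y - eps_p) * E_a - 2.0 * (f_y - f_p))
    a = math.sqrt((eps_y - eps_p) * (eps_y - eps_p + c / E_a))
    b = math.sqrt(c * (eps_y - eps_p) * E_a + c ** 2)
    if eps <= eps_p:                        # linear elastic range
        return E_a * eps
    if eps < eps_y:                         # elliptic transition
        return f_p - c + (b / a) * math.sqrt(a ** 2 - (eps_y - eps) ** 2)
    if eps <= eps_t:                        # yield plateau
        return f_y
    if eps < eps_u:                         # linear descending branch
        return f_y * (1.0 - (eps - eps_t) / (eps_u - eps_t))
    return 0.0
```

A useful check: the elliptic branch meets the plateau exactly, since fp,θ − c + b = fy,θ with the definitions of a, b and c above. For example, with the 200◦ C properties of an S235 section (fy = 235 MPa, fp = 189.6 MPa, Ea = 189 000 MPa), the stress at ε = 0.02 evaluates to 235 MPa.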
Table II.2 Reduction factors for the stress-strain relationship of carbon steel at elevated temperatures

Steel temperature θa | ky,θ = fy,θ /fy | kp,θ = fp,θ /fy | kp0.2,θ = fp0.2,θ /fy | kE,θ = Ea,θ /Ea
20◦ C    | 1.000 | 1.000  | 1.000 | 1.000
100◦ C   | 1.000 | 1.000  | 1.000 | 1.000
200◦ C   | 1.000 | 0.807  | 0.890 | 0.900
300◦ C   | 1.000 | 0.613  | 0.780 | 0.800
400◦ C   | 1.000 | 0.420  | 0.650 | 0.700
500◦ C   | 0.780 | 0.360  | 0.530 | 0.600
600◦ C   | 0.470 | 0.180  | 0.300 | 0.310
700◦ C   | 0.230 | 0.075  | 0.130 | 0.130
800◦ C   | 0.110 | 0.050  | 0.070 | 0.090
900◦ C   | 0.060 | 0.0375 | 0.050 | 0.0675
1000◦ C  | 0.040 | 0.0250 | 0.030 | 0.0450
1100◦ C  | 0.020 | 0.0125 | 0.020 | 0.0225
1200◦ C  | 0.000 | 0.0000 | 0.000 | 0.0000
Formulas to determine the stress and the tangent modulus at a given strain are given in Table II.1. Table II.2 gives the reduction factors for the stress-strain relationship for steel at elevated temperatures shown in Figure II.1. Linear interpolation may be used for values of the steel temperature intermediate to those given in the table. The reduction factors are defined as follows:
– effective yield strength, relative to yield strength at 20◦ C: ky,θ = fy,θ /fy
– proportional limit, relative to yield strength at 20◦ C: kp,θ = fp,θ /fy
– design yield strength, relative to yield strength at 20◦ C: kp0.2,θ = fp0.2,θ /fy
– slope of linear elastic range, relative to slope at 20◦ C: kE,θ = Ea,θ /Ea
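The linear interpolation permitted by the text can be sketched as follows, using the Table II.2 values (variable and function names are ours):

```python
# Temperatures and reduction factors from Table II.2
TEMPS = [20, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200]
K_Y   = [1.0, 1.0, 1.0, 1.0, 1.0, 0.78, 0.47, 0.23, 0.11,
         0.06, 0.04, 0.02, 0.0]                                  # ky
K_P   = [1.0, 1.0, 0.807, 0.613, 0.42, 0.36, 0.18, 0.075, 0.05,
         0.0375, 0.025, 0.0125, 0.0]                             # kp
K_P02 = [1.0, 1.0, 0.89, 0.78, 0.65, 0.53, 0.30, 0.13, 0.07,
         0.05, 0.03, 0.02, 0.0]                                  # kp0.2
K_E   = [1.0, 1.0, 0.90, 0.80, 0.70, 0.60, 0.31, 0.13, 0.09,
         0.0675, 0.045, 0.0225, 0.0]                             # kE

def reduction_factor(theta, table):
    """Linear interpolation of a Table II.2 column at temperature theta."""
    if theta <= TEMPS[0]:
        return table[0]
    if theta >= TEMPS[-1]:
        return table[-1]
    for t0, t1, v0, v1 in zip(TEMPS, TEMPS[1:], table, table[1:]):
        if theta <= t1:
            return v0 + (v1 - v0) * (theta - t0) / (t1 - t0)
```

For example, `reduction_factor(550, K_Y)` gives 0.625, halfway between the 500◦ C and 600◦ C values.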
Fig. II.2 Reduction factors for the stress-strain relationship of carbon steel at elevated temperatures
The variation of these reduction factors with temperature is illustrated in Figure II.2.

II.1.2 Thermal elongation
The relative thermal elongation of steel should be determined from the following:

– for 20◦ C ≤ θa < 750◦ C:
  Δl/l = 1.2 × 10⁻⁵ θa + 0.4 × 10⁻⁸ θa² − 2.416 × 10⁻⁴
Fig. II.3 Relative thermal elongation of carbon steel as a function of the temperature
– for 750◦ C ≤ θa ≤ 860◦ C:
  Δl/l = 1.1 × 10⁻²

– for 860◦ C < θa ≤ 1200◦ C:
  Δl/l = 2 × 10⁻⁵ θa − 6.2 × 10⁻³

where l is the length at 20◦ C, Δl is the temperature induced elongation and θa is the steel temperature [◦ C]. The variation of the relative thermal elongation with temperature is illustrated in Figure II.3.
II.2 ASCE properties
ASCE lists two sets (Version 1 and Version 2) of high temperature stress-strain relationships for structural steel. Version 1 is more conservative than Version 2 and may be used for reinforcing steel or for concrete-filled steel sections, where the role of the steel in carrying the load at the failure point is secondary. Version 2, less conservative, is recommended for structural steel, in particular where load bearing is a primary function of the steel.

II.2.1 Stress-strain relations for steel (Version 1)
for εs ≤ εp:
  fs = [f(T, 0.001)/0.001] εs

where εp = 4 × 10⁻⁶ fy0 and

  f(T, 0.001) = (50 − 0.04T) × {1 − exp[(−30 + 0.03T) √0.001]} × 6.9

for εs > εp:
  fs = [f(T, 0.001)/0.001] εp + f[T, (εs − εp + 0.001)] − f(T, 0.001)

where

  f[T, (εs − εp + 0.001)] = (50 − 0.04T) × {1 − exp[(−30 + 0.03T) √(εs − εp + 0.001)]} × 6.9

and
  εs = strain in steel
  εp = strain at the proportional limit

II.2.2 Stress-strain relations for steel (Version 2)
(Less conservative than Version 1; recommended for structural steel, in particular where the role of the steel in carrying the load is primary.)

for εs ≤ εp:
  fT = ET εs

where

  εp = [0.975 fyT − 12.5 (fyT)²/ET] / (ET − 12.5 fyT)

for εs > εp:
  fT = (12.5 εs + 0.975) fyT − 12.5 (fyT)²/ET
In the above equations the yield strength fyT is given by:

for 0 < T ≤ 600◦ C:
  fyT = [1 + T/(900 ln(T/1750))] fy0
for 600◦ C < T ≤ 1000◦ C:
  fyT = [(340 − 0.34T)/(T − 240)] fy0
and the modulus of elasticity is:

for 0 < T ≤ 600◦ C:
  ET = [1 + T/(2000 ln(T/1100))] E0

for 600◦ C < T ≤ 1000◦ C:
  ET = [(690 − 0.69T)/(T − 53.5)] E0
where
  εs  = strain in steel
  εp  = strain at the proportional limit
  fy0 = yield strength of steel at room temperature
  fyT = yield strength of steel at temperature T
  E0  = modulus of elasticity at room temperature
  ET  = modulus of elasticity at temperature T

II.2.3 Coefficient of thermal expansion

for T < 1000◦ C:
  αs = (0.004T + 12) × 10⁻⁶ ◦ C⁻¹

for T ≥ 1000◦ C:
  αs = 16 × 10⁻⁶ ◦ C⁻¹

where αs is the coefficient of thermal expansion and T the temperature.
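The ASCE Version 2 relationships and the thermal expansion coefficient can be sketched as follows (function names are ours; `math.log` is the natural logarithm, as in the formulas above):

```python
import math

def fyT_ratio(T):
    """ASCE Version 2 yield strength ratio fyT/fy0 at T [deg C]."""
    if 0.0 < T <= 600.0:
        return 1.0 + T / (900.0 * math.log(T / 1750.0))
    if 600.0 < T <= 1000.0:
        return (340.0 - 0.34 * T) / (T - 240.0)
    raise ValueError("relations valid for 0 < T <= 1000 deg C")

def ET_ratio(T):
    """ASCE Version 2 modulus of elasticity ratio ET/E0 at T [deg C]."""
    if 0.0 < T <= 600.0:
        return 1.0 + T / (2000.0 * math.log(T / 1100.0))
    if 600.0 < T <= 1000.0:
        return (690.0 - 0.69 * T) / (T - 53.5)
    raise ValueError("relations valid for 0 < T <= 1000 deg C")

def alpha_s(T):
    """ASCE coefficient of thermal expansion [1/deg C]."""
    return (0.004 * T + 12.0) * 1e-6 if T < 1000.0 else 16.0e-6
```

As a sanity check, both branches of fyT nearly coincide at 600◦ C (about 0.377 to 0.378 fy0), and the ratios tend to 1 as T approaches room temperature.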
References

AISC (2005), Specification for structural steel buildings, American Institute of Steel Construction, Inc., Chicago, IL.
Al-Jabri, K S, Davison, J B & Burgess, I W (2008), Performance of beam-to-column joints in fire – A review, Fire Safety Journal, 42, 50–62.
Anderberg, Y (2002), Structural behaviour and design of partially fire-exposed slender steel columns, Proc. 2nd int. Workshop "Structures in Fire'', Univ. of Canterbury, Christchurch, P. J. Moss ed., 319–336.
ASCE (2005), Minimum design loads for buildings and other structures (ASCE Standard 7-05), American Society of Civil Engineers, Reston, VA.
ASCE (2005), Standard calculation methods for structural fire protection (ASCE/SFPE Standard 29-05), American Society of Civil Engineers, Reston, VA.
ASTM (2005), Standard methods of fire tests of building construction and materials (ASTM Standard E119-05), American Society for Testing and Materials, West Conshohocken, PA.
ASTM (2005), Standard methods of fire tests for determining effects of large hydrocarbon pool fires on structural members and assemblies (ASTM Standard E1529-05), American Society for Testing and Materials, West Conshohocken, PA.
Buchanan, A H (2001), Structural design for fire safety, John Wiley and Sons, Ltd., Chichester, UK.
Burgess, I W, El Rimawi, J & Plank, R G (1991), Studies of the Behaviour of Steel Beams in Fire, J. Construct. Steel Research, 19, 285–312.
Cadorin, J-F & Franssen, J-M (2003), A tool to design steel elements submitted to compartment fires – OZone V2. Part 1: pre- and post-flashover compartment fire model, Fire Safety Journal, Elsevier, 38, 395–427.
Cadorin, J-F, Pintea, D, Dotreppe, J-C & Franssen, J-M (2003), A tool to design steel elements submitted to compartment fires – OZone V2. Part 2: Methodology and application, Fire Safety Journal, Elsevier, 38, 439–451.
CEB (1991), Fire Design of Concrete Structures in accordance with CEB/FIP Model Code 90 (Final Draft), C.E.B., Paris, Bulletin d'Information n◦ 208, pp 188.
ECCS (1983), European Recommendations for the Fire Safety of Steel Structures, ECCS – Technical Committee 3 – Fire Safety of Steel Structures, Elsevier, Amsterdam, pp 106.
ECCS (1995), ECCS Technical Committee 3, Fire Resistance of Steel Structures, ECCS Publication No. 89, European Convention for Constructional Steelwork, Brussels, Belgium.
ECCS (2001), Model Code on Fire Engineering, ECCS – Technical Committee 3 – Fire Safety of Steel Structures, European Convention for Constructional Steelwork, First Edition, N◦ 111, May 2001, Brussels, Belgium.
Ellingwood, B R (2005), Load combination requirements for fire-resistant structural design, J. Fire Protection Engrg., 15(1), 43–61.
El-Rimawi, J A, Burgess, I W & Plank, R J (1996), The Treatment of Strain Reversal in Structural Members During the Cooling Phase of a Fire, J. Construct. Steel Res., 37(2), 115–135.
ENV 13381-1 (2002), Test methods for determining the contribution to the fire resistance of structural members – Part 1: Horizontal protective membranes, European Committee for Standardization, Brussels.
ENV 13381-2 (2002), Test methods for determining the contribution to the fire resistance of structural members – Part 2: Vertical protective membranes, European Committee for Standardization, Brussels.
ENV 13381-4 (2002), Test methods for determining the contribution to the fire resistance of structural members – Part 4: Applied protection to steel structural elements, European Committee for Standardization, Brussels.
Eurocode (2005), Eurocode 3: Design of Steel Structures, Part 1.2: General Rules, Structural fire design, European Committee for Standardisation, Brussels, Belgium.
Eurocode (2005), Eurocode 4: Design of Composite Steel and Concrete Structures, Part 1-2: General Rules, Structural Fire Design, European Committee for Standardisation, Brussels, Belgium.
FEMA (2002), World Trade Center Building Performance Study: Data Collection, Preliminary Observations, and Recommendations, Federal Emergency Management Agency (FEMA), Federal Insurance and Mitigation Administration, Washington, DC.
Franssen, J-M (1987), Etude du Comportement au Feu des Structures Mixtes Acier-Béton, Ph.D. thesis, Univ. of Liege, Collections de la F.S.A., N◦ 111, pp 276.
Franssen, J-M (1990), The unloading of building materials submitted to fire, Fire Safety Journal, 16(3), 213–227.
Franssen, J-M, Schleich, J-B & Cajot, L (1995), A Simple Model for the Fire Resistance of Axially Loaded Members According to Eurocode 3, J. Construct. Steel Research, 35, 49–69.
Franssen, J-M (1997), Contributions à la Modélisation des Incendies dans les Bâtiments et de leurs effets sur les Structures, Thèse d'agr. de l'ens. sup., Univ. of Liege, pp 391.
Franssen, J-M, Cajot, J-G & Schleich, J-B (1998), Effects caused on the structure by localised fires in large compartments, Proc. of the EUROFIRE conference, Brussels, pp 19, on CD-ROM.
Franssen, J-M, Talamona, D, Kruppa, J & Cajot, L-G (1998), Stability of Steel Columns in Case of Fire: Experimental evaluation, J. Struct. Engng, ASCE, 124(2), 158–163.
Franssen, J-M (2000), Improvement of the Parametric Fire of Eurocode 1 based on Experimental Tests Results, Proc. 6th int. Symp. on Fire Safety Science, IAFSS, Curtat, M. ed., Poitiers, 927–938.
Franssen, J-M & Brauwers, L (2002), Numerical Determination of 3D temperature fields in steel joints, Proc. 2nd int. Workshop "Structures in Fire'', Univ. of Canterbury, Christchurch, P. J. Moss ed., 1–20.
Franssen, J-M (2003), SAFIR. A Thermal/Structural Program Modelling Structures under Fire, Proc. NASCC 2003, Baltimore, A.I.S.C. Inc.
Garlock, M & Quiel, S E (2007), The Behavior of Steel Perimeter Columns in a High-Rise Building under Fire, Engineering Journal, AISC, 44(4).
Gulvanessian, H, Calgaro, J-A & Holicky, M (2002), Designer's Guide to EN 1990. Eurocode: Basis of structural design, Thomas Telford Publishing, London.
Hasemi, Y & Tokunaga, T (1984), Flame Geometry Effects on the Buoyant Plumes from Turbulent Diffusion Flames, Fire Science and Technology, 4(1).
Hasemi, Y & Tokunaga, T, Fire Science and Technology, 4, 15.
Hasemi, Y, Yokobayashi, Y, Wakamatsu, T & Ptchelintsev, A (1995), Fire Safety of Building Components Exposed to a Localized Fire – Scope and Experiments on Ceiling/Beam System Exposed to a Localized Fire, ASIAFLAM '95, Hong Kong.
Heskestad, G (1983), Fire Safety Journal, 5, 103.
Heskestad, G (1995), Fire Plumes, The SFPE Handbook of Fire Protection Engineering, 2nd Edition, SFPE-NFPA.
IBC (2006), International Building Code, 2006 Edition, International Code Council, Country Club Hills, IL.
ICC (2003), ICC Performance Code for Buildings and Facilities, International Code Council, Country Club Hills, IL.
ISO (2002), ISO Standard 834: Fire resistance tests – Elements of building construction, International Organization for Standardization, Geneva.
Joyeux, D & Zhao, B (1999), Analyse du comportement de la structure porteuse du bâtiment de stockage automatisé Procter & Gamble, Centre Technique Industriel de la Construction Métallique, CTICM, France.
Kamikawa, D, Hasemi, Y, Wakamatsu, T & Kagiva, Y (2001), Experimental flame heat transfer and surface temperature correlations for a steel column exposed to a localized fire, Ninth Interflam conf., Interscience Ltd.
Kirby, B R, Lapwood, D G & Thomson, G (1986), The reinstatement of Fire damaged Steel and Iron Framed Structures, British Steel Corporation Swinden Laboratories.
Kirby, B R (1995), The Behaviour of High-Strength Grade 8.8 Bolts in Fire, J. Construct. Steel Research, 33, 3–38.
Kodur, V K R & Harmathy, T Z (2002), Properties of Building Materials, Chapter 10 of The SFPE Handbook of Fire Protection Engineering, Third Edition, Society of Fire Protection Engineers, Bethesda, MA.
Latham, D J & Kirby, B R (1990), Elevated Temperature Behaviour of Welded Joints in Structural Steel Works, ECCS Research Project 7210-SA/824(F6.3/90).
Lie, T T (1992) (Editor), Structural Fire Protection, ASCE Manuals and Reports of Engineering Practice, No 78, American Society of Civil Engineers, New York.
Lie, T T (2002), Fire Temperature-Time Relations, Chapter 4-8 of The SFPE Handbook of Fire Protection Engineering, Third Edition, Society of Fire Protection Engineers, USA.
Lim, L (2003), Membrane Action in Fire Exposed Concrete Floor Systems, Fire Engineering Research Report 03/2, Univ. of Canterbury, New Zealand.
Lopes, N, Simões da Silva, L, Vila Real, P & Piloto, P (2004), Proposals for the Design of Steel Beam-Columns under Fire Conditions, Including a New Approach for the Lateral-Torsional Buckling of Beams, Computers & Structures, Elsevier, 82(17-19), 1463–1472.
NBCC (2005), National Building Code of Canada, National Research Council of Canada, Ottawa, Canada.
NFPA (2003), Building Construction and Safety Code, NFPA 5000, National Fire Protection Association, Quincy, MA.
NFPA (2006), Standard Methods of Tests of Fire Endurance of Building Construction and Materials, NFPA 251, National Fire Protection Association, Quincy, MA.
PrEN 1991-1-2 (2002), Eurocode 1 – Actions on Structures. Part 1-2: General Actions – Actions on structures exposed to fire, Final Draft Stage 49, European Committee for Standardization, Brussels, 10 January 2002.
PrEN 1993-1-2 (2003), Eurocode 3: Design of steel structures – Part 1.2: General rules – Structural fire design, European Committee for Standardization, Brussels, December 2003.
ProfilARBED (2001), Background document on Parametric temperature-time curves according to Annex A of prEN1991-1-2 (24-08-2001), ProfilARBED document n◦ EC1-1-2/73, CEN/TC250/SC1/N298A, 22 October 2001.
Ptchelintsev, A, Hasemi, Y & Nikolaenko, M (1995), Numerical Analysis of Structures exposed to localized Fire, ASIAFLAM '95, Hong Kong.
Ranby, A (1998), Structural Fire Design of Thin Walled Steel Sections, J. Construct. Steel Research, 46, 303–304.
Renaud, C (2003), Modélisation numérique, expérimentation et dimensionnement pratique des poteaux mixtes avec profil creux exposés à l'incendie, Ph.D. Thesis, INSA Rennes, France, pp 334.
Ruddy, J, Marlo, J P, Ioannides, S A & Alfawakhiri, F (2003), Fire Resistance of Structural Steel Framing, AISC Steel Design Guide 19, American Institute of Steel Construction, Chicago, IL, December 2003.
SAA (1990), Fire Resistance Tests of Elements of Structure, AS 1530.4-1990, Standards Association of Australia.
Schleich, J-B, Cajot, L-G, Kruppa, J, Talamona, D, Azpiazu, W, Unanua, J, Twilt, L, Fellinger, J, Van Foeken, R-J & Franssen, J-M (1998), Buckling curves of hot rolled H steel sections
submitted to fire, C.E.C., EUR 18380 EN, pp. 333 Schleich, J-B, Cajot, L-G, Pierre, M, Brasseur, M, Franssen, J-M, Kruppa, J, Joyeux, D, Twilt, L, Van Oerle, J & Aurtenetxe, G (1999), Development of
design rules for steel structures subjected to natural fires in large compartments, European Commission, technical steel research, Report EUR 18868 EN, pp 207. SFPE (1995), A Practical User’s Guide
to Fires-T3, A Three-Dimensional HeatTransfer Model Applicable to Analyzing Heat Transfer through Fire Barriers and Structural Elements, Society of Fire Protection Engineers, Task Group on
Documentation of Fire Models, Bethesda, MD. SFPE (2000), Engineering Guide to Performance-Based Fire Protection. Society of Fire Protection Engineers, Bethesda, Maryland. SFPE (2002), Guide to
performance-based fire protection analysis and design of buildings, Society of Fire Protection Engineers, Bethesda, Maryland.
SFPE (2004), SFPE Handbook of Fire Protection Engineering. Society of Fire Protection Engineers, SFPE. SFPE (2004), Engineering Guide: fire exposures to structural elements, Society of Fire
Protection Engineers, Washington, DC. Talamona D (1995), Flambement De Poteaux Métalliques Sous Charges Excentrées, A Haute Température, Ph. D. thesis, Univ. Blaise Pascal – Ecole doctorale sciences
pour l’ingénieur de Clermont-Ferrand, N◦ D.U. 726, EDSPIC: 85. Talamona, D, Kruppa, J, Franssen, J-M & Recho, N (1996), Factors influencing the behavior of steel columns exposed to fire, J Fire
Protection Engng, 8(1), 31–43 Talamona, D, Franssen, J-M, Schleich, J-B & Kruppa, J (1997), Stability of Steel Columns in Case of Fire: Numerical Modelling, J. Struct. Engng, ASCE, Vol. 123, No. 6,
713–720 UL (2003), Fire Tests of Building Construction and Materials, UL 263. Underwriters Laboratories Inc, Northbrook, Illinois. UL (2004), Fire Resistance Directory, Volume 1, Underwriters
Laboratories Inc., Northbrook, IL, 2004. ULC (2004), Standard Methods of Fire Endurance Tests of Building Construction and Materials. CAN/ULC-S101-04. Underwriters Laboratories of Canada, Toronto,
Ontario, Canada. Vila Real, P M M, Lopes, N, Simões da Silva, L, Piloto, P & Franssen, J-M (2003), Towards a consistent safety format of steel beam-columns: application of the new interaction
formulae at ambient temperature to elevated temperatures, Steel & Composite Structures, an International Journal, Techno-Press, Vol. 3, No. 6, 383–401. Vila Real, P M M; Lopes, N, Simões da Silva, L
& Franssen, J-M (2004), Lateral-Torsional Buckling of Unrestrained Steel Beams Under Fire Conditions: Improvement of EC3 Proposal, Computer & Structures, ELSEVIER, 82/20-21, 1737–1744 Vila Real, P M
M , Simoes da Silva, L, Lopes, N & Franssen J-M (2005), Fire resistance of unrestrained welded steel beams submitted to lateral-torsional buckling, EUROSTEEL 2005, 4th European Conference on Steel
and Composite Struct., Hoffmeister B & Hechler O ed., Aachen, 119–126 Wainman, D E & Kirby, B R (1988), Compendium of UK Standard Fire Test Data. Unprotected Structural Steel – 1, BRE, BSC Swinden
Laboratories. Wakamatsu, T, Hasemi, Y, Yokobayashi, Y & Ptchelintsev, A (1996), Experimental Study on the Heating Mechanism of a Steel Beam under Ceiling exposed to a localized Fire, Proc. 7th
Interflam Conf., Cambridge, 509–518. Wakamatsu, T, Hasemi, Y, Kagiya, K & Kamikawa, D (2002), Heating Mechanism of Unprotected Steel Beam Installed beneath Ceiling and Exposed to a Localized Fire:
Verification using the real-scale experiment and effects of the smoke layer, Proc. 7th IAFSS Symp., Worcester Polytecnic Inst., Worcester, MA, 1099–1110. Wang, Z (2004), Heat Transfer Analysis of
Insulated Steel Members Exposed to Fire, Masters thesis, School of Civil and Env. Engng, NTU, Singapore. Welch, S, Miles, S, Kumar, S, Lemaire, T & Chan, A (2008), FIRESTRUC – Integrating advanced
three-dimensional modelling methodologies for predicting thermo-mechanical behaviour of steel and composite structures subjected to natural fires, Proc. 9th IAFSS symposium, Karlsruhe.
Wickström, U, Application of the standard fire curve for expressing natural fires for design purposes, Science and Engineering, ASTM, STP 882. Wickström, U (1985), Temperature analysis of
heavily-insulated steel structures exposed to fire, Fire Safety Journal, Vol. 5, 281–285. Wickström, U (2001), Calculation of heat transfer to structures exposed to fire – Shadow effects, Ninth
Interflam conf., Interscience Ltd, 451–460. Zaharia R & Franssen J-M (2002), Fire design study case of a high – rise steel storage building. Third European Conference on Steel Structures, EUROSTEEL
2002, 19–20 September, 2002, Coimbra, Portugal
MATH 3230.A (Fall 2024)
Ordinary Differential Equations
• Syllabus for Math 3230 (3 pages)
□ Background from Algebra, Precalculus, and Calculus I, mastery of which I expect from students in this class.
☆ This video was prepared for another class, but it stresses the same concepts that I expect you to use in this class.
□ Test 1 will be given after we cover Sec. 3.1.
□ Test 1 will be on Wednesday, October 9, at 6:45 - 8 pm in Lafayette L300 (2 floors above our regular classroom).
□ Tentatively, Test 2 will be given after we cover Sec. 3.10.
□ Test 2 will be on Wednesday, Nov. 6. from 6:30 - 7:50 pm in Lafayette L102.
• MATHEMATICA resources
□ Follow this link to install Mathematica on your computer.
□ On that page, read the (very short) instructions, click on the link "UVM Software Portal" and follow the directions there. When prompted, you will need to login with your UVM NetID.
☆ After you install Mathematica on your computer, it will ask you to Activate the program and will display a window with two Activation options: (i) with an Activation Key and (ii) through
your organization. Choose the latter and sign in with your UVM credentials. In this process you will again be presented with the options to sign in with your Wolfram account or via UVM;
choose the latter.
□ Be prepared that the installation process takes about 15+ minutes (or longer if your connection to the Internet is slow). The entire process consists of two stages, which you will be guided
through by the Installer: You first download the installation package, and then the Installer installs it on your computer.
□ On the same page, notice the link to "tutorial videos" at the end. It is a very friendly resource, which will help you if you do not feel comfortable using Mathematica. For example, Tutorial
5 in that link talks about defining and plotting functions.
☆ You will likely never need Tutorials 6 and 7.
□ An alternative, but arguably a less friendly place to get started is my own introductory lab to Mathematica: On this page, you need to open Lab 1, Part 1 and work through the first half of it
up to (i.e., not including) the section "Parametric plots in 2D".
• An alternative to installing Mathematica on your computer which may suffice for the needs of this class is to use its online version, WolframAlpha.
Lecture Notes
• Background from Algebra, Precalculus, and Calculus I, mastery of which I expect from students in this class.
□ This document has already been listed above, after the link to Homework, but it is worth repeating.
Lecture 1 Introduction
Lecture 2 Linear 1st-order differential equations
A story of how a simple drag force model helped resolve a controversy of an atomic waste disposal problem (from M. Braun, "Differential equations and their applications," 3rd Ed., Springer, New York,
Lecture 3 General properties of solution of 1st-order linear differential equations
Lecture 4 Some applications of 1st-order linear differential equations
Description of Radiocarbon dating technique from Wikipedia
A story about uncovering of an art forgery after World War II using radioactive isotope dating (from M. Braun's textbook)
Lecture 5 Separable differential equations
Lecture 6 Existence and uniqueness of solution of 1st-order nonlinear differential equations
Lecture 7 Special cases when a 1st-order nonlinear differential equation can and cannot be solved
Lecture 8 An application of 1st-order nonlinear differential equations: The Logistic model
Lecture 9 Euler's method
Lecture 10 Introduction to 2nd-order differential equations: motivation; basic properties; linear differential equations
Lecture 11 The general solution of a homogeneous linear 2nd-order differential equation
Lecture 12 Homogeneous linear differential equations with constant coefficients
Lecture 13 Real repeated roots; Reduction of order of a linear differential equation
Lecture 14 Complex roots; Oscillatory solutions of 2nd-order differential equations
Lecture 15 Unforced mechanical vibrations: Linear oscillator model revisited
Almost all realistic models of nature have damping in them, however small it may be. Without damping, one would have perpetual motion. This has been known to be impossible since the times of Isaac
Newton. However, in some very sophisticated and carefully engineered systems perpetual motion is possible! Here is an example.
Lecture 16 General solution of a linear nonhomogeneous differential equation
Lecture 17 Particular solution of a nonhomogeneous 2nd-order differential equation: Method of Undetermined Coefficients
Lecture 18 Particular solution of a nonhomogeneous 2nd-order differential equation: Method of Variations of Parameters
Lecture 19 Resonance in undamped and damped linear oscillator models
One of the dramatic manifestations of resonance is the destruction of Tacoma Narrows Bridge in 1940. Here is a video of the event. However, a connection between that event and a resonance is far from
trivial. (One obvious observation is that the resonance was not caused by a wind that blew with periodic intensity -- such winds do not exist in nature.) A critical review of various explanations of
the Tacoma Bridge destruction can be found in a paper by H. Petroski, published in 1991.
Online demonstration of the phenomenon of beats
Lecture 20 General properties of higher-order linear differential equations
Lecture 21 Higher-order linear differential equations with constant coefficients
Lecture 22 Systems of first-order linear differential equations: Introduction
Lecture 23 General properties of linear systems of first-order differential equations
Lecture 24 Homogeneous linear systems with constant coefficients
Lecture 25 Nonhomogeneous linear systems of differential equations: Method of Variation of parameters
Lecture 26 Laplace Transform: Motivation and Introduction
Lecture 27 Using Laplace Transform to solve Initial value problems
College of Science and Mathematics
Department of Mathematics
Bachelor of Arts Degree in Mathematics
The requirement for entrance to the major program is completion of two years of algebra as well as courses in geometry and trigonometry, or a sequence of courses containing their equivalents, such as
MATH 4R and 5. It is strongly recommended that such study be completed before entrance to the university.
Major requirements (48-54 units)
Core curriculum (30 units)
MATH 75 (or 75A and B), 76, 77, 81 (15 units)
MATH 111 (3 units)
MATH 151, 152 (8 units)
MATH 171 (4 units)
Elective curriculum (18-24 units)
Six mathematics courses, upper-division or graduate (see note 1), minimum of 3 units per course, excluding MATH 100, 102, 133, 134, 137, 138, and 139.
Additional requirements (7 units)
CSCI 40 (4 units)
PHYS 4A (see note 2) (3 units)
General Education (51 units) (see note 3)
Electives and remaining degree requirements (8-14 units)
Total (120 units)
Advising Notes
1. Special conditions apply for graduate courses; see a department adviser.
2. PHYS 4AL is not required for the math major. If students wish to use PHYS 4A as a General Education Breadth course, they must also take PHYS 4AL.
3. MATH 75 (and 75A) satisfies the Quantitative Reasoning requirement within General Education Foundation courses. It also satisfies the requirement within General Education Core courses, for
students under 1998-1999 or earlier catalogs.
4. See Mathematics Road Map.
It is strongly recommended, to all math majors, to have an advising session at least once a semester. See the department chair for assignment to an adviser.
Grade Requirements
All courses required as prerequisites for a mathematics course must be completed with a grade of C or better before registration will be permitted. All courses taken to fulfill major or minor
requirements must be completed with a grade of C or better.
Mimic Logic - Where to Find Mimics
A basic list of tips for finding those pesky mimics in each dungeon
Welcome to my guide for Mimic Logic. In this guide I hope to arm you with all the knowledge you will need to beat every dungeon. It is a collection of my "best practices" as well as some example puzzles. Due to the random nature of the game I can't give specific solutions, but instead offer general strategies to solve every puzzle.
General Tips for all Dungeons (except Doubt and Confuse)
• Mimics cannot self-report
• If a chest claims that an adjacent box is a mimic, then either the accused box really is a mimic, or the accusing chest is lying and is itself a mimic
• While hunger might feel pressuring, it usually isn't an issue. If you starve often, invest in hunger upgrades
• Sometimes there are multiple valid solutions. In those cases it's usually better to just grab the chests that are guaranteed not to be mimics and skip any 50/50 chests
• Usually the best shop items to buy are the attack and defence candies
• If the same statement is repeated more times than the number of mimics, it must be true
• Sometimes mimics are really dumb, doing things like claiming there are 2 mimics in black chests when the puzzle only has a single black chest
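The repeated-statement tip is just counting: if every copy of a claim were a lie, each copy would have to come from a different mimic. As a rough sketch (the claim strings here are made up, and the game of course doesn't expose its puzzles to Python), you could flag forced-true claims like this:

```python
from collections import Counter

def forced_true(claims, mimic_count):
    """If the same claim is voiced by more chests than there are mimics,
    the liars can't cover all the copies, so the claim must be true."""
    return {c for c, k in Counter(claims).items() if k > mimic_count}

# A hypothetical room with 2 mimics:
claims = ["no mimics in the top row", "no mimics in the top row",
          "no mimics in the top row", "the blue box is a mimic"]
```

With 2 mimics, a claim repeated 3 times can't be all lies, so `forced_true(claims, 2)` flags it as true.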
Standard dungeon and general mimic finding guide
Welcome to your first dungeon. The standard dungeon has the fewest mechanics to worry about, and its chest rooms are usually (but not always) easy enough to solve. Unless otherwise stated, every dungeon follows these rules:
• Mimics always lie, non-mimics always tell the truth
• each dungeon floor gives the exact mimic and loot count
• every 10 floors is a town, except for the last floor
Mimic Pairs:
With that background info, let's look at the most useful strategy for this and most other dungeons: mimic pairs.
A mimic pair is a set of chests that must include at least one mimic. For example, if a chest says that the chest on its left is a mimic, then either the chest on the left really is a mimic, or the accusing chest is lying and is itself a mimic. Either way, you know the location of one of the mimics. Using mimic pairs allows you to break larger puzzles down into smaller ones. Any time two or more chests give conflicting statements, at least one of them must be a mimic.
Let's take the following Standard Dungeon puzzle as an example:
While seven chests might be a little overwhelming at first, this puzzle gives us a very good mimic pair to work with. The middle right chest claims the chest above it is a mimic, forming a mimic pair. Now we can mark both of them (I use a 1; use whatever marker you like).
So instead of solving a 7-chest, 2-mimic puzzle, we can ignore the pair and treat the rest of the puzzle as a 5-chest, 1-mimic puzzle.
The joy of 1 mimic puzzles:
One-mimic puzzles are quite easy to solve, due to the following:
• Any chest claiming that an adjacent non-marked box is not a mimic must be telling the truth, since if it were lying, that would give the puzzle too many mimics
• Mimics still cannot self-report
Let's continue the example from above. Looking at the 5 unmarked boxes, the one on the bottom left claims that the box to its right is not a mimic. This box must be telling the truth, since if it were lying, both it and the box next to it would be mimics, which would give our puzzle 3 mimics, and that cannot be true. Therefore we can mark both of the bottom boxes as real.
While we are at it, we can also mark the 2 middle boxes as a pair, since their statements cannot both be correct. This also means that the last unmarked box must be real, since both mimics must be in the pairs.
Finishing it all off:
Once all mimic pairs are marked and all easy real chests are found, the following can be used to finish the puzzle:
• All statements from known-real chests that haven't yet been marked can be marked as true
• All remaining mimic pairs that interact in some way can be used to deduce where the mimic is in each
The 50/50 conundrum:
Sometimes you are left with pairs that have multiple solutions. In our example there are 2 possible solutions: either the top right and middle left chests are mimics, or the middle and middle right chests are mimics. In cases like these, open all safe chests, then leave. If you have a blue orb, you can use it to break the 50/50 after you open all safe chests. Otherwise, just leave; the 40 gold is not worth it.
But what if I have no pairs?
Sometimes a puzzle will have either no pairs, or not enough pairs. In those cases, there are other quick strategies that can be used.
Duplicated statements:
Take the following puzzle
In it there is a single marked mimic pair, but no other easy-to-find pair. However, the puzzle has also given us a huge hint by having both top chests make the same statement. If more chests are making a statement than there are unmarked mimics, that statement must be true. Using this, we can mark the two top chests, then the black chests, then the bottom right chest, leaving 2 remaining chests where the mimics are.
Pair Positioning:
Consider the following puzzle:
3 mimics, two pairs (only one of which is marked). However, in this case the location of the marked pair allows us to solve the puzzle: since the pair sits in the top row, there must be at least 1 mimic in the top row. The chest that claims the top row has no mimics is therefore lying and is a mimic. Similarly, the chest that says the top row must have at least 1 mimic must be telling the truth, which allows us to easily mark the rest.
You can then mark the chest in the top pair that says the top row must have at least 1 mimic as true, the other one on top as a mimic, and leave the last 2 alone, as it's a 50/50.
Expert Dungeon Guide
Welcome to the Expert dungeon. For the most part the strategies are the same as in the standard dungeon, just with a few new chest types. So instead of repeating all of those, I will go into the new chests and a couple of more advanced strategies.
Truth Chains:
Consider the following puzzle:
In it we have solved all the easily solvable chests, leaving us with 2 pairs remaining. In situations like this, simply jumping to a solution may lead us down the wrong path, as some puzzles have multiple valid solutions. The technique I like to use is what I call a truth chain, although trial and error would be another name for it.
The basis for a truth chain is asking "what happens if X is true or false?" The easiest place to start in this puzzle is with the statement "The bottom row contains no mimics", as 2 chests are saying it, meaning they must either both be real or both be mimics.
Let's start with the assumption that they are both telling the truth. We would mark them both as true and also mark the other chest on the bottom as true. This would leave us with 2 mimics when the puzzle has 3, meaning this solution isn't correct. Therefore "The bottom row contains no mimics" must be a false statement.
Do note that if your first assumption produces a valid solution, do not immediately assume it is correct. You must then test the opposite. Sometimes both solutions are correct, leading to a gridlock.
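A truth chain is really just a consistency check, and rooms this small can be brute-forced. The sketch below uses a hypothetical encoding, not anything from the game: each chest's statement is hand-written as a function of an assumed mimic set, then every mimic set of the stated size is tested. Assignments where every real chest's claim is true and every mimic's claim is false survive; if more than one survives, you've found a 50/50.

```python
from itertools import combinations

def solve(statements, mimic_count):
    """statements[i] is a function taking the set of mimic indices and
    returning whether chest i's claim would be TRUE under that assignment.
    Real chests tell the truth; mimics lie."""
    n = len(statements)
    consistent = []
    for combo in combinations(range(n), mimic_count):
        mimics = set(combo)
        if all(statements[i](mimics) != (i in mimics) for i in range(n)):
            consistent.append(mimics)
    # A chest is provably safe only if it is real in EVERY surviving assignment.
    safe = [i for i in range(n) if all(i not in m for m in consistent)]
    return consistent, safe

# A made-up 3-chest, 1-mimic room: chest 0 says "chest 1 is a mimic",
# chest 1 says "chest 2 is not a mimic", chest 2 says "chest 0 is a mimic".
statements = [
    lambda m: 1 in m,
    lambda m: 2 not in m,
    lambda m: 0 in m,
]
solutions, safe = solve(statements, 1)  # chests 0 and 1 form the mimic pair
```

Here the only surviving assignment makes chest 0 the mimic, so chests 1 and 2 are safe, exactly what the pair-plus-accusation reasoning gives by hand.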
Look at the following puzzle:
This puzzle is unsolvable. Using a truth chain on the two boxes that claim the red box is a mimic leads to 2 valid solutions: either the red, middle right, and top left boxes are mimics, or the bottom right, middle left, and middle boxes are mimics. Sometimes you just need to move on.
Color based deduction
Look at the following puzzle:
In it there is a set of clashing statements: "every mimic is the same color" and "there are more mimics in red than blue boxes" cannot both be true. This leads to one of two possibilities: either both of the boxes claiming that the mimics are the same color are lying, or they are both real and the remaining 2 black boxes are mimics. In either case the blue chest must be real, and we can mark it. We can then use the true statement from the blue chest to solve the puzzle.
Gold Logic
Consider the following puzzle:
Since both of the other chests on the right were solvable, and since neither one contains gold, the chest in the top right becomes very important. Using a truth chain, we can see that if it's lying, it would lead to too many mimics. Therefore it must be real and contains gold. This means that both chests saying "There's no gold in the top row" must be lying.
Random Dungeon Guide
Random dungeons are the first time the game shakes up the rules a bit. Instead of giving you an exact mimic count, you get a range for the number of mimics. This usually makes the safest strategy to treat each puzzle as if it contained the maximum number of mimics. However, some strategies involving mimic counts become less reliable.
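The brute-force idea still works with a range: test every mimic count the range allows and only trust chests that come out real in every surviving assignment. This is again a sketch with a made-up encoding, where each chest's statement is hand-written as a function of the assumed mimic set:

```python
from itertools import combinations

def solve_random(statements, lo, hi):
    """statements[i](mimics) gives the truth of chest i's claim under the
    assumed mimic set. Any mimic count in [lo, hi] is acceptable."""
    n = len(statements)
    consistent = [set(combo)
                  for k in range(lo, hi + 1)
                  for combo in combinations(range(n), k)
                  if all(statements[i](set(combo)) != (i in combo)
                         for i in range(n))]
    safe = [i for i in range(n) if all(i not in m for m in consistent)]
    return consistent, safe

# Three chests, 1-2 mimics: chests 0 and 2 both say "chest 1 is a mimic";
# chest 1 says "chest 2 is a mimic".
statements = [lambda m: 1 in m, lambda m: 2 in m, lambda m: 1 in m]
solutions, safe = solve_random(statements, 1, 2)
```

Two assignments survive here, one with 1 mimic and one with 2, and no chest is real in both, so nothing is provably safe: exactly the kind of ambiguity the unknown count creates.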
Basics of randomness
Consider the following puzzle:
Due to how this is set up, either both bottom chests are mimics, or both of the chests claiming that at least one of them is a mimic are themselves mimics. However, we are still left with a problem: we don't know whether 2 or 3 chests are mimics. We can, however, confirm that the middle right box is telling the truth: if it were lying, the box up and to the left of it would also be lying, creating a situation with 4 mimics. Marking this box as true allows us to solve the puzzle and see that there are 2 mimics.
Single box 50/50
Random Dungeons can have single box 50/50 chances as in the following:
In this puzzle the unmarked box may or may not be a mimic. Since we don't know the mimic count, we can't tell if it's telling the truth and there is only one mimic, or if it's lying and it's the second mimic.
Robber Dungeon Guide
Robber Dungeons include a new type of chest, the robber. This chest always tells the truth, but if opened it steals 100 gold from you; if you have less than 100 gold, it just steals as much as you have. Because of this, testing possible robber chests when you have very little gold (such as at the start or right after a shop) is usually better than using the open-all button.
Getting robbed for profit
Look at the following puzzle:
While a bunch of chests are concerned about the robber, the two most important ones are the marked pair; one of those is the mimic. Additionally, the bottom row must contain the robber. Therefore all non-marked, non-bottom chests are safe. This means that the chest saying the top row has no mimics is telling the truth, leaving us with both the mimic and the bottom chests as unsafe to open (one is the robber, one is the mimic).
eyes on the prize
Consider the following:
After marking the first round of mimic locations, the 3 middle chests are immediately truthful. This means the top chests are safe, and the blue chest is the mimic. Following this, we arrive at this solution:
What you do next depends on your gold count. If you just visited a shop or just started, you should open the triangle-marked boxes one at a time until you find the robber. Since none of those chests contain any gold, you will be able to open all of them without fear of the robber taking any. If you have 100 or more gold, then you need to weigh the value of 2 items/equipment against 100 gold; this is largely subjective, so it's up to you.
robber reports
In a 1-mimic situation, a chest reporting the location of a robber guarantees a non-mimic chest, such as in the following puzzle:
In this puzzle, the bottom right chest claims the middle is the robber. Since one mimic is already in a mimic pair, the middle chest must not be the mimic, as either the bottom right chest is telling the truth, or it is a mimic lying about a normal chest being the robber. Similarly, the chest on the middle right must also be telling the truth, since if the blue chest were the robber, it would not be able to lie about its location.
I am not the robber
If a chest ever says it is not the robber, such as claiming red chests have no robbers when it is the only red chest, it cannot be the mimic or the robber. The robber can't lie, so it could never deny its own location; and if the chest were a mimic, its lie would mean a red chest (itself) is the robber, which is impossible.
Number Dungeon Guide
Welcome to the minesweeper dungeon. While the number dungeon might initially seem daunting, it's not that bad once you get a basic understanding of how to navigate it. Most boxes have a number; that number is how many mimics are near them, counting diagonals.
How does this number thing even work?
A number on a box means that there are X mimics nearby, X being the number. So a box with a zero is telling you all nearby chests are safe, while a four would mean there are four mimics nearby. However, since each puzzle is always three by three, a basic strategy is fairly easy to apply.
Go for the Centre
Checking the centre box allows you to solve many of the number puzzles; take this one for example:
In this puzzle the centre box must be real. If it were a mimic, the three 0 boxes would also be mimics. That would leave us with 4 mimics, which is impossible.
Check the corners
If a 3 box is in a corner, it means that either all the boxes surrounding it are mimics, or it itself is a mimic. Usually it isn't too hard to figure out which one it is.
Doubt Dungeon Guide
Welcome to the dungeon most likely to give you a headache. The Doubt dungeon flips the rules on their head by having mimics always tell the truth, while all non-mimic chests of the same color as a
mimic lie. This means that many of the normal strategies will not work. However, once you wrap your head around this new ruleset, there are a few new strategies that you can use to quickly find the
mimics.
Color Based Self reporting
Unlike in the dungeons before it, mimics self-report in Doubt. Take the following puzzle:
The black box in the bottom left claims there is a mimic among the black boxes. This box must be a mimic, as only a mimic is able to self-report in Doubt dungeons. The same goes for the top middle blue box.
Reverse self reporting
Similarly, if a chest claims it is not a mimic, it must not be a mimic, although its statement might still be a lie, as in the following:
In this puzzle, the bottom middle chest cannot be a mimic: if it were, it would tell the truth that there is a mimic among the blue boxes.
Single Color statements
Continuing from the puzzle above, the red chest must be telling a true statement: if a chest is the only chest of its color, the statement it makes must be true. Using this, we can mark the blue
boxes all as true (since if both were lying we would have 3+ mimics), then mark the red box as a mimic, as only one black box can be a mimic, then finally mark the top middle box as a mimic, as it is
telling a true statement, and whichever black box is telling the truth is a mimic.
Location based self reporting
In order for a mimic to be “self reporting”, all possible mimics of that color must be included in their statement. Take the following:
While at first glance you might think the chest in the top right is self-reporting, this is not the case. Since not all possible black mimics are in the right column, it could be a real box that is
lying. In this specific puzzle it is real and is telling the truth.
Confuse Dungeon Guide
Welcome to the final dungeon of the game. Confuse dungeons don’t need their own dedicated strategies, just a modification to the rules we use.
Here’s a quick list of rules in confuse dungeons:
• statements said more times than there are mimics + confuse chests must be true
• true statements mean the chest is real
• a chest being real does not always mean it’s statement is true
• puzzles can be treated as containing one more mimic for the purpose of mimic pairs
• if you hit the maximum number of mimics/mimic pairs, all remaining chests must be telling the truth
• in one mimic/confused box situations, all rules that apply to one mimic dungeons work, treating confused boxes as the only mimic
• blue orbs are very useful for cracking open later puzzles, 50/50 situations are also much more common, so try to not use them on every 50/50 you find
• A chest that self reports must be real
Basic Town Overview
A town is a Location that spawns on floor 10 and floor 20. A town will always contain the following:
• An NPC with 1 tip
• A restaurant that sells two different gut and HP restore meals, one that is a full heal, and one that is a small heal
• A shop that contains a shopkeeper
Towns also have the chance to spawn one of the following NPCs:
• Strange Doctor: gives you medicine that can either heal you or damage you for 30 HP
• Lottery ticket salesman: sells you a scratch ticket for 200 gold. The prizes are 1100 gold for 1st, 400 for second, and 300 for third.
• Blacksmith: will restore 10 durability on your weapon and armour for 10 gold.
Notable Chests
The following is a list of noteworthy chests:
Sleepy Chest
A sleeping chest is just that: sleeping. You can wake them up using a specific item; otherwise they will provide no useful information.
Countdown Chest
A countdown chest comes with a timer that counts down from 100. If it's real, you will die when time runs out. If it's lying, nothing will happen.
“Innocent” Chest
A chest that claims it is not a mimic. In most dungeons this provides no useful information. In Doubt dungeons it allows you to mark all chests of that colour as real. In Confuse dungeons it lets you
know it isn't confused.
Thanks for taking a look, I hope something here helps. If you have any strategies or patterns that I missed, feel free to add them in the comments and I can add them to the guide (with credit, of course).
Leave A Reply
|
{"url":"https://shonendaily.com/mimic-logic-where-to-find-mimics","timestamp":"2024-11-11T20:36:23Z","content_type":"text/html","content_length":"110180","record_id":"<urn:uuid:9571897e-7c3c-4218-b604-3fbcd41f0b25>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00813.warc.gz"}
|
longmath – Nested delimiter groups extending over multiple array cells or lines
This package provides yet another solution to some well known typesetting problems solved in a variety of ways: multi line formulas with paired and nested delimiters. It tackles the problem at the
Lua level, which has some advantages over solutions implemented in TeX. In particular, the TeX code need not be executed multiple times, and there is no interference between TeX grouping and the
nesting of delimiter groups.
As a byproduct, delimiters can be scaled in various ways, inner delimiters come in different flavours like relational and binary operators, punctuation symbols etc., and outer delimiters can be
selected automatically according to the nesting level. Last but not least, delimiter groups can even extend across several array cells or across the whole document.
A special environment is provided as well, which allows multi line expressions to be placed inside a displayed equation and make TeX do the line splitting and alignment.
Sources /macros/luatex/latex/longmath
Version 1.0 2024-07-04
Licenses The LaTeX Project Public License 1.3
Copyright 2024 Hans-Jürgen Matschull
Maintainer Hans-Jürgen Matschull
Contained in TeXLive as longmath
MiKTeX as longmath
Topics Maths
Parentheses management
Download the contents of this package in one zip archive (185.6k).
Community Comments
Maybe you are interested in the following packages as well.
|
{"url":"https://ctan.org/pkg/longmath","timestamp":"2024-11-03T18:37:25Z","content_type":"text/html","content_length":"17490","record_id":"<urn:uuid:ce4bab2b-1889-4ae6-9cf1-974ee136661b>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00474.warc.gz"}
|
Why is called an open interval?
An open interval is an interval that does not include its end points.
What I don't understand is why is it called an open interval?
It's all about sets;
As an example, consider the open interval (0, 1) consisting of all real numbers x with 0 < x < 1. Here, the topology is the usual topology on the real line. We can look at this in two ways. Since
any point in the interval is different from 0 and 1, the distance from that point to the edge is always non-zero. Or equivalently, for any point in the interval we can move by a small enough
amount in any direction without touching the edge and still be inside the set. Therefore, the interval (0, 1) is open. However, the interval (0, 1] consisting of all numbers x with 0 < x ≤ 1 is
not open in the topology induced from the real line; if one takes x = 1 and moves an arbitrarily small amount in the positive direction, one will be outside of (0, 1].
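The "move by a small enough amount without touching the edge" argument above can be sketched in a few lines (an illustration only; the function name is made up):

```python
def epsilon_margin(x, a=0.0, b=1.0):
    """Distance from a point x of (a, b) to the nearest endpoint.
    For an open interval this margin is strictly positive at every
    member point, so a small enough move stays inside the set."""
    return min(x - a, b - x)

# Interior points of (0, 1) always have room to move:
assert epsilon_margin(0.5) == 0.5
assert epsilon_margin(0.999) > 0
# The endpoint x = 1, included in (0, 1], has no room at all:
assert epsilon_margin(1.0) == 0.0
```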
If you're asking about the reason behind the name, then I'm not sure, although I guess it's because it "goes on for ever" (ie.: has no minimum or maximum) therefore it's not "closed", but "open".
EDIT: Woops, didn't notice the post above
See http://www.mathwords.com/o/open_interval.htm
But in this example it does. In this example the minimum is -2 and the maximum is 3. So why is it really called an open interval?
Here is what I mean:
As an example, consider the open interval (0, 1) consisting of all real numbers x with 0 < x < 1. Here, the topology is the usual topology on the real line. We can look at this in two ways. Since
any point in the interval is different from 0 and 1, the distance from that point to the edge is always non-zero. Or equivalently, for any point in the interval we can move by a small enough
amount in any direction without touching the edge and still be inside the set. Therefore, the interval (0, 1) is open. However, the interval (0, 1] consisting of all numbers x with 0 < x ≤ 1 is
not open in the topology induced from the real line; if one takes x = 1 and moves an arbitrarily small amount in the positive direction, one will be outside of (0, 1].
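For reference, the two intervals discussed in this thread, written in standard set-builder notation:

```latex
(0, 1) = \{\, x \in \mathbb{R} : 0 < x < 1 \,\}, \qquad
(0, 1] = \{\, x \in \mathbb{R} : 0 < x \le 1 \,\}
```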
|
{"url":"https://www.scienceforums.net/topic/37115-why-is-called-an-open-interval/#comment-469294","timestamp":"2024-11-09T22:39:02Z","content_type":"text/html","content_length":"107253","record_id":"<urn:uuid:f5800544-123b-461b-a8f5-3b73246333e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00455.warc.gz"}
|
Category:Course materials
Computer Vision Primer: beginner's guide to image analysis, data analysis, related mathematics (calculus, topology, linear algebra), image analysis software, and applications in sciences and
From Computer Vision Primer
This category contains syllabi of some math courses.
There is 1 subcategory to this category.
Articles in category "Course materials"
There are 13 articles in this category.
• Linear algebra: course
• Group theory: course
|
{"url":"https://inperc.com/wiki/index_title_Category_Course_materials.html","timestamp":"2024-11-05T23:07:36Z","content_type":"application/xhtml+xml","content_length":"10642","record_id":"<urn:uuid:c639e60a-55bc-4e2a-88fc-f05b8f22a4dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00726.warc.gz"}
|
how to calculate power supply for ball mill
For End Milling Application. These calculations are based upon theoretical values and are only intended for planning purposes. Actual results will vary. No responsibility from Kennametal is
assumed. Metric Inch.
WhatsApp: +86 18838072829
Output Voltage: the voltage supplied to the LED (e.g., 12V or 24V DC)
Output Current: the current supplied to the LED in Amperes (A)
Output Power: the total power output of the power supply in Watts (W)
Efficiency: the efficiency of the power supply, usually given as a percentage (%)
Dimmable: indicates whether the power supply is dimmable or not
Small Ball Mill Capacity Sizing Table Ball Mill Design/Power Calculation
Maximum ball size (MBS) inputs: mill feed material size F (mm); specific weight of feed SG (g/cm³).
We can calculate the steel charge volume of a ball or rod mill and express it as the % of the volume within the liners that is filled with grinding media. While the mill is stopped, the charge
volume can be gotten by measuring the diameter inside the liners and the distance from the top of the charge to the top of the mill. The % loading or ...
Thus the power to drive the whole mill. = + = = 86 kW. From the published data, the measured power to the motor terminals is 103 kW, and so the power demand of 86 kW by the mill leads to a
combined efficiency of motor and transmission of 83%, which is reasonable.
First, the weight loss of the ball leads to a decrease in its kinetic energy and a consequent reduction in energy transfer or milling efficiency. Second, the degree of filling of the mill is
raised so the balls mobility becomes more difficult and as a result the kinetic energy of the balls are reduced.
The formula for critical speed is CS = (1/2π)√(g/(R−r)), where g is the gravitational constant, R is the inside radius of the mill and r is the radius of one piece of media. In practical units this
reduces to a constant divided by √(R−r). Dry mills typically operate in the range of 50%-70% of CS and most often between 60% and 65% of CS.
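As a quick numerical sketch of the critical-speed formula CS = (1/2π)√(g/(R−r)) in SI units (the helper name and example sizes are illustrative, not from the source):

```python
import math

def critical_speed_rpm(mill_radius_m, media_radius_m, g=9.81):
    """Critical speed N_c = (1/(2*pi)) * sqrt(g / (R - r)) in rev/s,
    converted to rpm; R and r are mill and media radii in metres."""
    rev_per_s = math.sqrt(g / (mill_radius_m - media_radius_m)) / (2.0 * math.pi)
    return 60.0 * rev_per_s

# A 2 m diameter mill (R = 1.0 m) with 50 mm balls (r = 0.025 m)
# has a critical speed of roughly 30 rpm; a dry mill would then be
# run at 50-70% of that figure, i.e. about 15-21 rpm.
```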
Milling Horsepower Calculator. Calculate the horsepower required for a milling operation based on the feed rate and depth of cut, which are used to determine the material removal rate (or metal
removal rate). Also required is the unit power, which is a material property describing the amount of power required to cut that material.
If so, half of the occupied volume would be filled by the charge, the other half by the balls' real volume (BCVR = 1:1 v/v), and the total occupied volume would be 73% (27% left empty). The
maximum ...
Ball Mill Motor/Power Sizing Calculation Ballshaped Mill Design/Sizing Calculator To capacity required to grind a material from a giving feed size the a gives product size can be estimated by
using that subsequent equal: find: W = power consumption declared to kWh/short to (HPhr/short ton = kWh/short ton)
This can reduce the energy consumption and the use cost of the ball mill. To make the milling more efficient, we must first be acquainted with the factors. They are mainly the ball mill
structure, the rotation speed, the ball mill media, the lining plate, the material fed and the feeding speed, etc. In the following text, you will get some ...
Typically Speed = Distance/time (which means / But here we also have to calculate the speed of the sound of the ball traveling when the ball hits the pins till it reaches the ears of the ...
Here, you will find information about your computer's power supply. Another method to determine your computer's power supply is through thirdparty software called CPUZ. Download and install CPUZ
from their official website, then launch the program. Go to the "Mainboard" tab and look for information related to "Power Supply."
When the filling rate of grinding medium is less than 35% in dry grinding operation, the power can be calculated by formula (17). n — mill speed, r/min; G" — Total grinding medium, T; η —
Mechanical efficiency, when the center drive, η = ; when the edge drive, η = Rotation Speed Calculation of Ball Mill
The following equation is used to determine the power that wet grinding overflow ball mills should draw. For mills larger than meters (10 feet) diameter inside liners, the top size of the balls
used affects the power drawn by the mill. This is called the ball size factor S.
A) Total Apparent Volumetric Charge Filling including balls and excess slurry on top of the ball charge, plus the interstitial voids in between the balls expressed as a percentage of the net
internal mill volume (inside liners). B) Overflow Discharge Mills operating at low ball fillings slurry may accumulate on top of the ball charge; causing, the Total Charge Filling Level to be ...
A nice and pretty sexy online Ball Mill Power-draw Calculator was put live by one of the great prophets of grinding. It comes with all the related online documentation prompting you to enter your
basic mill data.
High temperature of the ball mill will affect the efficiency. 3. For every 1% increase in moisture, the output of the ball mill will be reduced by 8%-10%. 4. When the moisture is greater than 5%,
the ball mill will be unable to perform the grinding operation. 5. The bearing of the ball mill is overheated and the motor is overloaded.
1. Closed Circuit = W 2. Open Circuit, Product Topsize not limited = W 3. Open Circuit, Product Topsize limited = W to W Open circuit grinding to a given surface area requires no more power than
closed circuit grinding to the same surface area provided there is no objection to the natural topsize.
1. The material is brittle. 2. For this porpuse (research) I use small amount (20 gr. of fine powder in a 1 litter jar) 3. didn't thought about ratio, at this moment I use 20:1 (I believe it's
not ...
The formula for calculating shaft power: P = QE, where P = shaft power, Q = mill capacity, E = specific power of mill. Example: find the shaft power when the mill capacity is 20 and the specific
power of the mill is 24. Then Q = 20, E = 24, and P = QE = (20)(24) = 480.
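The worked P = QE example maps directly to a few lines of code (the function name is illustrative):

```python
def shaft_power(mill_capacity, specific_power):
    """Shaft power P = Q * E, with Q the mill capacity and
    E the specific power of the mill."""
    return mill_capacity * specific_power

# The worked example from the text: Q = 20, E = 24 -> P = 480.
assert shaft_power(20, 24) == 480
```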
Rod mills speed should be limited to a maximum of 70% of critical speed and preferably should be in the 60 to 68 percent critical speed range. Pebble mills are usually run at speeds between 75
and 85 percent of critical speed. Ball Mill Critical Speed . The black dot in the imagery above represents the centre of gravity of the charge.
Modern mills operate with load cells and mill optimizer's which synchronize the mill weight, power draw, ore feed and weight with density control. Where optimizers are not used, the rule of thumb
for a overflow ball mill is a ball load up to about 12 inches below the trunnion discharge this allows bed expansion by the slurry filling the voids ...
Figure 3a and 3b gives the results of the computer calculation. The mill power at the pinionshaft for a 30% volume charge is the sum of: Figures 3a 3b give the power for an autogenous mill.
Figures 4a and 4b are for the same size mill with a ball charge of 6% of mill volume (290 lbs. per cubic foot).
The generator power calculator takes the total current requirement of the devices in amperes (A) and the supply voltage rating in volts (V) to calculate the apparent power (kVA), which is then used
to calculate actual power based on the power factor. Read the below section if you don't understand some of the terms we used here.
Hard ore Work Index 16 = 100,000/65,000 = kwh/t. For the purposes of this example, we will hypothesize that the the crushing index of the hard ore with the increased energy input of kw/t reduces
the ball mill feed size to 6,500 micrometers. As a result, the mill output will increase with this reduced size to approximately 77,000 tons ...
Ball Mill Power Calculation Example A wet grinding ball mill in closed circuit is to be fed 100 TPH of a material with a work index of 15 and a size distribution of 80% passing ¼ inch...
Measure the length of the LED/COB/SMD 835 strip that you will be using in feet. Get the wattage per foot rating of your low voltage LED/COB/SMD 2835 strip (example /ft) Once you have the length
and wattage per foot, click the "Calculate" in the calculator. The Calculator will then Display the recommended upper and lower limit amount of ...
From that "power draw" you calculate a shell size from 4 parameters: diameter, effective length, wet or dry? %loaded and % of critical speed. You also size an appropriate motor to turn the shell
from the original "power draw" you calculated.
The rod mill motor power is in horsepower at the mill pinionshaft. For different length rod mills power varies directly as rod length. For difference between new and worn liners increase power
draw by 6%, and adjust for bulk density per Table A. Wet grinding rod mills are normally used in minerals processing plants.
Units used in the above table: Ap, Ae, D, W in mm or inch; Vf in m/min or inch/min; Vc in m/min or feet/min (SFM); Fn in mm/rev or inch/rev; MRR (metal removal rate) in cm³/min or inch³/min. Step 2:
Obtaining the material's Specific Cutting Force (). Each material has a Specific Cutting Force coefficient that expresses the force in the cutting direction, required to cut a chip ...
Ball top size (bond formula): calculation of the top size grinding media (balls or cylpebs):Modification of the Ball Charge: This calculator analyses the granulometry of the material inside the
mill and proposes a modification of the ball charge in order to improve the mill efficiency:
crusher in cement palnt power requirement for the crusher. power requirement for stone crusher aquabrand. Pf 1214 Coal Mill In Power Plant Crusher Mills, how to calculate power requirements for
grinders crushers. Get Price.
How to Measure Grinding Efficiency. The first two Grinding Efficiency Measurement examples are given to show how to calculate Wio and Wioc for single stage ball mills. Figure 1. The first example
is a comparison of two parallel mills from a daily operating report. Mill size x (′ x 20′ with a ID of 16′).
|
{"url":"https://panirecord.fr/how_to_calculate_power_supply_for_ball_mill/7636.html","timestamp":"2024-11-01T19:56:56Z","content_type":"application/xhtml+xml","content_length":"30601","record_id":"<urn:uuid:f03f27ba-c91a-4e96-9069-7868c1dac251>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00040.warc.gz"}
|
Papers with Code - Stéphane d'Ascoli
3 code implementations • 22 Apr 2022 • Pierre-Alexandre Kamienny, Stéphane d'Ascoli, Guillaume Lample, François Charton
Symbolic regression, the task of predicting the mathematical expression of a function from the observation of its values, is a difficult task which usually involves a two-step procedure: predicting
the "skeleton" of the expression up to the choice of numerical constants, then fitting the constants by optimizing a non-convex loss function.
Tasks: Regression, Symbolic Regression
|
{"url":"https://paperswithcode.com/search?q=author%3ASt%C3%A9phane+d%27Ascoli","timestamp":"2024-11-15T03:20:12Z","content_type":"text/html","content_length":"154152","record_id":"<urn:uuid:6c8367e0-a4e9-438a-b580-2b54b0f9d60e>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00456.warc.gz"}
|
Yield to Maturity vs. Discount Rate — What's the Difference?
Yield to Maturity (YTM) represents the total return expected from a bond if held to maturity, whereas Discount Rate is the interest rate used to determine the present value of future cash flows.
Difference Between Yield to Maturity and Discount Rate
Key Differences
Yield to Maturity and Discount Rate are two pivotal financial concepts used in investment decision-making and analysis. Yield to Maturity, commonly abbreviated as YTM, is a measure that reveals the
total anticipated return on a bond if it's held until it matures. This takes into account periodic interest payments and the repayment of the bond's face value at maturity. Discount Rate, on the
other hand, is utilized to ascertain the present value of anticipated future cash flows. Essentially, it's the rate at which future cash flows are discounted back to the present.
Both Yield to Maturity and Discount Rate play a significant role in financial analysis. When evaluating a bond as an investment, YTM offers an insight into what one can expect as an annualized
return, assuming the bond is kept until its maturity date. Conversely, the Discount Rate is often used in discounted cash flow analysis, enabling investors to discern the current value of future
income streams, considering the time value of money.
While they may seem similar in nature, Yield to Maturity and Discount Rate serve distinct purposes. YTM exclusively pertains to bonds and gives a comprehensive perspective of potential returns,
including interest payments and the bond's face value. The Discount Rate is broader in application and can be used across various assets and investments, not confined solely to bonds.
Furthermore, the sources of these rates are different. Yield to Maturity is intrinsically tied to the bond's market price, coupon rate, and time to maturity. Meanwhile, the Discount Rate can be
derived from various sources, including the expected rate of return from alternative investments or the cost of capital.
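The two concepts can be made concrete with a short sketch (illustrative only; an annual-pay bond and a simple bisection solver are assumptions, not something taken from this article):

```python
def present_value(cash_flows, discount_rate):
    """Discount Rate in action: present value of cash_flows,
    where cash_flows[t] arrives at the end of year t + 1."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

def yield_to_maturity(price, face, coupon_rate, years):
    """YTM of an annual-pay bond: the single rate that discounts the
    coupons and face value back to the market price (found by bisection)."""
    flows = [face * coupon_rate] * years
    flows[-1] += face  # face value repaid at maturity
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if present_value(flows, mid) > price:
            lo = mid  # modelled price too high: the yield must be larger
        else:
            hi = mid
    return (lo + hi) / 2

# A 10-year, 5%-coupon bond trading at par has a YTM of 5%.
```

Note how the Discount Rate appears as an input to `present_value`, while YTM is the output solved for from a bond's market price.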
Comparison Chart
Total return expected from a bond if held to maturity.
Interest rate to determine the present value of future cash flows.
Exclusively related to bonds.
Used across various assets and investments.
Comprehensive potential returns including interest and face value.
Current value of expected future income streams.
Derived from
Bond's market price, coupon rate, time to maturity.
Expected return on alternative investments or cost of capital.
Importance in analysis
Evaluates bond as an investment.
Evaluates the present value of future income streams.
Compare with Definitions
Yield to Maturity
YTM takes into account periodic interest and principal repayment.
Given its semi-annual interest payments, the Yield to Maturity gave a comprehensive view of potential earnings.
Discount Rate
Discount Rate is utilized in discounted cash flow analysis.
By applying the appropriate Discount Rate, she assessed the investment's worth over a 5-year span.
Yield to Maturity
Yield to Maturity is a bond's total anticipated return.
The bond's Yield to Maturity was 5%, making it an attractive investment.
Discount Rate
Different investments might have different Discount Rates.
Riskier projects often have a higher Discount Rate to account for uncertainty.
Yield to Maturity
YTM reflects the bond's current market price.
As bond prices fluctuate, so does the Yield to Maturity.
Discount Rate
The Discount Rate can be influenced by external economic factors.
Central banks' decisions can impact the prevalent Discount Rate in the economy.
Yield to Maturity
Yield to Maturity assumes the bond is held until it matures.
If sold early, the actual return might deviate from the projected Yield to Maturity.
Discount Rate
Discount Rate gauges the present value of future cash flows.
A higher Discount Rate reduces the present value of future earnings.
Yield to Maturity
YTM aids in comparing different bond investment opportunities.
Comparing the Yield to Maturity of various bonds helped her select the best option.
Discount Rate
The Discount Rate represents the time value of money.
Considering inflation, he used a 7% Discount Rate to evaluate his investment.
Common Curiosities
Can YTM be negative?
Yes, in some low-interest-rate environments, bonds can have a negative YTM.
How does the Federal Reserve's decisions impact the Discount Rate?
The Federal Reserve's decisions can influence interest rates in the economy, which in turn can affect the Discount Rate used by investors.
Why is the Discount Rate crucial in investment analysis?
It helps determine the present value of future cash flows, accounting for the time value of money.
Do Yield to Maturity and Discount Rate measure the same thing?
No, YTM measures total expected return on a bond, while Discount Rate evaluates the present value of future cash flows.
What factors influence a bond's Yield to Maturity?
Factors include the bond's market price, coupon rate, and time to maturity.
Why might two different projects have different Discount Rates?
Riskier projects usually have a higher Discount Rate to account for greater uncertainty.
Is Yield to Maturity the same as the bond's coupon rate?
No, while the coupon rate is a fixed interest rate on a bond, YTM considers the bond's current price and expected total returns.
Can the Yield to Maturity change over time?
Yes, as market conditions and bond prices fluctuate, YTM can also change.
How is the appropriate Discount Rate for a project determined?
It's often based on the expected return on alternative investments or the cost of capital, adjusted for the project's risk.
How do bond duration and Yield to Maturity relate?
Duration measures a bond's sensitivity to interest rate changes, and bonds with longer durations are often more sensitive to changes in YTM.
Share Your Discovery
Author Spotlight
Tayyaba Rehman is a distinguished writer, currently serving as a primary contributor to askdifference.com. As a researcher in semantics and etymology, Tayyaba's passion for the complexity of
languages and their distinctions has found a perfect home on the platform. Tayyaba delves into the intricacies of language, distinguishing between commonly confused words and phrases, thereby
providing clarity for readers worldwide.
|
{"url":"https://www.askdifference.com/yield-to-maturity-vs-discount-rate/","timestamp":"2024-11-14T07:23:21Z","content_type":"text/html","content_length":"134466","record_id":"<urn:uuid:098f289a-93e9-4e5d-bc9b-f1de9ba59af4>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00500.warc.gz"}
|
TDS Uncertainty Quantification Efforts
Efficient UQ Algorithms for Plasma Simulations (Preliminary results).
Bryan Reuter (SNL - FASTMATH), Gianluca Geraci (SNL- FASTMATH), Michael Crockatt(SNL), John Shadid (SNL), Tom Smith (SNL), Ari Rappaport (SNL)
SAND2022-3490 W. Approved for public release; distribution is unlimited.
In addressing the modeling and simulation challenges of plasma systems, it is important to account for the fact that uncertainty is endemic to computational simulation of complex multiphysics
systems. Numerical simulation of complex multiphysics plasma systems often requires the use of very large-scale computing resources to achieve even marginal accuracy. Thus, we are generally limited
to at most O(10 - 50) simulations of the highest fidelity models to approximate the probability distributions propagated through the multiphysics systems and to make predictive inferences. Likewise,
experimental, and observational data on complex plasma systems is expensive to obtain, strictly limited in scope and availability, and subject to significant uncertainty and experimental error.
Consequently, truly predictive computational analysis and design for these scientific and technology systems requires quantifying the effects of as many sources of uncertainty as feasible. In this
study we report on our initial efforts to apply Multifidelity Uncertainty Quantification (MF UQ) to a hierarchy of models and demonstrate the usefulness of a few sampling based MF UQ estimators to
improve the precision of statistics for plasma physics problems.
Multifidelity approaches recognize that the number of high-fidelity model evaluations will be limited and seek to use many function evaluations from a predictive low-fidelity model to reduce the
number of high-fidelity model evaluations required to compute high-fidelity statistics to a specified precision. In this context the high-fidelity model is, e.g., represented as a sum of the
low-fidelity model plus a discrepancy term: Q_HIGH = Q_LOW + (Q_HIGH − Q_LOW). When the low-fidelity model captures useful trends of the high-fidelity model, then the model discrepancy term, Q_HIGH −
Q_LOW, may have either lower complexity or lower variance, and thus require less computational effort to resolve its functional form than that required for the original high-fidelity model. Briefly,
in the context of the plasma physics systems considered above, we will explore employing a hierarchical approach to defining multifidelity models. We will consider the increasing complexity (and thus
computational costs) of these models, roughly ranked as (resistive MHD, extended MHD, multifluid / full-Maxwell, full kinetic PIC). In this way the low-fidelity model would be of lower rank (to the
left) and the high-fidelity (to the right).
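The decomposition QHIGH = QLOW + (QHIGH − QLOW) can be sketched as a two-fidelity sampling estimator. The models, budgets, and quantity of interest below are purely illustrative stand-ins, not the plasma models discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the hierarchy: q_high is the costly
# high-fidelity QoI, q_low a cheap surrogate that tracks its trend.
def q_high(x):
    return np.sin(3 * x) + 0.1 * x**2

def q_low(x):
    return np.sin(3 * x)

# E[Q_HIGH] = E[Q_LOW] + E[Q_HIGH - Q_LOW]: spend many samples on the
# cheap term and only a few on the low-variance discrepancy term.
x_many = rng.uniform(-1, 1, 100_000)   # low-fidelity budget
x_few = rng.uniform(-1, 1, 100)        # high-fidelity budget

mean_low = q_low(x_many).mean()
mean_delta = (q_high(x_few) - q_low(x_few)).mean()
mf_estimate = mean_low + mean_delta    # unbiased estimate of E[Q_HIGH]
```

Because the discrepancy q_high − q_low has much smaller variance than q_high itself, the 100 expensive evaluations suffice where plain Monte Carlo on q_high would need far more.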
Multilevel Monte Carlo sampling for uncertainty quantification (MLUQ).
Here we briefly present an initial study of applying multilevel Monte Carlo (MLMC) sampling in the context of assessing aspects of UQ for a 2D resistive MHD tearing mode problem. The method begins
with an initial exploratory log-normal sample of a two-dimensional parameter space. The parameters are the viscosity and the resistivity, which are used to compute the governing non-dimensional
parameters: the Reynolds number and the Lundquist number. The initial exploratory sample computes the tearing mode growth rate for a sequence of higher-resolution discretizations. The images
below visualize the final single island plasma pressure distribution after the breaking of the thin current sheet. The algorithm uses these initial evaluations, or pilot study, to determine the
correlation of the models in the hierarchy for the prediction of the QoIs, and also to assess the relative cost of the models. This study uses various mesh resolutions and constant Alfvén wave CFL
time steps.
The result of the MLUQ algorithm/analysis is a sequence of computations, on various mesh levels, that significantly reduces the total CPU time expenditure to achieve a desired level
of estimated variance reduction.
The table compares the number of simulations scheduled by the MLMC algorithm for each mesh resolution to achieve an estimated standard deviation in the computation of the mean QoI. Clearly, the MLMC
algorithm has drastically reduced the number of highest-fidelity computations, reducing the overall estimated cost by ~30x to obtain the desired statistical precision.
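The sample-allocation logic behind such a schedule can be sketched with the standard MLMC rule: take N_l proportional to sqrt(V_l / C_l) per level. The variances and per-sample costs below are made up for a hypothetical four-level mesh hierarchy, not taken from the tearing mode study:

```python
import math

# Illustrative numbers for a hypothetical 4-level hierarchy
# (coarse -> fine): variance of each level difference and cost/sample.
variances = [1.0, 0.25, 0.06, 0.015]   # Var[Q_l - Q_{l-1}]
costs = [1.0, 8.0, 64.0, 512.0]        # CPU units per sample

eps2 = 0.05 ** 2                       # target estimator variance

# Classic MLMC allocation: N_l proportional to sqrt(V_l / C_l), scaled
# so the total estimator variance sum(V_l / N_l) hits the target eps^2.
scale = sum(math.sqrt(v * c) for v, c in zip(variances, costs)) / eps2
samples = [math.ceil(scale * math.sqrt(v / c))
           for v, c in zip(variances, costs)]

mlmc_cost = sum(n * c for n, c in zip(samples, costs))
# Single-level MC on the finest mesh needs ~Var[Q_fine]/eps^2 samples.
mc_cost = (sum(variances) / eps2) * costs[-1]
```

With decaying level-difference variances, most samples land on the cheap coarse level and only a handful on the finest mesh, which is exactly the pattern the table above reports.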
Multifidelity methods for uncertainty quantification (MFUQ)
Multifidelity uncertainty quantification allows for efficient and accurate propagation of uncertainty in complex, costly multiphysics plasma models needed to study key features of Tokamak fusion.
Initial studies of a model of an expanding core of high-density, fully-ionized plasma into a lower-density, fully-ionized background demonstrated that higher prediction accuracy for quantities of
interest can be achieved at a fraction of the cost (~1%) of single-fidelity approaches. The initial plasma state is considered uncertain and characterized probabilistically. Cheaper, lower-fidelity
models are generated by employing coarse meshes or low-order numerical methods. Since they can capture basic trends and low-order structure of the response of the quantities of interest (kinetic
energy, electromagnetic energy, and particle loss) to the uncertain inputs, these lower-fidelity models can be leveraged heavily to reduce the variance of the mean estimators. Briefly, MFMC
corresponds to a multifidelity Monte Carlo sampling algorithm, ACV-MF is an approximate control variate for multifidelity, and ACV-IS is an independent-samples strategy. A useful reference for these
methods is Generalized ACV for multifidelity UQ.
We consider a plasma of ions and electrons with three uncertain parameters -- the core number density, the initial ion temperature, and the ratio of initial electron to initial ion temperatures.
Relevant solution profiles for the QoIs at the final simulation time for a representative sample of the uncertain IC.
Optimal mean estimators and 99.7% confidence intervals for the QoIs. All MF strategies show significant improvement over a single-fidelity estimator.
Reduced sampling UQ and sensitivities for Grad-Shafranov equation.
Howard Elman (University of Maryland)
The aim of this study is to reduce the computational costs of simulation of a parameter-dependent model of fusion. Using the Grad-Shafranov free boundary problem as an example, for which the solution
depends on the intensities of currents going through an array of external coils, we treat the current intensities as stochastic parameters and explore with Monte Carlo simulation how variability of
the parameters affects important features of the plasma boundary such as location of the x-point, the strike points, and shaping attributes such as triangularity and elongation. Costs of simulation
can be dramatically reduced by replacing the mathematical model, which requires multiple solutions of a large nonlinear algebraic system of equations, with a surrogate function built on a sparse grid
in parameter space. The use of the surrogate function reduces the time required for the Monte Carlo simulations by factors that range between 7 and over 30. We expect this methodology to be broadly
applicable to more complex models.
Schematic diagram of the problem domain and solution of the forward problem. The scientific issue is the effect of multiple variations of current intensities on output, explored using Monte Carlo
simulation. The mathematical issue is the high cost of repeated solution of nonlinear algebraic equations.
Replace "direct solution" of algebraic system with surrogate approximation using sparse grid collocation (interpolation).
Average CPU times for evaluation of surrogate function and direct solution of the free boundary problem, and cost reductions obtained using surrogates. The details of the algorithmic approach and
results can be found in Surrogate approximation of the Grad-Shafranov free boundary problem via stochastic collocation on sparse grids.
zROC Space: Transforming ROC space
A z-transformed ROC space
In each of our graphs of ROC space up till now, hit rate and false alarm rate lie on a nice, evenly spaced grid, while our iso-sensitivity curve and iso-bias curve have been, well, beautifully
curvaceous. This is great for working with the rates, but it makes things messy and non-linear when working with the curves. What if there was a simple way to reverse this situation?
Well, it turns out there is! The key is to work in the native units of d′ and c, instead of in units of hit rate and false alarm rate. And the key to that is revealed by a quick look at our SDT model
to remind ourselves that the proportions of outcomes are determined by the areas under the distributions defined by the model parameters. Which brings us to the z-transformation (i.e. the inverse
cumulative distribution function of the normal distribution, Φ^−1).
Another path to the same conclusion is to note that while d′ and c have a complex non-linear relationship with hit rate and false alarm rate, they have simple additive relationships with their z-transformed counterparts.
As a result, if we use z-transformed hit rate and false alarm rate, our iso-sensitivity curve and iso-bias curve are straight lines in what is called zROC space:
This example is set up for model exploration. You can flip back and forth between ROC space and zROC space with the zROC-ROC switch.
Play with it for a little while and see that changing the values of d′ and c move the iso-sensitivity curve and iso-bias curve around (i.e. their y-intercept changes), but their slope remains the
same (at one and negative one respectively).
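Those additive relationships can be checked numerically. The sketch below uses Python's standard-library inverse normal CDF as the z-transformation; the two (hit rate, false alarm rate) pairs are illustrative values chosen to lie on roughly the same iso-sensitivity line:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf   # the z-transformation, Phi^-1

def dprime_c(hit_rate, fa_rate):
    """Recover d' and c from one (hit rate, false alarm rate) point."""
    zh, zf = z(hit_rate), z(fa_rate)
    return zh - zf, -(zh + zf) / 2

# Two points on (nearly) the same iso-sensitivity line: in zROC space
# z(H) = z(F) + d', so d' stays fixed while the criterion c shifts.
d1, c1 = dprime_c(0.84, 0.50)
d2, c2 = dprime_c(0.69, 0.31)
```

Both points recover d′ ≈ 1 while c moves from liberal toward neutral, which is the slope-one, shifting-intercept behavior described above.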
Iso-contours in zROC space
Another way to visualize the effect of the transformation is to look at the iso-bias, iso-sensitivity, and iso-accuracy contours in zROC space as compared to ROC space:
The utility of zROC space will become clearer on the next page when we discuss distributions with unequal variance. So let us proceed.
Fiber Optic Loss Budgets Calculator | Fiber Optic Systems Inc.
dB (Decibel) Conversions in Transmission (Tx) Systems
In the world of telecommunications, electronics, and particularly fiber optics, the decibel (dB) is a fundamental unit used to express ratios of power, voltage, or intensity. Understanding how to
convert between linear values (like power in watts) and decibels is crucial for designing, analyzing, and troubleshooting transmission systems.
This guide will walk you through the concepts of decibel calculations, provide interactive tools for quick conversions, and demonstrate how these principles apply to real-world scenarios in fiber
optic systems. Whether you're a seasoned engineer or a student just beginning to explore the field, mastering dB conversions will significantly enhance your ability to work with optical and
electrical systems effectively.
Let's dive into the world of decibels and discover how this logarithmic unit simplifies calculations and provides invaluable insights into signal strength, attenuation, and amplification in
transmission systems.
What is a Decibel (dB)?
A decibel (dB) is a logarithmic unit used to express the ratio between two values, typically power or amplitude. It's named after Alexander Graham Bell and is one-tenth of a bel (B), a unit that
proved too large for practical use.
Key characteristics of the decibel:
• It's a dimensionless unit, expressing a ratio rather than an absolute value.
• It uses a logarithmic scale, which allows for a wide range of values to be expressed in a manageable form.
• It's additive for power ratios, simplifying calculations involving multiple gains or losses.
Why Use Decibels?
1. Managing Large Variations: In transmission systems, signal levels can vary by factors of millions or billions. Decibels compress this huge range into more manageable numbers. For example, a power
ratio of 1,000,000:1 is simply 60 dB.
2. Simplifying Multiplicative Processes: In a transmission system, signals often undergo multiple stages of amplification or attenuation. With decibels, these multiplicative processes become simple
addition or subtraction.
3. Matching Human Perception: Our perception of sound and light intensity is roughly logarithmic. Decibels align well with how we perceive changes in signal strength.
4. Industry Standard: Using decibels allows for easy communication and comparison across different systems and technologies.
5. Ease of Calculation: When dealing with very large or very small numbers, decibel calculations can be easier and less error-prone than working with linear values.
By understanding and using decibels, engineers and technicians can more intuitively grasp signal changes, design systems more effectively, and communicate specifications more clearly.
dB Conversion Formulas
Power Ratio Conversions
1. Power Ratio to dB: dB = 10 × log₁₀(P_out / P_in)
2. dB to Power Ratio: P_out / P_in = 10^(dB/10)
Voltage or Amplitude Ratio Conversions
3. Voltage Ratio to dB: dB = 20 × log₁₀(V_out / V_in)
4. dB to Voltage Ratio: V_out / V_in = 10^(dB/20)
Note: The voltage/amplitude formulas assume equal impedances. For situations with different input and output impedances, additional calculations are necessary.
These formulas form the basis of our dB conversion calculators and are essential for analyzing signal levels in transmission systems.
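The four formulas translate directly into code. This is a minimal sketch, not the implementation behind the page's calculators:

```python
import math

def power_ratio_to_db(ratio):
    """10*log10 of a power ratio P_out / P_in."""
    return 10 * math.log10(ratio)

def db_to_power_ratio(db):
    return 10 ** (db / 10)

def voltage_ratio_to_db(ratio):
    """20*log10 of a voltage ratio; assumes equal impedances."""
    return 20 * math.log10(ratio)

def db_to_voltage_ratio(db):
    return 10 ** (db / 20)
```

For instance, power_ratio_to_db(2) gives 3.01 dB and db_to_power_ratio(-3) gives 0.5012, matching the worked examples later in this guide.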
Interactive dB Calculators
Our interactive dB calculators allow you to quickly convert between linear ratios and decibels for both power and voltage/amplitude. Here's how to use them:
Power Ratio Conversions
1. Power Ratio to dB:
□ Input: Enter the power ratio (P_out / P_in) in the first field.
□ Output: The calculator will display the corresponding dB value.
2. dB to Power Ratio:
□ Input: Enter the dB value in the second field.
□ Output: The calculator will display the corresponding power ratio.
Voltage Ratio Conversions
3. Voltage Ratio to dB:
□ Input: Enter the voltage ratio (V_out / V_in) in the third field.
□ Output: The calculator will display the corresponding dB value.
4. dB to Voltage Ratio:
□ Input: Enter the dB value in the fourth field.
□ Output: The calculator will display the corresponding voltage ratio.
Using the Calculators
• Enter values in any input field to see real-time results in the corresponding output field.
• For ratio inputs (power and voltage), use positive numbers only.
• For dB inputs, any real number is acceptable.
• Results are displayed with two decimal places for dB values and four decimal places for ratios.
Example Calculations
1. Power Ratio to dB:
□ Input: 2 (representing a doubling of power)
□ Output: 3.01 dB
2. dB to Power Ratio:
□ Input: -3 dB
□ Output: 0.5012 (representing approximately half power)
3. Voltage Ratio to dB:
□ Input: 2 (representing a doubling of voltage)
□ Output: 6.02 dB
4. dB to Voltage Ratio:
□ Input: -6 dB
□ Output: 0.5012 (representing approximately half voltage)
Interpreting Results
• Positive dB values indicate a gain or increase in power/voltage.
• Negative dB values indicate a loss or decrease in power/voltage.
• A 3 dB change represents approximately a doubling (gain) or halving (loss) of power.
• A 6 dB change represents approximately a doubling (gain) or halving (loss) of voltage.
Remember, these calculators assume equal impedances for voltage conversions. For more complex scenarios involving changing impedances, additional calculations may be necessary.
Practical Applications in Fiber Optics
Understanding dB conversions is crucial in fiber optic systems. Here are some practical applications:
1. Fiber Attenuation
Fiber optic cables experience signal loss over distance, typically measured in dB/km.
Example: A single-mode fiber has an attenuation of 0.3 dB/km at 1550 nm. For a 50 km link:
• Total attenuation = 0.3 dB/km × 50 km = 15 dB
• Power ratio = 10^(-15/10) ≈ 0.0316
This means only about 3.16% of the input power reaches the end of the 50 km fiber.
2. Optical Amplifier Gain
Optical amplifiers boost signal power, often specified in dB.
Example: An Erbium Doped Fiber Amplifier (EDFA) provides 20 dB gain.
• Power ratio = 10^(20/10) = 100
This amplifier increases the signal power by a factor of 100.
3. Connector and Splice Losses
Each connector or splice in a fiber system introduces some loss.
Example: A system has 6 connectors, each with a 0.5 dB loss.
• Total connector loss = 6 × 0.5 dB = 3 dB
• Power ratio = 10^(-3/10) ≈ 0.5012
About 50.12% of power is transmitted through all connectors.
4. Optical Power Budget
Calculating the power budget ensures sufficient signal strength at the receiver.
• Transmitter power: +3 dBm
• Fiber attenuation: -15 dB (from example 1)
• Connector losses: -3 dB (from example 3)
• Amplifier gain: +20 dB (from example 2)
• Receiver sensitivity: -20 dBm
Power budget calculation: 3 dBm - 15 dB - 3 dB + 20 dB = 5 dBm at receiver
Since 5 dBm > -20 dBm (receiver sensitivity), the link is viable.
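The power budget above is just addition of dB terms to a dBm starting point, which a few lines make explicit (the figures are the same illustrative values used in examples 1-3):

```python
# Link budget from the worked example: gains and losses add in dB,
# and the running total (in dBm) is compared to receiver sensitivity.
tx_power_dbm = 3.0
fiber_loss_db = -15.0        # 0.3 dB/km x 50 km
connector_loss_db = -3.0     # 6 connectors x 0.5 dB
edfa_gain_db = 20.0
rx_sensitivity_dbm = -20.0

rx_power_dbm = (tx_power_dbm + fiber_loss_db
                + connector_loss_db + edfa_gain_db)
margin_db = rx_power_dbm - rx_sensitivity_dbm
link_viable = rx_power_dbm > rx_sensitivity_dbm
```

The result is +5 dBm at the receiver with a comfortable 25 dB margin over sensitivity.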
5. Optical Return Loss (ORL)
ORL measures the total amount of light reflected back to the source.
Example: A fiber link has an ORL of 40 dB.
• Reflection ratio = 10^(-40/10) = 0.0001
Only 0.01% of the light is reflected back, indicating a good connection.
These examples demonstrate how dB calculations are integral to designing, troubleshooting, and optimizing fiber optic systems. By mastering these conversions, you can effectively analyze and improve
optical network performance.
Key Points to Remember About dB Conversions
1. Logarithmic Nature:
□ dB is a logarithmic unit, compressing large ranges into manageable numbers.
□ Small dB changes can represent significant linear changes.
2. Power vs. Voltage:
□ For power ratios, use 10 log₁₀(P₂/P₁)
□ For voltage ratios, use 20 log₁₀(V₂/V₁)
3. Gain vs. Loss:
□ Positive dB values indicate gain or increase.
□ Negative dB values indicate loss or decrease.
4. Common dB Values:
□ 3 dB ≈ doubling or halving of power
□ 6 dB ≈ doubling or halving of voltage
□ 10 dB = 10 times power change
□ 20 dB = 100 times power change
5. Addition vs. Multiplication:
□ In dB, gains/losses are added, not multiplied.
□ Example: 3 dB + 3 dB = 6 dB (4x power), not 9 dB
6. Absolute vs. Relative:
□ dB is a relative measure (ratio between two values).
□ dBm is an absolute measure (relative to 1 mW).
7. Fiber Optic Applications:
□ Attenuation often expressed in dB/km
□ Connector losses typically 0.3-0.5 dB each
□ Amplifier gains commonly 20-30 dB
8. Calculation Tips:
□ Use a calculator for precise values.
□ Round to 1 decimal place for most practical applications.
9. System Analysis:
□ Sum all gains and losses in dB for total system impact.
□ Compare final signal level to receiver sensitivity.
10. Conversion Accuracy:
□ When converting between dB and linear, maintain sufficient decimal places to avoid compounding rounding errors.
Remember, proficiency with dB conversions comes with practice. Regular use of these concepts in real-world scenarios will reinforce your understanding and intuition.
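The dB-versus-dBm distinction in point 6 is worth making concrete: dBm is anchored to an absolute 1 mW reference, so it converts to and from milliwatts directly. A minimal sketch:

```python
import math

def dbm_to_mw(dbm):
    """dBm is absolute power referenced to 1 mW."""
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    return 10 * math.log10(mw)
```

So 0 dBm is exactly 1 mW, +3 dBm is roughly 2 mW, and +20 dBm is 100 mW; adding a relative dB gain to a dBm level yields another dBm level.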
Mastering dB conversions is an essential skill for anyone working with transmission systems, especially in the field of fiber optics. The ability to quickly and accurately convert between linear
ratios and decibels allows you to:
1. Efficiently analyze signal levels throughout a system
2. Easily calculate total gains and losses
3. Communicate system specifications clearly and concisely
4. Make informed decisions about component selection and system design
As you've seen through the practical examples, dB calculations are integral to every aspect of fiber optic system design and maintenance, from calculating fiber attenuation to determining optical
power budgets.
The interactive calculators provided here serve as valuable tools for quick conversions, but remember that true proficiency comes with practice and application. We encourage you to use these concepts
regularly in your work or studies, and to explore further the many applications of dB calculations in your specific area of interest.
Whether you're designing a new fiber optic network, troubleshooting an existing system, or simply striving to deepen your understanding of transmission technologies, a solid grasp of dB conversions
will prove invaluable.
Continue to explore, practice, and apply these principles, and you'll find yourself navigating the world of signal levels and transmission systems with increasing confidence and expertise.
dB (Decibel) Conversions in Transmission (Tx) Systems
In the world of telecommunications, electronics, and particularly fiber optics, the decibel (dB) is a fundamental unit used to express ratios of power, voltage, or intensity. Understanding how to
convert between linear values (like power in watts) and decibels is crucial for designing, analyzing, and troubleshooting transmission systems.
This guide will walk you through the concepts of decibel calculations, provide interactive tools for quick conversions, and demonstrate how these principles apply to real-world scenarios in fiber
optic systems. Whether you're a seasoned engineer or a student just beginning to explore the field, mastering dB conversions will significantly enhance your ability to work with optical and
electrical systems effectively.
Let's dive into the world of decibels and discover how this logarithmic unit simplifies calculations and provides invaluable insights into signal strength, attenuation, and amplification in
transmission systems.
What is a Decibel (dB)?
A decibel (dB) is a logarithmic unit used to express the ratio between two values, typically power or amplitude. It's named after Alexander Graham Bell and is one-tenth of a bel (B), a unit that
proved too large for practical use.
Key characteristics of the decibel:
• It's a dimensionless unit, expressing a ratio rather than an absolute value.
• It uses a logarithmic scale, which allows for a wide range of values to be expressed in a manageable form.
• It's additive for power ratios, simplifying calculations involving multiple gains or losses.
Why Use Decibels?
1. Managing Large Variations: In transmission systems, signal levels can vary by factors of millions or billions. Decibels compress this huge range into more manageable numbers. For example, a power
ratio of 1,000,000:1 is simply 60 dB.
2. Simplifying Multiplicative Processes: In a transmission system, signals often undergo multiple stages of amplification or attenuation. With decibels, these multiplicative processes become simple
addition or subtraction.
3. Matching Human Perception: Our perception of sound and light intensity is roughly logarithmic. Decibels align well with how we perceive changes in signal strength.
4. Industry Standard: Using decibels allows for easy communication and comparison across different systems and technologies.
5. Ease of Calculation: When dealing with very large or very small numbers, decibel calculations can be easier and less error-prone than working with linear values.
By understanding and using decibels, engineers and technicians can more intuitively grasp signal changes, design systems more effectively, and communicate specifications more clearly.
dB Conversion Formulas
Power Ratio Conversions
1. Power Ratio to dB: dB=10×log10(PoutPin)\text{dB} = 10 \times \log_{10}\left(\frac{P_{\text{out}}}{P_{\text{in}}}\right)dB=10×log10(PinPout)
2. dB to Power Ratio: PoutPin=10(dB10)\frac{P_{\text{out}}}{P_{\text{in}}} = 10^{\left(\frac{\text{dB}}{10}\right)}PinPout=10(10dB)
Voltage or Amplitude Ratio Conversions
3. Voltage Ratio to dB: dB=20×log10(VoutVin)\text{dB} = 20 \times \log_{10}\left(\frac{V_{\text{out}}}{V_{\text{in}}}\right)dB=20×log10(VinVout)
4. dB to Voltage Ratio: VoutVin=10(dB20)\frac{V_{\text{out}}}{V_{\text{in}}} = 10^{\left(\frac{\text{dB}}{20}\right)}VinVout=10(20dB)
Note: The voltage/amplitude formulas assume equal impedances. For situations with different input and output impedances, additional calculations are necessary.
These formulas form the basis of our dB conversion calculators and are essential for analyzing signal levels in transmission systems.
Interactive dB Calculators
Our interactive dB calculators allow you to quickly convert between linear ratios and decibels for both power and voltage/amplitude. Here's how to use them:
Power Ratio Conversions
1. Power Ratio to dB:
□ Input: Enter the power ratio (P_out / P_in) in the first field.
□ Output: The calculator will display the corresponding dB value.
2. dB to Power Ratio:
□ Input: Enter the dB value in the second field.
□ Output: The calculator will display the corresponding power ratio.
Voltage Ratio Conversions
3. Voltage Ratio to dB:
□ Input: Enter the voltage ratio (V_out / V_in) in the third field.
□ Output: The calculator will display the corresponding dB value.
4. dB to Voltage Ratio:
□ Input: Enter the dB value in the fourth field.
□ Output: The calculator will display the corresponding voltage ratio.
Using the Calculators
• Enter values in any input field to see real-time results in the corresponding output field.
• For ratio inputs (power and voltage), use positive numbers only.
• For dB inputs, any real number is acceptable.
• Results are displayed with two decimal places for dB values and four decimal places for ratios.
Example Calculations
1. Power Ratio to dB:
□ Input: 2 (representing a doubling of power)
□ Output: 3.01 dB
2. dB to Power Ratio:
□ Input: -3 dB
□ Output: 0.5012 (representing approximately half power)
3. Voltage Ratio to dB:
□ Input: 2 (representing a doubling of voltage)
□ Output: 6.02 dB
4. dB to Voltage Ratio:
□ Input: -6 dB
□ Output: 0.5012 (representing approximately half voltage)
Interpreting Results
• Positive dB values indicate a gain or increase in power/voltage.
• Negative dB values indicate a loss or decrease in power/voltage.
• A 3 dB change represents approximately a doubling (gain) or halving (loss) of power.
• A 6 dB change represents approximately a doubling (gain) or halving (loss) of voltage.
Remember, these calculators assume equal impedances for voltage conversions. For more complex scenarios involving changing impedances, additional calculations may be necessary.
Practical Applications in Fiber Optics
Understanding dB conversions is crucial in fiber optic systems. Here are some practical applications:
1. Fiber Attenuation
Fiber optic cables experience signal loss over distance, typically measured in dB/km.
Example: A single-mode fiber has an attenuation of 0.3 dB/km at 1550 nm. For a 50 km link:
• Total attenuation = 0.3 dB/km × 50 km = 15 dB
• Power ratio = 10^(-15/10) ≈ 0.0316
This means only about 3.16% of the input power reaches the end of the 50 km fiber.
2. Optical Amplifier Gain
Optical amplifiers boost signal power, often specified in dB.
Example: An Erbium Doped Fiber Amplifier (EDFA) provides 20 dB gain.
• Power ratio = 10^(20/10) = 100
This amplifier increases the signal power by a factor of 100.
3. Connector and Splice Losses
Each connector or splice in a fiber system introduces some loss.
Example: A system has 6 connectors, each with a 0.5 dB loss.
• Total connector loss = 6 × 0.5 dB = 3 dB
• Power ratio = 10^(-3/10) ≈ 0.5012
About 50.12% of power is transmitted through all connectors.
4. Optical Power Budget
Calculating the power budget ensures sufficient signal strength at the receiver.
• Transmitter power: +3 dBm
• Fiber attenuation: -15 dB (from example 1)
• Connector losses: -3 dB (from example 3)
• Amplifier gain: +20 dB (from example 2)
• Receiver sensitivity: -20 dBm
Power budget calculation: 3 dBm - 15 dB - 3 dB + 20 dB = 5 dBm at receiver
Since 5 dBm > -20 dBm (receiver sensitivity), the link is viable.
5. Optical Return Loss (ORL)
ORL measures the total amount of light reflected back to the source.
Example: A fiber link has an ORL of 40 dB.
• Reflection ratio = 10^(-40/10) = 0.0001
Only 0.01% of the light is reflected back, indicating a good connection.
These examples demonstrate how dB calculations are integral to designing, troubleshooting, and optimizing fiber optic systems. By mastering these conversions, you can effectively analyze and improve
optical network performance.
Key Points to Remember About dB Conversions
1. Logarithmic Nature:
□ dB is a logarithmic unit, compressing large ranges into manageable numbers.
□ Small dB changes can represent significant linear changes.
2. Power vs. Voltage:
□ For power ratios, use 10 log₁₀(P₂/P₁)
□ For voltage ratios, use 20 log₁₀(V₂/V₁)
3. Gain vs. Loss:
□ Positive dB values indicate gain or increase.
□ Negative dB values indicate loss or decrease.
4. Common dB Values:
□ 3 dB ≈ doubling or halving of power
□ 6 dB ≈ doubling or halving of voltage
□ 10 dB = 10 times power change
□ 20 dB = 100 times power change
5. Addition vs. Multiplication:
□ In dB, gains/losses are added, not multiplied.
□ Example: 3 dB + 3 dB = 6 dB (4x power), not 9 dB
6. Absolute vs. Relative:
□ dB is a relative measure (ratio between two values).
□ dBm is an absolute measure (relative to 1 mW).
7. Fiber Optic Applications:
□ Attenuation often expressed in dB/km
□ Connector losses typically 0.3-0.5 dB each
□ Amplifier gains commonly 20-30 dB
8. Calculation Tips:
□ Use a calculator for precise values.
□ Round to 1 decimal place for most practical applications.
9. System Analysis:
□ Sum all gains and losses in dB for total system impact.
□ Compare final signal level to receiver sensitivity.
10. Conversion Accuracy:
□ When converting between dB and linear, maintain sufficient decimal places to avoid compounding rounding errors.
Remember, proficiency with dB conversions comes with practice. Regular use of these concepts in real-world scenarios will reinforce your understanding and intuition.
Mastering dB conversions is an essential skill for anyone working with transmission systems, especially in the field of fiber optics. The ability to quickly and accurately convert between linear
ratios and decibels allows you to:
1. Efficiently analyze signal levels throughout a system
2. Easily calculate total gains and losses
3. Communicate system specifications clearly and concisely
4. Make informed decisions about component selection and system design
As you've seen through the practical examples, dB calculations are integral to every aspect of fiber optic system design and maintenance, from calculating fiber attenuation to determining optical
power budgets.
The interactive calculators provided here serve as valuable tools for quick conversions, but remember that true proficiency comes with practice and application. We encourage you to use these concepts
regularly in your work or studies, and to explore further the many applications of dB calculations in your specific area of interest.
Whether you're designing a new fiber optic network, troubleshooting an existing system, or simply striving to deepen your understanding of transmission technologies, a solid grasp of dB conversions
will prove invaluable.
Continue to explore, practice, and apply these principles, and you'll find yourself navigating the world of signal levels and transmission systems with increasing confidence and expertise.
dB (Decibel) Conversions in Transmission (Tx) Systems
In the world of telecommunications, electronics, and particularly fiber optics, the decibel (dB) is a fundamental unit used to express ratios of power, voltage, or intensity. Understanding how to
convert between linear values (like power in watts) and decibels is crucial for designing, analyzing, and troubleshooting transmission systems.
This guide will walk you through the concepts of decibel calculations, provide interactive tools for quick conversions, and demonstrate how these principles apply to real-world scenarios in fiber
optic systems. Whether you're a seasoned engineer or a student just beginning to explore the field, mastering dB conversions will significantly enhance your ability to work with optical and
electrical systems effectively.
Let's dive into the world of decibels and discover how this logarithmic unit simplifies calculations and provides invaluable insights into signal strength, attenuation, and amplification in
transmission systems.
What is a Decibel (dB)?
A decibel (dB) is a logarithmic unit used to express the ratio between two values, typically power or amplitude. It's named after Alexander Graham Bell and is one-tenth of a bel (B), a unit that
proved too large for practical use.
Key characteristics of the decibel:
• It's a dimensionless unit, expressing a ratio rather than an absolute value.
• It uses a logarithmic scale, which allows for a wide range of values to be expressed in a manageable form.
• It's additive for power ratios, simplifying calculations involving multiple gains or losses.
Why Use Decibels?
1. Managing Large Variations: In transmission systems, signal levels can vary by factors of millions or billions. Decibels compress this huge range into more manageable numbers. For example, a power
ratio of 1,000,000:1 is simply 60 dB.
2. Simplifying Multiplicative Processes: In a transmission system, signals often undergo multiple stages of amplification or attenuation. With decibels, these multiplicative processes become simple
addition or subtraction.
3. Matching Human Perception: Our perception of sound and light intensity is roughly logarithmic. Decibels align well with how we perceive changes in signal strength.
4. Industry Standard: Using decibels allows for easy communication and comparison across different systems and technologies.
5. Ease of Calculation: When dealing with very large or very small numbers, decibel calculations can be easier and less error-prone than working with linear values.
By understanding and using decibels, engineers and technicians can more intuitively grasp signal changes, design systems more effectively, and communicate specifications more clearly.
dB Conversion Formulas
Power Ratio Conversions
1. Power Ratio to dB: dB = 10 × log₁₀(P_out / P_in)
2. dB to Power Ratio: P_out / P_in = 10^(dB / 10)
Voltage or Amplitude Ratio Conversions
3. Voltage Ratio to dB: dB = 20 × log₁₀(V_out / V_in)
4. dB to Voltage Ratio: V_out / V_in = 10^(dB / 20)
Note: The voltage/amplitude formulas assume equal impedances. For situations with different input and output impedances, additional calculations are necessary.
These formulas form the basis of our dB conversion calculators and are essential for analyzing signal levels in transmission systems.
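As a rough sketch of how the four formulas translate into code (the function names here are illustrative, not taken from any particular library), they can be written in Python as:

```python
import math

def power_ratio_to_db(ratio):
    # Formula 1: dB = 10 * log10(P_out / P_in)
    return 10 * math.log10(ratio)

def db_to_power_ratio(db):
    # Formula 2: P_out / P_in = 10^(dB / 10)
    return 10 ** (db / 10)

def voltage_ratio_to_db(ratio):
    # Formula 3: dB = 20 * log10(V_out / V_in), equal impedances assumed
    return 20 * math.log10(ratio)

def db_to_voltage_ratio(db):
    # Formula 4: V_out / V_in = 10^(dB / 20)
    return 10 ** (db / 20)

# Doubling the power is about +3 dB; doubling the voltage is about +6 dB.
print(round(power_ratio_to_db(2), 2))    # 3.01
print(round(voltage_ratio_to_db(2), 2))  # 6.02
```

Note the factor of 10 versus 20: power already scales as the square of voltage, which is where the doubled coefficient comes from.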
Interactive dB Calculators
Our interactive dB calculators allow you to quickly convert between linear ratios and decibels for both power and voltage/amplitude. Here's how to use them:
Power Ratio Conversions
1. Power Ratio to dB:
□ Input: Enter the power ratio (P_out / P_in) in the first field.
□ Output: The calculator will display the corresponding dB value.
2. dB to Power Ratio:
□ Input: Enter the dB value in the second field.
□ Output: The calculator will display the corresponding power ratio.
Voltage Ratio Conversions
3. Voltage Ratio to dB:
□ Input: Enter the voltage ratio (V_out / V_in) in the third field.
□ Output: The calculator will display the corresponding dB value.
4. dB to Voltage Ratio:
□ Input: Enter the dB value in the fourth field.
□ Output: The calculator will display the corresponding voltage ratio.
Using the Calculators
• Enter values in any input field to see real-time results in the corresponding output field.
• For ratio inputs (power and voltage), use positive numbers only.
• For dB inputs, any real number is acceptable.
• Results are displayed with two decimal places for dB values and four decimal places for ratios.
Example Calculations
1. Power Ratio to dB:
□ Input: 2 (representing a doubling of power)
□ Output: 3.01 dB
2. dB to Power Ratio:
□ Input: -3 dB
□ Output: 0.5012 (representing approximately half power)
3. Voltage Ratio to dB:
□ Input: 2 (representing a doubling of voltage)
□ Output: 6.02 dB
4. dB to Voltage Ratio:
□ Input: -6 dB
□ Output: 0.5012 (representing approximately half voltage)
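One way to sanity-check results like these is a round trip: convert a ratio to dB and back, which should recover the original value. A minimal sketch:

```python
import math

def round_trip_power(ratio):
    # Power ratio -> dB -> power ratio; should return the input unchanged.
    db = 10 * math.log10(ratio)
    return 10 ** (db / 10)

# Check a few representative ratios, including a loss (0.5) and a large gain.
for ratio in (0.5, 1.0, 2.0, 100.0):
    assert math.isclose(round_trip_power(ratio), ratio)

print("round trips OK")
```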
Interpreting Results
• Positive dB values indicate a gain or increase in power/voltage.
• Negative dB values indicate a loss or decrease in power/voltage.
• A 3 dB change represents approximately a doubling (gain) or halving (loss) of power.
• A 6 dB change represents approximately a doubling (gain) or halving (loss) of voltage.
Remember, these calculators assume equal impedances for voltage conversions. For more complex scenarios involving changing impedances, additional calculations may be necessary.
Practical Applications in Fiber Optics
Understanding dB conversions is crucial in fiber optic systems. Here are some practical applications:
1. Fiber Attenuation
Fiber optic cables experience signal loss over distance, typically measured in dB/km.
Example: A single-mode fiber has an attenuation of 0.3 dB/km at 1550 nm. For a 50 km link:
• Total attenuation = 0.3 dB/km × 50 km = 15 dB
• Power ratio = 10^(-15/10) ≈ 0.0316
This means only about 3.16% of the input power reaches the end of the 50 km fiber.
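The same attenuation arithmetic, sketched in Python with the numbers from this example:

```python
attenuation_db_per_km = 0.3  # single-mode fiber at 1550 nm
length_km = 50

total_loss_db = attenuation_db_per_km * length_km  # 0.3 dB/km * 50 km
delivered_fraction = 10 ** (-total_loss_db / 10)   # linear power ratio

print(round(total_loss_db, 1))             # 15.0 dB total attenuation
print(round(delivered_fraction * 100, 2))  # 3.16 (% of input power delivered)
```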
2. Optical Amplifier Gain
Optical amplifiers boost signal power, often specified in dB.
Example: An Erbium Doped Fiber Amplifier (EDFA) provides 20 dB gain.
• Power ratio = 10^(20/10) = 100
This amplifier increases the signal power by a factor of 100.
3. Connector and Splice Losses
Each connector or splice in a fiber system introduces some loss.
Example: A system has 6 connectors, each with a 0.5 dB loss.
• Total connector loss = 6 × 0.5 dB = 3 dB
• Power ratio = 10^(-3/10) ≈ 0.5012
About 50.12% of power is transmitted through all connectors.
4. Optical Power Budget
Calculating the power budget ensures sufficient signal strength at the receiver.
• Transmitter power: +3 dBm
• Fiber attenuation: -15 dB (from example 1)
• Connector losses: -3 dB (from example 3)
• Amplifier gain: +20 dB (from example 2)
• Receiver sensitivity: -20 dBm
Power budget calculation: 3 dBm - 15 dB - 3 dB + 20 dB = 5 dBm at receiver
Since 5 dBm > -20 dBm (receiver sensitivity), the link is viable.
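Because dB values simply add, the budget above reduces to a sum. A sketch using the example's figures:

```python
tx_power_dbm = 3.0          # transmitter output power
fiber_loss_db = -15.0       # 50 km of fiber at 0.3 dB/km
connector_loss_db = -3.0    # 6 connectors at 0.5 dB each
amplifier_gain_db = 20.0    # EDFA gain
rx_sensitivity_dbm = -20.0  # minimum power the receiver needs

received_dbm = tx_power_dbm + fiber_loss_db + connector_loss_db + amplifier_gain_db
margin_db = received_dbm - rx_sensitivity_dbm

print(received_dbm)  # 5.0 dBm at the receiver
print(margin_db)     # 25.0 dB of margin: the link is viable
```

The 25 dB figure is the link margin: how much additional loss the system could tolerate before the received power drops below the receiver's sensitivity.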
5. Optical Return Loss (ORL)
ORL measures the total amount of light reflected back to the source.
Example: A fiber link has an ORL of 40 dB.
• Reflection ratio = 10^(-40/10) = 0.0001
Only 0.01% of the light is reflected back, indicating a good connection.
These examples demonstrate how dB calculations are integral to designing, troubleshooting, and optimizing fiber optic systems. By mastering these conversions, you can effectively analyze and improve
optical network performance.
Key Points to Remember About dB Conversions
1. Logarithmic Nature:
□ dB is a logarithmic unit, compressing large ranges into manageable numbers.
□ Small dB changes can represent significant linear changes.
2. Power vs. Voltage:
□ For power ratios, use 10 log₁₀(P₂/P₁)
□ For voltage ratios, use 20 log₁₀(V₂/V₁)
3. Gain vs. Loss:
□ Positive dB values indicate gain or increase.
□ Negative dB values indicate loss or decrease.
4. Common dB Values:
□ 3 dB ≈ doubling or halving of power
□ 6 dB ≈ doubling or halving of voltage
□ 10 dB = 10 times power change
□ 20 dB = 100 times power change
5. Addition vs. Multiplication:
□ In dB, gains/losses are added, not multiplied.
□ Example: 3 dB + 3 dB = 6 dB (4x power), not 9 dB
6. Absolute vs. Relative:
□ dB is a relative measure (ratio between two values).
□ dBm is an absolute measure (relative to 1 mW).
7. Fiber Optic Applications:
□ Attenuation often expressed in dB/km
□ Connector losses typically 0.3-0.5 dB each
□ Amplifier gains commonly 20-30 dB
8. Calculation Tips:
□ Use a calculator for precise values.
□ Round to 1 decimal place for most practical applications.
9. System Analysis:
□ Sum all gains and losses in dB for total system impact.
□ Compare final signal level to receiver sensitivity.
10. Conversion Accuracy:
□ When converting between dB and linear, maintain sufficient decimal places to avoid compounding rounding errors.
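Point 5 in the list above (adding dB corresponds to multiplying linear ratios) can be checked directly; a small sketch:

```python
import math

def db_to_power_ratio(db):
    return 10 ** (db / 10)

# 3 dB + 3 dB = 6 dB, which is roughly a 4x power change, not 9 dB.
combined = db_to_power_ratio(3.01 + 3.01)
multiplied = db_to_power_ratio(3.01) * db_to_power_ratio(3.01)

assert math.isclose(combined, multiplied)
print(round(combined, 2))  # 4.0
```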
Remember, proficiency with dB conversions comes with practice. Regular use of these concepts in real-world scenarios will reinforce your understanding and intuition.
Mastering dB conversions is an essential skill for anyone working with transmission systems, especially in the field of fiber optics. The ability to quickly and accurately convert between linear
ratios and decibels allows you to:
1. Efficiently analyze signal levels throughout a system
2. Easily calculate total gains and losses
3. Communicate system specifications clearly and concisely
4. Make informed decisions about component selection and system design
As you've seen through the practical examples, dB calculations are integral to every aspect of fiber optic system design and maintenance, from calculating fiber attenuation to determining optical
power budgets.
The interactive calculators provided here serve as valuable tools for quick conversions, but remember that true proficiency comes with practice and application. We encourage you to use these concepts
regularly in your work or studies, and to explore further the many applications of dB calculations in your specific area of interest.
Whether you're designing a new fiber optic network, troubleshooting an existing system, or simply striving to deepen your understanding of transmission technologies, a solid grasp of dB conversions
will prove invaluable.
Continue to explore, practice, and apply these principles, and you'll find yourself navigating the world of signal levels and transmission systems with increasing confidence and expertise.
Ready to Revolutionize Your Fiber Optic Capabilities?
Whether you need a standard product or a fully customized solution, FSI has the expertise to meet your unique requirements.
What's new in Stata 16? A full rundown of the highlights
All program files from the Econometrics Circle methodology series, micro databases, and various software are available in the community group. You are welcome to visit the group to exchange ideas.
About two years ago we introduced the new features of Stata 15, and a year ago we covered 30 econometric methods that are changing the world. To keep pace with a rapidly changing academic environment, StataCorp has worked to iterate to version 16; otherwise it would lose the means to compete with open-source software such as R and Python. For each new feature below, we attach the corresponding materials, which young scholars can consult article by article.
Stata 16 is a big release, which our releases usually are. This one is broader than usual. It ranges from lasso to Python and from multiple datasets in memory to multiple chains in Bayesian analysis.
The new features of Stata 16 are as follows:
The maximum number of variables has been expanded. Oh, and in Stata/MP, Stata matrices can now be up to 65,534 x 65,534, meaning you can fit models with over 65,000 right-hand-side variables. Meanwhile, Mata matrices
remain limited only by memory.
Here are my comments on the highlights.
1. Lasso, both for prediction and for inference
a. In-depth analysis of regression methods (OLS, RIDGE, ENET, LASSO, SCAD, MCP, QR)
b. High-dimensional regression methods: have you used Ridge, Lasso, and Elastic Net?
There are two parts to our implementation of lasso: prediction and inference. I suspect inference will be of more interest to our users, but we needed prediction to implement inference. By the way,
when I say lasso, I mean lasso, elastic net, and square-root lasso, but if you want a features list, click the title.
Let’s start with lasso for prediction. If you type,
. lasso linear y x1 x2 x3 ... x999
lasso will select the covariates from the x‘s specified and fit the model on them. lasso will be unlikely to choose the covariates that belong in the true model, but it will choose covariates that
are collinear with them, and that works a treat for prediction. If English is not your first language, by “works a treat”, I mean great. Anyway, the lasso command is for prediction, and standard
errors for the covariates it selects are not reported because they would be misleading.
Concerning inference, we provide four lasso-based methods: double selection, cross-fit partialing out, and two more. If you type (lasso for inference),
. dsregress y x1, controls(x2-x999)
then, conceptually but not actually, y will be fit on x1 and the variables lasso selects from x2-x999. That’s not how the calculation is made because the variables lasso selects are not identical to
the true variables that belong in the model. I said earlier that they are correlated with the true variables, and they are. Another way to think about selection is that lasso estimates the variables
to be selected and, as with all estimation, that is subject to error. Anyway, the inference calculations are robust to those errors. Reported will be the coefficient and its standard error for x1. I
specified one variable of special interest in the example, but you can specify however many you wish.
2. Reproducible and automatically updating reports
The inelegant title above is trying to say (1) reports that reproduce themselves just as they were originally and (2) reports that, when run again, update themselves by running the analysis on the
latest data. Stata has always been strong on both, and we have added more features. I don’t want to downplay the additions, but neither do I want to discuss them. Click the title to learn about them.
I think what’s important is another aspect of what we did. The real problem was that we never told you how to use the reporting features. Now we do in an all-new manual. We tell you and we show you,
with examples and workflows. Here’s a link to the manual so you can judge for yourself.
3. New meta-analysis suite
Stata is known for its community-contributed meta-analysis. Now there is an official StataCorp suite as well. It’s complete and easy to use. And yes, it has funnel plots and forest plots, and bubble
plots and L’Abbé plots.
4. Revamped and expanded choice modeling (margins works everywhere)
Choice modeling is jargon for conditional logit, mixed logit, multinomial probit, and other procedures that model the probability of individuals making a particular choice from the alternatives
available to each of them.
We added a new command to fit mixed logit models, and we rewrote all the rest. The new commands are easier to use and have new features. Old commands continue to work under version control.
margins can now be used after fitting any choice model. margins answers questions about counterfactuals and can even answer them for any one of the alternatives. You can finally obtain answers to
questions like, “How would a $10,000 increase in income affect the probability people take public transportation to work?”
The new commands are easier to use because you must first cmset your data. That may not sound like a simplification, but it simplifies the syntax of the remaining commands because it gets details out
of the way. And it has another advantage. It tells Stata what your data should look like so Stata can run consistency checks and flag potential problems.
Finally, we created a new [CM] Choice Modeling Manual. Everything you need to know about choice modeling can now be found in one place.
5. Integration of Python with Stata
If you don’t know what Python is, put down your quill pen, dig out your acoustic modem and plug it in, push your telephone handset firmly into the coupler, and visit Wikipedia. Python has become an
exceedingly popular programming language with extensive libraries for writing numerical, machine learning, and web scraping routines.
Stata’s new relationship with Python is the same as its relationship with Mata. You can use it interactively from the Stata prompt, in do-files, and in ado-files. You can even put Python subroutines
at the bottom of ado-files, just as you do Mata subroutines. Or put both. Stata’s flexible.
Python can access Stata results and post results back to Stata using the Stata Function Interface (sfi), the Python module that we provide.
6. Bayesian predictions, multiple chains, and more
d. Bayes factors and their implementation in JASP: what is the much-discussed Bayesian statistics?
We have lots of new Bayesian features.
We now have multiple chains. Has the MCMC converged? Estimate models using multiple chains, and the maximum of the Gelman-Rubin convergence diagnostics will be reported. If it has not yet converged, do
more simulations. Still hasn’t converged? Now you can obtain the Gelman-Rubin convergence diagnostic for each parameter. If the same parameter turns up again and again as the culprit, you know where
the problem lies.
We now provide Bayesian predictions for outcomes and functions of them. Bayesian predictions are calculated from the simulations that were run to fit your model, so there are a lot of them. The
predictions will be saved in a separate dataset. Once you have the predictions, we provide commands so that you can graph summaries of them and perform hypothesis testing. And you can use them to
obtain posterior predictive p-values to check the fit of your model.
7. Extended regression models (ERMs) for panel data
d. A powerful tool for handling multiple high-dimensional fixed effects in panel data, with instrumental variables for endogeneity
ERMs fits models with problems. These problems can be any combination of (1) endogenous and exogenous sample selection, (2) endogenous covariates, also known as unobserved confounders, and (3)
nonrandom treatment assignment.
What’s new is that ERMs can now be used to fit models with panel (2-level) data. Random effects are added to each equation. Correlations between the random effects are reported. You can test them,
jointly or singly. And you can suppress them, jointly or singly.
Ermistatas got a fourth antenna.
8. Importing of SAS and SPSS datasets
a. Master every aspect of Stata in 6 charts: who else?
New command import sas imports .sas7bdat data files and .sas7bcat value-label files.
New command import spss imports IBM SPSS version 16 or higher .sav and .zsav files.
I recommend using them from their dialog boxes. You can preview the data and select the variables and observations you want to import.
9. Flexible nonparametric series regression
b. Quantile regression, Oaxaca decomposition, QUAIDS models, and nonparametric estimation programs
New command npregress series fits models like
y = g(x[1], x[2], x[3]) + ε
No functional-form restrictions are placed on g(), but you can impose separability restrictions. The new command can fit
y = g[1](x[[1]]) + g[2](x[2], x[3]) + ε
y = g[1](x[1], x[2]) + g[3](x[3]) + ε
y = g[1](x[1], x[3]) + g[2](x[2]) + ε
and even fit
y = b[1]x[1] + g[2](x[2], x[3]) + ε
y = b[1]x[1] + b[2]x[2] + g[3](x[3]) + ε
I mentioned that lasso can perform inference in models like
. dsregress y x1, controls(x2-x999)
If you know that variables x12, x19, and x122 appear in the model, but do not know the functional form, you could use npregress series to obtain inference. The command
. npregress series y x12 x19 x122, asis(x1)
fits y = b[1]x[1] + g(x[12], x[19], x[122]) + ε
and, among other statistics, reports the coefficient and standard error of b[1].
10. Multiple datasets in memory, meaning frames
I’m a sucker for data management commands. Even so, I do not think I’m exaggerating when I say that frames will change the way you work. If you are not interested, bear with me. I think I can change
your mind.
You can have multiple datasets in memory. Each is stored in a named frame. At any instant, one of the frames is the current frame. Most Stata commands operate on the data in the current frame. It’s
the commands that work across frames that will change the way you work, but before you can use them, you have to learn how to use frames. So here’s a bit of me using frames:
. use persons
. frame create counties
. frame counties: use counties
. tabulate cntyid
. frame counties: tabulate cntyid
Well, I’m thinking at this point, it appears I could merge persons.dta with counties.dta, except I’m not thinking about merging them. I’m thinking about linking them.
. frlink m:1 cntyid, frame(counties)
Linking is frame’s equivalent of merge. It does not change either dataset except to add one variable to the data in the current frame. New variable counties is created in this case. If I were to drop
the variable, I would eliminate the link, but I’m not going to do that. I’m curious whether the counties in which people reside in persons.dta were all found in counties.dta. I can find out by typing
. count if counties==.
If 1,000 were reported, I would now drop counties, and it would be as if I had never linked the two frames.
Let’s assume count reported 0. Or 4, which is a small enough number that I don’t care for this demonstration. Now watch this:
. generate relinc = income / frget(counties, medinc)
I just calculated each person’s income relative to the median income in the county in which he or she resides, and median income was in the counties dataset, not the persons dataset!
Next, I will copy to the current frame all the variables in counties that start with pop. The command that does this, frget, will use the link and copy the appropriate observations.
. frget pop*, from(counties)
. describe pop*
. generate ln_pop18plus = ln(pop18plus)
. generate ln_income = ln(income)
. correlate ln_income ln_pop18plus
I hope I have convinced you that frames are of interest. If not, this is only one of the five ways frames will change how you work with Stata. Maybe one of the other four ways will convince you.
Visit the overview of frames page at stata.com.
11. Sample-size analysis for confidence intervals
The goal is to optimally allocate study resources when CIs are to be used for inference or, said differently, to estimate the sample size required to achieve the desired precision of a CI in a
planned study. One mean, two independent means, or two paired means. Or one variance.
12. Nonlinear DSGE models
g. The 2018 Nobel Prize in Economics: Nordhaus and Romer, spring has truly arrived for macroeconomics
DSGE stands for Dynamic Stochastic General Equilibrium. Stata previously fit linear DSGEs. Now it can fit nonlinear ones too.
I know this either interests you or does not, and if it does not, there will be no changing your mind. It interests me, and what makes the new feature spectacular is how easy models are to specify
and how readable the code is afterwards. You could almost teach from it. If this interests you, click through.
13. Multiple-group IRT
IRT is widely used in psychology and management; it will be covered later together with SEM, latent growth models, and related methods.
IRT (Item Response Theory) is about the relationship between latent traits and the instruments designed to measure them. An IRT analysis might be about scholastic ability (the latent trait) and a
college admission test (the instrument).
Stata 16’s new IRT features produce results for data containing different groups of people. Do instruments measure latent traits in the same way for different populations?
Here is an example. Do students in urban and rural schools perform differently on a test intended to measure mathematical ability? Using Stata 16, you can fit a 2-parameter logistic model comparing
the groups by typing
. irt 2pl item1-item10, group(urbanrural)
What’s new is the group() option.
Does an instrument measuring depression perform the same today as it did five years ago? You can fit a graded-response model that compares the groups by typing
. irt grm item-item10, group(timecategory)
And IRT’s postestimation graphs have been updated to reveal the differences among groups when a group() model has been fit.
The examples I mentioned both concerned two groups, but IRT can handle any number of them.
14. Panel-data Heckman-selection models
a. Heckman methods and programs for panel data: dynamic models, binary panels, and endogenous selection all supported
d. Operational programs for PSM, RDD, Heckman, and panel models: a selective series of highlight articles
Heckman selection adjusts for bias when some outcomes are missing not at random.
The classic example is economists’ modeling of wages. Wages are observed only for those who work, and whether you work is unlikely to be random. Think about it. Should I work or go to school? Should
I work or live off my meager savings? Should I work or retire? Few people would be willing to make those decisions by flipping a coin.
If you worry about such problems and are using panel data, the new xtheckman command is the solution.
15-22. Seven more new features
I will summarize the last seven features briefly. My briefness makes them no less important, especially if they interest you.
15. NLMEs with lags: multiple-dose pharmacokinetic models and more can now be fit by Stata’s menl command for fitting nonlinear mixed-effects regression. This includes fitting multiple-dose models.
16. Heteroskedastic ordered probit joins the ordered probit models that Stata already could fit.
17. Graph sizes in inches, centimeters, and printer points can now be specified. Specify 1in, 1.4cm, or 12pt.
f. Master every aspect of Stata in 6 charts: who else?
18. Programmers: Mata’s new Quadrature class numerically integrates y = f(x) over the interval a to b, where a may be -∞ or finite and b may be finite or +∞.
19. Programmers: Mata’s new Linear programming class solves linear programs using an interior-point method. It minimizes or maximizes a linear objective function subject to linear constraints
(equality and inequality) and boundary conditions.
b. Programming Stata: 10 articles you will not regret, with code and annotations
20. Do-file Editor: Autocompletion and more. The editor now provides syntax highlighting for Python and Markdown. And it autocompletes Stata commands, quotes, parentheses, braces, and brackets. Last
but not least, spaces as well as tabs can be used for indentation.
21. Stata for Mac: Dark Mode and tabbed windows. Dark mode is a color scheme that darkens background windows and controls so that they do not cause eye strain or distract from what you are working
on. Stata now supports it. Meanwhile, tabbed windows conserve screen real estate. Stata has lots of windows. With the exception of the Results window, they come and go as they are needed. Now you can
combine all or some into one. Click the tab, change the window.
Our community group has version 15 of Stata MP, Stata SE, and Stata for Mac.
22. Panel data mixed logit
That’s it
The highlights are 58% of what’s new in Stata 16, measured by the number of text lines required to describe them. Here is a sampling of what else is new.
ranksum has new option exact to specify that exact p-values be computed for the Wilcoxon rank-sum test.
New setting set iterlog controls whether estimation commands display iteration logs.
menl has new option lrtest that reports a likelihood-ratio test comparing the nonlinear mixed-effects model with the model fit by ordinary nonlinear regression.
The bayes: prefix command now supports the new hetoprobit command so that you can fit Bayesian heteroskedastic ordered probits.
The svy: prefix works with more estimation commands, namely, existing command hetoprobit and new commands cmmixlogit and cmxtmixlogit.
New command export sasxport8 exports datasets to SAS XPORT Version 8 Transport format.
New command splitsample splits data into random samples. It can create simple random samples, clustered samples, and balanced random samples. Balanced splitting can be used for matched-treatment assignment.
I could go on. Type help whatsnew15to16 when you get your copy of Stata 16 to find out all that’s new.
How do you find the derivative of y =sqrt(x)? | Socratic
How do you find the derivative of #y =sqrt(x)#?
1 Answer
Convert the square root to its exponential form and then use the power rule.
$y = \sqrt{x} = {x}^{\frac{1}{2}}$
Now bring the power of $\frac{1}{2}$ down as a coefficient and then subtract 1 from the current power of $\frac{1}{2}$. Evaluate the fractions and simplify, rewriting the negative exponent as a positive one in the denominator:
$y ' = \frac{1}{2} {x}^{\left(\frac{1}{2} - 1\right)} = \frac{1}{2} {x}^{\left(\frac{1}{2} - \frac{2}{2}\right)} = \frac{1}{2} {x}^{\left(- \frac{1}{2}\right)} = \frac{1}{2 {x}^{\frac{1}{2}}} = \frac{1}{2 \sqrt{x}}$
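As a quick numerical check of this result (a Python sketch, not part of the original answer), the formula 1/(2√x) can be compared with a central finite-difference approximation of the derivative:

```python
import math

def f(x):
    return math.sqrt(x)

def derivative_exact(x):
    # y' = 1 / (2 * sqrt(x)), from the power rule above
    return 1 / (2 * math.sqrt(x))

def derivative_numeric(x, h=1e-6):
    # Central difference: (f(x + h) - f(x - h)) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

# The two should agree closely at any x > 0.
for x in (0.25, 1.0, 4.0, 9.0):
    assert abs(derivative_exact(x) - derivative_numeric(x)) < 1e-6

print("power-rule derivative matches the finite difference")
```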
Toshkent farmatsevtika instituti fizika, matematika va axborot texnologiyalari kafedrasi
General Theory of Relativity:
Sometimes known as the Theory of General Relativity, this was Albert Einstein's refinement (published in 1916) of his earlier Special Theory of Relativity and Sir Isaac Newton's much earlier Law of
Universal Gravitation. The theory holds that acceleration and gravity are indistinguishable - the Principle of Equivalence - and describes gravity as a property of the geometry (more specifically a
warpage) of space-time. Among other things, the theory predicts the existence of black holes, an expanding universe, time dilation, length contraction, gravitational light bending and the curvature
of space-time. Although classical physics can be considered a good approximation for everyday purposes, the predictions of general relativity differ significantly from those of classical physics.
They have become generally accepted in modern physics, however, and have been confirmed by all observations and experiments to date.
Geodesic:
The shortest path between two points in curved space. It originally meant the shortest route between two points on the Earth's surface (namely a segment of a great circle) but, since its application
in general relativity, it has come to mean the generalization of the notion of a straight line as applied to all curved spaces. In non-curved three-dimensional space, the geodesic is a straight line.
In general relativity, a free falling body (on which only gravitational forces are acting) follows a geodesic in curved four-dimensional space-time.
Calculator with simplify mixed numbers
calculator with simplify mixed numbers Related topics: matrices solver
what are some examples from real life in which you might use polynomial division?
least common denominator worksheet lcd
Download Ebook Algebra Hungerford
arithmetic progression ti-84
radical form of square root of 226
download "interpreting engineering drawings" cd
Inverse Of Absolute Value
algebra radicals problems
free maths puzzles for 1 grade
adding and subtracting fractions on powerpoint that i can copy
hard algebra calculator
Author Message
ihkeook Posted: Monday 12th of Sep 08:26
Hello math wiz. I am in a real mess. I simply don’t know what to do. You know, I am having a tough time with my math and need immediate assistance with calculator with simplify
mixed numbers. Anyone you know whom I can contact with system of equations, radical expressions and distance of points? I tried hard to get a tutor, but failed. They are hard to
find and also cost a lot. It’s also difficult to find someone fast enough and I have this quiz coming up. Any advice on what I should do? I would very much appreciate a quick response.
From: holland
Back to top
nxu Posted: Tuesday 13th of Sep 08:18
Oh boy! You seem to be one of the best students in your class. Well, use Algebrator to solve those problems . The software will give you a detailed step by step solution. You can
read the explanation and understand the problems. Hopefully your calculator with simplify mixed numbers class will be the best one.
From: Siberia,
Russian Federation
Back to top
3Di Posted: Wednesday 14th of Sep 08:58
I had always struggled with math during my high school days and absolutely hated the subject until I came across Algebrator. This product is so awesome , it helped me improve my
grades drastically. It didn't just help me with my homework, it taught me how to solve the problems. You have nothing to lose and everything to gain by buying this brilliant
product .
From: 45°26' N,
09°10' E
Back to top
SanG Posted: Thursday 15th of Sep 14:07
Algebrator is the program that I have used through several algebra classes - Algebra 2, Basic Math and Intermediate algebra. It is truly a great piece of algebra software. I
remember going through difficulties with quadratic inequalities, percentages and like denominators. I would simply type in a problem from the workbook, click on Solve, and get a
step by step solution to my math homework. I highly recommend the program.
From: Beautiful
Northwest Lower
Back to top
TivErDeda Posted: Friday 16th of Sep 07:38
Thanks for the detailed information, this seems awesome. I needed something exactly like Algebrator, because I don't want a program which only solves the exercise and shows the
final result; I want something that can actually show me how the exercise has to be solved. That way I can understand it and after that solve it without any help, not just copy
the answers. Where can I find the program?
Back to top
Sdefom Posted: Saturday 17th of Sep 19:48
Koopmansshab, I guess you can find all details at https://softmath.com/links-to-algebra.html. From what I understand Algebrator comes at a price but it has a 100% money back guarantee. That’s how I got it. I would advise you to try it out. I don’t think you will want to get your money back.
From: Woudenberg,
Back to top
Customizing BAT loading screens and backgrounds - general thread
Random question - has anyone ever created any cat faced pilot skins? #askingforafriend
"A truth ceases to be true when more than one person believes in it." -Oscar Wilde
Excellent work. Thank you! Some more and we'll need another nation: The Kilrathi Empire.
"And here Alice began to get rather sleepy, and went on saying to herself, in a dreamy sort of way, 'Do cats eat bats? Do cats eat bats?' and sometimes, 'Do bats eat cats?' for, you see, as she
couldn't answer either question, it didn't much matter which way she put it.” Not sure about bats but some cats eat dogs for breakfast, seven of them at once. 1 vs 7 dogcatfight!
Advanced Topics of Theoretical Physics II: The statistical properties of matter
by Peter E. Blöchl
Publisher: TU Clausthal 2014
Number of pages: 182
From the table of contents: Transition-state theory; Diffusion; Monte Carlo Method; Quantum Monte Carlo; Decoherence; Notes on the Interpretation of Quantum Mechanics; Irreversible Thermodynamics;
Transport; Interacting Systems and Phase Transitions; etc.
Download or read it online for free here:
Download link
(1.6MB, PDF)
Similar books
Statistical Physics
Michael Cross
Caltech
The author discusses using statistical mechanics to understand real systems, rather than ideal systems that can be solved exactly. In addition, dynamics and fluctuations are considered. These notes are an attempt to summarize the main points.
Non-Equilibrium Statistical Mechanics
Gunnar Pruessner
Imperial College London
This is an attempt to deliver, within a couple of hours, a few key concepts of non-equilibrium statistical mechanics. The goal is to develop some ideas of contemporary research. Many of the ideas are illustrated or even introduced by examples.
Homogeneous Boltzmann Equation in Quantum Relativistic Kinetic Theory
M. Escobedo, S. Mischler, M.A. Valle
American Mathematical Society
We consider some mathematical questions about Boltzmann equations for quantum particles, relativistic or non-relativistic. Relevant cases such as Bose, Bose-Fermi, and photon-electron gases are studied. We also consider some simplifications ...
Modern Statistical Mechanics
Paul Fendley
The University of Virginia
This book is an attempt to cover the gap between what is taught in a conventional statistical mechanics class and what is necessary to understand current research. The aim is to introduce the basics of many-body physics to a wide audience.
Excel Solver - Change Options for All Solving Methods
1. In the Solver Parameters dialog box, click Options.
2. In the Solver Options dialog box, on the All Methods tab, choose one or more of the following options:
Constraint precision
• In the Constraint Precision box, type the degree of precision that you want. For a constraint to be considered satisfied, the relationship between the Cell Reference and the Constraint value cannot be violated by more than this amount. The smaller the number, the higher the precision.
Use Automatic Scaling
• Select the Use Automatic Scaling check box to specify that Solver should internally rescale the values of variables, constraints and the objective to similar magnitudes, to reduce the impact of
extremely large or small values on the accuracy of the solution process. This box is selected by default.
Show Iteration Results
• Select the Show Iteration Results check box to see the values of each trial solution while Solver solves the problem.
Solving with Integer Constraints
• Select the Ignore Integer Constraints check box to cause all integer, binary and alldifferent constraints to be ignored when you next click Solve. This is called solving the relaxation of the
integer programming problem.
• In the Integer Optimality % box, type the maximum percentage difference Solver should accept between the objective value of the best integer solution found and the best known bound on the true
optimal objective value before stopping.
The Integer Optimality % is sometimes called the (relative) “MIP gap”. The default value is 1%; set this to 0% to ensure that a proven optimal solution is found.
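As an illustrative sketch (not necessarily Solver's exact internal formula), the relative MIP gap can be computed from the best integer objective found so far and the best known bound on the true optimum:

```python
def relative_mip_gap(incumbent, best_bound):
    """Relative MIP gap as a fraction: |incumbent - best_bound| / |incumbent|."""
    if incumbent == 0:
        return abs(incumbent - best_bound)
    return abs(incumbent - best_bound) / abs(incumbent)

# A best integer solution of 100 with a known bound of 99 gives a 1% gap,
# which meets Solver's default Integer Optimality % of 1, so Solver stops.
print(relative_mip_gap(100.0, 99.0))  # 0.01
```

Setting the tolerance to 0% forces the search to continue until incumbent and bound coincide, i.e. until optimality is proven.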
Solving Limits
• In the Max Time (Seconds) box, type the number of seconds that you want to allow Solver to run.
• In the Iterations box, type the maximum number of iterations that you want to allow Solver to perform.
The following limits apply only to problems that include integer restrictions on variables, or problems that use the Evolutionary Solving Method:
• In the Max Subproblems box, type the maximum number of subproblems that you want to allow.
• In the Max Feasible Solutions box, type the maximum number of feasible solutions that you want to allow. For problems with integer restrictions, this is the maximum number of integer feasible solutions.
If the solution process reaches the maximum time, number of iterations, maximum subproblems, or maximum feasible solutions before Solver finds an optimal solution, Solver displays the Show Trial
Solution dialog box. See Show Solver trial solutions.
3. Click OK.
4. In the Solver Parameters dialog box, click Solve or Close.
NOTE You can click the Help button in the dialog box to get more information about other options.
American Mathematical Society
The AR-property for Roberts’ example of a compact convex set with no extreme points Part 1: General result
Proc. Amer. Math. Soc. 125 (1997), 3075-3087
DOI: https://doi.org/10.1090/S0002-9939-97-04020-3
We prove that the original compact convex set with no extreme points, constructed by Roberts (1977), is an absolute retract, and is therefore homeomorphic to the Hilbert cube. Our proof consists of two parts. In this first part, we give a sufficient condition for a Roberts space to be an AR. In the second part of the paper, we shall apply this to show that the example of Roberts is an AR.
• C. Bessaga and T. Dobrowolski, Some open problems on the border of infinite dimensional topology and functional analysis, Proceedings of the international conference on geometric topology, PWN,
Warszawa 1980.
• Doug Curtis, Tadeusz Dobrowolski, and Jerzy Mogilski, Some applications of the topological characterizations of the sigma-compact spaces $l^{2}_{f}$ and $\Sigma$, Trans. Amer. Math. Soc. 284
(1984), no. 2, 837–846. MR 743748, DOI 10.1090/S0002-9947-1984-0743748-7
• Jan van Mill and George M. Reed (eds.), Open problems in topology, North-Holland Publishing Co., Amsterdam, 1990. MR 1078636
• Open problems in infinite-dimensional topology, Topology Proc. 4 (1979), no. 1, 287–338 (1980). MR 583711
• Victor Klee, Shrinkable neighborhoods in Hausdorff linear spaces, Math. Ann. 141 (1960), 281–285. MR 131149, DOI 10.1007/BF01360762
• Victor Klee, Leray-Schauder theory without local convexity, Math. Ann. 141 (1960), 286–296. MR 131150, DOI 10.1007/BF01360763
• Sergio Sispanov, Generalización del teorema de Laguerre, Bol. Mat. 12 (1939), 113–117 (Spanish). MR 3
• N. J. Kalton and N. T. Peck, A re-examination of the Roberts example of a compact convex set without extreme points, Math. Ann. 253 (1980), no. 2, 89–101. MR 597819, DOI 10.1007/BF01578905
• N. J. Kalton, N. T. Peck, and James W. Roberts, An $F$-space sampler, London Mathematical Society Lecture Note Series, vol. 89, Cambridge University Press, Cambridge, 1984. MR 808777, DOI 10.1017
• Nguyen To Nhu, Investigating the ANR-property of metric spaces, Fund. Math. 124 (1984), no. 3, 243–254. MR 774515, DOI 10.4064/fm-124-3-243-254
• Nguyen To Nhu, The finite dimensional approximation property and the AR-property in needle point spaces, J. London Math. Soc. (to appear).
• Nguyen To Nhu and Katsuro Sakai, The compact neighborhood extension property and local equi-connectedness, Proc. Amer. Math. Soc. 121 (1994), no. 1, 259–265. MR 1232141, DOI 10.1090/
• Nguyen To Nhu and Le Hoang Tri, Every needle point space contains a compact convex AR-set with no extreme points, Proc. Amer. Math. Soc. 120 (1994), no. 4, 1261–1265. MR 1152989, DOI 10.1090/
• Nguyen To Nhu and Le Hoang Tri, No Roberts space is a counterexample to Schauder’s conjecture, Topology 33 (1994), no. 2, 371–378. MR 1273789, DOI 10.1016/0040-9383(94)90018-3
• James W. Roberts, A compact convex set with no extreme points, Studia Math. 60 (1977), no. 3, 255–266. MR 470851, DOI 10.4064/sm-60-3-255-266
• J. W. Roberts, Pathological compact convex sets in the spaces $L_p,\ 0\leq p<1$, The Altgeld Book, University of Illinois, 1976.
• Stefan Rolewicz, Metric linear spaces, Monografie Matematyczne, Tom 56. [Mathematical Monographs, Vol. 56], PWN—Polish Scientific Publishers, Warsaw, 1972. MR 0438074
Similar Articles
• Retrieve articles in Proceedings of the American Mathematical Society with MSC (1991): 54C55, 54D45
• Retrieve articles in all journals with MSC (1991): 54C55, 54D45
Bibliographic Information
• Nguyen To Nhu
• Affiliation: Institute of Mathematics, P.O. Box 631, Bo Ho, Hanoi, Vietnam
• Address at time of publication: Department of Mathematical Sciences, New Mexico State University, Las Cruces, New Mexico 88003-8001
• Email: nnguyen@nmsu.edu
• Jose M. R. Sanjurjo
• Affiliation: Departamento de Geometria y Topologia, Facultad de Matematicas, Universidad Complutense de Madrid, 28040 Madrid, Spain
• Email: sanjurjo@sungt1.mat.ucm.es
• Tran Van An
• Affiliation: Department of Mathematics, University of Vinh, Nghe An, Vietnam
• Received by editor(s): December 17, 1992
• Received by editor(s) in revised form: April 1, 1996
• Additional Notes: The first author was supported by the Complutense University of Madrid.
• Communicated by: James West
• © Copyright 1997 American Mathematical Society
• Journal: Proc. Amer. Math. Soc. 125 (1997), 3075-3087
• MSC (1991): Primary 54C55; Secondary 54D45
• DOI: https://doi.org/10.1090/S0002-9939-97-04020-3
• MathSciNet review: 1415357
Two Asset Binary Options definition for beginners
Binary options with two assets are binary structures with two underlying instruments.
The correlation coefficient, Binary Spreads, Double Binary Options, Cubes, and Eachway Cubes will be examined in this section.
The second asset introduces a new variable, the correlation coefficient of the two assets’ prices*. This necessitates a far more complex examination of the sensitivity analyses, as each strategy, for example, has two deltas, two gammas, and so on. The approach in this sector has been to keep the price of asset 2 constant while the delta of asset 1 is illustrated and analysed, but this also necessitates a constant Rho.
As a result, the reader will encounter a plethora of three-dimensional graphics on the following pages, despite the fact that three dimensions are insufficient.
Binary Spreads and Eachway Spreads are the first two asset binary options to consider. The Binary Spread is the simplest of the two asset binary options to understand because it only has one strike
price. The Binary Spread’s simplicity makes it an obvious candidate for retail traders who are already engaged in outright spread trading. The Eachway Spread has two strikes, but they work together,
and this strategy is simple to understand.
Because there are now at least two strikes, Double Binary Options require more thought. Double Calls, Double Puts, CallPuts, and PutCalls allow traders to speculate on the independent performance of
two assets. When trading the reverse yield spread, for example, a CallPut or PutCall could be the ideal limited risk instrument.
Finally, the final two asset binary options are Cubes and Eachway Cubes, which require the buyer to correctly assess the price ranges that the two assets‘ prices will be in at the option’s expiry. In
contrast to the Binary Spread’s relative performance, the focus is once again on the asset’s absolute price.
The author’s experience trading conventional short-term interest rate options suggests that these strategies would be popular in the STIR market.
The additional variables in the aforementioned strategies increase the risk to the market-maker in both pricing and risk management; as a result, a sufficiently wide bid/ask spread would be required
to compensate for the extra risk load.
* This correlation coefficient is the variable Rho, which can be problematic because the first differential of an option with respect to the interest rate is also known as Rho. However, because the
variables interest rate and asset yield have been studiously avoided in both this book and the precursor, Rho will be limited to the correlation coefficient in this section.
Other important articles can be found in my glossary.
How do you find period in future value?
Home Helpful tips How do you find period in future value?
How do you find period in future value?
Helpful tips Alice Hardie 26/04/2020 comments off
How do you find period in future value?
Solving for the number of periods can be achieved by computing FV/P, the future value divided by the payment. This result can be found in the “middle section” of the table, matched with the rate, to find the number of periods, n.
How do you calculate time duration?
1. Convert both times to 24 hour format, adding 12 to any pm hours. 8:55am becomes 8:55 hours (start time)
2. If the start minutes are greater than the end minutes…
3. Subtract end time minutes from start time minutes…
4. Subtract the hours…
5. Put (not add) the hours and minutes together – 6:45 (6 hours and 45 minutes)
What is the formula of time period of pendulum?
The period of a simple pendulum is T = 2π√(L/g), where L is the length of the string and g is the acceleration due to gravity.
How many min are in 2 hours?
120 Minutes
Hours to Minutes Conversion Table
Hours Minutes
1 Hour 60 Minutes
2 Hours 120 Minutes
3 Hours 180 Minutes
4 Hours 240 Minutes
What is the period formula?
each complete oscillation, called the period, is constant. The formula for the period T of a pendulum is T = 2π√(L/g), where L is the length of the pendulum and g is the acceleration due to gravity.
What is the SI unit of time period?
the second
A period is the time to complete one cycle of periodic motion. The symbol for period is T (italic capital t). The SI unit of period is the second [s].
How is LIC maturity amount calculated?
The exact Maturity Value cannot be calculated but one can calculate a close estimate of the value to get an idea of the benefit at the end of the term. The basic format is Sum Assured + Bonuses +
Final Additional Bonus (if declared).
Is 2.5 hours 2 hours and 30 minutes?
How many minutes in 2.5 hours? – 2.5 hours is equal to 150 minutes.
How many hours is 1 hour 30 minutes?
1.5 hours
1.5 hours is therefore 1 hour and 30 minutes.
How many is 2 hour?
In 2 h there are 120 min . Which is the same to say that 2 hours is 120 minutes.
What unit is period measured in?
Period refers to the time for something to happen and is measured in seconds/cycle. In this case, there are 11 seconds per 33 vibrational cycles.
Quantum bit escrow
Unconditionally secure bit commitment and coin flipping are known to be impossible in the classical world. Bit commitment is known to be impossible also in the quantum world. We introduce a related
new primitive - quantum bit escrow. In this primitive Alice commits to a bit b to Bob. The commitment is binding in the sense that if Alice is asked to reveal the bit, Alice cannot bias her
commitment without having a good probability of being detected cheating. The commitment is sealing in the sense that if Bob learns information about the encoded bit, then if later on he is asked to
prove he was playing honestly, he is detected cheating with a good probability. Rigorously proving the correctness of quantum cryptographic protocols has proved to be a difficult task. We develop
techniques to prove quantitative statements about the binding and sealing properties of the quantum bit escrow protocol. A related primitive we construct is a quantum biased coin flipping protocol
where no player can control the game, i.e., even an all-powerful cheating player must lose with some constant probability, which stands in sharp contrast to the classical world where such protocols
are impossible.
Original language English
Title of host publication Proceedings of the 32nd Annual ACM Symposium on Theory of Computing, STOC 2000
Pages 705-714
Number of pages 10
State Published - 2000
Externally published Yes
Event 32nd Annual ACM Symposium on Theory of Computing, STOC 2000 - Portland, OR, United States
Duration: 21 May 2000 → 23 May 2000
Publication series
Name Proceedings of the Annual ACM Symposium on Theory of Computing
ISSN (Print) 0737-8017
Conference 32nd Annual ACM Symposium on Theory of Computing, STOC 2000
Country/Territory United States
City Portland, OR
Period 21/05/00 → 23/05/00
• quantum bit commitment
• quantum coin tossing
• quantum cryptography
Dive into the research topics of 'Quantum bit escrow'. Together they form a unique fingerprint.
Dirac point in the photon dispersion relation of a negative/zero/positive-index plasmonic metamaterial
July 21st, 2011
V. Yannopapas and A. Vanakaras, Phys. Rev. B., 84, 045128, (2011).
We report on the emergence of a Dirac point in the dispersion relation of a plasmonic metamaterial. It is realized as a three-dimensional crystal (cubic or orthorhombic) whose lattice sites are
decorated by aggregates of gold nanoparticles embedded in a high-index dielectric material. The Dirac-type dispersion lines of the photon modes are not a result of diffraction as in photonic crystals
but due to subwavelength features and emerge from the gapless transition from a negative to a positive index band. The Dirac point is manifested as a dip in the spectrum of light transmittance
through a finite slab of the metamaterial; however, transmittance does not decrease diffusively but exponentially due to the inherent losses of gold in the given spectral regime. ©2011 American
Physical Society
URL: http://link.aps.org/doi/10.1103/PhysRevB.84.045128
DOI: 10.1103/PhysRevB.84.045128
How To Convert MG To MEQ
A milligram, abbreviated mg, is a metric unit of mass or weight defined as one thousandth of a gram. A milliequivalent is a measure of the quantity of ions in an electrolyte fluid. One
milliequivalent is one thousandth of one mole of charges and is represented by the symbol mEq. The ions of different elements vary in mass, so it is necessary to know the atomic or molecular weight
of the ions and their valence before a you calculate a conversion.
1. Find the Valances of Ions
Establish the valence of the relevant ions by consulting a table of valence values. Multiply this value by the mass expressed in milligrams. For example, 20 mg of Al+++, which has a valence of three, produces a result of 60: 3 x 20 = 60.
2. Look up Atomic Masses
Look up the atomic or molecular mass of the ions, then divide the result from the previous step by this mass. The result is the milliequivalent value of the ions.
Aluminum, used in the previous example, is a pure element, so establish its atomic mass: 27. The valence multiplied by the example mass is 60, so divide 60 by 27. The result, 2.22, is the milliequivalent value of the example mass.
3. Check Your Work
Check the result for errors by reversing the calculations. Multiply the mEq value by the atomic or molecular mass and divide by the valence. If the result is not the original mass in mg, then there was an error in your calculations. Repeat them until the answer is correct.
Things Needed
• Table of atomic weights and valencies
• Calculator
TL;DR (Too Long; Didn't Read)
To convert milligrams to milliequivalents use the formula: mEq = (mg x valence) / atomic or molecular weight.
One thousand milliequivalents equals one equivalent.
In the U.S. electrolyte concentration is measured in mEq. However, Europe and the rest of the world use millimoles per liter or micromoles per liter.
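The formula and its reverse check can be wrapped in two small helper functions (a sketch; the function names are ours):

```python
def mg_to_meq(mass_mg, valence, atomic_weight):
    """mEq = (mg x valence) / atomic or molecular weight."""
    return mass_mg * valence / atomic_weight

def meq_to_mg(meq, valence, atomic_weight):
    """Reverse check: mg = (mEq x atomic weight) / valence."""
    return meq * atomic_weight / valence

# 20 mg of Al+++ (valence 3, atomic mass 27) is about 2.22 mEq,
# and converting back recovers the original 20 mg.
meq = mg_to_meq(20, 3, 27)
print(round(meq, 2))                     # 2.22
print(round(meq_to_mg(meq, 3, 27), 1))   # 20.0
```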
Cite This Article
Robinson, David. "How To Convert MG To MEQ" sciencing.com, https://www.sciencing.com/convert-mg-meq-8552484/. 24 April 2018.
How to calculate Material savings from an LED lighting retrofit project - PKK Lighting Inc.
How to calculate Material savings from an LED lighting retrofit project
This is the fifth blog post in the series to calculate the total Return on Investment (ROI) that a business will generate from their LED lighting retrofit project. Previously, we published the
following posts in this series:
1. Energy Savings from an LED Lighting Retrofit project – March 2019
2. Labor Savings from an LED Lighting Retrofit project – February 2019
3. What will you save from an LED Lighting Retrofit Project? – January 2019
This post focuses on the Material savings gained by a lighting retrofit project and the long-life LED products incorporated into the project. Most of the products you’ll be replacing during the
project, like incandescent, CFL or halogen bulbs, have life-spans of 1,500 to 15,000 hours. That means that if you operate your lights 12 hours a day for 250 days a year (3,000 hours), you’ll be
replacing burned out bulbs up to 2 times per year. Those costs can add up quickly, especially if you have hundreds, or even thousands of fixtures in your facilities.
LED lights, on the other hand, have life spans of 35,000 to 100,000 hours. At 3,000 hours per year, that’s 11 – 33 years! And even then the bulb may not burn out but may just lose some of its output capacity, so your life spans could exceed the standard rated hours.
We will continue our example from the series in that we are replacing 90 watt PAR 38 bulbs with 14 watt LED PAR 38 bulbs that operate 3,000 hours per year.
Step 1: Gather Data – the following information will be needed to start the calculations:
• Rated life for the current lights: for this calculation we will make an assumption that you are using a PAR 38 with a rated life of 1,500 hours.
• Rated life for the new lights: for this calculation we will make an assumption that the replacement lights are LED PAR 38 with a rated life of 50,000 hours and come with a 5-year warranty.
• Replacement cost for the current lights: for this calculation we will make an assumption that the replacement cost for a 90W Halogen PAR 38 is $7.50.
• Replacement cost for the new lights: for this calculation we will make an assumption that the replacement cost for a 14W LED PAR 38 is $18.50.
• Total Annual Running time: in this example the annual run time is 3,000 hours.
Step 2: Calculate the number of times you will need to replace the current lights per year, which is dependent on the rated life and run time per year:
• Total Run Time per year: 3,000 hours
• Divided by: Rated life of current lights: 1,500 hours
• Equals: Number of times per year you’ll replace each light: 2 replacements per year
Step 3: Cost of replacing existing lights:
• Annual replacements from above: 2 per year
• Times: Cost to replace from above: $7.50
• Equals: Annual replacement costs per year: $15.00
Step 4: Calculate the number of times you will need to replace the new LED lights per year, which is dependent on the rated life and run time per year:
• Total Run Time per year: 3,000 hours
• Divided by: Rated life of current lights: 50,000 hours
• Equals: Number of times per year you’ll replace each light: .06 replacements per year
Step 5: Cost of replacing new LED lights:
• Annual replacements from above: .06 per year
• Times: Cost to replace from above: $18.50
• Equals: Annual replacement costs per year: $1.11
Step 6: Calculate your savings per year:
• Annual replacements per year for current lights: $15.00 per year
• Minus: Annual replacements per year for new LED lights: $1.11 per year
• Equals: Annual material savings per light: $13.89
(Note that this calculation does not take into effect the normal 5 year warranty provided by the LED light manufacturer, so during years 1-5 of the project your replacement cost for the LED lights
would be zero and you’d have an even larger annual material savings per year.)
Using the annual material cost savings per light ($13.89 in this example) you can determine the total material savings based upon the number of lights you’ll be replacing during your LED lighting
retrofit project. Multiplied over hundreds or thousands of replacements and the savings add up quickly.
The professional lighting installation experts at PKK Lighting would be delighted to talk to you about your LED retrofit project. We can help you evaluate the opportunity, manage the entire project,
and provide ongoing lighting maintenance of your system.
Mosaic dividers, divisions to 100. - english.vanjufmarjan.nl
Mosaic dividers, divisions to 100.
With these mosaic divisors you practice the division sums while puzzling.
To practice part sums you can use these mosaic divisors very well.
In fact, kids often ask if I have any more with other mosaics because they just love being able to color them in.
There are two versions of the mosaic dividers.
A version with divisions that fall within the tables of 2 to 10, and a version with divisions of large numbers up to 100 that must be divided by a number up to 9, so the answers can be greater than 10.
An example is 94 : 2 = 47.
Purpose of these mosaic dividers?
On the practice sheet are grid sections with colored boxes with a division sum next to it.
The students can calculate this sum and write the answer next to the sum.
At the bottom right is a grid area with all numbers in it.
The aim is now to find the answer of the sum in this grid.
Once they have found this, they can color in the example area next to the sum in the answer area.
By calculating and coloring all division sums in this way, a beautiful mosaic appears.
For the mosaic dividers 1, there are no double numbers in the grid. So answer 47 only occurs once.
For the mosaic dividers 2, with division sums that fall within the tables of 2 to 10, double answers occur.
So there are, for example, several boxes with the answer 9, and so on.
These duplicate numbers all have the same color box. So students only need to find an answer box with the correct number and color it.
If they come across such an answer later, they simply look for the next color-in box with the same number and color it in again.
Well, I think you’ll figure it out.
Answer sheet.
Of course I have also included an answer sheet for both exercises in the PDF file.
Students can then check for themselves whether they have corrected all the answers and whether they have colored the mosaic correctly.
Have fun calculating.
The file:
Mosaic dividers, divisions to 100.
27 Downloads
Download Factors - Mathematics Form 1 Notes.
Why download?
• To read offline at any time.
• To print at your convenience
• Share Easily with Friends/Students
How to pay
1. Go to Lipa na MPesa, Buy Goods and Services option
2. Enter Till Number 738874
3. Enter the exact amount 50/-. Don't pay more or less, the system will reject
4. Enter your MPesa pin and send
5. The mpesa hakikisha will show payment to Lemfundo Technologies
6. You will receive an SMS from M-Pesa with a confirmation code, e.g. PI98O3P8RQ
7. After you receive the confirmation SMS from M-Pesa, enter the phone number you used to pay and the M-Pesa confirmation code below. (The code shown is an example; enter the one M-Pesa has sent to your phone.)
8. Click on the submit button
9. You will be able to instantly download the file you have paid for.
10. Experiencing difficulties? Call/Whatsapp +254 703 165 909, or email us to info@easyelimu.com
Effects of gravity in folding
Effects of gravity on buckle folding are studied using a Newtonian fluid finite element model of a single layer embedded between two thicker less viscous layers. The methods allow arbitrary density
jumps, surface tension coefficients, resistance to slip at the interfaces, and tracking of fold growth to large amplitudes. When density increases downward in two equal jumps, a layer buckles less
and thickens more than with uniform density. When density increases upward in two equal jumps, it buckles more and thickens less. A low density layer with periodic thickness variations buckles more,
sometimes explosively. Thickness variations form, even if not present initially. These effects are greater with smaller viscosities, larger density jump, larger length scale, and slower shortening
rate. They also depend on wavelength and amplitude, and these dependencies are described in detail. The model is applied to the explosive growth of the salt anticlines of the Paradox Basin, Colorado
and Utah. There, shale (higher density) overlies salt (lower density). Methods for simulating realistic earth surface erosion and deposition conditions are introduced. Growth rates increase both with
ease of slip at the salt-shale interface, and when earth surface relief stays low due to erosion and deposition. Model anticlines grow explosively, attaining growth rates and amplitudes close to
those of the field examples. Fastest growing wavelengths are the same as seen in the field. It is concluded that a combination of partial-slip at the salt-shale interface, with reasonable earth
surface conditions, promotes sufficiently fast buckling of the salt-shale interface due to density inversion alone. Neither basement faulting, nor tectonic shortening is required to account for the
observed structures. Of fundamental importance is the strong tendency of gravity to promote buckling in low density layers with thickness variations. These develop, even if not present initially.
Because of this, folds both initiate faster and grow to much higher amplitude than if density is uniform. Low density layers are ubiquitous in the crust, so these results shed considerable new light on crustal dynamics.
Ph.D. Thesis
Pub Date:
□ Anticlines;
□ Buckling;
□ Earth Crust;
□ Finite Element Method;
□ Folding;
□ Geodynamics;
□ Gravitational Effects;
□ Mathematical Models;
□ Newtonian Fluids;
□ Earth Surface;
□ Erosion;
□ Inversions;
□ Simulation;
□ Geophysics
|
{"url":"https://ui.adsabs.harvard.edu/abs/1991PhDT........35M/abstract","timestamp":"2024-11-03T17:37:12Z","content_type":"text/html","content_length":"40693","record_id":"<urn:uuid:72838973-4227-4172-91a1-fed143479936>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00006.warc.gz"}
|
Prove G contains a cycle of length at least k+1
• Thread starter cxc001
• Start date
In summary, the conversation is about proving that a simple graph, G, with minimum degree k (where k>=2), contains a cycle of length at least k+1. The use of induction is suggested and the discussion
includes considering a path of maximum length and constructing a cycle with a degree of at least k+1. However, it is acknowledged that this approach may not be successful and it may be more effective
to prove that a cycle containing a specific vertex set exists.
This is a graph theory related question.
Let G be a simple graph with min. degree k, where k>=2. Prove that G contains a cycle of length at least k+1.
Am I supposed to use induction to prove G has a path of length at least k first, then try to prove that G has a cycle of length at least k+1? Or should I use induction directly to prove G contains a cycle of length at least k+1?
Science Advisor
Homework Helper
Gold Member
Consider one of the vertices, [tex]\nu_k[/tex] of degree [tex]k[/tex]. Let the set of vertices which are connected to this one be [tex]\{ \nu_i^{(k)} \}[/tex]. What is the minimum path length of a
cycle [tex]C^{(k)}[/tex] that connects all of the [tex]\nu_i^{(k)}[/tex]? Can you see a way to construct a cycle [tex]C'[/tex] of degree [tex]\text{deg}\, C^{(k)}+1[/tex]?
Tell me if I'm not on the right track.
Use induction on k.
Pick a path P of maximum length, and suppose vertex vi is a vertex on this path, which has degree at least k, with a set of adjacent vertices {w1,w2,…,wj}, the adjacent vertex set must be on the path.
The minimum path length of a cycle Ci that connected all of vertex set {w1,w2,…,wj } is k.
Then extend the path to a cycle by adding the edge wjvi, so the resulting cycle has length at least k+1.
We're done! Does this induction make sense?
with a set of adjacent vertices {w1,w2,…,wj}, the adjacent vertex set must be on the path.
This is not likely to be true
Science Advisor
Homework Helper
Gold Member
Your path P is too arbitrary and it's unlikely that you can prove very much given the constraints. However, it should be possible to argue that a simple graph contains a cycle that actually contains
the vertex set w[i] and consider the path length of a closely related cycle that includes v.
FAQ: Prove G contains a cycle of length at least k+1
1. What does it mean for G to contain a cycle?
A cycle in a graph is a sequence of vertices that are connected by edges, where the first and last vertices are the same. In other words, it is a closed path in the graph.
2. How can I prove that G contains a cycle of length at least k+1?
One standard approach is to consider a path of maximum length. Every neighbor of an endpoint of such a path must already lie on the path, otherwise the path could be extended. Since the endpoint has at least k neighbors on the path, the edge back to the neighbor furthest from the endpoint closes a cycle with at least k+1 vertices. This proves that G contains a cycle of length at least k+1.
3. Is there a specific algorithm for finding a cycle of length at least k+1 in G?
Yes, there are several algorithms that can be used to find a cycle of a certain length in a graph. For example, the depth-first search algorithm can be used to find a cycle of length k+1 in G.
However, these algorithms may not always find the shortest cycle of length k+1 in G.
4. Can G contain multiple cycles of length at least k+1?
Yes, it is possible for G to contain multiple cycles of length at least k+1. In fact, it is possible for G to contain an infinite number of cycles, of varying lengths.
5. What is the significance of proving that G contains a cycle of length at least k+1?
Proving that G contains a cycle of length at least k+1 can have various implications in different fields. In graph theory, it can help in understanding the structure and properties of the graph. In
computer science, it can be used in the design and analysis of algorithms. In real-life applications, it can have implications in network design and optimization problems.
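To make the maximal-path argument from the answers above concrete, here is an illustrative Python sketch (our own code, not from the thread; the adjacency-dict input and the function name are assumptions):

```python
def long_cycle(adj, k):
    """Given a simple graph as an adjacency dict with minimum degree k >= 2,
    return a cycle of length at least k + 1 via the maximal-path argument."""
    # Grow a path greedily until the endpoint has no neighbor off the path.
    path = [next(iter(adj))]
    on_path = set(path)
    while True:
        for w in adj[path[-1]]:
            if w not in on_path:
                path.append(w)
                on_path.add(w)
                break
        else:
            break  # the path cannot be extended at this end
    # Every neighbor of the endpoint v now lies on the path; the neighbor
    # furthest back closes a cycle of length >= deg(v) + 1 >= k + 1.
    v = path[-1]
    pos = {u: i for i, u in enumerate(path)}
    first = min(pos[w] for w in adj[v])
    return path[first:]
```

On the complete graph K5 (minimum degree 4), for example, this returns a cycle on all five vertices.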
|
{"url":"https://www.physicsforums.com/threads/prove-g-contains-a-cycle-of-length-at-least-k-1.433402/","timestamp":"2024-11-11T13:48:11Z","content_type":"text/html","content_length":"91043","record_id":"<urn:uuid:9c2af06f-80d4-4a4d-948f-ec372779386b>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00391.warc.gz"}
|
panel chart – Drawing with Numbers
I recently got a request for help with a wrapping challenge. Not holiday presents, instead text wrapping in Tableau. Here’s a demo view illustrating the problem:
There’s a whole bunch of ellipses (…) where there’s too much data to display. The only options Tableau gives us in this case are to:
• Change the order of the text by sorting the Product Name dimension.
• Manually change the size of each pane by resizing the headers.
• Use the Fit options to dynamically Fit Width, Fit Height, or Entire View.
The manual sizing is problematic because it won’t dynamically adjust to the number of marks, and in the case of views with lots of marks like this one it takes a lot of effort to figure out what size
will get all the marks, never mind that the list of values is really hard to read:
And while the Fit options are great at ensuring a view with only a few marks takes up the available space when there are many marks it ends up either not displaying values or creating overlapping
values depending on the settings.
Controlling X + Y
In this view the mark layout–either no pills or only discrete (blue) pills on Rows and Columns– is generating a pane for each distinct combination of header values and then Tableau’s mark stacking
algorithm is laying out the text. So at this point we’re stuck and can’t do anything about what Tableau is up to. This is where we need to keep in mind one of the master Tableau concepts: everything
is a scatterplot. If Tableau won’t place the marks where we want them then we can generate our own X + Y coordinates whether by adding data or creating our own calculations. This is the approach
taken by the tile maps introduced to Tableau by Brittany Fong, or the model of the solar system that I made a while back. More recently Ken Flerlage did a great introduction in his Beyond Show Me series of blog posts.
Therefore “all” we need to do is figure out where to place the marks. More on that in a moment, there are two more details I want to go into:
Green Panes vs. Blue Panes: Pane Sizing
Tableau’s logic around what generates a pane vs. marks in a pane is a little complicated so I’m going to keep this focused on three key elements, here are the first two:
1. All panes created by a given combination of pills are the same size.
2. (corollary to #1) If we resize one header or axis then all the other panes for that header or axis will resize as well. Tableau does this because it’s easier to visually parse (read) a view that
has consistent sizing of elements.
Here’s a view with COUNTD(Product Names) on Text & Color with just discrete (blue) pills on Rows and Columns:
Somehow we need to fit 509 Product Names into the pane for Q4 2015/Office Supplies. If I resize Office Supplies to be taller then both Furniture & Technology change as well:
The same goes if I’ve got a continuous axis to place X/Y coordinates on. In this view I’ve simply put MIN(0) and MIN(1) on Columns & MIN(0) Rows and we can see a set of axes:
If I resize MIN(1) on Columns to make it wider then all of the panes for MIN(0) and MIN(1) on Columns are resized.
So we can’t really dynamically resize panes to fit the data, all we can do is fit more or less into a pane. Therefore the desired solution can’t involve resizing panes, instead we will need to be
generating more or fewer panes, and that leads to the next point around panes.
Green Panes vs. Blue Panes: Number of Panes
The third key element around green panes and blue panes is this:
3. a) Continuous pills generate an axis for every discrete pill header. b) Discrete pills generate a header for every value of the pill.
We can see a) in action in the continuous views above, with MIN(0) and MIN(1) on Columns we get two axes for each quarter/year combination. So to add more axes we’d need to add more continuous pills
but we can’t dynamically add them, and the number of axes ultimately depends on the discrete pills anyway so discrete is the way to go.
We can see b) in the discrete views above, there’s a header for each quarter in each year. Where this gets a little more interesting (and more useful in our case) is when the data is sparse, as in
this case where the Avery products are not sold in every customer Segment:
Avery 5 is only sold in one segment so there is only a single header for Consumer, whereas Avery 494 is sold in all three segments so there’s a header for each.
So how this comes together is that in creating X/Y coordinates for positioning the text in our desired view we're going to use discrete headers that can give us just enough headers (and no more) for
the task, here’s a pic of the desired view with those headers:
Packing the Marks: the Algorithm
I experimented with some different layouts and looked at the following factors:
• In each pane there’s a list of 0 or more values (marks).
• At least in English when we’re reading lists we tend to make them top to bottom and when more is needed we add another column to the right.
• There’s a balance in readability between too many columns vs. too tall columns. When there are many columns already then adding more columns for the list makes the view harder to read; in other
words, a “tall” view with fewer columns is easier to read than a “wide” view.
• When the panes in a row or panes in a column have different numbers of marks it’s important to efficiently stack the marks: too much white space can make the view harder to read.
• A stacking layout that is closer to a differently-sized squares is easier to read than one that ends up with differently-sized rectangles.
The algorithm I came up with is a variation on the panel chart layout I used in Waffle Charts and Unit Charts on Maps that uses table calculations. The algorithm does the following:
• Calculates the index for each mark in a pane using INDEX() and the number of marks in a pane using SIZE(). These calculations are used in the following calculations.
• Counts the number of mark columns needed for each pane where there’s a Max # of Mark Columns. parameter to set a “no more than” value to prevent views from getting too wide. Then a nested
calculation counts the maximum number of mark columns in each column.
• Once we have the number of mark columns then the algorithm computes the number of mark rows for each pane, and then gets the maximum number of mark rows for each row.
• Finally the mark row position and mark column position can be computed based on the index for each mark in the pane and the available number of rows and columns.
I numbered the calculations so they can be each brought into a workout view in order with their compute using set and validated before moving to the next calc. Calcs 1 & 2 require a compute using on
the dimension to be used on Text and Calcs 4 & 6 have nested compute usings, see the comments on the calcs for details.
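The row-and-column packing at the heart of those calcs can be sketched outside Tableau; here is a rough Python rendering of the idea (our own simplification, not the actual table calculations):

```python
import math

def pack_positions(n_marks, max_cols):
    """Assign each of n_marks a (mark_row, mark_col) position in a pane,
    filling top-to-bottom then left-to-right, with the number of mark
    columns capped at max_cols and kept roughly square for readability."""
    cols = max(1, min(max_cols, math.ceil(math.sqrt(n_marks))))
    rows = math.ceil(n_marks / cols)
    return [(i % rows, i // rows) for i in range(n_marks)]
```

With `max_cols = 4`, for instance, 7 marks pack into three columns of three rows, with the last column only partly filled.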
Here’s the workout view:
One complication is that the date dimensions are on Detail with custom dates with the ATTR() aggregation on Rows. This is a method to prevent unwanted data densification.
Once the workout view is built and validated then it’s possible to duplicate the view and rearrange pills, here’s that view:
There’s still a bit of manual resizing required, in this case it’s just to have enough size in each of the panes created by the column and row position table calculations to display the text. Once
that is done those headers can be hidden for the final view:
We’re not limited to a text display, for example here’s a highlight table that only took a couple more clicks:
Here’s a view to play with where you can adjust the Max # of Columns parameter and the number of states (which is a proxy for how many products are displayed). Click the image to open the text
wrapping in pane view on Tableau Public:
The key concept to keep in mind is that when Tableau won’t plot marks where we want we can add to the data source to get the necessary X&Y coordinates via joins, blends, and/or writing calculations.
Since Tableau was designed as a tool to support interactive visual analytics tasks like making giant text tables with the desired text wrapping can take more effort than we might like, however given
Tableau’s flexibility we can get the job done.
A New Compact (Mostly) Filter Layout for Showing All the Options
I’m preparing my Tableau Customer Conference 2013 session and there is way too much material to present in the time available, so I’m killing my darlings with a couple of posts in the next few weeks.
I had a situation where my users needed to select from a basket of hospital quality measures on a dashboard, and since some of the measures were new I wanted the filter to show all of the available
options, and based on the dashboard layout there wasn’t really space on either side of the view for a vertically oriented filter.
What I really wanted was to show all the options at the top of the dashboard, I ended up getting a little creative and coming up with a new filter layout. Read on to find out how!
Continue reading
|
{"url":"https://drawingwithnumbers.artisart.org/tag/panel-chart/","timestamp":"2024-11-13T01:35:44Z","content_type":"text/html","content_length":"129246","record_id":"<urn:uuid:3a01df9e-ad61-4820-a365-9257b3e828f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00874.warc.gz"}
|
CDS & AFCAT 1 2025 Exam Maths Profit & Loss Class 2
In the context of competitive exams like the Combined Defence Services (CDS) and Air Force Common Admission Test (AFCAT), the mathematics section frequently includes questions on profit and loss. One
of the more intricate parts of this topic involves concepts like marked price, mark-up percentage, discounts, cost price, profit percentage, and successive discounts. These elements are key in
understanding how sellers price goods, offer discounts, and calculate overall profitability.
This blog will walk through the details of these sub-topics and present strategies for mastering them for the CDS and AFCAT exams.
Overview of the Topic
The concepts of profit and loss go beyond simply calculating the difference between selling price and cost price. Sellers often use marked price and discounts to attract customers, which complicates
the calculations. In competitive exams, understanding how these concepts interrelate is essential for solving profit and loss problems efficiently.
Here’s a breakdown of the major sub-topics discussed in the class :
1. Marked Price (MP):
The marked price is the initial price tagged on a product by the seller before any discount is applied. It’s the price that buyers first see when they consider purchasing an item. Sellers often
mark up the cost price (CP) of an item to set the marked price, which provides room for offering discounts and still making a profit.
2. Mark-Up Percentage:
Mark-up refers to the amount added to the cost price to determine the marked price. The mark-up percentage is the ratio of the increase from the cost price to the cost price itself. This
percentage shows how much the seller expects to gain before offering any discount.
3. Discount:
Discounts are reductions from the marked price given to customers. These are often used to incentivize purchases and clear out stock. In profit and loss questions, you’re typically asked to
calculate the effective selling price after applying a discount or successive discounts.
4. Cost Price (CP):
Cost price refers to the original price paid by the seller to acquire a product before selling it to customers. The difference between the cost price and selling price, after accounting for any
discounts, determines the profit or loss.
5. Profit Percentage:
Profit percentage is the amount of profit made on the cost price, expressed as a percentage. When discounts are applied, it’s important to calculate how they affect the final selling price and
ultimately the profit made by the seller.
6. Discount Percentage:
Discount percentage is the reduction offered on the marked price. It’s important to know how to calculate the effective price after applying a discount, which in turn influences the seller’s
profit or loss.
7. Successive Discounts and Overall Discount Percentage:
Successive discounts involve offering more than one discount on the marked price. For example, a seller may offer a 10% discount followed by an additional 5% discount. Understanding how to
calculate the overall discount and its effect on the selling price is crucial, as successive discounts are common in exam questions.
Key Sub-Topics Explained
1. Marked Price and Mark-Up Percentage
Marked price is usually higher than the cost price because sellers mark up the cost to ensure profitability. The mark-up percentage indicates how much higher the marked price is compared to the
cost price. For example, a 20% mark-up on a product costing $100 results in a marked price of $120. This marked price is then used as a base for offering discounts. Understanding the mark-up
concept is essential for solving questions where you need to calculate the marked price or find out the profit based on the marked price after offering discounts.
2. Discount and Discount Percentage
A discount is a reduction from the marked price, usually expressed as a percentage. If the marked price of a product is $200 and a discount of 10% is offered, the buyer pays $180. Discounts are a
frequent part of exam questions, and it’s important to calculate how they affect the final price paid and the seller’s profit.
3. Successive Discounts and Overall Discount
Sometimes, a seller offers more than one discount in succession. For example, a product might be discounted by 20% first, and then an additional 10% discount might be applied to the reduced
price. Successive discounts are not simply added together. Instead, the second discount is applied to the price after the first discount. This can be a tricky area for students, but with
practice, calculating successive discounts becomes easier.
4. Profit, Loss, and Their Relation to Marked Price and Discounts
When a seller offers discounts, it reduces the selling price, which impacts the overall profit. If the selling price after discounts is still higher than the cost price, the seller makes a
profit. Conversely, if the selling price drops below the cost price due to discounts, the seller incurs a loss. Exam questions will often provide details about the cost price, marked price, and
discounts, and ask you to calculate the profit or loss percentage. Understanding the relationship between these variables is critical to solving such problems accurately.
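The relationships in sub-topics 1 to 4 can be captured in a short Python sketch (illustrative only; the function names are ours):

```python
def selling_price(cost, markup_pct, discount_pcts=()):
    """Mark up the cost price to get the marked price, then apply each
    successive discount to the already-reduced price (not the marked price)."""
    price = cost * (1 + markup_pct / 100)
    for d in discount_pcts:
        price *= 1 - d / 100
    return price

def profit_pct(cost, selling):
    """Profit (or loss, if negative) as a percentage of the cost price."""
    return (selling - cost) / cost * 100
```

For example, a $100 cost price with a 20% mark-up gives a $120 marked price; successive discounts of 10% and 5% bring the selling price to $102.60, which is a 2.6% profit on cost and an overall discount of 14.5% (not 15%).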
Strategies to Master Profit and Loss for CDS and AFCAT Exams
To effectively prepare for the topic of profit and loss in these exams, consider the following strategies:
1. Understand the Relationships Between MP, CP, SP, and Discounts
One of the most important strategies is to understand how marked price, cost price, and selling price are connected through discounts and mark-ups. Practice different types of questions where you
calculate selling price after applying one or more discounts, and how these affect profit margins.
2. Practice Successive Discounts Thoroughly
Successive discounts are a common area of confusion. Since successive discounts require applying the second discount to the reduced price rather than the marked price, it’s essential to understand
this process well. Solve as many problems as you can to become familiar with the concept.
3. Memorize Common Discount and Profit Percentages
Just as you would memorize common percentage-fraction conversions, memorizing common profit and discount percentages helps solve problems quickly. Being able to instantly recognize the effect of,
say, a 10% discount or a 25% profit on cost price will save time during the exam.
4. Work on Speed and Accuracy with Discounts
In exams like CDS and AFCAT, time management is crucial. Develop the ability to perform quick mental calculations when working with discounts and mark-ups. Practice with timed quizzes and past exam
papers to improve your speed and accuracy.
5. Understand Real-Life Applications
Many profit and loss problems are framed in real-world contexts. Practicing word problems related to shopping discounts, product mark-ups, and successive discounts will help you understand the
practical applications of these concepts. This, in turn, will make the exam problems more intuitive to solve.
6. Break Down Complex Problems Step by Step
Complex problems involving multiple steps, such as calculating profit after applying successive discounts, can seem daunting. Break them down into smaller steps, and solve each part one at a time.
Focus on the logical flow: first find the selling price after discounts, then compare it to the cost price to determine profit or loss.
7. Practice with Past Papers and Mock Tests
Solving past papers and mock tests is one of the best ways to familiarize yourself with the type of profit and loss questions that appear in CDS and AFCAT exams. It also helps you identify any weak
areas where you need to improve. Timed practice tests are especially useful for improving both speed and accuracy.
Profit and loss, especially with the added complexity of marked price, discounts, and successive reductions, is a critical topic for the mathematics section of the CDS and AFCAT exams. By
understanding the relationships between cost price, selling price, and discounts, and by mastering the calculation of successive discounts and overall profit or loss percentages, you can solve these
questions confidently.
The key to success lies in regular practice, understanding real-world applications, and working on time management. By following the strategies outlined here, you’ll be well on your way to
mastering profit and loss questions for these competitive exams. Good luck with your preparation!
Leave Your Comment
|
{"url":"https://ssbcrackexams.com/cds-afcat-1-2025-exam-maths-profit-loss-class-2/","timestamp":"2024-11-05T20:12:12Z","content_type":"text/html","content_length":"343688","record_id":"<urn:uuid:90e64c14-4192-496e-b8ec-09ef91223913>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00782.warc.gz"}
|
SIEVE ANALYSIS OF FINE AGGREGATES - All About Civil Engineering
Code of Practice Used : (IS: 2386(PART 1)-1963) AND (IS: 383-1970)
To determine the grain size distribution of fine aggregates and to find grading zone of fine aggregates.
Sieve analysis is the method of dividing a sample of aggregates into various fractions each consisting of particles of same size. The sieve analysis is carried out to determine the particle size
distribution in a sample of aggregate, which we call gradation. The aggregate fraction from 4.75 mm down to 75 micron is referred to as fine aggregate. Fine aggregate is the sand used in mortars, coarse
aggregates is the broken stone or gravel, and all in one aggregate is the combination of fine and coarse aggregates. The coarse aggregates unless mixed with fine aggregates do not produce good
quality of concrete.
Grading pattern of a sample is found out by sieving a sample successively through the entire sieve set mounted one over the other in order of size, with largest sieve on the top. The material
retained on each sieve after shaking, represents the fraction of aggregates coarser than the below sieve. Sieving can be done either manually or mechanically. Fineness modulus is just a numerical
index value of fineness giving some idea of the mean size of particles in the entire body of aggregates. Determination of fineness modulus may be considered as a method of standardization of the
grading of aggregates. It is calculated by sieving a known mass of given aggregates on a set of standard sieves and by adding the cumulative percentages of mass of material retained on all the sieves
and divide the total percentage by 100.
Type of aggregate    Max. size of aggregate (mm)    Fineness modulus
                                                    Minimum    Maximum
Fine aggregate       4.75                           2.00       3.50
Coarse aggregate     20                             6.00       6.90
                     40                             6.90       7.50
                     75                             7.50       8.00
All-in aggregate     20                             4.70       5.10
                     25                             5.00       5.50
                     30                             5.20       5.70
                     40                             5.40       5.90
                     75                             5.80       6.30
Limits of Fineness modulus for different aggregates
1. Fine aggregates
2. IS Sieve set from 4.75 mm to 75 micron.
3. Mechanical sieve shaker
4. Weighing balance
5. Scoop
1. Take 1000 Gms of fine aggregates sample.
2. Arrange the sieve set from top to bottom as follows 4.75 mm, 2.36 mm, 1.18 mm, 600 micron, 300 micron, 150 micron, 75 micron and Pan.
3. After weighing the sample, transfer the sample to 4.75 mm sieve and cover it with lid.
4. Place the assembly of sieve set on mechanical sieve shaker and sieve it for five to ten minutes.
5. Remove the assembly from mechanical sieve shaker and weigh the sample retained on each sieve, simultaneously note down in observation sheet.
Total Weight = 1000 grams
IS Sieve Size Weight of Fine aggregate Percent Retained Cumulative Percent Retained Percent passing
4.75 mm
2.36 mm
1.18 mm
600 micron
300 micron
150 micron
75 micron
∑ F
Fineness modulus of fine aggregates = ∑ F/100
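The computation the observation sheet asks for can be sketched in Python (the retained masses below are made-up illustrative numbers, not results from an actual test):

```python
def fineness_modulus(retained_g, total_g=1000):
    """Sum the cumulative percentage of mass retained on each sieve
    (coarsest first) and divide by 100, per the procedure above."""
    cumulative, cum_pcts = 0.0, []
    for mass in retained_g:
        cumulative += mass / total_g * 100
        cum_pcts.append(cumulative)
    return sum(cum_pcts) / 100

# Hypothetical 1000 g sample, retained on the 4.75 mm ... 75 micron sieves:
fm = fineness_modulus([0, 50, 150, 250, 300, 200, 50])
```

This gives FM = 3.40, inside the 2.00-3.50 band for fine aggregate in the table above.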
1. What are fine aggregates, coarse aggregates and all in aggregates?
2. Define fineness modulus and state its objective.
3. What is the significance of sieve analysis?
4. Write the size of fine and coarse aggregates.
5. Which is the common sieve which appears for both types of aggregates?
6. What precautions are necessary while performing sieve analysis?
|
{"url":"https://allaboutcivilengg.com/sieve-analysis-of-fine-aggregates-2/","timestamp":"2024-11-04T08:53:48Z","content_type":"text/html","content_length":"128946","record_id":"<urn:uuid:6e9c6656-01b4-421c-9d37-2d7b9783468a>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00607.warc.gz"}
|
CAGR, Compound Annual Growth Rate, formula and calculator
CAGR is a useful measure of investment growth over several periods of time, especially if the value of your investment fluctuated greatly during the period under review.
Calculator CAGR
To calculate the CAGR, enter the initial value, the final value, and the number of periods over which the investment has grown.
CAGR calculation Formula
CAGR, or total annual growth rate, is the average rate at which investments grow over time, assuming that they are reinvested annually (periodically), i.e. given a compound interest rate. CAGR has
nothing to do with the value of investments in the intervening years, as it depends only on the value in the first year and the last year of investment ownership.
CAGR = (EV / BV) ^ (1/n) - 1
• BV – beginning (initial) value
• EV – ending (final) value
• n – number of periods
If your investment has grown from 100,000 $ to 250,000 $ over the past five years, the total annual growth rate of your investment was 20.11% per year. The CAGR calculator can also be used to
determine the growth rate that you will need in the future to achieve the investment goals set today. For example, if you have $ 1,000 today, and in five years you want your investment to be $ 2,500,
you will need to find ways to invest that are expected to yield 20.11% per year.
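The formula translates directly into code (a minimal sketch):

```python
def cagr(begin_value, end_value, periods):
    """Compound annual growth rate as a decimal: (EV / BV) ** (1 / n) - 1."""
    return (end_value / begin_value) ** (1 / periods) - 1
```

Here cagr(100000, 250000, 5) returns about 0.2011, i.e. the 20.11% per year from the example, and an investment that ends where it started, such as cagr(1000, 1000, 2), returns 0.0.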
Where the CAGR calculator is used
The average annual growth rate is applied in different areas of personal finance. It is often used to calculate the average growth of individual investments over a period. CAGR can be used when comparing the return on equity with bonds or deposits. In addition, it can be used to compare the performance of two companies and predict their future growth based on their historical data.
CAGR Restriction
CAGR does not account for volatility. It only calculates the average return percentage, so CAGR values should never be considered as the only tool to estimate return on investment.
Why CAGR is so important
Although average annual return is generally accepted for mutual funds, CAGR is still the best measure of return on investment over time.
For example, suppose we made a hypothetical investment of $1,000 in some fund. Two years passed. At the end of the first year, the value of the portfolio fell from $1,000 to $750, a return of -25% [(750 - 1000) / 1000]. Then, by the end of the second year, the value of the portfolio increased by +33% [(1000 - 750) / 750].
Averaging the returns of the two years gives an average yield of 4% [(-25 + 33) / 2], but that doesn't reflect what really happened: we started with $1,000 and finished with $1,000, so our return is 0%.
To restate: in this example the average annual yield is 4%, while the CAGR is 0%, which is certainly correct.
|
{"url":"https://a2-finance.com/en/calculators/all/cagr-compound-annual-growth-rate","timestamp":"2024-11-06T20:34:29Z","content_type":"text/html","content_length":"37441","record_id":"<urn:uuid:32df49ca-ce02-4d23-b3d2-42823509e8c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00882.warc.gz"}
|
Denoising in Early Warning of Rainfall-Induced Landslides Based on Elastic Wave Signal
The accuracy of the elastic wave signal is a key factor in elastic-wave-based landslide warning. In the early warning of rainfall-induced landslides, in which bending-element-type piezoelectric sensors are used, there is too much noise. At present, the mainstream method is the superposition method, which superposes multiple tested waveform data to obtain a clear waveform. However, the superposition method is limited by the number of elastic wave signals available in the actual warning process, and the denoised waveform still contains high-frequency noise. A combination method, combining superposition and wavelet thresholding, is proposed in this paper to improve the accuracy of the elastic waveform signal. Denoising simulation tests based on the elastic waveform signals collected by the bending-element-type piezoelectric sensor were designed to verify the combination method. The results of the tests show that the combination method can effectively remove high-frequency noise and display clear waveforms, which gives it significant advantages in the process of rainfall-induced landslide warning using elastic waves.
|
{"url":"https://www.researchsquare.com/article/rs-1318476/v1","timestamp":"2024-11-11T01:40:40Z","content_type":"text/html","content_length":"142968","record_id":"<urn:uuid:7ecfac1f-819e-4ba4-b88b-8235f42a687a>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00874.warc.gz"}
|
Copenhagen Interpretation
The Copenhagen interpretation is an interpretation of quantum mechanics formulated by Niels Bohr and Werner Heisenberg while collaborating in Copenhagen around 1927. Bohr and Heisenberg extended the
probabilistic interpretation of the wave function, proposed by Max Born. Their interpretation attempts to answer some perplexing questions which arise as a result of the quantum mechanics, such as
wave-particle duality and the measurement problem.
The meaning of the wave function
There is no quantum world. There is only an abstract physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.
-- Aage Petersen paraphrasing Niels Bohr, Quantum Reality by Nick Herbert
There is no definitive statement of the Copenhagen Interpretation^[1], since it consists of the views developed by a number of scientists and philosophers at the turn of the 20th century. The following theses have been associated with the Copenhagen interpretation:
1. A system is completely described by a wave function ${\displaystyle \psi}$, which represents an observer's knowledge of the system. (Heisenberg)
2. The description of nature is essentially probabilistic. The probability of an event is related to the square of the amplitude of the wave function. (Max Born)
3. Heisenberg's uncertainty principle ensures that it is not possible to know the values of all of the properties of the system at the same time; those properties that are not known with precision
must be described by probabilities.
4. (Complementary Principle) Matter exhibits a wave-particle duality. An experiment can show the particle-like properties of matter, or wave-like properties, but not both at the same time. (Niels Bohr)
5. Measuring devices are essentially classical devices, and measure classical properties such as position and momentum.
6. The Correspondence Principle of Bohr and Heisenberg. The quantum mechanical description of large systems should closely approximate to the classical description.
The Copenhagen Interpretation denies that the wave function is real; it is a mathematical tool for calculating probabilities of specific experiments. The concept of collapse of a "real" wave function was introduced by John von Neumann and was not part of the original formulation of the Copenhagen Interpretation. There are some who say that there are variants of the Copenhagen Interpretation that allow for a "real" wave function^[2], but it is questionable whether that view is really consistent with positivism and some of Bohr's statements.
Niels Bohr emphasized that Science is concerned with the predictions of experiments, additional questions are not scientific but rather meta-physical. Bohr was heavily influenced by Positivism.
Acceptance among physicists
According to a poll at a Quantum Mechanics workshop in 1997, the Copenhagen interpretation is the most widely-accepted specific interpretation of quantum mechanics, followed by the Many-worlds
interpretation.[1] Although current trends show substantial competition from alternative interpretations, throughout much of the twentieth century the Copenhagen interpretation had strong acceptance
among physicists.
The nature of the Copenhagen Interpretation is exposed by considering a number of experiments and paradoxes.
1. Schrödinger's Cat - A cat is put in a box with a radioactive source and a radiation detector. There is a 50-50 chance that a particle will be emitted and detected by the detector. If a particle is
detected, a poisonous gas will be released and the cat killed. The wave function is in a 50-50 mixture of alive cat and dead cat. How can the cat be both alive and dead?
The Copenhagen Interpretation: The wave function reflects our knowledge of the system. The wave function ${\displaystyle (|dead\rangle +|alive\rangle )/{\sqrt {2}}}$ simply means that there is a
50-50 chance that the cat is alive or dead.
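The 50-50 figure is just thesis 2 (the Born rule) applied to the state above: each branch of $(|dead\rangle +|alive\rangle )/{\sqrt {2}}$ has amplitude $1/\sqrt{2}$, and the probability is the squared magnitude of the amplitude. A minimal numeric check in Python:

```python
import math

# Amplitudes of the two branches of (|dead> + |alive>)/sqrt(2).
amp_dead = 1 / math.sqrt(2)
amp_alive = 1 / math.sqrt(2)

# Born rule: probability = |amplitude|^2.
p_dead = abs(amp_dead) ** 2
p_alive = abs(amp_alive) ** 2

print(p_dead, p_alive)    # 0.5 each, up to floating-point rounding
print(p_dead + p_alive)   # the probabilities sum to 1
```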
2. Wigner's Friend - Wigner puts his friend in with the cat. The external observer believes the system is in the state ${\displaystyle (|dead\rangle +|alive\rangle )/{\sqrt {2}}}$. His friend, however, is convinced that the cat is alive, i.e. for him the cat is in the state ${\displaystyle |alive\rangle }$. How can Wigner and his friend see different wave functions?
The Copenhagen Interpretation: Wigner's friend highlights the subjective nature of probability. Each observer (Wigner and his friend) has different information and therefore a different wave function.
The distinction between the "objective" nature of reality and the subjective nature of probability has led to a great deal of controversy. Cf. the Bayesian versus frequentist interpretations of probability.
3. Double Slit Diffraction - Light passes through double slits and onto a screen resulting in a diffraction pattern. Is light a particle or a wave?
The Copenhagen Interpretation: Light is neither. A particular experiment can demonstrate particle (photon) or wave properties, but not both at the same time (Bohr's Complementary Principle).
The same experiment can in theory be performed with electrons, protons, atoms, molecules, viruses, bacteria, cats, humans, elephants and planets. In practice it has been performed for light,
electrons, buckminsterfullerene, and some atoms. Matter in general exhibits both particle and wave behaviors.
4. EPR paradox. Entangled "particles" are emitted by a common source. Conservation laws ensure that the measured spin of one particle is the opposite of the measured spin of the other. The spin of
one particle is measured. The spin of the other particle is now instantaneously known. (If the waveform is real, then one observer has caused the waveform to collapse instantaneously.)
The Copenhagen Interpretation: Assuming wave functions are not real, wave function collapse is interpreted subjectively. The moment one observer measures the spin of one particle, he knows the spin
of the other. However another observer cannot benefit until the results of that measurement have been relayed to him, at less than or equal to the speed of light.
Copenhagenists claim that interpretations of quantum mechanics where the wave function is regarded as real have problems with EPR-type effects, since they imply that the laws of physics allow for influences to propagate at speeds greater than the speed of light. However, proponents of Many worlds ^[3] and the Transactional interpretation ^[4] ^[5] dispute that their theories are fatally flawed in this way.
The completeness of quantum mechanics (thesis 1) was attacked by the Einstein-Podolsky-Rosen thought experiment which was intended to show that quantum physics could not be a complete theory.
Experimental tests of Bell's inequality using entangled particles have supported the predictions of quantum mechanics.
The Copenhagen Interpretation gives special status to measurement processes without cleanly defining them or explaining their peculiar effects. In his article entitled "Criticism and Counterproposals to the Copenhagen Interpretation of Quantum Theory," countering the view of Alexandrov that (in Heisenberg's paraphrase) "the wave function in configuration space characterizes the objective state of the electron," Heisenberg says:
Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather,
only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the
transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory.
-- Heisenberg, Physics and Philosophy, p. 137
Many physicists and philosophers have objected to the Copenhagen interpretation, both on the grounds that it is non-deterministic and that it includes an undefined measurement process that converts
probability functions into non-probabilistic measurements. Einstein's comments "I, at any rate, am convinced that He (God) does not throw dice."^[6] and "Do you really think the moon isn't there if
you aren't looking at it?" exemplify this. Bohr, in response, said "Einstein, don't tell God what to do". Erwin Schrödinger devised the Schrödinger's cat experiment.
Steven Weinberg in "Einstein's Mistakes", Physics Today, November 2005, page 31, said:
All this familiar story is true, but it leaves out an irony. Bohr's version of quantum mechanics was deeply flawed, but not for the reason Einstein thought. The Copenhagen interpretation
describes what happens when an observer makes a measurement, but the observer and the act of measurement are themselves treated classically. This is surely wrong: Physicists and their apparatus
must be governed by the same quantum mechanical rules that govern everything else in the universe. But these rules are expressed in terms of a wave function (or, more precisely, a state vector)
that evolves in a perfectly deterministic way. So where do the probabilistic rules of the Copenhagen interpretation come from?
Considerable progress has been made in recent years toward the resolution of the problem, which I cannot go into here. It is enough to say that neither Bohr nor Einstein had focused on the real
problem with quantum mechanics. The Copenhagen rules clearly work, so they have to be accepted. But this leaves the task of explaining them by applying the deterministic equation for the
evolution of the wave function, the Schrödinger equation, to observers and their apparatus.
The Ensemble Interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right". Consciousness causes collapse is often confused with the Copenhagen interpretation.
If the wave function is regarded as ontologically real, and collapse is entirely rejected, a many worlds theory results. If wave function collapse is regarded as ontologically real as well, an
objective collapse theory is obtained. Dropping the principle that the wave function is a complete description results in a hidden variable theory.
Many physicists have subscribed to the null interpretation of quantum mechanics summarized by Paul Dirac's famous dictum "Shut up and calculate!" (often attributed to Richard Feynman).^[7]
A list of alternatives can be found at Interpretation of quantum mechanics.
1. ↑ 'In fact Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics, and none of them ever used the term "the Copenhagen interpretation" as a joint name for their ideas. In fact, Bohr once distanced himself from what he considered to be Heisenberg's more subjective interpretation.' Stanford Encyclopedia of Philosophy
2. ↑ 'While participating in a colloquium at Cambridge, von Weizsäcker (1971) denied that the CI asserted: "What cannot be observed does not exist". He suggested instead that the CI follows the principle: "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."' John Cramer on the Copenhagen Interpretation
See also
• Afshar experiment
• Bohr-Einstein debates
• Consistent Histories
• Ensemble Interpretation
• Interpretation of quantum mechanics
• Philosophical interpretation of classical physics
Further reading
• G. Weihs et al., Phys. Rev. Lett. 81 (1998) 5039
• M. Rowe et al., Nature 409 (2001) 791.
• J.A. Wheeler & W.H. Zurek (eds) , Quantum Theory and Measurement, Princeton University Press 1983
• A. Petersen, Quantum Physics and the Philosophical Tradition, MIT Press 1968
• H. Margeneau, The Nature of Physical Reality, McGraw-Hill 1950
|
{"url":"https://psychology.fandom.com/wiki/Copenhagen_Interpretation","timestamp":"2024-11-04T11:02:52Z","content_type":"text/html","content_length":"194106","record_id":"<urn:uuid:e5b30516-de62-47dd-ac68-36c84bf99bde>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00250.warc.gz"}
|
Question #dd886 | Socratic
Question #dd886
1 Answer
In order to solve this problem, you need to use the ideal gas law, $PV = nRT$, solving for n. Once you know n, you can convert the moles of $\text{CH}_4$ to grams by multiplying n by its molar mass. Temperature must be in kelvins: K = °C + 273.15. Since R = 0.0821 L atm/(mol K), the given pressure needs to be converted from torr to atm.
$V = 2.00\ \text{L}$
$T = 50.0\ ^\circ\text{C} + 273.15 = 323.15\ \text{K}$
$P = 697\ \text{torr} = 0.91711\ \text{atm}$ (http://www.theunitconverter.com)
$R = 0.0821\ \text{L atm/(mol K)}$
molar mass of $\text{CH}_4$ = 16.04 g/mol (http://en.wikipedia.org/wiki/Methane)
number of moles, n
mass in grams of methane, $\text{CH}_4$
Equation:
$PV = nRT$
Divide both sides of the equation by RT to isolate n. Solve for n:
$n = \dfrac{PV}{RT} = \dfrac{(0.91711\ \text{atm})(2.00\ \text{L})}{(0.0821\ \text{L atm/(mol K)})(323.15\ \text{K})} = 0.0691\ \text{mol CH}_4$
Multiply mol $\text{CH}_4$ by its molar mass:
$0.0691\ \text{mol CH}_4 \times \dfrac{16.04\ \text{g CH}_4}{1\ \text{mol CH}_4} = 1.11\ \text{g CH}_4$
Answer :
The mass of methane gas that occupies 2.00 L at 50.0 °C and 697 torr is 1.11 g.
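The same arithmetic can be checked with a few lines of Python:

```python
# Ideal gas law: n = PV / (RT), then mass = n * molar mass.
P = 697 / 760           # torr -> atm (760 torr = 1 atm), about 0.91711 atm
V = 2.00                # L
T = 50.0 + 273.15       # degrees C -> K
R = 0.0821              # L atm / (mol K)
molar_mass_ch4 = 16.04  # g/mol

n = P * V / (R * T)          # moles of CH4, about 0.0691
mass = n * molar_mass_ch4    # grams of CH4, about 1.11

print(round(n, 4), round(mass, 2))
```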
Impact of this question
4290 views around the world
|
{"url":"https://socratic.org/questions/5483cd06581e2a13f19dd886","timestamp":"2024-11-14T03:45:48Z","content_type":"text/html","content_length":"35610","record_id":"<urn:uuid:93985180-17d6-432d-b907-9aeeb41a0d5d>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00360.warc.gz"}
|
What are you made up of? Cells make up all living things, including your own body. This picture shows a typical group of cells. But not all cells look alike. Cells can differ in shape and size, and the different shapes usually mean different functions.
Cells are basic units of structure and function for all living organisms and the structural order in cells forms the basis for properties of life including interaction with environment, movement,
energy processing, growth, reproduction and evolution.
Some organisms, such as amoebas and most bacteria, are single cells, while plants and animals are multicellular. Humans have an estimated 100 trillion cells, and each cell can carry out specialized functions with its own set of instructions, stored within the cell, for carrying out various activities.
There are two types of cells: eukaryotic and prokaryotic. Prokaryotic cells are usually independent, while eukaryotic cells are usually found in multicellular organisms. Prokaryotes are distinguished
from eukaryotes on the basis of nuclear organization, specifically their lack of a nuclear membrane. Nucleus which houses the cell’s chromosomes and is the place where almost all DNA replication and
RNA synthesis occurs, gives the eukaryote its name, which means "true nucleus".
Prokaryotes also lack most of the intracellular organelles and structures such as mitochondria, chloroplasts and the Golgi apparatus that are characteristic of eukaryotic cells. All cells have a
membrane that envelops the cell, separates its interior from its environment, regulates what moves in and out (selectively permeable) and maintains the electric potential of the cell. All cells
possess DNA, the hereditary material of genes; RNA containing the information necessary to build various proteins; and enzymes the cell’s primary machinery. Cells also have a set of "little organs",
called organelles, that are adapted and/or specialized for carrying out one or more vital functions. Mitochondria are self–replicating organelles which play a critical role in generating energy in
the cell by the process of respiration.
Cell metabolism is the process by which individual cells process nutrient molecules. Metabolism has two distinct divisions: catabolism, in which the cell breaks down complex molecules to produce
energy and anabolism in which the cell uses energy to construct complex molecules and perform other biological functions. Cells are capable of synthesizing new proteins, which are essential for the
modulation and maintenance of cellular activities. This process involves the formation of new protein molecules from amino acid building blocks based on information encoded in DNA/RNA.
|
{"url":"https://www.simply.science:443/index.php/biology/cell-biology","timestamp":"2024-11-03T01:30:47Z","content_type":"text/html","content_length":"46976","record_id":"<urn:uuid:8ec2e443-f339-4e9a-966a-9e80cea61489>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00806.warc.gz"}
|
Velocity Calculator – Calculate Speed Instantly Online
Use this velocity calculator to quickly determine the speed of an object in a specific direction.
Velocity Calculator
How to Use the Calculator
Enter the distance covered in meters and the time taken in seconds in their respective fields, then click the "Calculate" button. The calculator will display the velocity in meters per second (m/s).
How It Calculates Results
The calculator uses the formula velocity = distance / time to determine the velocity of an object in motion. It divides the input distance value by the input time value and outputs the result in
meters per second.
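A minimal sketch of the same calculation, with a guard for zero time (the function name is illustrative):

```python
def velocity(distance_m, time_s):
    """Average velocity in m/s, assuming constant speed over the interval."""
    if time_s <= 0:
        raise ValueError("time must be positive")
    return distance_m / time_s

print(velocity(100.0, 20.0))  # 5.0 m/s
```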
This velocity calculator assumes a constant speed over the given distance and time. It does not account for variables such as acceleration or deceleration. Additionally, it is not suited for
computing velocities at which relativistic effects become significant, as these require Einstein’s theory of relativity to calculate accurately.
|
{"url":"https://madecalculators.com/velocity-calculator/","timestamp":"2024-11-08T15:37:34Z","content_type":"text/html","content_length":"141902","record_id":"<urn:uuid:ea0e1ac6-6591-472b-9bfc-622cb6717b91>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00281.warc.gz"}
|
Max Inf/NaN
Max of +inf.0 and +nan.0
R6RS requires that (max +inf.0 x) for any real x return +inf.0; it is silent about (max x +inf.0), but I'd think that was entailed. I tested (max +inf.0 +nan.0) against the test suite.
Racket, Gauche, MIT, Chicken (with and without the numbers egg), Scheme48, Guile, Ypsilon, Mosh, IronScheme, STklos, Elk, VX return +inf.0.
Gambit, Bigloo, Kawa, SISC, Chibi, Chez, Vicare, Larceny, NexJ, UMB, Spark, FemtoLisp, Sagittarius return +nan.0.
My other Schemes throw errors, either because they don't like inexact numbers, they don't like division by 0.0, or they produce cockeyed values of (/ 1.0 0.0) and/or (/ 0.0 0.0).
Note that the six R6RS implementations are split 3-3 (or 4-3 if Guile, which does not fully implement R6RS, is included), and that all the Java ones prefer +nan.0, as Java does.
Max of +nan.0 and 0
IEEE says that (max +nan.0 0.0) should return 0.0, and R7RS says (max +nan.0 0) has to return an inexact number, but is silent about which number. The latter is tested here.
Racket, Gauche, Gambit, Chicken with the numbers egg, Scheme48, Guile, Chez, Vicare, Ypsilon, Mosh, IronScheme, NexJ, STklos, RScheme, BDC, Elk, Sagittarius return +nan.0.
Bigloo, scsh, Kawa, SISC, Larceny, Scheme 9, UMB return 0.0.
Chibi, FemtoLisp return 0.
Plain Chicken, KSi, S7, XLisp, Rep, Schemik, Oaklisp, SXM, Sizzle, Dfsch, Inlab, Owl Lisp raise a division by zero error.
SigScheme, Dream do not support inexact numbers.
Shoe, TinyScheme, Llava do not support max.
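For comparison outside Scheme: Python's built-in max scans pairwise and keeps the current candidate unless a later element compares greater; since every comparison with NaN is false, the result depends on argument order, mirroring the split seen above.

```python
import math

inf, nan = float("inf"), float("nan")

# Every comparison with NaN is false, so max() keeps its first argument
# when the second is NaN, and keeps NaN when NaN comes first.
print(max(inf, nan))              # inf  (nan > inf is false, so inf survives)
print(math.isnan(max(nan, inf)))  # True (inf > nan is false, so nan survives)
print(max(0.0, nan))              # 0.0
print(math.isnan(max(nan, 0.0)))  # True
```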
|
{"url":"https://docs.scheme.org/surveys/max-inf-nan/","timestamp":"2024-11-10T22:10:20Z","content_type":"text/html","content_length":"3433","record_id":"<urn:uuid:e5d6021a-73d1-42d4-ab63-79acba31941a>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00296.warc.gz"}
|
A fair coin with 1 marked on one face and 6 on the other and a fair die are both tossed. find the probability that the sum of numbers that turn up is 3.
- Wired Faculty
A fair coin with 1 marked on one face and 6 on the other and a fair die are both tossed. find the probability that the sum of numbers that turn up is 3.
The coin has 1 marked on one face and 6 on the other; let H = 1 and T = 6. The coin and die are tossed together.
Event A: a total of 3 is obtained. A sum of 3 is possible only when the coin shows 1 and the die shows 2, so P(A) = (1/2) × (1/6) = 1/12.
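The probability can be confirmed by enumerating the 12 equally likely (coin, die) outcomes:

```python
from fractions import Fraction

coin_faces = [1, 6]
die_faces = range(1, 7)

outcomes = [(c, d) for c in coin_faces for d in die_faces]
favorable = [(c, d) for c, d in outcomes if c + d == 3]

# Only (coin=1, die=2) sums to 3, so P = 1/12.
p = Fraction(len(favorable), len(outcomes))
print(favorable, p)  # [(1, 2)] 1/12
```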
|
{"url":"https://www.wiredfaculty.com/question/UTBKVFJVVk9UVUV4TVRBeE5EWTRNQT09","timestamp":"2024-11-07T07:34:36Z","content_type":"text/html","content_length":"426561","record_id":"<urn:uuid:0b1d0002-eb61-4224-b30e-9558b9331bf2>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00466.warc.gz"}
|
TR14-102 | 4th August 2014 20:27
Non-Malleable Codes Against Constant Split-State Tampering
Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs \cite{DPW10} as an elegant generalization of the classical notions of error detection, where the corruption of a codeword is
viewed as a tampering function acting on it. Informally, a non-malleable code with respect to a family of tampering functions $\mathcal{F}$ consists of a randomized encoding function $\mathsf{Enc}$ and a deterministic decoding function $\mathsf{Dec}$ such that for any $m$, $\mathsf{Dec}(\mathsf{Enc}(m))=m$. Further, for any tampering function $f \in \mathcal{F}$ and any message $m$, $\mathsf{Dec}(f(\mathsf{Enc}(m)))$ is either $m$ or is $\epsilon$-close to a distribution $D_f$ independent of $m$, where $\epsilon$ is called the error.
Of particular importance are non-malleable codes in the $C$-split-state model. In this model, the codeword is partitioned into $C$ equal sized blocks and the tampering function family consists of
functions $(f_1,\ldots,f_C)$ such that $f_i$ acts on the $i^{th}$ block. For $C=1$ there cannot exist non-malleable codes. For $C=2$, the best known explicit construction is by Aggarwal, Dodis and
Lovett \cite{ADL13} who achieve rate $= \Omega(n^{-6/7})$ and error $=2^{-\Omega(n^{-1/7})}$, where $n$ is the block length of the code.
In our main result, we construct efficient non-malleable codes in the $C$-split-state model for $C=10$ that achieve constant rate and error $=2^{-\Omega(n)}$. These are the first explicit codes of
constant rate in the $C$-split-state model for any $C=o(n)$, that do not rely on any unproven assumptions. We also improve the error in the explicit non-malleable codes constructed in the bit
tampering model by Cheraghchi and Guruswami \cite{CG14b}.
Our constructions use an elegant connection found between seedless non-malleable extractors and non-malleable codes by Cheraghchi and Guruswami \cite{CG14b}. We explicitly construct such seedless
non-malleable extractors for $10$ independent sources and deduce our results on non-malleable codes based on this connection. Our constructions of extractors use encodings and a new variant of the
sum-product theorem.
|
{"url":"https://eccc.weizmann.ac.il/report/2014/102/","timestamp":"2024-11-12T01:09:20Z","content_type":"application/xhtml+xml","content_length":"22362","record_id":"<urn:uuid:e5a5d928-8704-441e-9910-49ab1b361c64>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00594.warc.gz"}
|
Logic number puzzle: Factors (inshi-no-heya).
│Download Factors desktop application (Windows 64bit) │Download Factors source code (Lazarus/Free Pascal)│
Description: Puzzle game with numbers to be placed onto a grid. The rules of Factors are as follows:
1. In each field of the NxN grid, write numbers from 1 to N.
2. In each row and each column (and if selected, in each diagonal), each number must be written exactly once.
3. The product of the numbers in each area must equal the number given for this area.
The game is a typical logic game: in order to find the correct solution of the puzzle, you have to explore the product values and, by logical thinking (concluding from what you know that a given field must be a given number, or on the contrary can't be a given number), determine which numbers have to be written to which fields. Factors is essentially the same game as Sumdoku, except that with Sumdoku, the sums (and not the products) of the areas are displayed.
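The three rules can be checked mechanically. A minimal solution checker in Python (the grid and area representations here are my own assumptions, not taken from the game's source; the optional diagonal rule is omitted):

```python
from math import prod

def check_factors(grid, areas):
    """grid: NxN list of rows containing the numbers 1..N.
    areas: list of (cells, target) pairs, where cells is a list of
    (row, col) positions and target is the required product."""
    n = len(grid)
    want = set(range(1, n + 1))
    # Rule 2: each row and each column contains 1..N exactly once.
    for i in range(n):
        if set(grid[i]) != want or {grid[r][i] for r in range(n)} != want:
            return False
    # Rule 3: the product of the numbers in each area equals its target.
    return all(prod(grid[r][c] for r, c in cells) == target
               for cells, target in areas)

# A 2x2 example: one area covering the top row (product 2),
# plus two single-cell areas on the bottom row.
grid = [[1, 2],
        [2, 1]]
areas = [([(0, 0), (0, 1)], 2), ([(1, 0)], 2), ([(1, 1)], 1)]
print(check_factors(grid, areas))  # True
```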
The game is played exclusively with the mouse: Click the field, where you want to write a number, then click one of the number buttons to write the corresponding number or Clear to remove the number
from the actually selected field.
To do: The unique numbers on diagonals rule is not implemented in the actual version of the puzzle.
Free Pascal features: Changing the color of shapes and the caption of static-texts and labels during runtime. Creating objects during runtime. Two-dimensional arrays and arrays indexed by array
elements (classic Pascal).
If you like this application, please, support me and this website by signing my guestbook.
|
{"url":"https://www.streetinfo.lu/computing/lazarus/doc/Factors.html","timestamp":"2024-11-06T20:16:23Z","content_type":"text/html","content_length":"9610","record_id":"<urn:uuid:a0f209dd-a471-4afd-8fb2-d26af9affd31>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00354.warc.gz"}
|
The calculation of the chemical equilibrium of a reactive system is in many cases used as a first approach to obtain information about the expected chemical composition and temperature of a real
application. Although such a calculation does not provide information about the time scale to reach this state, the estimation is often used for preliminary work on the design of reactive systems.
In order to use the calculator, first select a species set that is to be considered, then provide information on the thermodynamic state (pressure and temperature) and the initial composition (fuel, oxidizer and "air factor", which is the inverse of the equivalence ratio). Select the calculation mode (which quantities to fix at a constant value) and then press the "Calculate" button. The oxidizer is always a mixture of O[2] and N[2] (i.e. air), but its mole fraction of O[2] can vary.
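As a sketch of what the "air factor" means, assume methane as the fuel (CH4 + 2 O2 → CO2 + 2 H2O, i.e. 2 mol O2 per mol fuel; this stoichiometry is my illustrative assumption, not taken from the calculator): λ is the supplied O2 divided by the stoichiometric O2 requirement, the inverse of the equivalence ratio φ.

```python
def air_factor(o2_supplied, fuel_moles, o2_stoich_per_fuel=2.0):
    """Air factor (lambda) = actual O2 / stoichiometric O2.
    The default of 2 mol O2 per mol fuel assumes methane:
    CH4 + 2 O2 -> CO2 + 2 H2O."""
    return o2_supplied / (o2_stoich_per_fuel * fuel_moles)

lam = air_factor(o2_supplied=2.4, fuel_moles=1.0)  # 20% excess O2
phi = 1 / lam                                      # equivalence ratio
print(lam, phi)  # 1.2 and about 0.833 (a lean mixture)
```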
│Quantity │Symbol │Unit│
│Pressure │p │bar │
│Temperature │T │K │
│Air factor │λ │- │
│O[2] mole fraction in oxidizer│X[O2,ox]│- │
│ │ │Fuel│Air│Mixture │react.Mix. │
│T │K │ │ │ │ │
│ℳ │g/mol │ │ │ │ │
│ρ │kg/m^3│ │ │ │ │
│λ │W/m/K │ │ │ │ │
│μ │Pa s │ │ │ │ │
│c[p]│J/kg/K│ │ │ │ │
T:P - constant temperature and pressure,
T:V - constant temperature and volume,
P:V - constant pressure and volume,
P:H - constant pressure and enthalpy,
V:U - constant volume and internal energy,
V:H - constant volume and enthalpy.
|
{"url":"https://vbt.ebi.kit.edu/english/412.php","timestamp":"2024-11-14T03:46:50Z","content_type":"text/html","content_length":"42595","record_id":"<urn:uuid:1ff06977-8373-4cb5-a58e-d083630972b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00879.warc.gz"}
|
Dinner Party Problem
How many people must you have at dinner to ensure that there are a subset of 3 people who all either mutual acquaintances, or mutual strangers?
We can model this using graph theory; for background see the Fun Fact Six Degrees of Separation. Let people be vertices, and draw a red edge between two people if they know each other, and a blue
edge if they do not. So every pair of people are connected by either a red or blue edge.
The question then becomes, what is the least number of vertices for which every complete red-blue graph on those vertices has either a red or blue triangle?
Answer: 6 people! First we show that 6 people are enough: take any vertex, Fred. Fred has 5 edges emanating from him; at least 3 are the same color. Suppose without loss of generality it is red. If
any of those 3 people that Fred knows know each other, then we have a red triangle! If none of those 3 know each other, we have a blue triangle!
Now, using red-blue graphs, can you draw a red-blue complete graph on 5 vertices with no blue nor red triangle? (This shows that 5 people are not enough.) See the Figure for a solution (it appears
after several seconds).
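Both halves of the argument can be brute-forced. The sketch below checks that the pentagon/pentagram coloring of K5 has no monochromatic triangle, and that every one of the 2^15 red-blue colorings of K6 contains one:

```python
from itertools import combinations, product

def has_mono_triangle(n, color):
    """color maps each edge (i, j) with i < j to 0 (red) or 1 (blue)."""
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

# K5: red edges form the 5-cycle, blue edges its complement (the pentagram).
red5 = {tuple(sorted((i, (i + 1) % 5))) for i in range(5)}
color5 = {e: (0 if e in red5 else 1) for e in combinations(range(5), 2)}
print(has_mono_triangle(5, color5))  # False: 5 people are not enough

# K6: every one of the 2^15 colorings contains a monochromatic triangle.
edges6 = list(combinations(range(6), 2))
print(all(has_mono_triangle(6, dict(zip(edges6, bits)))
          for bits in product((0, 1), repeat=len(edges6))))  # True
```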
Presentation Suggestions:
Bring colored chalk into the class; draw pictures as you are giving the proof. It doesn’t matter if students don’t follow the exact argument in class— they will probably go home and want to think
about it anyways— but they will draw the pictures you draw and it will help them construct the argument later.
The Math Behind the Fact:
Generalizations of this problem, involving the least number of people you have to invite to ensure subgraphs of other types, fall under the category of something called Ramsey Theory.
How to Cite this Page:
Su, Francis E., et al. “Dinner Party Problem.” Math Fun Facts. <https://www.math.hmc.edu/funfacts>.
Fun Fact suggested by:
Francis Su
|
{"url":"https://math.hmc.edu/funfacts/dinner-party-problem/","timestamp":"2024-11-03T09:06:18Z","content_type":"text/html","content_length":"68428","record_id":"<urn:uuid:741d3db9-a485-48c1-ab2c-88d25ea8c88f>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00258.warc.gz"}
|
Convert recorded points to jump movement
The goal
To re-create Super Mario Maker 2’s player jump in Unity3D, using in-game position points collected from multiple recordings I made.
The peak of the player’s jump is 5.4 units high, and is reached on the 33rd frame when the game is running at 60fps.
Current Progress
I am able to re-create part of this by following Sebastian Lague’s YouTube tutorial series on how to create a 2D Platformer Controller, which gives the following formulas for solving for the values of gravity and jump velocity needed to reach the 5.4-unit peak at frame 33 at 60fps:
gravity = (2 * jumpHeight) / (timeToJumpPeak^2)
jumpVelocity = gravity * timeToJumpPeak
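These two formulas are easy to sanity-check numerically. A minimal sketch (assuming timeToJumpPeak = 33/60 s, i.e. the peak lands on frame 33 at 60fps; heights in Unity world units):

```python
# Solve for gravity and jump velocity from the two formulas above, then
# confirm the resulting constant-gravity arc peaks at the target height.
jump_height = 5.4            # units (the stated SMM2 peak)
time_to_peak = 33 / 60       # seconds; assumes the peak lands on frame 33 at 60fps

gravity = (2 * jump_height) / (time_to_peak ** 2)
jump_velocity = gravity * time_to_peak

def height(t):
    # Standard kinematics: h(t) = v0*t - g*t^2/2
    return jump_velocity * t - 0.5 * gravity * t * t

print(round(gravity, 3), round(jump_velocity, 3), round(height(time_to_peak), 3))
```

By construction, height(time_to_peak) comes out to exactly jump_height, so the formulas are self-consistent for any peak time you assume.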
I uploaded a copy of my current project folder here if needed. The project is setup to run at 60fps and has the player as a white square jump one frame after it detects the floor. This is for letting
us skip frame by frame in the editor to make sure the position matches the SMM2 position recorded. Raycasts are used for collision detection for custom physics.
My Findings
I do not believe the jump arc follows a common parabola. It is better matched by a 4th-order polynomial, but I do not know how to convert a polynomial to code. I simply used Excel to draw the jump arc from the recorded position coordinates of the jump and found that the 4th-order polynomial fit lined up with the positions really well:
I found a GDC video titled “Math for Game Programmers: Building a Better Jump” where they mention using Runge-Kutta (RK4) which looked promising, but I could not find how to relate that technique to
coding a jump. I do think the jump is made up of two parts though. When rising the player follows one path (parabola), and when falling the math for gravity pulling the player changes (follows a
different parabola).
The Y Positions
Pasted y positions here if needed:
Any guidance in the right direction would be helpful, as I have spent a few months trying to find a way to replicate the jump on my own with no luck so far. Thank you
While I can’t speak for Mario Maker's implementation of Mario Bros. physics, I came across a handy, very-tall image which provides some excellent detail on the forces behind Mario’s physics (jumping
included) in the original Super Mario Bros. game.
Namely, the values 0x4000, 0x0200, and 0x0700 (standing still). Converted to decimal, they’re 16384, 512, and 1792 respectively.
First, a bit of analysis: 16384 / 512 = 32 (the same ratio applies to a running jump, despite different values) – On the first frame, the velocity is 16384. After reducing that value for 32 frames,
you’re down to 0 on the dot. Therefore, frame 33 will have a vertical speed of 0, putting Mario at the peak of his jump.
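That frame count is easy to verify with a few lines (the values are the raw fixed-point numbers from the chart, not Unity units):

```python
# Start with the jump impulse and subtract the "jump held" gravity each
# frame; the vertical speed first reaches 0 on frame 33, i.e. the peak.
velocity_y = 16384
gravity_while_held = 512

frame = 1                    # frame 1 = the frame the jump starts
while velocity_y > 0:
    velocity_y -= gravity_while_held
    frame += 1

print(frame, velocity_y)     # 33 0
```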
With that information in mind, here’s a simple conceptual implementation of them:
// Just an example; implement something smarter than this, please
// (grounded is assumed to be set elsewhere by the raycast collision checks,
// and velocity is a Vector3 field)
void Update()
{
    if (grounded && Input.GetButtonDown("Jump"))
        velocity.y = 16384;                 // initial jump impulse
    else if (grounded)
        velocity.y = 0;                     // resting on the floor
    else if (Input.GetButton("Jump") && velocity.y >= 0)
        velocity.y -= 512;                  // light gravity while rising with jump held
    else // Not grounded, not holding jump + moving up
        velocity.y -= 1792;                 // full gravity
    transform.position += velocity;
}
At any rate, one of the key points to be made is that an analysis of this movement would find that, despite being simple to implement as an algorithm, it can’t as easily be broken down from a
mathematics standpoint because the “gravity” of the jump varies so greatly based on the current state of input.
|
{"url":"https://discussions.unity.com/t/convert-recorded-points-to-jump-movement/240808","timestamp":"2024-11-12T00:01:27Z","content_type":"text/html","content_length":"32905","record_id":"<urn:uuid:b591bc54-65da-4e34-b400-e430f32dab24>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00829.warc.gz"}
|
Heat transferred along two paths
JEE Advanced 2014 Paper 1, Question 20
A thermodynamic system is taken from an initial state
The problem gives us partial information about internal energies, heat transferred, and work done at various points in the PV diagram. We can complete the picture by using the first law of thermodynamics (see note), ΔQ = ΔU + ΔW, where ΔW is the work done by the system. We apply this first to the path
Similarly, along the path
Finally, along
Therefore, the ratio
|
{"url":"https://www.jeefirst.com/heat-transferred-along-two-paths/","timestamp":"2024-11-11T10:57:32Z","content_type":"text/html","content_length":"56454","record_id":"<urn:uuid:abde98dc-a121-4614-851c-9af81421a4be>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00703.warc.gz"}
|
Understanding Ceiling Projections for NFL DFS Tournaments | FantasyLabs
Ceiling Projections: Understanding the Math
A good set of projections is absolutely crucial for success at DFS. It would be a Sisyphean task to make a full slate of projections on your own, and you’d be unlikely to do better than ours. The
next step towards DFS success is understanding the difference between floor, median, and ceiling projections, all of which can be accessed through our NFL Player Models.
Especially in large-field GPPs, you need to have a roster with a high enough ceiling, should everything go right, that you can win the tournament. Even in cash, you’ll need a few high scores to
profit most weeks. It also doesn’t hurt if you can get your ceiling players to correlate with each other, which you can see by using our Correlation Dashboard.
These terms vary slightly across the industry, but we define Ceiling Projections as the top 15% of scores in the player’s range of outcomes. This means that if the slate was played 100
times, a given player would meet or surpass his ceiling 15 times (and score under his floor 15 times).
However, while every player should theoretically have an equal chance of hitting their ceiling, that can be misleading. Based on the distribution of a player’s scoring, this could mean a few
different things. Some players could fall right around their ceiling, give or take a few points, a large percentage of the time. Some players may only reach their ceiling 15% of the time, but some of
those games far exceed the projection listed in our models.
It all depends on the distribution of their fantasy scores. Let’s take a look at graphs for two wide receivers (a high-variance position) with similar 85th percentile ceilings over the past two
seasons as an example.
Tyreek Hill
Note: all data is using only full games played over the last two regular seasons (DraftKings scoring).
Over the last two seasons, Hill’s median (most likely) score has been 17.9 points, with a mean of 20.39 and an 85th percentile score of 29.5 DraftKings points:
But notice the outlier to the right (as well as the two scores in the 36 point range). Those scores, especially the 60-point effort, are the kinds of games we need to take down GPPs. Of course,
they’re few and far between, but I’d be happy only winning the Milly Maker every other season.
Thanks to these outliers, the average score within Hill’s 85th percentile is a monstrous 38.5 points. This is exactly the kind of totals we need at Hill’s usual salary range of around $8,000.
We need to think about GPP lineups in terms of conditional probability. If an event, in this case, a Tyreek Hill ceiling game, happens, what is the probability it helps us win a tournament? While all
players have the same odds of a ceiling game (15 percent), the odds of it helping us take down GPPs vary considerably.
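The distinction can be made concrete with a small simulation. In the sketch below (Python, with made-up score distributions, not real player data), two hypothetical players each "hit their ceiling" 15% of the time by construction, yet the average score within the ceiling range differs sharply:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical weekly fantasy-point samples (illustrative only).
steady = rng.normal(19, 7, size=2000).clip(min=0)      # clusters near its ceiling
boom_bust = np.concatenate([
    rng.normal(16, 6, size=1900).clip(min=0),
    rng.normal(45, 8, size=100),                       # rare spike games
])

results = {}
for name, scores in [("steady", steady), ("boom_bust", boom_bust)]:
    ceiling = np.percentile(scores, 85)                # "ceiling projection"
    ceiling_mean = scores[scores >= ceiling].mean()    # average within the top 15%
    results[name] = (ceiling, ceiling_mean)
    print(f"{name}: ceiling={ceiling:.1f}, mean within ceiling={ceiling_mean:.1f}")
```

The boom-bust profile's ceiling games average far more points than the steady profile's, even when the 85th-percentile numbers themselves look similar — which is the conditional-probability point above.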
Deandre Hopkins
Hopkins’ most frequent scores are a toss-up between the 27-29 point range and the 15-17 point range. He has a similar mean (18.75) and 85th percentile score (28.6) as Hill:
Here’s where things get interesting. Despite having a ceiling outcome within a point of Hill’s, the average within his ceiling range is only 30.52, more than eight points lower than Tyreek’s! With
generally similar price points, it’s entirely possible to get a ceiling game from Hopkins and have it still lower your chances of winning a large tournament.
At an $8,000 DraftKings salary, his average ceiling game would put you on a 190.75 point pace, rarely enough to win anything. Contrast that to Hill, who, at the same $8,000 salary, gets you on pace
for over 240 points.
(note: using “salary multiplier” isn’t perfect, getting 3x an expensive player’s salary is more valuable than 3x a minimum priced tight end. However, it still serves an illustrative purpose here.)
Again, both players have the same chances of getting to their ceiling at 15%. They even have similar ceilings, but what’s different is the probability, if they get there, that it will help you win a tournament.
How to Use this Information
It would obviously be impractical to do this kind of analysis on every single NFL player. However, it helps to have a general idea of the range of outcomes of the players on your roster. What I plan
on doing moving forward is taking a quick eyeball at the game logs (found by clicking a player’s name in our models) for anybody you plan on playing in tournaments. This isn’t perfect for younger
players, but for established veterans, it serves a purpose.
You’ll want to look for players with a game or two that far exceeds his current Ceiling Projection, or at least a few games in the neighborhood of 4x (points per $1,000 in salary) their current
price. Players like Hopkins and Hill have similar reputations (alpha wide receiver, deep-ball threat, high ceilings), but Hopkins has two 40-point games in our seven tracked seasons. Hill has five in
five seasons.
While all Ceiling Projections are created equal, what happens when a player gets there is not. Understanding that could be the difference between winning a tournament and finishing 100th.
|
{"url":"https://www.fantasylabs.com/articles/understanding-ceiling-projections-gpps/","timestamp":"2024-11-09T00:45:30Z","content_type":"text/html","content_length":"106815","record_id":"<urn:uuid:d6f2bb07-0d09-4331-8771-c604b0b4793f>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00378.warc.gz"}
|
Why all triangles are equilateral?
Every equilateral triangle is also an isosceles triangle, so any two sides that are equal have equal opposite angles. Therefore, since all three sides of an equilateral triangle are equal, all three
angles are equal, too. Hence, every equilateral triangle is also equiangular.
Are all triangles equilateral?
In geometry, an equilateral triangle is a triangle in which all three sides have the same length. In the familiar Euclidean geometry, an equilateral triangle is also equiangular; that is, all three
internal angles are also congruent to each other and are each 60°.
Equilateral triangle: internal angle 60°.
Are all equilateral triangles similar True or false?
Similarity. A property of equilateral triangles includes that all of their angles are equal to 60 degrees. Since every equilateral triangle’s angles are 60 degrees, every equilateral triangle is
similar to one another due to this AAA Postulate.
Are all triangles isosceles?
“Every triangle is isosceles” is the conclusion of a well-known fallacious proof. It is false, and the flaw in the construction is discussed in the perpendicular-bisector questions below.
What do all triangles equal to?
180 degrees
A piece of trivia that is true for all triangles: The sum of the three angles of any triangle is equal to 180 degrees.
How many kinds of triangles are there?
To learn about and construct the seven types of triangles that exist in the world: equilateral, right isosceles, obtuse isosceles, acute isosceles, right scalene, obtuse scalene, and acute scalene.
What are the 3 main types of triangles?
There are different names for the types of triangles. A triangle’s type depends on the length of its sides and the size of its angles (corners). There are three types of triangle based on the length
of the sides: equilateral, isosceles, and scalene.
Are any two equilateral triangles are similar?
For two triangles to be similar the angles in one triangle must have the same values as the angles in the other triangle. For the equilateral triangles since they always have 3 angles that are each
60° , any equilateral triangles will be similar.
How do we know if two polygons are similar?
Two polygons are similar if their corresponding angles are congruent and the corresponding sides have a constant ratio (in other words, if they are proportional). Typically, problems with similar
polygons ask for missing sides.
Why are all triangles not isosceles?
No. Isosceles triangles are those that have two sides to be of equal length, while equilateral triangles are those that have all three sides of equal length.
Do triangles always equal 180?
The answer is yes! To mathematically prove that the angles of a triangle will always add up to 180 degrees, we need to establish some basic facts about angles. The first fact we need to review is the
definition of a straight angle. A straight angle is just a straight line, which is where it gets its name.
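As a quick numerical illustration, take any non-degenerate triangle, recover its interior angles from the side lengths via the law of cosines, and check that they sum to 180°:

```python
from math import acos, degrees, dist

# An arbitrary triangle; sides a, b, c are opposite vertices A, B, C.
A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
a, b, c = dist(B, C), dist(A, C), dist(A, B)

# Law of cosines, solved for each interior angle.
alpha = degrees(acos((b*b + c*c - a*a) / (2*b*c)))
beta  = degrees(acos((a*a + c*c - b*b) / (2*a*c)))
gamma = degrees(acos((a*a + b*b - c*c) / (2*a*b)))
print(round(alpha + beta + gamma, 6))   # 180.0
```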
What makes an equilateral triangle an equiangular triangle?
So then we get angle ABC is congruent to angle ACB, which is congruent to angle CAB. And that pretty much gives us all of the angles. So if you have an equilateral triangle, it’s actually an
equiangular triangle as well. All of the angles are going to be the same.
Are there two possible cases for a triangle?
The triangle on the left is the one shown in the video, without the (wrong) perpendicular bisectors. By the restrictions on triangles ABD and ACD — that is, the two common lengths they share and the angle α — we see that there are only two possible cases for the triangles, which are shown in the left figure.
Are there 60 degree angles in an equilateral triangle?
So in an equilateral triangle, not only are they all the same angles, but they’re all equal to exactly– they’re all 60 degree angles. Now let’s think about it the other way around. Let’s say I have a
triangle. Let’s say we’ve got ourselves a triangle where all of the angles are the same.
Can a triangle be drawn with a perpendicular bisector?
The triangles drawn using the perpendicular bisector won’t both be on the outside of the main triangle: one will bisect inside the main triangle while the other will bisect outside. What was drawn had both bisecting outside, hence allowing the “proof”. Edit: Here is a pictorial proof, since someone asked for it.
|
{"url":"https://www.pursuantmedia.com/2019/10/15/why-all-triangles-are-equilateral/","timestamp":"2024-11-13T04:35:29Z","content_type":"text/html","content_length":"59958","record_id":"<urn:uuid:134a06c2-1382-4b38-860d-9df6605b1e6f>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00153.warc.gz"}
|
Modelling of a Controller for Effective Flow Monitoring in Process Industry
Volume 13, Issue 09 (September 2024)
Modelling of a Controller for Effective Flow Monitoring in Process Industry
DOI : 10.17577/IJERTV13IS090092
Download Full-Text PDF Cite this Publication
E. O. Nwele, M. Olubiwe, C. K. Agubor, J. S. Ndulaka, 2024, Modelling of a Controller for Effective Flow Monitoring in Process Industry, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY
(IJERT) Volume 13, Issue 09 (September 2024),
• Open Access
• Authors : E. O. Nwele, M. Olubiwe, C. K. Agubor, J. S. Ndulaka
• Paper ID : IJERTV13IS090092
• Volume & Issue : Volume 13, Issue 09 (September 2024)
• Published (First Online): 04-10-2024
• ISSN (Online) : 2278-0181
• Publisher Name : IJERT
• License: This work is licensed under a Creative Commons Attribution 4.0 International License
Text Only Version
Modelling of a Controller for Effective Flow Monitoring in Process Industry
E. O. Nwele, M. Olubiwe, C. K. Agubor, J. S. Ndulaka
Department of Electrical & Electronic Engineering, School of Electrical System Engineering and Technology, Federal University of Technology, Owerri, Nigeria.
Abstract: Flow in a process industry is characterized by frictional forces in the medium through which the fluid is transported; the flow often becomes turbulent instead of remaining laminar, which is easier to control, thereby creating pressure losses that lead to loss of time and of output products. In this research, the Direct Synthesis (DS) method is used, while the tools used include Proteus version 7.6, MATLAB/Simulink, and the LabVIEW Control Design and Simulation Module. Proportional Integral Derivative (PID) and Distributed Control System (DCS) controllers are utilized to control the flow rate of the process fluids by controlling the pressure loss due to frictional forces in the transport medium, so as to ensure that the flow remains laminar. Firstly, the PID controller was modeled, designed, and configured into the flow system, then simulated using MATLAB/Simulink; the performance showed that when the PID was independently controlling the flow system, there was overshoot that was overcome only after some wasted time before achieving stability. Secondly, the DCS was used independently in the flow loop and simulated using Proteus; its performance showed good control efficiency, but some transient behavior and overshoot were noticed in the output. Thirdly, the DCS and PID were installed together and simulated using LabVIEW; the results showed that cascading the two controllers reduced the overshoot drastically, attained stability faster, and eliminated the problems of drag and pressure loss in a very short time. The research work achieved 99 percent of the anticipated results, as all the specific objectives were achieved. The design was validated using the Routh-Hurwitz stability criterion.
Keywords: Process Flow control, PID controller, DCS controller, Modelling, Direct synthesis method.
1. INTRODUCTION
The ability to control a flow field to achieve a desired result is of great importance, and to actualize optimum output and stability when fluid is being transported, adequate and reliable flow
control becomes necessary. Maintaining proper flow of fluids in a process system, is essential to maintain correct supply of raw materials to reactors, correct supply of water or steam for
cooling or heating purpose etc [1].
The factors affecting flow measurement are conductivity, temperature, pressure, and viscosity, each of which can affect certain types of flow meters. How clean or dirty the fluid is can also impact the type and style of meter chosen [2]. In choosing a flow meter, one piece of advice is to thoroughly understand the characteristics of the flow to be measured. Types of flow
Meters include Coriolis, DP Meters, Magnetic Meters, Multiphase Meters, Ultrasonic Meters, and Vortex Meters [3]. The dynamic behavior of industrial plants heavily depends on disturbances and in
particular on changes in operating point [4]. In many industrial processes, control of liquid flow or of temperature is required. Classic PID approaches and controllers have been updated and expanded over the years, from early controllers based on relays, synchronous electric motors, and pneumatic or hydraulic systems to current microprocessor implementations [5].
Currently, many techniques for the tuning as well as design of PI and PID controllers are proposed [6]. The method proposed by [7] is the most widely utilized PID parameter tuning methodology in
chemical industry and is considered a conventional technique. Flow control encompasses a wide range of applications in process industries: in about 90% of process-control applications, the flow is manipulated to obtain the desired output. Flow is a dynamic parameter with many different tuning strategies [8].
There are three tuning parameters for PID and it is also known as a three-term controller. The three parameters are proportional gain, integral gain and derivative gain. These three parameters
affect the stability of the system in different ways. So the controller has to maintain the process variable close to the desired set point [9]. Currently, a special linear structure is deployed with sensors on pipelines to monitor and regulate flow and pressure. The authors demonstrated a multilayer communication scheme that ensures effective routing of data among the sensors, but there is no consideration of control valves and proper controllers in these sensor-based remote communication networks, which therefore require manual operation to achieve the desired performance of pipeline transportation [10].
Summary of review of related works
1. The authors actually worked on the process flow control but emphasis was on vibration in the pipe which could be harmful or alter the stability of the system. No attention was given to other
numerous problems associated with flow of a process system.
2. From their analysis and conclusion, they worked on using PID to control flow in multiple tank staking. Emphasis on how different errors were articulated was not shown.
3. The authors attributed that Flow control subsumes all types of technical flow control including laminar flow control, mixing enhancement, separated flow control, vortex control, turbulence
control, heat transfer control, favorable wave interference, designer fluids and much more. Their work was generic in approach, in process flow control, there are specific errors that need to
be eradicated to have robust flow void of losses.
4. Their work basically depicts using DCS and MATLAB to actualize the set objective. Even when DCS is used in the control system, agitations are bound to be there as a result of errors emanating due to disturbance from the system flow.
5. Their work was not streamlined to a particular area of process. Process flow has so many dimensions: flow in pipes, tunnels, tanks, bottles, cylinders etc. There was no particular emphasis to know where the mitigation approach is channeled to.
2. METHODS
The results obtained are crucial to process industries and, as a result, validation of stability is required. The Routh-Hurwitz stability criterion shall be used to validate the design.
Modeling of Flow Measuring Device
In considering the measuring device, the average velocity of the medium has to be known. The value of the average velocity Vavg at some streamwise cross-section is determined from the requirements of the conservation-of-mass principle. Equation (1) shows the expression for the mass flow rate, according to [16]:
ṁ = ρ Vavg A = ρ ∫A v(r) dA (1)
Where: ṁ = mass flow rate, ρ = density of the fluid, A = cross-sectional area, v(r) = velocity profile.
The average velocity for flow in a circular pipe of radius R according to [16] can be expressed as:
Vavg = (∫A v(r) dA) / A = (2/R²) ∫0R v(r) r dr (2)
Therefore, when we know the flow rate or the velocity profile, the average velocity can be determined easily.
Orifice Meter
For this research, the orifice meter was considered. This flow measuring device is created by inserting an obstructing plate, usually with a round hole in the middle, into the pipe and measuring the pressure on each side of the orifice. Pressure taps on each flange allow you to easily measure the pressure differential across the plate. This pressure differential, along with the dimensions of the plate, is combined with certain fluid properties to determine the flow through the pipe.
Figure 1 shows samples of the several types of orifice plate used for flow measurement.
Fig. 1: Samples of several types of orifice plate [3]
To obtain the flow rate, equation (3) is applied:
Q = Cd A2 √( 2(P1 − P2) / ( ρ (1 − (A2/A1)²) ) ) (3)
Where: Q = flow rate, P1 = upstream pressure, P2 = downstream pressure, Cd = discharge coefficient (this is the point at which the flow turns turbulent in the medium), A1 = area of the orifice at upstream, A2 = area of the orifice at downstream.
Determination of the flow profile in the downstream is cumbersome; therefore substitution of Cd and the areas with a single constant K becomes necessary. Thus
Q = K √(P1 − P2) (4)
To account for the outlet loss, equation (5) is applied:
Q = CD V A (5)
where: Q = flow rate, V = average velocity, A = cross-sectional area of the pipe and CD is the discharge coefficient that is dependent on the shape and size of the orifice.
Frictional losses: these are losses from liquid flow in a pipe due to friction between the flowing liquid and the restraining walls of the container. These frictional losses are given by:
hL = f (L/D) (V²/2g)
Where: hL = head loss, f = friction factor, L = length of pipe, D = diameter of pipe, V = average fluid velocity, g = gravitational acceleration.
Form drag is the impact force exerted on devices protruding into a pipe due to fluid flow. The force depends on the shape of the insert and can be calculated from
F = CD (γ/2g) A V²
Where: F = force on the object, CD = drag coefficient, γ = specific weight, g = acceleration due to gravity, A = cross-sectional area of obstruction, V = average fluid velocity.
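As a quick numerical sketch of the orifice-meter flow relation discussed above (all values below are hypothetical, chosen only to exercise the formula):

```python
from math import pi, sqrt

# Q = Cd * A2 * sqrt(2*(P1 - P2) / (rho * (1 - (A2/A1)**2)))
rho = 1000.0                   # density of water, kg/m^3
P1, P2 = 2.0e5, 1.8e5          # upstream / downstream pressures, Pa
Cd = 0.62                      # discharge coefficient (hypothetical)
D1, D2 = 0.10, 0.05            # pipe and orifice diameters, m
A1, A2 = pi * D1**2 / 4, pi * D2**2 / 4

Q = Cd * A2 * sqrt(2 * (P1 - P2) / (rho * (1 - (A2 / A1) ** 2)))
print(round(Q, 5))             # volumetric flow rate, m^3/s
```

With these numbers the computed flow rate is on the order of 8 litres per second, which is the right ballpark for a 20 kPa drop across a 2:1 orifice in a 100 mm water line.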
Modelling of Flow Controllers
The methodology of internal mode principle is utilized in order to extract the gains of PID and PI controllers. Exhaustive investigation from literatures revealed that the outcomes of P control
are very sensitive to the sensing location as well as the quantity of phase shift. By suitable selections of these variables, the P control can be completely efficient in annihilating the vortex
shedding or minimizing its strength.
In order to implement the control law, the primary step is to determine a desired output response of a particular system to an arbitrary input over a time interval that can be carried out by
system identification. Generally, it is feasible to generate a model on the basis of a complete physical illustration of the system.
PID electronic controller
In any electronic setup, there are two basic blocks which are analogue and digital. Digital systems are built on the basis and foundation of analogue blocks. So first we consider the analogue.
Figure 2 shows the block diagram of an analogue PID controller. The measured variable from the sensor is compared to the set point in the first unity gain comparator; its output is the difference
between the two signals or the error signal. This signal is fed to the integrator via an inverting unity gain buffer and to the proportional amplifier and differentiator via a second inverting
unity gain comparator, which compares the error signal to the integrator output. Initially, with no error signal the output of the integrator is zero so that the zero-error signal is also present
at the output of the second comparator.
When there is a change in the measured variable, the error signal is passed through the second comparator to the proportional amplifier and the differentiator where it is amplified in the
proportional amplifier, added to the differential signal in a summing circuit, and fed to the actuator to change the input variable. Although the integrator sees the error signal, it is slow to
react and so its output does not change immediately, but starts to integrate the error signal. If the error signal is present for an extended period of time, the integrator will supply the
correction signal via the summing circuit to the actuator and input the correction signal to the second comparator. This will reduce the effective error signal to the proportional amplifier to
zero, when the integrator is supplying the full correction signal to the actuator. Any new change in the error signal will still be passed through the second comparator as the integrator is only
supplying an offset to correct for the first long-term error signal. The proportional and differential amplifiers can then correct for any new changes in the error signal.
Fig. 2: Block diagram of a PID analogue electronic controller.
The circuit implementation of the PID controller is shown in figure 3. This is a complex circuit because all the amplifier blocks are shown doing a single function to give a direct comparison to
the block diagram. In practice there are a large number of circuit component combinations that can be used to produce PID action.
Fig. 3: Circuit of a PID action electronic controller
A single amplifier can also be used to perform several functions, which would greatly reduce the circuit complexity. Such a circuit is shown in Figure 4, where feedback from the actuator position is used as the proportional band adjustment.
Fig. 4: Circuit of a PID electronic controller with feedback from the actuator position
The key component of the proposed process is the proportional-integral-derivative (PID) controller control loop mechanism, which is widely used in industrial control systems to mitigate faults by adjusting the process control inputs. Examples of such systems are ones where the temperature, pressure, or flow rate needs to be controlled. In such scenarios, the PID controller aims to detect the possibility of a fault far enough in advance that an action can be performed to prevent it from happening.
Figure 5 shows the general PID control system loop. The set point is the desired or command value for the process variable. The control system algorithm uses the difference between the output
(process variable) and the setpoint to determine the desired actuator input to drive the system.
Fig. 5: PID control system loop
Inter Phase Friction Coefficient:
In modeling the PID, certain variables need to be considered. The first is the interphase friction coefficient. The two phases, gas and liquid, slip with respect to each other, resulting in the interphase friction force stated in equation (8):

$$ F = \tfrac{1}{2}\, \rho_g C_D\, |\mathbf{v}_s|\, \mathbf{v}_s A_p \tag{8} $$

Where: $\rho_g$ = density of the gas phase, $C_D$ = drag coefficient, $\mathbf{v}_s$ = the resultant slip velocity, and $A_p$ = the total projected droplet area in the cell, given by

$$ A_p = 1.5\, \frac{R_2 V}{d} \tag{9} $$

Where: $d$ is the droplet diameter, $V$ is the volume of the cell, $R_2$ is the liquid volume fraction, and $\mathrm{Re}$ is the particle Reynolds number, given by

$$ \mathrm{Re} = \frac{\rho_g\, |\mathbf{v}_s|\, d}{\mu_g} \tag{10} $$

Where: $\mu_g$ is the laminar viscosity of the gas. The drag coefficient $C_D$ is evaluated as follows:

$$ C_D = \max\!\left[ \frac{24}{\mathrm{Re}} \left( 1 + 0.15\, \mathrm{Re}^{0.68} \right),\; 0.42 \right] \tag{11} $$

Mass Transfer Coefficient for Fluid Inter Phase:
As cooling of heated areas deals with fluids evaporating due to friction, the loss of mass of the fluid droplets is the second variable that needs to be calculated; see equation (12):

$$ \dot{m} = \frac{k_f A}{c_p D}\, \ln\!\left( 1 + \frac{c_p \left( T_g - T_s \right)}{L} \right) \tag{12} $$

Where: $c_p$ = the specific heat, which is assumed to be constant for both phases, $D$ = the initial droplet diameter, $k_f$ = the thermal conductivity of the fluid droplets, $L$ = the latent heat of evaporation, $T_g$ = the temperature of the gas, $T_s$ = the temperature at the surface of the droplet, and $A$ = the interface surface area per cell, given by

$$ A = \frac{6 R_2 V}{d} \tag{13} $$

where $R_2$ is the liquid volume fraction, $V$ is the cell volume, and $d$ is the droplet diameter.

For a process fluid, consider the loop in figure 6, where each variable is the Laplace transform of a deviation variable. To simplify the notation, the primes and the $s$ dependence have been omitted; thus, $Y$ is used rather than $Y'(s)$. Because the final control element is often a control valve, its transfer function is denoted by $G_v$. The process transfer function $G_p$ indicates the effect of the manipulated variable on the controlled variable. The disturbance transfer function $G_d$ represents the effect of the disturbance variable on the controlled variable for the flow channel.

Fig. 6: Block diagram for a process fluid feedback control system, based on the Direct Synthesis (DS) method

The block diagrams considered so far have been developed specifically for the fluid storage system in a process plant.

Where: $Y$ = controlled variable, $U$ = manipulated variable, $D$ = disturbance variable (also referred to as the load variable), $P$ = controller output, $E$ = error signal, $Y_m$ = measured value of $Y$, $Y_{sp}$ = set point, $\tilde{Y}_{sp}$ = internal set point (used by the controller), $Y_u$ = change in $Y$ due to $U$, $Y_d$ = change in $Y$ due to $D$, $G_c$ = controller transfer function, $G_v$ = transfer function for the final control element, $G_p$ = process transfer function, $G_d$ = disturbance transfer function, $G_m$ = transfer function for the sensor and transmitter, and $K_m$ = steady-state gain for $G_m$.

From figure 6, assuming that no disturbance change occurred, it can now be said that $D = 0$; it follows that:

$$ Y = Y_u + Y_d \tag{15} $$

$$ Y_d = G_d D = 0 \quad (D = 0) \tag{16} $$

$$ Y_u = G_p U \tag{17} $$

Combining gives

$$ Y = G_p U = G_p G_v G_c E \tag{18} $$

$$ E = K_m Y_{sp} - G_m Y \tag{19} $$

PID Models
The control signal $u(t)$ (output) is defined through its proportional, integral, and derivative components:

$$ u_P(t) = K_p\, e(t) \tag{20} $$

$$ u_I(t) = K_i \int_0^t e(\tau)\, d\tau \tag{21} $$

$$ u_D(t) = K_d\, \frac{de(t)}{dt} \tag{22} $$

$$ u(t) = K_p\, e(t) + K_i \int_0^t e(\tau)\, d\tau + K_d\, \frac{de(t)}{dt} \tag{23} $$

Where $K_p$ is the proportional gain constant, $K_i$ is the integral gain constant, $K_d$ is the derivative gain constant, and $e$ is the error, defined as the difference between the setpoint and the process variable value.

Combining the above equations gives

$$ Y = G_p G_v G_c \left( K_m Y_{sp} - G_m Y \right) \tag{24} $$

Rearranging gives the desired closed-loop transfer function,

$$ \frac{Y}{Y_{sp}} = \frac{K_m G_c G_v G_p}{1 + G_c G_v G_p G_m} \tag{25} $$

In both the numerator and denominator of equation (25) the transfer functions have been arranged to follow the order in which they are encountered in the feedback control loop. This convention makes it easy to determine which transfer functions are present or missing when analyzing subsequent problems.
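The PID control law u(t) = Kp e + Ki ∫e dt + Kd de/dt can be sketched as a simple discrete-time implementation. This is an illustration only; the first-order plant model, the gains, and the time step below are arbitrary assumptions, not values from the paper:

```python
# Minimal discrete PID: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # rectangular integration
        derivative = (error - self.prev_error) / self.dt    # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a hypothetical first-order plant dy/dt = (u - y)/tau toward a setpoint of 1.0
dt = 0.01
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=dt)
y, tau = 0.0, 0.5
for _ in range(2000):               # 20 seconds of simulated time
    u = pid.update(1.0, y)
    y += (u - y) / tau * dt

print(round(y, 3))  # settles near the setpoint 1.0 (integral action removes offset)
```

The integral term plays the role the analogue description above assigns to the integrator: it slowly accumulates a persistent error and ends up supplying the full steady-state correction, so the proportional term sees zero error at steady state.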
Design of the Flow Controller PID
Direct Synthesis Method: In the Direct Synthesis (DS) method, the controller design is based on a process model and a desired closed-loop transfer function. The DS approach provides valuable insight into the relationship between the process model and the resulting controller.

As a starting point for the analysis, consider the block diagram of a feedback control system in figure 6. The closed-loop transfer function for set-point changes was derived in equation (25). Let

$$ G \triangleq G_v G_p G_m \quad \text{and assume that} \quad G_m = K_m \tag{26} $$

Then equation (25) becomes:

$$ \frac{Y}{Y_{sp}} = \frac{G_c G}{1 + G_c G} \tag{27} $$

Rearranging and solving for $G_c$ gives an expression for the ideal feedback controller:

$$ G_c = \frac{1}{G} \left( \frac{Y/Y_{sp}}{1 - Y/Y_{sp}} \right) \tag{28} $$

Desired Closed-Loop Transfer Function
The performance of the controller in equation (29) strongly depends on the specification of the desired closed-loop transfer function. A practical design equation can be derived by replacing the unknown $G$ by a model $\tilde{G}$, and $Y/Y_{sp}$ by a desired closed-loop transfer function, $(Y/Y_{sp})_d$:

$$ G_c = \frac{1}{\tilde{G}} \left( \frac{(Y/Y_{sp})_d}{1 - (Y/Y_{sp})_d} \right) \tag{29} $$

Ideally, $(Y/Y_{sp})_d = 1$, so that the controlled variable tracks set-point changes instantaneously without any error. A practical choice is

$$ \left( \frac{Y}{Y_{sp}} \right)_d = \frac{1}{\tau_c s + 1} \tag{30} $$

Where $\tau_c$ is the desired closed-loop time constant. By substituting equation (30) into equation (29) and solving for $G_c$, the controller design equation becomes

$$ G_c = \frac{1}{\tilde{G}\, \tau_c s} \tag{31} $$

PID Hybridized with Distributed Control System (DCS) Controller
Figure 7 illustrates the flow control action when PID is used as a controller, while Figure 8 shows the flow control action when PID is cascaded with DCS as a controller.

Fig. 7: Flow control action when PID is used as a controller

Fig. 8: Flow control action when PID is cascaded with DCS as a controller

For simulation and comparison, the flow system, actuator, valve, and flow sensors are mathematically modeled, and additional experimental data are added. The experimental process data are as follows:
1. Process response to the fluid flow gain: 40 °C/Kg/Sec
2. Time constant: 25 sec
3. Actuator response to variation of process fluid flow gain: 2.5 °C/Kg/Sec
4. Sensor response to variation of process pressure; control valve capacity for fluid flow: 1.8 Kg/Sec
5. Time constant of control valve: 3 sec
6. Time constant of flow sensor: 25 sec

From the experimental data, the characteristic equation and the gains are obtained as shown in equation (32):

$$ q(s) = s^3 + 20 s^2 + 30 s + 50K \tag{32} $$

3. RESULTS AND DISCUSSION
Test for Stability of the Control System for Validation
Applying the Routh-Hurwitz criterion to equation (32), the characteristic equation is obtained as:

$$ s^3 + 20 s^2 + 30 s + 50K = 0 \tag{33} $$

From equation (32), we form the Routh-Hurwitz array as:

s^3 : 1                30
s^2 : 20               50K
s^1 : (600 - 50K)/20   0
s^0 : 50K

For the system to be stable, all the coefficients in the first column of the Routh-Hurwitz tabulation must have the same sign. This leads to the following conditions:

(20)(30) - 50K = 600 - 50K > 0, and 50K > 0

Solving for the value of K: let 600 - 50K = 0, so K = 600/50 = 12.

If we let K = 12, we find the roots from the auxiliary equation taken from the s^2 row of the Routh-Hurwitz tabulation:

A(s) = 20s^2 + 50K = 0
20s^2 + 50 × 12 = 0
20s^2 + 600 = 0
s^2 = -30
s = ±j√30 ≈ ±j5.5

The corresponding value of K at these points of s is the critical value for stability.
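The critical gain found above can be cross-checked numerically: at K = 12 the characteristic polynomial s³ + 20s² + 30s + 50K factors as (s + 20)(s² + 30), giving one stable real pole and a marginal pair at ±j√30 ≈ ±j5.5. A quick stdlib-only check (not part of the original paper):

```python
import math

# Critical gain from the Routh-Hurwitz condition 600 - 50K = 0
K = 600 / 50
assert K == 12

# Expand (s + 20)(s^2 + 30) and compare coefficients with s^3 + 20 s^2 + 30 s + 50K
expanded = [1, 20, 30, 20 * 30]      # s^3 + 20 s^2 + 30 s + 600
target = [1, 20, 30, 50 * K]
assert expanded == target            # the factorization matches at K = 12

# Marginal poles come from the auxiliary equation 20 s^2 + 600 = 0  =>  s = +/- j*sqrt(30)
omega = math.sqrt(600 / 20)
print(round(omega, 3))  # 5.477, matching the +/- j5.5 obtained above
```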
Thus, the system is said to be stable.
Simulation of the Designed Controllers
Simulation of the PID controller was done using MATLAB/SIMULINK software, using the designed control variables and the models shown in sections 3.1, 3.2 and 3.3. LabVIEW and Proteus software were used when the PID and DCS were cascaded for better performance in the Internal Model Control method used for the flow control. From figure 9, it can be seen that the overshoot was very high when there was no controller in the system, even though there were sensors and actuators. There were transient issues after the overshoot, and steady state was achieved only after considerable time had been wasted.
Fig. 9: Simulation results of flow when there was no controller
Figure 10 illustrates a SIMULINK representation of a closed loop control system when the PID controller was configured into the system.
Fig. 10: Closed loop control system with PID controller.
To achieve better stability, a PID controller was integrated into the system and the result is displayed in figure 11. This brought significant improvement, but it can be seen that there were still some overshoots and drag at the beginning of the simulation.
Fig. 11: System response when PID controller was integrated.
In figure 12, the simulation study shows the performance of the two controllers used for the virtual buildup of the flow control activity. The controllers (DCS and PID) were first used independently to assess their performance in terms of control activity. When used independently, the PID controller had a high overshoot and a long rise time before overcoming the disturbance to attain stability. The DCS also had some overshoot when used alone to control the flow before attaining stability.
When the two controllers were incorporated in the loop together, the result improved markedly: the overshoot was reduced to a minimum level and stability was achieved within a very short time. The combination also gave faster disturbance rejection, with a duration of a few seconds, and a smaller overshoot. This improvement opened a new window whereby errors of drag under laminar and turbulent conditions could be overcome using a cascaded combination of controllers.
Fig. 12: Performance analysis of the two controllers in different scenarios.
4. CONCLUSION
In this research work, maintenance of a constant flow rate of fluids was achieved through the use of a combination of controllers. The work reviewed how the adverse effect of friction can have a devastating effect on the flow of fluids.
An orifice plate was specifically considered and modeled for use as the flow measuring device. A PID flow controller was modeled for use in the control of the flow; its performance was monitored and simulated, and it was found that, although it mitigated the problems associated with flow, there were still issues of disturbance, though not as many as when there were only sensors, actuators and pumps. The Direct Synthesis (DS) methodology was applied in all the processes, and the mass transfer coefficient was considered. The design of the flow controller was done to see how the modeled PID could effectively communicate with the other devices in the control loop.
Cascading the PID with the DCS controller was done to see if a better result could be achieved. It was discovered that the combination yielded a far better result, as the overshoot was drastically reduced to a level where it became inconsequential in the output obtained, compared with the reference point already set.
REFERENCES
1. Process Instrumentation: Flow Measurement, pipingengineer.org, para. 1. [Online]. Available: https://www.pipingengineer.org/process-instrumentation-flow-measurement.
2. Technical Article: Six top factors to consider when selecting a flow meter, mccrometer.com, para. 3. Jun. 15, 2023. [Online]. Available: https://www.mccrometer.com/technical-articles/six-top-factors-to-consider.
3. Automation and Measurement Instrumentation: Flow measurement – Types of flow meters, emerson.com, para. 2. [Online]. Available: https://www.emerson.com/en-us/automation/measurement-
4. Ertugrul Cam and Ilhan Kocaarslan, Load-Frequency Control in Two Area Power System, TEKNOLOJI, Volume 7, Issue 2, pp. 197-203, 2004. Available: https://jestech.karabuk.edu.tr/arsiv/1302-0056/2004/Cilt(7)/Sayi(2)/197-203.pdf.
5. S. Rasvarz, C. Vargas-Jarillo, R. Jafari, & A. Gegov, Flow Control of Fluid in Pipelines Using PID Controller, IEEE Access, 7. Pp. 25673- 25680. Digital Object Identifier 10.1109/
ACCESS.2019.2897992 ISSN: 2169-3536 Available: https://eprints.whiterose.ac.uk/ 156071.
6. K. J. Astrom and B. Wittenmark, Computer-controlled Systems: Theory and Design, 2nd Edition Englewood Cliffs, New Jersey, USA: Prentice- Hall, 1990.
7. J. G. Ziegler and N. B. Nichols, "Optimal settings for automatic controllers," Transactions of the ASME, vol. 64, no. 1, pp. 759-768. 1942.
8. N. Rohit, B. Abhiraj, G. Shubhangi, B. Vijaykumar and D. Prasad, Tuning of flow control loop using DeltaV DCS and MATLAB. International Journal of Engineering Research & Technology (IJERT) ISSN:
2278-0181 http://www.ijert.org IJERTV9IS 050796 Published by
: www.ijert.org Vol. 9 Issue 05, May-2020.
9. J. C. Basilio, J. A. Silva Jr, L. G. B. Rolim, and M. V. Moreira, The design of rotor flux-oriented current-controlled induction motor drives: speed control, noise attenuation and stability robustness, IET Control Theory and Applications, vol. 4, pp. 2491-2505.
10. E. B. Priyanka, C. Maheswari and B. Meenakshipriya, Parameter monitoring and control during petrol transportation using PLC based PID controller, Journal of Applied Research and Technology 14 pp
11. R. Jafari, S. Razvarz, C. Vargas-Jarillo and W. Yu, Control of Flow Rate in Pipeline Using PID Controller IEEE 16th International Conference on Networking, Sensing and Control (ICNSC), May 09-11,
2019, DOI: 10.1109/ICNSC.20 19. 8743311.
12. E. G. Kumar, B. Mithunchakravarthi, and N. Dhivya, Enhancement of PID Controller Performance for a Quadruple Tank Process with Minimum and Non-Minimum Phase Behaviors. 2nd International
Conference on Innovations in Automation and Mechatronics Engineering, ICIAME 2014. Procedia Technology 14 (2014) 480 489, doi: 10.1016/j.protcy.2014.08.061.
13. D. M. Bushnell and I. Wygnanski, Flow Control Applications, NASA Langley Research Center Hampton, VA 23681-2199, Tech. Memo. NASA-TM-2020-220436. Availability: NASA STI Program (757) 864- 9658.
Jan. 2020.
14. R. Nagvekar, N. A. Bhalerao, S. Gajdhane, V. Bhanuse and P. Diofode, Tuning of flow control loop using DeltaV DCS and MATLAB. International Journal of Engineering Research & Technology (IJERT)
Vol. 9 Issue 05 ISSN: 2278-0181 http://www.ijert.org
IJERTV9IS050796, May 2020.
15. S. A. Jagnade, R. A. Pandit and A. R. Bagde, Modeling, Simulation and Control of Flow Tank System International Journal of Science and Research (IJSR) ISSN (Online): 2319-7064 Volume 4 Issue 2,
February 2015, Index Copernicus Value (2013): 6.14 | Impact Factor (2013): 4.438 Available: https://www.ijsr.net/archive/v4i2/SUB151274.pdf.
16. Woolf J. Peter, Chemical Process Dynamics and Controls Book, University of Michigan, Chemical Engineering Process Dynamics and Controls Department, Ann Abor, MI, 2009 Available: https://open.u
Quantum Entanglement and the EPR Paradox
The chancy (probabilistic) nature of wave function collapse is just one of numerous puzzling aspects of quantum mechanics. Another deep mystery associated with microphysical systems is a phenomenon
called quantum entanglement. According to quantum mechanics, physical systems can be related to each other in a way that makes it impossible to represent their states with separate wave functions.
Two systems are said to be entangled when the state of one system is correlated with the state of the other, so that their behaviors cannot be predicted independently. In other words, the two systems
cannot be adequately represented using separate wave functions, but must be described using the same wave function, as though they were a single system. If something happens to one system, its
entangled partner is simultaneously affected.
What makes this especially puzzling is that the two systems don’t have to be located close together in order to be entangled. Quantum entanglement can occur with systems arbitrarily far away from
each other. This seems to violate Einstein’s special theory of relativity, as illustrated by the following example.
The Mysterious Case of the Conspiring Photons
According to quantum mechanics, two photons (“particles” of light) can become entangled so that the polarization of one photon is always orthogonal to the polarization of the other, even if the
photons are far apart. If the polarization of one photon changes, the other photon’s polarization will change too, no matter how far away it is.
For example, whenever one photon has horizontal polarization, the other has vertical polarization, and vice versa. Now recall that horizontally or vertically polarized photons are in a superposition
of 45° and -45° polarization states. If the horizontally polarized photon hits a 45° polarizing filter, it has a 50% chance of collapsing to a 45° polarization (in which case it will pass through the
filter) and a 50% chance of collapsing to a -45° polarization (in which case it will be blocked).
But here’s the really surprising thing: if the horizontally polarized photon collapses to a 45° polarization, its entangled partner (which had vertical polarization) will collapse to a -45°
polarization so that their polarizations remain orthogonal. And this will happen no matter how far apart the two photons are, and no matter which collapse happens first. In fact, the photons will
maintain opposite polarizations even if the two collapse events have spacelike separation, which means—according to special relativity—that there is no fact of the matter about which collapse happened first.
In 1935, Einstein and two of his colleagues, Boris Podolsky and Nathan Rosen, co-authored a famous paper criticizing the theory of quantum mechanics. The paper, entitled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", is available here. According to the traditional way of understanding Born's rule, the collapse of a wave function is a fundamentally chancy
event, as explained previously. When a photon collapses from a superposition to a definite polarization state, the outcome of this collapse is not predetermined, but is simply a chance event that
occurs when the photon’s polarization is “measured” by the polarizing filter. But if the polarization of a photon really does change in a fundamentally chancy way, as the theory claims, then
entangled photons would have to influence each other at faster-than-light speeds in order to maintain opposite polarizations at all times.
Einstein, Podolsky, and Rosen pointed out that if this aspect of the theory is correct, then quantum entanglement violates the principle of locality, which says that no causal influence can travel
faster than light. This problem—the fact that quantum entanglement violates the principle of locality—is known as the EPR paradox. (EPR stands for Einstein, Podolsky, and Rosen.)
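The correlations described above can be illustrated with a toy simulation. The sketch below is a classical mock-up, not real quantum mechanics: it simply enforces the two rules the text describes, namely that each photon collapses at a 45° filter with 50/50 probability, and that its entangled partner's outcome is forced to stay orthogonal:

```python
import random

def measure_entangled_pair():
    """Simulate measuring both photons of an H/V-entangled pair at 45-degree filters."""
    # Born-rule coin flip for the first photon: collapse to +45 or -45 with equal chance.
    first = random.choice([+45, -45])
    # The entangled partner always collapses to the orthogonal polarization.
    second = -first
    return first, second

random.seed(0)
trials = [measure_entangled_pair() for _ in range(10_000)]

# Each outcome on its own looks like a fair coin flip ...
frac_plus = sum(1 for a, _ in trials if a == +45) / len(trials)
# ... yet the pair is perfectly anti-correlated on every trial.
always_orthogonal = all(a == -b for a, b in trials)

print(round(frac_plus, 2), always_orthogonal)
```

Of course, in this classical sketch the "influence" is just an assignment in the same function call; the EPR puzzle is precisely that nature reproduces these statistics even when no such local bookkeeping is possible.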
Generalized Linear Mixed Models
Think back to intro stats when you learned to perform linear regression. You probably learned how to calculate confidence intervals and conduct hypothesis tests on regression coefficients. Whether
you knew it or not, these sorts of statistical inference for the linear model usually rely on three requirements: the residuals are normally distributed, the residuals are independent, and the
residuals have constant variance.
What if some of these requirements are not met? For example, what if your response is whether a shelter animal is adopted? Because this response variable is binary (rather than normal), a linear
model is not appropriate. Additionally, people seeking to adopt a dog may prefer one breed over another. Because animals may be correlated (rather than independent), a linear model is not
appropriate. We can relax the requirements of the linear model to model data sets such as this one.
Relaxing the linear model requirements creates a new class of models. If the responses are not necessarily normal but are more generally from an exponential family, the model is described as a
generalized linear model. For example, a generalized linear model can model a binary outcome, such as whether a person votes for a particular candidate. If the responses are correlated, the model is
described as a mixed model. For example, siblings may be similar, and a mixed model accounts for this correlation by incorporating random effects (unobservable random variables that are typically
normally distributed with mean zero). A generalized linear mixed model (GLMM) incorporates a response from an exponential family as well as fixed and random effects. GLMMs are widely used: a Google
Scholar search for generalized linear mixed models returns over 2.2 million results.
Despite their widespread use, frequentist likelihood-based inference is limited. Frequentist likelihood-based inference includes (but is not limited to) performing maximum likelihood, calculating
Fisher information, conducting hypothesis tests, and constructing confidence intervals. Most methodology and software for GLMMs performs little more than maximum likelihood or does not perform
likelihood-based inference at all. The challenge lies in the likelihood function, which is often an intractable integral. To perform all frequentist likelihood-based inference, the entire likelihood
function is required.
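To make the intractable integral concrete, here is a sketch of the Monte Carlo idea for a toy one-cluster random-intercept logistic model: the likelihood integrates the conditional probability of the responses over an unobserved normal random effect, and that integral can be approximated by averaging over simulated draws. This illustrates the general technique with made-up data; it is not the actual algorithm or API of the R package glmm:

```python
import math
import random

def mc_likelihood(y, beta, sigma, n_draws=20_000, seed=1):
    """Approximate L(beta, sigma) = E_u[ prod_i p(y_i | beta + u) ],
    where u ~ N(0, sigma^2), by a plain Monte Carlo average over draws of u."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        u = rng.gauss(0.0, sigma)            # sample the random intercept
        eta = beta + u                       # linear predictor shared by the cluster
        p = 1.0 / (1.0 + math.exp(-eta))     # logistic link
        lik = 1.0
        for yi in y:                         # conditional likelihood given u
            lik *= p if yi == 1 else (1.0 - p)
        total += lik
    return total / n_draws

y = [1, 1, 0, 1]  # hypothetical binary responses from one cluster
approx = mc_likelihood(y, beta=0.5, sigma=1.0)
print(approx)  # a probability, so it lies in (0, 1)
```

Because the whole surface L(beta, sigma) can be approximated this way, not just its maximizer, downstream likelihood-based inference (standard errors, likelihood ratio tests, confidence intervals) becomes possible.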
My R package glmm approximates the entire likelihood function using a Monte Carlo likelihood approximation. The package maximizes the likelihood approximation and reports Monte Carlo maximum
likelihood estimates. Users can conduct all likelihood-based inference because the entire likelihood function is approximated. You can learn how to use the R package glmm by following this
introductory guide. R package glmm is downloaded approximately 1000 times per month.
Special thanks to Google and its Summer of Code for supporting my R package development during the summer of 2014.
Sathiya Keerthi Selvaraj
I am a researcher (Principal Staff Scientist) in the AI Group of LinkedIn. I work on distributed training of machine learning and AI systems, huge-scale linear programming, and information extraction.
Previously (December 2017-December 2019) I was a Distinguished Researcher at Criteo Research, a great team of researchers (spread out in Paris, Grenoble and Palo Alto) working on fundamental and
applied research problems in computational advertising. Previous to that, I was in Microsoft (June 2012-December 2017) (located in Mountain View, CA), first with the CISL team in Big Data and later
with the FAST division of Microsoft Office. From January 2004-April 2012 I was with the Machine Learning Group of Yahoo! Research, in Santa Clara, CA. My recent research has mainly focused on the
design of distributed training algorithms for developing various types of linear and nonlinear models on Big Data, and the application of machine learning to textual problems.
Prior to joining Yahoo! Research, I worked for 11 years at the Indian Institute of Science, Bangalore, and for 5 years at the National University of Singapore. During those sixteen years my research
focused on the development of practical algorithms for a variety of areas, such as machine learning, robotics, computer graphics and optimal control. (Many of the publications during that period are
not mentioned in this page.) My works on support vector machines (e.g., improved SMO algorithm), polytope distance computation (e.g., GJK algorithm) and model predictive control (e.g., stability
theory) are highly cited. Overall, I have published more than 100 papers in leading journals and conferences. I am an Action Editor of JMLR (Journal of Machine Learning Research) since 2008.
Previously I was an Associate Editor for the IEEE Transactions on Automation Science and Engineering.
Contact: keselvaraj at linkedin dot com
Slide deck of my talk on Interplay between Optimization and Generalization in Deep Neural Networks given at the 3rd annual Machine Learning in the Real World Workshop organized by Criteo Research,
Paris, on 8th November, 2017: Optimization_and_Generalization_Keerthi_Criteo_November_08_2017.pptx. This is a review and critique of recent works in this topic. The actual talk was for 45 minutes and
I covered the main ideas quickly. The ppt has more detailed material. I intend to update the slide deck as new works are published on this and related topics.
Slide deck of my talks on Optimization for machine learning given at UC Santa Cruz in February, 2017: Keerthi_Optimization_For_ML_UCSC_2017.pdf
In 2010 I attended and gave a talk at GilbertFest, a symposium in honor of my Ph.D thesis advisor, Elmer G. Gilbert. Check out the symposium page, which also has pdfs of his classic papers in Control
and Optimization. I am honored to have some of my joint papers with him in that list. Also, check out his A Life in Control talk given at the University of Michigan, Ann Arbor covering his marvelous
career in control systems.
Check out:
• LIBLINEAR-a Library for Large Linear Classification written by Chih-Jen Lin and his students. It has codes for the methods covered in the ICML'08, KDD'08, ICML'07/JMLR'08 papers given below.
Check out LIBLINEAR's most recent Distributed version. It gives cool speedups in multicore settings. For the cluster case where communication is a bottleneck, the algorithms in our JMLR-2017
papers below are very good.
Citations of my papers in Google Scholar
• An efficient distributed learning algorithm based on effective local functional approximations. With Dhruv Mahajan, Nikunj Agrawal, S. Sundararajan and Leon Bottou. To appear in JMLR, 2017.
• A distributed block coordinate descent method for training l_1 regularized linear classifiers. With Dhruv Mahajan and S. Sundararajan. To appear in JMLR, 2017.
• Gradient Boosted Decision Trees for High Dimensional Sparse Output. With Si Si, Cho-Jui Hsieh, Huan Zhang, Dhruv Mahajan and Inderjit Dhillon. Accepted in ICML, 2017.
• Towards a Better Understanding of Predict and Count Models. With Tobias Schnabel and Rajiv Khanna. arXiv:1511.02024v1, 2015.
• Learning a Hierarchical Monitoring System for Detecting and Diagnosing Service Issues. With Vinod Nair, Ameya Raul, Shwetabh Khanduja, Vikas Bahirwani, S. Sundararajan, Steve Herbert, Sudheer Dhulipalla, and Qihong Shao. KDD, 2015.
• Near Real-time Service Monitoring Using High-dimensional Time Series. With Vinod Nair, Sundararajan Sellamanickam, Shwetabh Khanduja, Ameya Raul, and Ajesh Shaj. ICDM, 2015.
1999 and Earlier (To be added)
Last updated: May, 2020
Covariance Matrix
We use Einstein’s summation convention.
Covariance of two discrete series $A$ and $B$ is defined as
$$ \text{Cov} ({A,B}) = \sigma_{A,B}^2 = \frac{ (a_i - \bar A) (b_i - \bar B) }{ n- 1 }, $$
where $n$ is the length of the series. The normalization factor is set to $1/(n-1)$ to mitigate the bias for small $n$.
One could show that
$$ \mathrm{Cov}({A,B}) = E( AB ) - \bar A \bar B. $$
At first glance, the square in the notation seems to serve only a notational purpose at this point.
Meanwhile, using this idea of the mean of geometric mean, we could easily generalize it to the covariance of three series,
$$ \sigma_{A,B,C}^3 = \frac{ (a_i - \bar A) (b_i - \bar B)(c_i - \bar C) }{ n-1 }, $$
or even arbitrary N series,
$$ \begin{aligned} \sigma_{A_1, A_2, \ldots, A_N }^N &= \frac{ \sum_{i=1}^{n} \text{geometric mean of the $i$th elements, to the $N$th power} }{ n-1 } \\ &= \frac{ (a_{1,i} - \bar A_1) \cdots (a_{N,i} - \bar A_{N})}{ n-1 }, \end{aligned} $$
which should be called the covariance of all the N series, $\mathrm{Cov} ({A_1, A_2,\cdots, A_N })$.
We do not use these since we could easily build a covariance matrix to indicate all the possible covariances between any two variables.
For a complete picture of the data, we build a matrix for all the possible combinations of the covariances,
$$ \mathbf{C} = \begin{pmatrix} \mathrm{Cov} (A_1, A_1) & \mathrm{Cov} (A_1, A_2) \\ \mathrm{Cov} (A_2, A_1) & \mathrm{Cov} (A_2, A_2) \end{pmatrix}. $$
For real series, $\mathrm{Cov} (A_2, A_1) = \mathrm{Cov} (A_1, A_2)$.
The covariance matrix of complex numbers and quaternions is not necessarily symmetric. A more general concept of symmetry is the Hermitian property.
Given a dataset $X$,
$$ X = \begin{pmatrix} \mathbf X_{1} & \mathbf X_{2} & \cdots & \mathbf X_{N} \end{pmatrix} $$
where $N$ is the number of features (variables). The covariance matrix is
$$ C_{ij} = \operatorname{Cov}(\mathbf X_i, \mathbf X_j). $$
The covariance becomes variance when $i=j$.
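As a concrete check of the definition, the sketch below computes a covariance matrix by hand for two small made-up series, using the $1/(n-1)$ normalization from above:

```python
def cov(a, b):
    """Sample covariance with the 1/(n-1) normalization from the definition above."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    return sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / (n - 1)

A = [2.0, 4.0, 6.0, 8.0]
B = [1.0, 3.0, 2.0, 5.0]

# Full covariance matrix; diagonal entries are the sample variances.
C = [[cov(X, Y) for Y in (A, B)] for X in (A, B)]
print(C)  # symmetric for real series: C[0][1] == C[1][0]
```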
Planted: by L Ma;
L Ma (2020). 'Covariance Matrix', Datumorphism, 03 April. Available at: https://datumorphism.leima.is/cards/statistics/covariance-matrix/.
mp_arc 02-422
02-422 Jens Marklof
Pair correlation densities of inhomogeneous quadratic forms (119K, amslatex) Oct 14, 02
Abstract. Under explicit diophantine conditions on $(\alpha,\beta)\in\mathbb{R}^2$, we prove that the local two-point correlations of the sequence given by the values $(m-\alpha)^2+(n-\beta)^2$, with $(m,n)\in\mathbb{Z}^2$, are those of a Poisson process. This partly confirms a conjecture of Berry and Tabor on spectral statistics of quantized integrable systems, and also establishes a particular case of the quantitative version of the Oppenheim conjecture for inhomogeneous quadratic forms of signature (2,2). The proof uses theta sums and Ratner's classification of measures invariant under unipotent flows.
Files: 02-422.src( 02-422.keywords , inhomog.tex )
Rolling the Dice with the PostgreSQL Random Functions | Crunchy Data Blog
Rolling the Dice with the PostgreSQL Random Functions
Generating random numbers is a surprisingly common task in programs, whether it's to create test data or to provide a user with a random entry from a list of items.
PostgreSQL comes with just a few simple foundational functions that can be used to fulfill most needs for randomness.
Almost all your random-ness needs will be met with the random() function.
The random() function returns a double precision float in a continuous uniform distribution between 0.0 and 1.0.
What does that mean? It means that you could get any value between 0.0 and 1.0, with equal probability, for each call of random().
Here's five uniform random numbers between 0.0 and 1.0.
SELECT random() FROM generate_series(1, 5)
Yep, those look pretty random! But, maybe not so useful?
Most times when people are trying to generate random numbers, they are looking for random integers in a range, not random floats between 0.0 and 1.0.
Say you wanted random integers between 1 and 10, inclusive. How do you get that, starting from random()?
Start by scaling an ordinary random() number up by a factor of 10. Now you have a continuous distribution between 0 and 10.
SELECT 10 * random() FROM generate_series(1, 5)
Then, if you push every one of those numbers down to the nearest integer using floor() you'll end up with a random integer between 0 and 9.
SELECT floor(10 * random()) FROM generate_series(1, 5)
If you wanted a random integer between 1 and 10, you just need to add 1 to the zero-based number.
SELECT floor(10 * random()) + 1 FROM generate_series(1, 5)
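The same scale, floor, shift recipe works outside SQL too; here is a Python sketch (the helper name `rand_int` is ours):

```python
import math
import random

def rand_int(lo, hi):
    # same recipe as the SQL above: scale up, floor, then shift into range
    return math.floor((hi - lo + 1) * random.random()) + lo

random.seed(0)  # seeded only so the demo is reproducible
samples = [rand_int(1, 10) for _ in range(1000)]
```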
Sometimes the things you are trying to do randomly aren't numbers. How do you get a random entry out of a string? Or a random row from a table?
We already saw how to get one-based integers from random() and we can apply that technique to the problem of pulling an entry from an array.
WITH f AS (
SELECT ARRAY['apple', 'banana', 'cherry', 'pear', 'peach'] AS fruits
)
SELECT fruits[ceil(array_length(fruits,1) * random())] AS snack
FROM f;
Getting a random row involves some tradeoffs and thinking. For a random value from a small table, the naive way to get a single random value is this.
SELECT *
FROM fruits
ORDER BY random()
LIMIT 1
As you can imagine, this gets quite expensive if the fruits table gets too large, since it sorts the whole table every time.
If you only need a single random row, one way to achieve that is to add a random column to your table and index it.
CREATE TABLE fruits (
id SERIAL PRIMARY KEY,
fruit TEXT NOT NULL,
random FLOAT8 DEFAULT random()
);
INSERT INTO fruits (fruit)
VALUES ('apple'),('banana'),('cherry'),('pear'),('peach');
CREATE INDEX fruits_random_x ON fruits (random);
Then when it's time to search, use the random function to generate a starting search location and find the next highest value.
SELECT *
FROM fruits
WHERE random > random()
ORDER BY random ASC
LIMIT 1;
id | fruit | random
8 | banana | 0.1997961574379754
Be careful using this trick for more than one row though: since the values in the random column are fixed, the sequences of rows returned will be deterministic, even if the start row is random.
If you want to pull large portions of a table into a query (for random sampling, for example) look at the TABLESAMPLE clause of the SELECT command.
Suppose I wanted the entire contents of the fruits collection, but returned in two random groups? This is actually much like getting a single random value: order the whole set randomly, and then use
that ordering to determine grouping.
WITH random_fruits AS (
SELECT id, fruit
FROM fruits
ORDER BY random()
)
SELECT row_number() over () % 2 AS group,
id, fruit
FROM random_fruits
ORDER BY 1;
group | id | fruit
0 | 11 | peach
0 | 8 | banana
1 | 10 | pear
1 | 7 | apple
1 | 9 | cherry
The '2' in the example above is the number of groups desired.
So far we have just been looking at ways to permute the uniform distribution offered by the random() function. But there are in fact infinitely many other probability distributions from which random numbers can be drawn.
Of that infinite collection, by far the most frequently used in practice is the "normal distribution" also known as the "Gaussian distribution" or "bell curve".
Rather than having a hard cut-off point, the normal distribution has a frequent center and then ever lower probability of values out to infinity in both directions.
The position of the center of the distribution is the "mean" and the rate of probability decay is controlled by the "standard deviation".
To generate normally distributed data in PostgreSQL, use the random_normal(mean, stddev) function that was introduced in version 16.
SELECT random_normal(0, 1)
FROM generate_series(1,10)
ORDER BY 1
It's kind of hard to appreciate that the data have a central tendency without generating a lot more of them and counting how many fall within each bin.
SELECT random_normal()::integer, count(*)
FROM generate_series(1,1000)
GROUP BY 1
ORDER BY 1
The cast to integer rounds the values to the nearest integer, so you can see that the data mostly fall within two standard deviations of the mean.
random_normal | count
-3 | 5
-2 | 65
-1 | 233
0 | 378
1 | 246
2 | 67
3 | 5
4 | 1
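The same binning experiment can be sketched outside the database with Python's standard `random.gauss` (seeded here for reproducibility; the exact counts will differ from the SQL output above, and Python's round() differs slightly from the SQL cast at half-integers):

```python
import random
from collections import Counter

random.seed(1)
# draw 1000 standard-normal samples and round each to the nearest
# integer, mirroring the ::integer cast in the SQL above
bins = Counter(round(random.gauss(0, 1)) for _ in range(1000))
```

As in the SQL histogram, the bin at 0 dominates and the counts fall off quickly toward the tails.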
If you looked very closely at the examples in the first section you'll have noticed that they all started from the same, allegedly random values.
If random() truly is random, how did I get the same starting values four times in a row?
The answer, shockingly, is that random() is actually "pseudo-random".
A pseudorandom sequence of numbers is one that appears to be statistically random, despite having been produced by a completely deterministic and repeatable process.
With a pseudo-random number generator and a known starting point, I will always get the same sequence of numbers, at least on the same computer.
The reason most computer programs use pseudo-random number generators is that generating truly random numbers is actually quite an expensive operation (relatively speaking).
So programs instead generate one truly random number, and use that as a "seed" for a generator.
PostgreSQL uses the Blackman/Vigna "xoroshiro128** 1.0" pseudo-random number generator.
By default, on start-up PostgreSQL sets up a seed value by calling an external random number generator, using an appropriate method for the platform:
• Using OpenSSL RAND_bytes() if available, or
• using Windows CryptGenRandom() on that platform, or
• using the operating system /dev/urandom if necessary.
So if you are interested in a random number, just calling random() will get you one every time.
But if you want to put your finger on the scales, you can use the setseed() function to cause your random() and random_normal() functions to generate a deterministic series of random numbers,
starting from a seed value you specify.
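The behaviour of setseed() is easy to picture with Python's standard-library PRNG, which is likewise deterministic once seeded:

```python
import random

random.seed(42)
first = [random.random() for _ in range(3)]

random.seed(42)  # re-seeding restarts the deterministic sequence
second = [random.random() for _ in range(3)]

assert first == second  # same seed, same "random" numbers
```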
The definition of prime number | Let's prove Goldbach!
The definition of prime number
We'll start our study of prime numbers by explaining the definition of prime number. As commonly stated, a prime number is a positive integer divisible only by itself and 1. This is true except for the number 1, which according to that statement should be prime (1 is divisible by 1, which is also the number itself), but in fact isn't: the number 1 is excluded a priori from the prime numbers.
The reason is related to a theorem, known as the Fundamental Theorem of Arithmetic:
Fundamental Theorem of Arithmetic
Every integer number greater than 1 can be written as a product of prime numbers. Moreover such expression, called factorization or decomposition into prime factors, is unique apart from the order of
the factors.
Factorizing a number is always possible and, in order to succeed, elementary steps are enough: we just have to try dividing the number by the primes preceding it, until we come to 1. Usually, when factorizing a number by hand, for example 150, a scheme like the following is used, where the left column holds the starting number and the results of the successive divisions, and the right column holds its prime factors (alternate notations exist, for example with the columns swapped, or as a tree):

150 | 2
 75 | 3
 25 | 5
  5 | 5
  1 |
The following passages are applied:
• Computation starts from 150, which is written at the top of the left column, and it's divided by its smallest prime divisor, which is 2, obtaining 75; the divisor is written at the top of the right column;
• The result of the division is in turn written in the left column, and the process is repeated on it, by dividing time after time by the smallest possible prime divisor;
• When the result of the division is 1, the process ends;
• Finally, we have $150 = 2 \cdot 3 \cdot 5 \cdot 5$. This result can also be written in a more compact way, by considering that the product of a number with itself, by definition, is a power: $150
= 2 \cdot 3 \cdot 5^2$.
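The hand procedure above translates directly into code; here is a minimal trial-division sketch in Python (the function name `factorize` is ours):

```python
def factorize(n):
    # repeatedly divide by the smallest possible divisor, exactly as in
    # the hand scheme: each successful division emits one prime factor
    assert n > 1
    factors = []
    d = 2
    while n > 1:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1  # a composite d never divides here: its prime factors are already gone
    return factors
```

For the worked example, `factorize(150)` yields the factors 2, 3, 5, 5, matching $150 = 2 \cdot 3 \cdot 5^2$.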
If the starting number is small and the prime factors to be found are few, the operation is very simple, but this is not always the case. As you can verify with the factorizer, for example, a
relatively small number like 1024 needs many divisions, and numbers formed by big prime factors, like 89077, require many attempts in order to find a prime divisor. Moreover, if the prime factors are
extremely big, finding them becomes a real challenge, as it turned out in the RSA Factoring Challenge, which consisted in starting from some numbers obtained by multiplying two very big primes which
are known (but not revealed by the organizers), and trying to factorize the former: for some of them, by now, every attempt has failed.
If 1 were considered prime, the Fundamental Theorem of Arithmetic could not be true in its current form and would have to be restated with an exception. As a consequence, a myriad of theorems that, directly or indirectly, depend on it would have to be modified in turn, introducing often inconvenient exceptions; indeed, the simplest solution is to declare that 1 is not prime.
The product mentioned in the statement of the Fundamental Theorem of Arithmetic can be made up of any number $k$ of prime factors, not necessarily distinct. For example $12 = 2 \cdot 2 \cdot 3$ is
the product of $k = 3$ prime numbers. Usually a product is defined for at least $k = 2$ factors, however for the theory we are getting ready to develop, it will be convenient to extend the idea of
product including the case of the product with just one factor ($k = 1$), which is the factor itself. With this assumption, a prime number is also a product of prime numbers: a special product, made
up of the number itself that multiplies no other number. For example, here are the factorizations of numbers from 2 to 10:
Number Factorization Number of factors
2 $2$ 1
3 $3$ 1
4 $2 \cdot 2$ 2
5 $5$ 1
6 $2 \cdot 3$ 2
7 $7$ 1
8 $2 \cdot 2 \cdot 2$ 3
9 $3 \cdot 3$ 2
10 $2 \cdot 5$ 2
The fundamental Theorem of arithmetic states not only that every integer number greater than 1 has a factorization into prime factors, but also that this factorization is unique except for the order
of the factors. For example, $12 = 2 \cdot 2 \cdot 3$ and there are no other prime numbers, apart from 2 taken two times and 3 taken one time, that multiplied between themselves make 12. The unicity
of the factorization is a fundamental property that is at the base of many theorems of number theory, and it’s right here that we can understand why 1 is not considered a prime number. In fact, if 1
was prime, every positive integer would have infinite factorization into prime numbers. For example
10 = 5 \cdot 2 = 5 \cdot 2 \cdot 1 = 5 \cdot 2 \cdot 1 \cdot 1 = 5 \cdot 2 \cdot 1 \cdot 1 \cdot 1 = \dots.
In order to avoid this, and to be certain to always have a unique factorization, 1 is not considered prime. For this reason the fundamental theorem of arithmetic holds for integers greater than 1: it
cannot hold for 1, because it’s not prime and is only the product of itself, so it’s not a product of primes.
Having excluded the number 1 from prime numbers, we can state a correct definition of prime number:
Prime number
A prime number is an integer number greater than 1, which is divisible only by itself and 1.
This definition refers to the concept of divisibility, which is worth to point out:
An integer $a$ is divisible by an integer $b$, with $b \neq 0$, if $a = b c$ for some integer $c$.
If $a$ is divisible by $b$ we write $b \mid a$ ("$b$ divides $a$"), otherwise we write $b \nmid a$ ("$b$ does not divide $a$").
For example, 10 is divisible by 2 because it can be written as $10 = 2 \cdot 5$, where 5 is an integer $c$ such that $10 = 2 \cdot c$. On the contrary, 10 is not divisible by 3 because there isn’t
any integer $c$ such that $10 = 3 \cdot c$. In other terms, we can say that 2 divides 10 ($2 \mid 10$) and 3 doesn't divide 10 ($3 \nmid 10$).
Definition N.2 doesn’t contemplate the case of $b = 0$, so it remains undefined whether a number is divisible by 0 or not.
Here the exclusion of a particular case from a general definition is for a good reason as before, for the purpose of preserving some fundamental and convenient properties, which aren’t valid for the
excluded case. In fact we can observe that the number $c$ is univocally determined when $a$ and $b$ have been fixed (with the fraction notation we can write $c = \frac{a}{b}$), which is not true if a
= b = 0. In this case in fact we have $0 = 0 \cdot c$ for any integer $c$ and not for a specific one. Indeed, for the purpose of guaranteeing the uniqueness of $c$, it would be enough to exclude from the definition just the case of $a$ and $b$ both 0, instead of $b = 0$: this point could be examined further, but it is certainly a matter of secondary importance.
An immediate consequence of the definition of divisibility is the following:
Within natural numbers, a divisor is less than or equal to a positive dividend
Let $a$ and $b$ be two natural numbers, with $a \gt 0$. If $b \mid a$, then $b \leq a$.
If $b \mid a$, then by Definition N.2 there exists an integer $c$ such that $bc = a$. By the sign product rule, this integer cannot be negative, because by hypothesis $a$ and $b$ are natural numbers,
hence positive. In addition, $c$ cannot be zero, because, if so, then $a = bc = b \cdot 0 = 0$, but by hypothesis it must be $a \gt 0$. So $c \geq 1$. Then $b = b \cdot 1 \leq bc = a$, hence $b \leq a$.
Divisibility has a lot to do with the factorization into prime numbers: in fact, if $b \mid a$ – that is $a = b c$ for some integer $c$ – it means that somehow $b$ is present in the factorization of
$a$, as a prime or as a product of primes. For example:
• $2 \mid 10$ and 2 is present in the factorization of 10 as a prime number: $10 = \mathbf{2} \cdot 5$;
• $6 \mid 24$ and 6 is present in the factorization of 24 as a product of primes: $24 = 2 \cdot 2 \cdot \mathbf{2 \cdot 3}$;
• Also $8 \mid 24$ and 8 is present in the factorization of 24 as a product of primes: $24 = \mathbf{2 \cdot 2 \cdot 2} \cdot 3$;
So if $b \mid a$ it means that the factorization of $b$ is contained into the factorization of $a$. In particular, if a prime number $p \mid a$, $p$ must be one of the primes of the factorization of
$a$. For example, if $a = p_1 p_2$, with $p_1$ and $p_2$ primes, it must be $p = p_1$ or $p = p_2$. This simple property characterizes prime numbers so strongly that, if conveniently generalized, it
can be taken as an alternative definition of prime number:
Prime number, characteristic property or alternative definition
A prime number is an integer number $p > 1$ such that, if $p \mid bc$, then $p \mid b$ or $p \mid c$, for any integers $b$ and $c$. In other terms, $p$ cannot divide a product of integers $bc$
without dividing at least one of the two factors.
Clearly, if we had chosen the previous statement as the definition of prime number, then Definition N.1 would have become a property to be proved on the basis of Property N.1B. Instead, since we
chose Definition N.1 as the definition of prime number, then the latter is the starting point for proving Property N.1B. We’ll not see this proof, but let’s try to understand the statement. Let’s
decompose the number $bc$ into prime factors: for example if $b = 6$ and $c = 10$, $bc = 60$ and
60 = 2 \cdot 2 \cdot 3 \cdot 5
But since 60 is the product of 6 and 10, we can reorder its factorization in such a way to find the prime factors of 6 and those of 10:
60 = \underbrace{2 \cdot 3}_{6} \cdot \underbrace{2 \cdot 5}_{10}
For the reasoning above, if $p$ is a prime number dividing 60, $p$ must be present in its factorization (that is $p = 2$ or $p = 3$ or $p = 5$); but then, for how we subdivided it, $p$ can be present
in the factorization of 6 (that is $p = 2$ or $p = 3$), and in that case it would divide 6, or it can be present in the factorization of 10 (that is $p = 2$ or $p = 5$), and in that case it would
divide 10. Possibly $p$ can be present in both factorizations, as the case of $p = 2$, but it must be present at least in one of them. To better understand this, we can imagine to apply the following
algorithm: we scan the list of the prime factors of 60 in the ordering of (1): 2, 3, 2, 5; if we find $p$ in the first two positions, we say that $p \mid 6$; otherwise we say that $p \mid 10$. In
this way, if $p$ isn't a divisor of 60 we won't find it and we won't say anything; if instead $p \mid 60$ we will find it in one or more positions and we'll state at least one of "$p \mid 6$" and "$p \mid 10$".
It’s also interesting to observe what happens in the case of a not prime number (which is called a composite number), like 4. We have that $4 \mid 60$, so the factorization of 4 must be contained in
that of 60. But 4 isn’t prime, so 4 cannot be present in the factorization of 60 as a prime number, but it must be present in the form of a product of prime numbers. This is the reason why 4 divides
60 without dividing 6 and 10, how you can see highlighting the prime factors of 4 ($2 \cdot 2$) in (1):
60 = \underbrace{\mathbf{2} \cdot 3}_{6} \cdot \underbrace{\mathbf{2} \cdot 5}_{10}
We can see that the number 4 “breaks” into a 2 appearing among the divisors of 6, and another 2 among the divisors of 10: therefore 4 does not divide either 6 or 10, but it divides their product 60.
This cannot happen with a prime number, because a prime number cannot "break" between the factors $b$ and $c$: it must appear "intact" inside one of them.
The last example is represented graphically in the picture below:
Characteristic property of prime numbers: if a prime number (3) divides a product of two factors (6 · 10 = 60), it divides at least one of them (3 | 6). A non-prime number (4) can instead divide such a product (6 · 10 = 60) without dividing either of the two factors.
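The characteristic property, and its failure for composites, can be checked directly in Python (the `divides` helper is ours):

```python
def divides(b, a):
    # "b | a" in the sense of Definition N.2, for b != 0
    return a % b == 0

b, c = 6, 10

# the prime 3 divides the product 60 and, as the property demands,
# it also divides one of the factors
assert divides(3, b * c) and (divides(3, b) or divides(3, c))

# the composite 4 divides 60 while dividing neither factor:
# its factorization 2 * 2 "breaks" across the two factors
assert divides(4, b * c) and not divides(4, b) and not divides(4, c)
```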
Infinity of prime numbers
An important theorem, well known since ancient times, which is a consequence of the definitions above and the Fundamental Theorem of Arithmetic, is the following:
Infinity of prime numbers
The number of primes is infinite.
The proof is so famous and simple that it's worth recalling. Let's suppose that only a finite number $N$ of prime numbers exists. In that case, we could list all the prime numbers in increasing order: $p_1, p_2, \ldots, p_N$, where $p_N$ is the biggest prime number. Starting from this list of prime numbers, we can construct the number
M := p_1 p_2 \ldots p_N + 1 \tag{3}
For how it’s defined, this number is not divisible by any of the primes $p_1, p_2, ..., p_N$.
Let’s ask ourselves the question: which prime numbers $M$ is divisible by? Let’s see if for example it’s divisible by $p_1$. If it was, by Definition N.2 (Divisibility) an integer $c$ would exist,
such that
c \cdot p_1 = M = p_1 p_2 \ldots p_N + 1 \tag{4}
Hence, gathering $p_1$:
p_1 (c - p_2 \ldots p_N) = 1
So, again by Definition N.2, $p_1 \mid 1$. Then, by Property N.1A, it should be $p_1 \leq 1$; but this is not possible because prime numbers, by Definition N.1, must be greater than 1. This means
that the hypothetical number $c$ of equation (4) cannot exist, i.e. $M$ is not divisible by $p_1$.
The argument above does not depend on the choice of $p_1$, in fact it may be repeated also for all the other primes $p_2, \ldots, p_N$, with the result that $M$ is not divisible by any of the primes
$p_1, p_2, \ldots, p_N$.
By the Fundamental Theorem of Arithmetic, $M$ is factorizable as a product of prime numbers. By Definition N.2, all the prime numbers which appear in the factorization of $M$ divide it; so, for what
we have seen before, they cannot be equal to either of the primes $p_1, p_2, ..., p_N$. But this is absurd, because we supposed that $p_1, p_2, ..., p_N$ are the only existing prime numbers. This
means that the initial hypothesis, that only a finite number of prime numbers exists, is wrong: there are infinitely many prime numbers.
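The computation at the heart of the proof is easy to verify numerically; note, as discussed in the comments below, that $M$ need not itself be prime:

```python
from math import prod

primes = [2, 3, 5, 7, 11, 13]
M = prod(primes) + 1  # M = 2 * 3 * 5 * 7 * 11 * 13 + 1 = 30031

# M leaves remainder 1 when divided by each listed prime,
# so none of them divides it
assert all(M % p == 1 for p in primes)

# yet M is not prime: its prime factors are new primes outside the list
assert M == 59 * 509
```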
The proof above is very ancient, tracing back to Euclid (IV – III century B.C.). The mathematician
Paul Erdős
considered it so well made that he affirmed it comes "directly from the Book", referring to a hypothetical book, kept by God, in which all the theorems are recorded with their most beautiful proofs.
2 Replies to “The definition of prime number”
1. Hi, I have been enjoying reading through your website, as one of the most accessible websites to explain number theory. But I think there is an error in your explanation of the infinity of
primes. You ask the question:
Why must M be prime, by the Fundamental Theorem of Arithmetic?
M is not necessarily prime, and an example that shows this is 2.3.5.7.11.13+1=30031 which is not a prime (its prime factors are 59 and 509). So the infinity of primes proof should say that either
M is a new prime number OR there exist new prime numbers greater than p_N that enable M to be factorised, because p_1…p_N cannot be factors. For the first few cases eg 2.3+1=7, 2.3.5+1=31 etc M
is indeed prime, but you don’t have to look very long to find the example quoted above.
1. Hi Tim,
we are happy to know that you are enjoying our website!
Thanks for your question. You are right: indeed there was an error in the proof of the infinity of prime numbers. We corrected the proof and we also simplified it.
We appreciated your help for improving our website. If you note any other errors, please don’t hesitate to contact us again.
Four bags with three marbles per bag gives twelve marbles (4 × 3 = 12).
Multiplication can also be thought of as scaling. Here, 2 is being multiplied by 3 using scaling, giving 6 as a result.
Animation for the multiplication 2 × 3 = 6
4 × 5 = 20. The large rectangle is made up of 20 squares, each 1 unit by 1 unit.
Area of a cloth: 4.5 m × 2.5 m = 11.25 m²; 4½ × 2½ = 11¼
Multiplication (often denoted by the cross symbol ×, by the mid-line dot operator ⋅, by juxtaposition, or, on computers, by an asterisk *) is one of the four elementary mathematical operations of
arithmetic, with the other ones being addition, subtraction, and division. The result of a multiplication operation is called a product.
The multiplication of whole numbers may be thought of as repeated addition; that is, the multiplication of two numbers is equivalent to adding as many copies of one of them, the multiplicand, as the
quantity of the other one, the multiplier; both numbers can be referred to as factors.
${\displaystyle a\times b=\underbrace {b+\cdots +b} _{a{\text{ times}}}.}$
For example, 4 multiplied by 3, often written as ${\displaystyle 3\times 4}$ and spoken as "3 times 4", can be calculated by adding 3 copies of 4 together:
${\displaystyle 3\times 4=4+4+4=12.}$
Here, 3 (the multiplier) and 4 (the multiplicand) are the factors, and 12 is the product.
One of the main properties of multiplication is the commutative property, which states in this case that adding 3 copies of 4 gives the same result as adding 4 copies of 3:
${\displaystyle 4\times 3=3+3+3+3=12.}$
Thus, the designation of multiplier and multiplicand does not affect the result of the multiplication.^[1]
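The repeated-addition definition and its commutativity can be sketched in a few lines of Python:

```python
def multiply(a, b):
    # a times b, computed as a copies of b added together
    # (defined here only for whole numbers a >= 0)
    total = 0
    for _ in range(a):
        total += b
    return total

# 3 copies of 4 and 4 copies of 3 both give 12 (commutativity)
assert multiply(3, 4) == 12
assert multiply(4, 3) == 12
```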
Systematic generalizations of this basic definition define the multiplication of integers (including negative numbers), rational numbers (fractions), and real numbers.
Multiplication can also be visualized as counting objects arranged in a rectangle (for whole numbers) or as finding the area of a rectangle whose sides have some given lengths. The area of a
rectangle does not depend on which side is measured first—a consequence of the commutative property.
The product of two measurements (or physical quantities) is a new type of measurement, usually with a derived unit. For example, multiplying the lengths (in meters or feet) of the two sides of a
rectangle gives its area (in square meters or square feet). Such a product is the subject of dimensional analysis.
The inverse operation of multiplication is division. For example, since 4 multiplied by 3 equals 12, 12 divided by 3 equals 4. Indeed, multiplication by 3, followed by division by 3, yields the
original number. The division of a number other than 0 by itself equals 1.
Several mathematical concepts expand upon the fundamental idea of multiplication. The product of a sequence, vector multiplication, complex numbers, and matrices are all examples where this can be
seen. These more advanced constructs tend to affect the basic properties in their own ways, such as becoming noncommutative in matrices and some forms of vector multiplication or changing the sign of
complex numbers.
Recent View about Kahler Geometry and Spinor Structure of WCW
The construction of Kahler geometry of WCW ("world of classical worlds") is fundamental to the TGD program. I ended up with the idea about physics as WCW geometry around 1985 and made a breakthrough
around 1990, when I realized that Kahler function for WCW could correspond to Kahler action for its preferred extremals defining the analogs of Bohr orbits so that classical theory with Bohr rules
would become an exact part of quantum theory and path integral would be replaced with genuine integral over WCW. The motivating construction was that for loop spaces leading to a unique Kahler
geometry. The geometry for the space of 3-D objects is even more complex than that for loops and the vision still is that the geometry of WCW is unique from the mere existence of Riemann connection.
This article represents the updated version of the construction providing a solution to the problems of the previous construction. The basic formulas remain as such but the expressions for WCW
super-Hamiltonians defining WCW Hamiltonians (and matrix elements of WCW metric) as their anti-commutator are replaced with those following from the dynamics of the modified Dirac action.
Math 5043 Spring 2015
Lectures: MWF 11am-12noon Room 207 Cupples I
Instructor: Songhao Li Office: 207A Cupples I
Phone: (314)935-4208 email: sli@math.wustl.edu
Office Hour: MW 1pm-2pm or by appointment
Grader: Chris Cox
Course Webpage: http://www.math.wustl.edu/~sli/math5043_spring2015/course.html
Textbook: Algebraic Topology by Allen Hatcher
We will cover most, if not all, of the following topics:
-- Fundamental group;
-- Homology
If time permits, which is not likely the case, we will also cover selected topics in cohomology.
Homework 20%
Test 1 (Feb 16) 20%
Test 2 (Mar 23) 20%
Final (May 4) 40%
Note: For those of you who will take the final exam as the department qualifying exam for geometry, the exam will be 3 hours instead of 2 hours.
Moreover, the qualifying exam will cover topics that were covered in Math 5041 in Fall 2014.
There will be approximately 6 homework assignments.
Angular Kinematics | Brilliant Math & Science Wiki
Angular kinematics is the study of rotational motion in the absence of forces. The equations of angular kinematics are extremely similar to the usual equations of kinematics, with quantities like
displacements replaced by angular displacements and velocities replaced by angular velocities. Just as kinematics is routinely used to describe the trajectory of almost any physical system moving
linearly, the equations of angular kinematics are relevant to most rotating physical systems.
Basic Equations of Angular Kinematics
In purely rotational (circular) motion, the equations of angular kinematics are:
\[v = r\omega, \qquad a_c = -r\omega^2, \qquad a = r\alpha\]
The tangential velocity \(v\) describes the velocity of an object tangent to its path in rotational motion at angular frequency \(\omega\) and radius \(r\). This is the velocity an object would
follow if it suddenly broke free of rotational motion and traveled along a straight line. The rate of change of this velocity is the tangential acceleration \(a\). The centripetal acceleration \(a_c
\) is a second acceleration experienced by rotating objects, because changing the direction of a velocity vector requires an acceleration. Since the direction of the velocity vector changes
constantly in rotational motion, rotating objects must be continuously accelerated towards the axis of rotation by some force providing a centripetal acceleration.
From the above equations, the usual kinematic equations hold in angular form. If an object undergoes constant angular acceleration \(\alpha\), the total angular displacement is:
\[\theta - \theta_0 = \omega_0 t + \frac12 \alpha t^2\]
where \(\theta_0\) is the initial angle and \(\omega_0\) is the initial angular velocity. Similarly, the angular velocity changes according to:
\[\omega^2 = \omega_0^2 + 2\alpha (\theta - \theta_0)\]
in terms of the angular displacement, or
\[\omega = \omega_0 + \alpha t\]
in terms of time.
Though the above discussion gives the magnitudes of angular quantities correctly, it does not capture the fact that angular quantities are also vector quantities. The direction in which the angular
velocity points can be found from the right-hand rule: curving the fingers of your right hand along the direction of rotation, your thumb points in the direction of the angular velocity vector, along
the axis of rotation. This is true by definition; although it seems strange since the vector is perpendicular to the rotation, this definition turns out to be the only way to formulate a consistent
vector theory of rotational forces.
A turntable is spun from rest with a constant angular acceleration of \(\frac{\pi}{2} \text{ rad}/\text{s}^2\). After completing six full revolutions, what is its angular velocity in \(\text{rad}/\text{s}\)?
\[12\pi^2 \qquad 2\pi\sqrt{3} \qquad 3\pi \qquad \pi\sqrt{3}\]
A merry-go-round has a radius of about \(8 \text{ m}\). A child sitting on a horse on the outer edge sees his parents, standing still at the entrance, every \(20 \text{ s}\). If the child's horse could break free of its merry-go-round restraints and continue forward tangentially without accelerating, how fast would it be traveling in \(\text{m}/\text{s}\)?
\[\frac{4\pi}{5} \qquad \frac{3\pi}{10} \qquad \frac{\pi}{4} \qquad \frac{\pi}{8}\]
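Both practice problems above follow directly from the constant-acceleration formulas. A quick numeric check (a sketch in Python; all values come from the problem statements):

```python
import math

# Turntable: from rest, alpha = pi/2 rad/s^2, six full revolutions.
alpha = math.pi / 2
theta = 6 * 2 * math.pi               # total angular displacement (rad)
omega = math.sqrt(2 * alpha * theta)  # omega^2 = omega_0^2 + 2*alpha*theta, omega_0 = 0
print(omega, 2 * math.pi * math.sqrt(3))  # both ~10.883 rad/s

# Merry-go-round: r = 8 m, period T = 20 s; tangential speed v = r*omega.
r, T = 8.0, 20.0
v = r * (2 * math.pi / T)
print(v, 4 * math.pi / 5)  # both ~2.513 m/s
```

So the turntable answer is \(2\pi\sqrt{3}\) and the merry-go-round answer is \(\frac{4\pi}{5}\).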
Derivation of Equations of Angular Kinematics
In a rotating frame of reference, it is often more convenient to use polar coordinates than Cartesian coordinates. Similarly, for vectors it is more convenient to use the radial and tangential
vectors \(\hat{r}\) and \(\hat{\theta}\) in place of the usual Cartesian basis vectors \(\hat{x}\) and \(\hat{y}\). The radial vector is defined so that it always points from the origin towards the object:
\[\hat{r} = \cos \theta\, \hat{x} + \sin \theta\, \hat{y}.\]
Note that the \(x\) and \(y\) components of the radial vector are just the usual polar coordinates, normalized to one. Similarly, the tangential vector \(\hat{\theta}\) is defined so that it is
always orthogonal to the radial vector and tangent to the circle on which the radial vector lies:
\[\hat{\theta} = -\sin \theta\,\hat{x} + \cos \theta\, \hat{y}.\]
Show that \(\frac{d\hat{r}}{dt} = \dot{\theta} \hat{\theta}\) and \(\frac{d\hat{\theta}}{dt} = -\dot{\theta} \hat{r}\), where dots indicate time derivatives.
Note that in Cartesian coordinates, the derivatives of the basis vectors \(\hat{x}\), \(\hat{y}\), etc. always vanish, because these basis vectors are fixed. The polar basis vectors, however,
rotate in time so that they are always pointing radially and tangentially along some trajectory. Computing the derivative from the definitions above using the chain rule:
\[\begin{aligned} \frac{d\hat{r}}{dt} &= \frac{d}{dt} \left( \cos \theta\, \hat{x} + \sin \theta\, \hat{y}\right) = -\dot{\theta} \sin \theta\,\hat{x} + \dot{\theta} \cos \theta\,\hat{y} = \dot{\theta}\, \hat{\theta} \\ \frac{d\hat{\theta}}{dt} &= \frac{d}{dt} \left( -\sin \theta\, \hat{x} + \cos \theta\, \hat{y}\right) = -\dot{\theta} \cos \theta\,\hat{x} - \dot{\theta} \sin \theta\,\hat{y} = -\dot{\theta}\, \hat{r} \end{aligned}\] as claimed.
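These two identities can be verified numerically with a finite difference along any smooth trajectory (a sketch; the trajectory \(\theta(t) = t^2\) is an arbitrary illustrative choice):

```python
import math

def r_hat(theta):
    return (math.cos(theta), math.sin(theta))

def theta_hat(theta):
    return (-math.sin(theta), math.cos(theta))

theta = lambda t: t * t   # arbitrary smooth trajectory, so theta_dot = 2t
t, h = 0.7, 1e-6
theta_dot = 2 * t

# Numerical derivative of r_hat along the trajectory (central difference).
num = [(a - b) / (2 * h) for a, b in zip(r_hat(theta(t + h)), r_hat(theta(t - h)))]
# Claimed analytic value: theta_dot * theta_hat.
ana = [theta_dot * c for c in theta_hat(theta(t))]
print(all(abs(n - a) < 1e-5 for n, a in zip(num, ana)))  # True
```

The same comparison with `theta_hat` and `-theta_dot * r_hat` checks the second identity.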
From the definitions above, all of the laws of angular kinematics are straightforward to derive. Suppose that the vector pointing to some rotating object is \(\vec{r} = r\hat{r}\) in polar
coordinates, where \(r\) is the magnitude of the distance from the origin. The velocity of the object is then:
\[\frac{d\vec{r}}{dt} = \dot{r} \hat{r} +r\dot{\theta} \hat{\theta}.\]
The velocity has two components, as can be seen above. The first term, \(\dot{r} \hat{r}\), describes the radial velocity of the object away from the origin. The second term is the tangential
velocity. Denoting \(\dot{\theta} = \omega\) as the angular velocity, the tangential velocity is just \(v = r\omega\).
Taking another derivative allows identification of the different terms contributing to the acceleration of the object:
\[\begin{aligned} \frac{d^2 \vec{r}}{dt^2} &= \frac{d}{dt} \frac{d\vec{r}}{dt} = \frac{d}{dt} \left( \dot{r} \hat{r} + r\dot{\theta} \hat{\theta} \right) \\ &= (\ddot{r} - r\dot{\theta}^2)\hat{r} + (r\ddot{\theta} + 2\dot{r} \dot{\theta}) \hat{\theta} \end{aligned}\]
The radial terms \(\ddot{r}\) and \(r\dot{\theta}^2 = r\omega^2\) describe the radial acceleration outward from the origin and the centripetal acceleration towards the origin, respectively. The
tangential terms are the tangential acceleration \(a = r\ddot{\theta} = r\alpha\), where \(\alpha = \dot{\omega}\) is the angular acceleration, and \(2\dot{r} \dot{\theta} = 2\dot{r} \omega\), the
Coriolis acceleration.
Remarkably, this derivation proves without reference to any forces at all that an object in circular motion with angular velocity \(\omega\) must accelerate radially inward with the centripetal
acceleration \(a_c\) given above.
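As a sanity check on this decomposition, one can differentiate a sample trajectory directly in Cartesian coordinates and compare with the polar formula (the trajectory \(r = 1 + 0.3t\), \(\theta = t^2/2\) is an arbitrary illustrative choice, not from the text):

```python
import math

# r(t) = 1 + 0.3t   (r_dot = 0.3, r_ddot = 0)
# theta(t) = t^2/2  (theta_dot = t, theta_ddot = 1)
r  = lambda t: 1.0 + 0.3 * t
th = lambda t: 0.5 * t * t

def xy(t):  # Cartesian position
    return (r(t) * math.cos(th(t)), r(t) * math.sin(th(t)))

t, h = 1.2, 1e-4
# Cartesian acceleration by a central second difference.
ax = (xy(t + h)[0] - 2 * xy(t)[0] + xy(t - h)[0]) / h**2
ay = (xy(t + h)[1] - 2 * xy(t)[1] + xy(t - h)[1]) / h**2

# Polar formula: (r_ddot - r th_dot^2) r_hat + (r th_ddot + 2 r_dot th_dot) th_hat
a_rad = 0.0 - r(t) * t**2
a_tan = r(t) * 1.0 + 2 * 0.3 * t
c, s = math.cos(th(t)), math.sin(th(t))
px, py = a_rad * c - a_tan * s, a_rad * s + a_tan * c
print(abs(ax - px) < 1e-4 and abs(ay - py) < 1e-4)  # True
```

The radial and tangential components reconstruct the Cartesian acceleration term for term.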
Rotating Systems in Physics
Countless rotating systems in physics can be analyzed using the laws of angular kinematics; a few are explored in the following examples.
As most kids learn, if you quickly rotate a tube containing a ball, the ball can be made to "slingshot" out the end at very high speeds. The same effect is visible in rotating a rod with a bead
around it and many other physical scenarios. How fast does the velocity of the ball/bead increase if the tube/rod is rotated at constant angular velocity \(\omega\)?
Since there is no radial force on the ball/bead, the total radial acceleration is zero according to Newton's second law. In polar coordinates this means that:
\[\ddot{r} - r\omega^2 = 0.\]
This differential equation in \(r\) is solved by:
\[r(t) = Ae^{\omega t} + Be^{-\omega t},\]
for some constants \(A\) and \(B\) depending on initial conditions. If the ball/bead starts from rest at radius \(r_0\), these constants are fixed to be:
\[A = B = \frac{r_0}{2},\]
so the solution for the velocity of the ball/bead is found by differentiating to be:
\[\dot{r}(t) = \frac{r_0 \omega}{2} e^{\omega t} - \frac{r_0 \omega}{2} e^{-\omega t}.\]
The velocity of the ball/bead grows exponentially, since the second term damps to zero quickly.
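The claimed solution can be checked numerically: \(r(t) = \frac{r_0}{2}(e^{\omega t} + e^{-\omega t})\) should satisfy \(\ddot{r} = r\omega^2\), and the radial speed should grow with time (a sketch; \(r_0 = 1\), \(\omega = 2\) are arbitrary illustrative values):

```python
import math

r0, w = 1.0, 2.0  # illustrative initial radius and angular velocity

def r(t):      # r(t) = (r0/2)(e^{wt} + e^{-wt})
    return (r0 / 2) * (math.exp(w * t) + math.exp(-w * t))

def r_dot(t):  # the derived radial speed
    return (r0 * w / 2) * (math.exp(w * t) - math.exp(-w * t))

# Check the ODE r_ddot - r*w^2 = 0 by a central second difference.
t, h = 0.8, 1e-4
r_ddot = (r(t + h) - 2 * r(t) + r(t - h)) / h**2
print(abs(r_ddot - w**2 * r(t)) < 1e-4)  # True

# The radial speed starts at zero and grows (roughly exponentially).
print(r_dot(2.0) > r_dot(1.0) > r_dot(0.0) == 0.0)  # True
```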
A bead is lodged in a wheel that rolls without slipping with constant velocity \(V\). Show that the trajectory of the bead traces a cycloid in the lab frame.
In the wheel reference frame, the bead is in uniform circular motion. If the wheel is of radius \(r\), the bead has coordinates and velocity with respect to the center of the wheel:
\[\vec{r} = r\hat{r}, \qquad \vec{v} = r \omega \hat{\theta} = V \hat{\theta}.\]
In the lab reference frame, the center of the wheel moves with velocity \(V\), so it is located at:
\[\vec{R} = Vt \hat{x} + r\hat{y} = r\omega t \hat{x} + r\hat{y},\]
keeping in mind that the center of the wheel is a height \(r\) above the ground. The position of the bead in the lab frame is therefore:
\[\vec{R} + \vec{r} = r\omega t \hat{x} + r\hat{y} + r\hat{r}\]
Now if the wheel rolls clockwise (i.e., to the right) without slipping, the angle the bead has traveled is:
\[\theta(t) = -\omega t\]
assuming the bead starts at \(\theta = 0\). So the position of the bead is:
\[ r\omega t \hat{x} + r\hat{y} + r\hat{r} = r(\omega t +\cos (\omega t)) \hat{x} + r(1-\sin (\omega t)) \hat{y}.\]
Below is a plot of the position above for \(r = \omega = 1\) to verify that it is indeed a cycloid:
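Since the plot itself is not reproduced here, a short sketch generates the same curve for \(r = \omega = 1\) and confirms the cycloid's bounds (the bead rides between the ground at \(y = 0\) and the top of the wheel at \(y = 2r\)):

```python
import math

r = w = 1.0

def pos(t):
    # x = r(wt + cos wt), y = r(1 - sin wt), from the derivation above
    return (r * (w * t + math.cos(w * t)), r * (1 - math.sin(w * t)))

ts = [i * 0.01 for i in range(0, 1257)]  # roughly two revolutions (0 to ~4*pi)
ys = [pos(t)[1] for t in ts]
print(round(min(ys), 3), round(max(ys), 3))  # 0.0 2.0
```

Plotting `pos(t)` over this range with any plotting library reproduces the cycloid.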
The fact that the Earth rotates means that everywhere on the surface of the earth is a rotating reference frame. This has observable effects from the Coriolis acceleration, most notably in the
precession of the axis of rotation of a sufficiently large pendulum. This experimental apparatus is usually called a Foucault pendulum.
Derive the precession of the Foucault pendulum assuming the Earth rotates with angular frequency \(\Omega\).
The precession results from the fact that the plane in which the pendulum oscillates rotates with the rotation of the Earth. However, at higher latitudes \(\varphi\), this precession is slower
than at the equator. In two dimensions, the Coriolis acceleration is then:
\[\ddot{x} = 2\Omega \sin \varphi\, \dot{y}, \qquad \ddot{y} = -2\Omega \sin \varphi\, \dot{x}.\]
Small oscillations of a pendulum at frequency \(\omega\) obey Hooke's law. Using this fact and Newton's second law gives the following equations of motion:
\[\ddot{x} = -\omega^2 x + 2\Omega \sin \varphi\, \dot{y}, \qquad \ddot{y} = -\omega^2 y - 2\Omega \sin \varphi\, \dot{x}.\]
The solution for the complex coordinate \(z= x+iy\) can be found by matrix ODE methods to be:
\[z = e^{-i\Omega \sin \varphi\, t} \left(A e^{i\omega t} + Be^{-i \omega t} \right),\]
for some constants \(A\) and \(B\) to be determined by initial conditions. The leading prefactor \(e^{-i\Omega \sin \varphi t}\) describes the \(z\) coordinate as rotating over time. Since \(z =
x+iy\), the axis of the pendulum thus rotates in the \(x,y\) plane with frequency \(\Omega \sin \varphi\). Since \(\Omega = 2\pi \text{ rads}/\text{day}\), over the course of a single day, the
pendulum oscillation precesses by an angle \(-2\pi \sin \varphi\).
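The precession rate is easy to evaluate: per sidereal day the pendulum's plane turns through \(360 \sin \varphi\) degrees. A quick sketch (the Paris latitude is an illustrative value, chosen because Foucault's original pendulum hung in the Panthéon):

```python
import math

def precession_per_day_deg(latitude_deg):
    # The pendulum's plane precesses by 360*sin(latitude) degrees per day.
    return 360.0 * math.sin(math.radians(latitude_deg))

print(precession_per_day_deg(90))               # pole: full 360 degrees per day
print(precession_per_day_deg(0))                # equator: no precession
print(precession_per_day_deg(48.85))            # Paris: ~271 degrees per day
```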
[1] D. Kleppner and R. Kolenkow, An Introduction to Mechanics. McGraw-Hill, 1973.
[2] Image from https://en.wikipedia.org/wiki/Coriolis_force#/media/File:Corioliskraftanimation.gif under Creative Commons licensing for reuse with modification.
[3] Image from https://en.wikipedia.org/wiki/Foucault_pendulum#/media/File:Foucault-rotz.gif under Creative Commons licensing for reuse with modification.
|
{"url":"https://brilliant.org/wiki/angular-kinematics-problem-solving/","timestamp":"2024-11-13T12:25:15Z","content_type":"text/html","content_length":"63403","record_id":"<urn:uuid:39fb3f22-4d95-40b9-b858-adc5efc98f2b>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00052.warc.gz"}
|
ABC is an isosceles triangle. If the coordinates of the base ar... | Filo
is an isosceles triangle. If the coordinates of the base are and , the coordinates of vertex , is
Sol. (c) Let coordinates of vertex be .
[using distance formula]
Since, is isosceles, therefore
Only option (c) satisfy Eq. (i), as
Hence, option (c) is correct.
Question Text is an isosceles triangle. If the coordinates of the base are and , the coordinates of vertex , is
Topic Straight Lines
Subject Mathematics
Class Class 11
Answer Type Text solution:1
Upvotes 112
|
{"url":"https://askfilo.com/math-question-answers/a-b-c-is-an-isosceles-triangle-if-the-coordinates-of-the-base-are-b13-and-c-27-205340","timestamp":"2024-11-14T18:16:38Z","content_type":"text/html","content_length":"574189","record_id":"<urn:uuid:4103fc5d-9669-4f52-a8e3-3f2dde2fadf8>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00589.warc.gz"}
|
Evaluation Metrics: Treynor Ratio
The Treynor ratio is a measure of risk-adjusted return, similar to the Sharpe ratio and the Sortino ratio. It is used to evaluate the performance of a portfolio by adjusting for the risk taken to
generate the return. The higher the Treynor ratio, the better the portfolio’s return relative to the risk it is taking on. It is calculated by dividing the portfolio’s excess return (the portfolio
return minus the risk-free rate of return) by the portfolio’s beta. Beta is a measure of a portfolio’s volatility in relation to the market. A beta of 1 indicates that the portfolio’s returns are in
line with the market, while a beta greater than 1 indicates that the portfolio’s returns are more volatile than the market, and a beta less than 1 indicates that the portfolio’s returns are less
volatile than the market.
The formula for the Treynor ratio is:
Treynor ratio = (portfolio return – risk-free rate of return) / portfolio beta
For example, let’s say a portfolio has a return of 8%, a beta of 1.5, and the risk-free rate of return is 2%. The Treynor ratio would be:
(8 – 2) / 1.5 = 4
A Treynor ratio of 4 indicates that the portfolio is generating a return that is 4 times higher than the level of market risk (beta) it is taking on. A Treynor ratio greater than 1 indicates that the
portfolio is generating a return that is higher than the level of market risk it is taking on, while a Treynor ratio less than 1 indicates that the portfolio is generating a return that is lower than
the level of market risk it is taking on.
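The formula and the worked example above translate directly into code (a minimal sketch; the function name and the percent-based inputs are illustrative choices):

```python
def treynor_ratio(portfolio_return, risk_free_rate, beta):
    """Excess return per unit of market risk (beta); inputs in percent here."""
    if beta == 0:
        raise ValueError("beta must be non-zero")
    return (portfolio_return - risk_free_rate) / beta

# The article's example: 8% return, 2% risk-free rate, beta of 1.5.
print(treynor_ratio(8, 2, 1.5))  # 4.0
```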
The Treynor ratio is useful for evaluating the performance of a portfolio in relation to the market. It can be used to compare the risk-adjusted performance of different portfolios, and is
particularly useful for portfolios that have a high beta.
However, as with the Sharpe ratio and Sortino ratio, it is important to consider the Treynor ratio in the context of the investor’s risk tolerance and investment goals. Additionally, it is also
important to note that the Treynor ratio also does not account for skewness or kurtosis of the return distribution.
|
{"url":"https://quant.fish/wiki/evaluation-metrics-treynor-ratio/","timestamp":"2024-11-14T00:05:21Z","content_type":"text/html","content_length":"122570","record_id":"<urn:uuid:94897e7b-0828-4395-8ee3-9279f4ff3692>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00614.warc.gz"}
|
Charles’ Law Calculator | Solve Gas Volume and Temperature Problems
Charles’ Law Calculator | Solve Gas Volume and Temperature Problems
Understanding the relationships between gas volume and temperature is crucial in many scientific and engineering applications. Charles’ Law is a fundamental principle that describes this
relationship. In this article, we’ll explore the basics of Charles’ Law, discuss its real-life applications, and introduce you to our powerful Charles’ Law Calculator, which simplifies solving gas
volume and temperature problems.
Charles’ Law: Definition and Formula
Charles’ Law Definition
Charles’ Law, also known as the law of volumes, is a gas law that states that, at a constant pressure, the volume of an ideal gas is directly proportional to its absolute temperature. This
relationship was first described by Jacques Charles, a French chemist, and physicist, in the 1780s.
Charles’ Law Formula
The formula for Charles’ Law is expressed as:
V₁ / T₁ = V₂ / T₂
• V₁ and V₂ are the initial and final volumes of the gas, respectively
• T₁ and T₂ are the initial and final temperatures of the gas in Kelvin, respectively
Charles’ Law Calculator: Features and Functionality
Input Fields and Unit Selections
Our Charles’ Law Calculator is designed to be user-friendly and versatile. It allows you to input initial and final volumes (V₁ and V₂) and temperatures (T₁ and T₂) in various units. You can choose
from different units for volume (e.g., cubic meters, liters, cubic inches) and temperature (e.g., Kelvin, Celsius, Fahrenheit).
Calculation Process
The calculator uses the Charles’ Law formula to compute the missing variable when three of the four variables are given. It handles conversions between units and ensures accurate calculations.
Error Handling and Validation
The calculator performs error handling and validation to ensure that the inputs are valid and non-zero (if applicable). If an input value does not meet the necessary criteria, an error message is
displayed to inform the user.
How to Use the Charles’ Law Calculator
Step-by-Step Instructions
1. Enter the initial volume (V₁) and select the appropriate unit.
2. Enter the initial temperature (T₁) and select the appropriate unit.
3. Enter the final volume (V₂) or final temperature (T₂) and select the appropriate unit. Leave the field for the variable you want to calculate empty.
4. Click the “Calculate” button. The calculator will compute the missing variable and display the result.
5. If necessary, click the “Reset” button to clear the input fields and start a new calculation.
Example Calculation with Given Inputs
Initial parameters:
• Initial volume (V₁) = 3 m³
• Initial temperature (T₁) = 7 K
Final parameters:
• Final volume (V₂) = 2 m³
• Final temperature (T₂) = ? K
Using the Charles’ Law Calculator, we find that the final temperature (T₂) is approximately 4.667 K.
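The example calculation is a one-line rearrangement of the formula, T₂ = T₁ · V₂ / V₁ (a sketch of the computation the calculator performs; the function name is illustrative):

```python
def charles_final_temperature(v1, t1, v2):
    """Solve V1/T1 = V2/T2 for T2 (temperature in kelvin, any consistent volume unit)."""
    if v1 <= 0 or t1 <= 0:
        raise ValueError("initial volume and temperature must be positive")
    return t1 * v2 / v1

# The worked example above: V1 = 3 m^3, T1 = 7 K, V2 = 2 m^3.
print(round(charles_final_temperature(3, 7, 2), 3))  # 4.667
```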
Tips for Using the Calculator Effectively
• Double-check your inputs for accuracy before performing a calculation.
• Make sure you have selected the correct units for each variable.
• Remember that the calculator assumes ideal gas behavior, so the results may not be accurate for all real gases under certain conditions.
Applications of Charles’ Law
Everyday Examples
• Hot air balloons: As the air inside the balloon is heated, it expands, causing the balloon to rise.
• Car tires: Tires can expand or contract due to temperature changes, affecting their pressure and performance.
Industrial Applications
• Refrigeration systems: Charles’ Law helps engineers design systems that efficiently control gas volume and temperature.
• Gas storage: Understanding the relationship between gas volume and temperature is essential for safely storing and transporting gases under various conditions.
Scientific Research
• Atmospheric studies: Researchers use Charles’ Law to model the behavior of gases in the Earth’s atmosphere and predict the impact of temperature changes on gas volume.
• Space exploration: Charles’ Law is used in the design of space vehicles and equipment, where the behavior of gases under extreme temperature conditions must be considered.
Frequently Asked Questions (FAQs)
What is Charles’ Law?
Charles’ Law is a fundamental gas law that states that the volume of an ideal gas is directly proportional to its absolute temperature when pressure is held constant.
How do I use the Charles’ Law Calculator?
Enter the initial and final volumes and temperatures of the gas in the appropriate input fields, select the desired units, and click “Calculate.” The calculator will compute the missing variable
based on the Charles’ Law formula.
When should I use Charles’ Law?
Use Charles’ Law when you need to determine the relationship between gas volume and temperature at a constant pressure or to solve problems involving changes in gas volume and temperature.
What units can I use with the calculator?
The calculator supports various units for volume (e.g., cubic meters, liters, cubic inches) and temperature (e.g., Kelvin, Celsius, Fahrenheit).
Can the calculator handle different units for volume and temperature?
Yes, the calculator can handle different units for volume and temperature. It automatically converts the inputs to the appropriate units for calculation and displays the result in the selected unit.
Charles’ Law is a fundamental principle that helps us understand the relationship between gas volume and temperature. Our easy-to-use Charles’ Law Calculator simplifies the process of solving gas
volume and temperature problems, making it a valuable tool for students, engineers, and researchers alike. By understanding Charles’ Law and its applications, we can better grasp the behavior of
gases in various contexts, from everyday life to cutting-edge scientific research.
Leave a Comment
|
{"url":"https://calculatorshub.net/chemistry-calculators/charles-law-calculator/","timestamp":"2024-11-09T10:35:00Z","content_type":"text/html","content_length":"128115","record_id":"<urn:uuid:60cd19fc-950f-4d80-b8b4-46dbf1dd3afc>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00378.warc.gz"}
|
Xianke Zhang
Xianke Zhang (张贤科, born 1941) is a Chinese mathematician, professor, and author at the Shenzhen Graduate School of the Harbin Institute of Technology.^[1]
Academic Career[edit]
Zhang was among the first professors in China to join the renowned Southern University of Science and Technology (SUSTech)^[2]. Before that, he was a professor at Tsinghua University for 14 years^[3].
He moved to the Shenzhen Graduate School of Harbin Institute of Technology in 2016.
Posts Held[edit]
Zhang was made a Visiting Scholar at the University of Maryland in the United States in 1987. He became a Professor at Tsinghua University in China in 1993, a post he left in 2011. Zhang was also made a Scientist at the Abdus Salam International Centre for Theoretical Physics in Italy in 1991, before being appointed the Senior Scientist in 1999. He still holds this position.
Contribution in Chinese Education[edit]
Zhang is known in particular for his artistic integration of traditional culture into teaching. He is the author of several well-known mathematics textbooks, including Introduction to Algebraic Number Theory (代数数论导引)^[4], Advanced Algebra (高等代数学)^[5], Advanced Linear Algebra (高等线性代数)^[6]^[7], and Elementary Number Theory (初等数论)^[8]. He is also the author of Advanced Algebra Problem-Solving Methods (高等代数解题方法)^[9] and Famous Greek Problems and Modern Mathematics (古希腊命题与现代数学)^[10].
His teaching belief was influenced by Luogeng Hua, a pioneering Chinese mathematician who also graduated from the University of Science and Technology of China^[11].
National awards[edit]
• 'National Natural Science' Award (China, 1989)
• 'Chinese PhD for Distinguished Contribution' Award (China, 1991) (“做出突出贡献的中国博士学位获得者”奖)
• 'Advancing Science Award - Chinese Academy of Science' (China, 1988)
Other awards[edit]
Zhang has received multiple awards from the city of Beijing, the province of Anhui, Tsinghua University, the University of Science and Technology of China, and the China Baowu Steel Group.
On Number Fields of Type (L, L, …, L) , Scientia Sinica, A27(1984), No. 10, 1018-1026; (the author of the followings is Zhang Xianke unless otherwise specified)
Cyclic Quartic Fields and Genus Theory of Their Subfields, J. Number Theory, 18 (1984), No.3, 350-355;
A Simple Construction of Genus Fields of Abelian Number Fields, Proceed. of American Math. Soc. 94(1985), No.3. 393-395;
Determination of algebraic function fields of type (2, 2, …, 2) with class number one, Sci. Sinica Ser. A31(1988), no.8, 908-915;
Ten Formulae of Type Ankeny-Artin-Chowla for Class Numbers of General Cyclic Quartic Fields, Scientia Sinica, A32(1989), 4:417-428
Ideal Class Groups and Their Subgroups of Real Quadratic Fields (by Zhang XK & L.Washington), Science in China, A40(1997) 9: 909-916
Counterexample and correction about genus fields of number fields. J. Number Theory 23(1986), no.3, 318-321.
Structure and Prime Decomp. Law and Relative Extensions of Abelian Fields with Prime Power Degree, Science in China, A42(1999)8:816-824
Steinitz Class of Mordell-Weill Groups of Elliptic Curves with C M (by Liu Tong & Zhang XK ) Pacific J. of Math. 193(2000), 2: 371-379.
L-Series and Their 2-adic Valuations at s=1 Attached to CM Elliptic Curves, Acta Arithmetica, 103.1(2002), 79-95 (by DR QIU, XK ZHANG)
Congruence Formulae modulo powers of 2 for class numbers of cyclic quartic fields, Sci. China A 52 (2009), No.2, 417-426 (by Ma Lianrong, Li Wei, Zhang Xianke)
This article "Xianke Zhang" is from Wikipedia. The list of its authors can be seen in its historical and/or the page Edithistory:Xianke Zhang. Articles copied from Draft Namespace on Wikipedia could
be seen on the Draft Namespace of Wikipedia and not main one.
|
{"url":"https://en.everybodywiki.com/Xianke_Zhang","timestamp":"2024-11-10T16:15:26Z","content_type":"text/html","content_length":"68819","record_id":"<urn:uuid:c91c7f66-3a39-4c2b-9425-98f6ede6efba>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00333.warc.gz"}
|
Refuting Flat Earth Misconceptions: The Earth is a Geoid
Which model is best to describe the earth's shape?
The best model to describe the Earth's shape is the geoid. While simpler models like a sphere or an ellipsoid are often used, the geoid provides the most accurate representation of the Earth's shape.
Geoid Model
The geoid is the model that represents the Earth's shape based on its actual gravitational field. It corresponds to the hypothetical sea level surface of the Earth (mean sea level) if there were no
winds, tides, or currents, extended across the continents. The geoid takes into account the Earth's uneven distribution of mass, which affects gravity and, therefore, the Earth's shape.
Key Features:
1. Irregular Shape: The geoid is irregular and more accurate than a perfect sphere or ellipsoid because it reflects local variations in gravity due to mountain ranges, ocean trenches, and density
variations in the Earth's interior.
2. Gravity-Based Surface: It is defined as an equipotential surface where the gravitational potential energy is constant, meaning the force of gravity is perpendicular to the surface at every point.
3. Reference for Heights: The geoid is used as the reference surface for measuring elevations (like mountain heights) and for GPS systems to more accurately determine positions.
Simpler Models:
1. Ellipsoid (Oblate Spheroid):
□ A good approximation for the Earth's shape is an ellipsoid (or oblate spheroid), which accounts for the Earth's flattening at the poles and bulging at the equator due to its rotation.
□ The ellipsoid is mathematically simpler and used for many practical purposes like mapping and satellite navigation.
□ Common ellipsoid models include the WGS84 (World Geodetic System 1984), which is widely used in GPS.
2. Sphere:
□ A perfect sphere is a very rough approximation of the Earth’s shape. Although simple, this model does not account for the Earth’s flattening at the poles or other irregularities.
□ It's useful for basic calculations but far from accurate.
• Geoid: The most accurate, accounting for all gravitational variations and irregularities. It is the best model for high-precision applications like satellite-based measurements and determining
sea levels.
• Ellipsoid: A close approximation of the Earth's overall shape, easier to work with mathematically, and used in many global positioning systems.
• Sphere: A very simplified and imprecise model, but useful for basic geometric or astronomical applications.
The geoid is the best model to describe the Earth's shape in terms of physical accuracy, as it accounts for local gravitational variations and provides a more realistic depiction of the Earth's
surface. For practical applications like GPS, the ellipsoid model (such as WGS84) is commonly used due to its simplicity and sufficient accuracy for most purposes.
Can the earth being a Geoid be an explanation for some arguments flat-earthers make?
Yes, the geoid model can help address some of the misconceptions and arguments often made by flat-earthers. Many flat-earth arguments stem from misunderstandings about the Earth’s shape, particularly
the idea that the Earth should appear perfectly smooth or uniformly round like a sphere. The geoid model clarifies that the Earth’s shape is not perfectly spherical but irregular and influenced by
gravitational variations, which may help explain phenomena that flat-earthers misinterpret.
Common Flat-Earth Arguments and How the Geoid Helps Address Them:
1. Argument: The horizon always appears flat
Explanation with the Geoid:
Flat-earthers often claim that the horizon appears flat when viewed from the ground or even from an airplane. The geoid model shows that while the Earth is curved, the curvature is very gradual.
Because the Earth is so large, local sections of the Earth's surface may appear flat to the naked eye, especially when observed over short distances or low altitudes.
The Earth's irregular geoid shape also means that, depending on your location, gravitational forces might create slight differences in elevation (even at sea level), contributing to the illusion
of flatness.
2. Argument: Water always finds its level
Explanation with the Geoid:
Flat-earthers often argue that bodies of water, like oceans or lakes, should be flat because "water finds its level." The geoid model shows that water does indeed follow the Earth's gravitational
field, conforming to the geoid shape rather than a perfectly flat plane. "Level" on Earth refers to surfaces that are perpendicular to the local direction of gravity, which aligns with the curved
geoid rather than being completely flat.
The geoid explains that water flows to regions of lower gravitational potential, and when in equilibrium, large bodies of water like oceans conform to the irregular gravitational surface of the Earth.
3. Argument: We can’t see the curvature of the Earth
Explanation with the Geoid:
The gradual curvature of the Earth’s surface is challenging to perceive over small distances. Even though the Earth is curved, the curvature is so subtle that it’s not easily visible from ground
level or even from low altitudes. The geoid model helps explain that local gravitational variations mean the Earth isn’t a perfect sphere, and where you are on Earth (such as near mountains or
valleys) might affect your perception of its curvature.
The geoid, with its undulating surface, can further clarify that small local features may distort one’s perception, contributing to the difficulty in observing the Earth’s curvature without
traveling to very high altitudes.
4. Argument: Airplanes don’t account for curvature
Explanation with the Geoid:
Flat-earthers sometimes claim that pilots don’t adjust for curvature when flying, suggesting the Earth must be flat. In reality, airplane flight paths follow the geoid surface, which means that
even though the Earth is curved, gravity constantly pulls the airplane toward the center of the Earth. Therefore, pilots don’t need to "adjust" for curvature manually because the plane’s path
automatically follows the Earth's shape through gravitational forces.
Since the geoid accounts for these gravitational variations, flight paths follow this natural shape, and airplanes naturally curve along with the Earth without requiring specific adjustments.
5. Argument: Inconsistent measurements of Earth’s curvature
Explanation with the Geoid:
Some flat-earthers argue that measurements of the Earth’s curvature are inconsistent, citing examples where the curvature is different in various locations. The geoid model directly addresses
this by explaining that the Earth’s shape is not uniform—it is influenced by variations in mass distribution beneath the surface, resulting in different local gravitational fields. These
differences cause local variations in the curvature and elevation of the Earth, which can explain why measurements might vary depending on where they are taken.
The geoid model, with its explanation of gravitational variation and the Earth's irregular shape, can help address some of the misconceptions flat-earthers raise. By showing that the Earth is not a
perfect sphere but an uneven surface shaped by gravity, the geoid helps explain why certain observations (such as a flat horizon or water’s surface) might seem to support flat-earth arguments when,
in reality, they align with the Earth's true geoid shape.
*Answers provided by GPT-4o
|
{"url":"https://www.basedtheology.com/2024/10/refuting-flat-earth-misconceptions.html","timestamp":"2024-11-06T03:58:49Z","content_type":"text/html","content_length":"71023","record_id":"<urn:uuid:2bc78fd6-9eec-4d5c-9031-d1f21cdc0d90>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00333.warc.gz"}
|
• Phreatic mode doesn't work very well in cases where the water table must pass through several layers of the model, such as may happen in mountainous terrain.
• AquaGeo, Ltd., invites FEFLOW users to try our free plug in called ZoneTable. This plug in works with Version 6.2 in 3D flow models, and provides a different approach to specifying Kx, Ky, Kz,
specific storage and unsaturated flow porosity (specific yield). The plug in can simplify the process of model setup and modification by using a material-property lookup table and zonation array.
Pete Sinton & Bill Wingle
• Alice, are you attempting to set the elevation of each slice to one value, and then deactivate elements to mimic elevation differences of geological layers? In 6.1, elements cannot be deactivated, but you can set the permeability and storage of the element to very small values so that they act like an impermeable barrier or block of material. This won't work if the elements you want to "inactivate" are at the top of the model where non-zero recharge (in/out flow) is applied.
• I've been testing AlgoMesh and figured out how to do this in 6.2: Open feflow, but do not open a fem or smh file. Click "File" and "Import Mesh...", and then add, for example, a shape file of the
mesh. It will import as a 2d confined model. I don't know yet what it does if you have a model already open...
• The initial temperature can also be set in all earlier versions of feflow
• You can also define the wall using super mesh (SM) elements instead of lines: a halo of SM elements that grows in width with distance from the wall helps control mesh density.
• select the nodes you want the data for, export the temperature for the selection, then use excel or some other program to compute the average
• I haven't yet found a more efficient way other than close and careful inspection of the supermesh, and I agree, using feflow to do the supermesh is the most efficient editing method.
• The 'ideal element size' is based on a 2D analytical solution applied to a 3D system? What analytical solution is used? The Book doesn't indicate for flow-only situations. I agree that smaller
element size (closer node spacing) may not result in more accuracy because at some point the numerical approximation will match the continuous PDE very closely (within the limits of the machine).
What Peter is calling an "over estimate" is not the same thing as numerical error. When you add nodes (smaller elements), the simulation is indeed more accurate (or as accurate as the machine
precision allows) but the computed drawdown values at nodes inside the "virtual radius" (the actual or physical radius of the pumped well) do not represent what actually happens inside the bore hole.
I assume the analytical solution Peter refers to is the typical one developed by CV Theis. The Theis solution does not apply to what actually happens inside of the bore hole either. However, both
the numerical and the analytical solutions can be used to compute drawdowns within the radius of the bore and, since the bore hole is not actually simulated by either, both will provide accurate
results for a point sink (one of the assumptions for the Theis equation is that the borehole is infinitely small).
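Not from the thread itself, but the point-sink behavior being discussed is easy to check numerically. A hedged sketch (all names are mine) of the Theis solution, using the convergent series for the well function W(u) so no external libraries are needed; the drawdown is finite at any r > 0 but grows without bound as the radius shrinks, which is why neither solution represents conditions inside the bore:

```python
import math

def well_function(u, terms=40):
    """Theis well function W(u) via its convergent series:
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)**(n+1) * u**n / (n * n!)"""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    total = -gamma - math.log(u)
    for n in range(1, terms + 1):
        total += (-1) ** (n + 1) * u ** n / (n * math.factorial(n))
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q / (4*pi*T) * W(u), with u = r**2 * S / (4*T*t).
    Consistent units assumed (e.g. m and days)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)
```

For example, with Q = 100, T = 50, S = 1e-4, t = 1, the computed drawdown at r = 1 exceeds that at r = 10, and as r shrinks the logarithm in W(u) diverges.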
I wonder if the virtual radius is really a good guideline for node spacing near the bore hole in cases where one wants to simulate drawdown near to, or at, the borehole wall. Maybe I will do some
As for the artifact (numerical error), I agree with Peter that it's not that important provided (1) you are not attempting to accurately simulate the drawdown near to, or at, the wall of the
borehole and (2) the error doesn't overwhelm your mass balance.
I think Mark did get more accurate results when he added nodes.
• I see. I hadn't used this feature of FEFLOW before, but looking at the help file, the "ideal element size" seems fixed regardless of the pumping rate. When I change the well radius (which is the
radius of the vertical pipe element feflow internally assigns), the "ideal" radius changes, but the rate has no effect on the "ideal" radius.
However, the larger the pumping rate, the larger the hydraulic gradients at the well node and associated pipe elements, which to me means that elements have to be smaller (nodes closer together)
to get an accurate simulation of heads in and near the well.
The help file states this: "Due to spatial discretization in numerical modelling, the hydraulic head resulting from the simulation at the well nodes themselves highly depends on the size of the
elements around the well location. "
So while I don't understand why the ideal radius isn't also a function of the pumping rate, it is clear to me that smaller elements are needed as the pumping rate increases. In any case, the way
you describe your problem leads me to think your node spacing (and hence element size) is too large.
You basically tested this when you put nodes closer to the pumping well. Your result was more drawdown, which is exactly what I would expect with a more accurate simulation. The drawdown wasn't over-estimated in that simulation; it was more accurately calculated. FEFLOW is simulating an ideal pumping well, but your field data comes from a realistic (non-ideal) well.
|
{"url":"https://support.dhigroup.com/public/62c05a76-a3c8-e911-a96d-000d3a4640d2/forum-posts","timestamp":"2024-11-12T02:41:16Z","content_type":"text/html","content_length":"66828","record_id":"<urn:uuid:aaa6307a-309a-4e7c-a9a7-d07da8074500>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00036.warc.gz"}
|
Mathematics teaching practice in secondary school
COURSE AIMS AND OBJECTIVES: The course aims to train students, prospective mathematics teachers, in the successful preparation, delivery, and analysis of mathematics lessons at the secondary school level.
COURSE DESCRIPTION AND SYLLABUS: Mathematics teaching practice takes place in secondary schools. Groups of students (max 5) will attend mathematics lessons given by selected teachers from secondary schools. They will become acquainted with legal regulations and school organization. They will be introduced to pedagogical documentation and the mathematics syllabus at the primary school level. They will plan, prepare for teaching, and teach several lessons in class. During the practice, each student will keep a log-book. For each lesson they will write a detailed didactical preparation. Afterward, they will prepare and deliver a public lesson.
|
{"url":"https://www.pmf.unizg.hr/math/en/course/mpimuss","timestamp":"2024-11-07T15:43:50Z","content_type":"text/html","content_length":"81868","record_id":"<urn:uuid:5c9d27f3-2d7c-44f9-a11a-e95099d8c047>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00801.warc.gz"}
|
Lecture 6 Problems. - ppt video online download
2 Example A solid conducting sphere of radius a has a net charge +2Q. A conducting spherical shell of inner radius b and outer radius c is concentric with the solid sphere and has a net charge –Q as
shown in figure. Using Gauss’s law find the electric field in the regions labeled 1, 2, 3, 4 and find the charge distribution on the spherical shell.
3 Region (1) r < a: to find E inside the solid sphere of radius a, we construct a Gaussian surface of radius r < a. E = 0, since there is no charge inside the Gaussian surface. Region (2) a < r < b: we construct a spherical Gaussian surface of radius r. (Which quantity equals zero, and why?)
4 Region (4) r > c: we construct a spherical Gaussian surface of radius r > c. The total net charge inside the Gaussian surface is q = 2Q + (−Q) = +Q. Therefore Gauss's law gives E = Q/(4πε₀r²), radially outward, for r > c. Region (3) b < r < c: E = 0. How?
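The piecewise result of this example can be summarized in a short sketch (function and constant names are illustrative, not from the slides):

```python
K = 8.99e9  # Coulomb constant k, N*m^2/C^2

def sphere_shell_field(r, a, b, c, Q):
    """E(r) for the solid conducting sphere (+2Q, radius a) inside a
    concentric conducting shell (inner radius b, outer radius c, net -Q)."""
    if r < a:
        return 0.0                 # region 1: inside a conductor, E = 0
    if r < b:
        return K * 2 * Q / r**2    # region 2: Gaussian surface encloses +2Q
    if r < c:
        return 0.0                 # region 3: inside the shell's conductor
    return K * Q / r**2            # region 4: encloses 2Q + (-Q) = +Q
```

The induced charge distribution follows from region 3: the shell's inner surface must carry −2Q so the enclosed charge vanishes, leaving +Q on its outer surface.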
5 Example: A long straight wire is surrounded by a hollow cylinder whose axis coincides with that wire, as shown in the figure. The solid wire has a charge per unit length of +λ, and the hollow cylinder has a net charge per unit length of +2λ. Use Gauss's law to find (a) the charge per unit length on the inner and outer surfaces of the hollow cylinder and (b) the electric field outside the hollow cylinder, a distance r from the axis.
6 (a) Use a cylindrical Gaussian surface S1 within the conducting cylinder, where E = 0: λ_inner = −λ. Also λ_inner + λ_outer = 2λ, thus λ_outer = 3λ. (b) For a Gaussian surface S2 outside the conducting cylinder, the enclosed charge per unit length is λ + 2λ = 3λ, so E = 3λ/(2πε₀r).
7 Example: Consider a long cylindrical charge distribution of radius R with a uniform charge density ρ. Find the electric field at distance r from the axis where r < R. If we choose a cylindrical Gaussian surface of length L and radius r, its volume is πr²L, and it encloses a charge q = ρπr²L. By applying Gauss's law we get E = ρr/(2ε₀), directed radially outward from the cylinder axis. Notice that the electric field increases as r increases, and the field is proportional to r for r < R. For the region outside the cylinder (r > R), the electric field decreases as r increases.
8 Example: Two large non-conducting sheets of +ve charge face each other as shown in the figure. What is E at points (i) to the left of the sheets, (ii) between them, and (iii) to the right of the sheets? We know from before that for each sheet, the magnitude of the field at any point is E = σ/(2ε₀). (a) At a point to the left of the two parallel sheets: E = −E1 + (−E2) = −2E.
9 (b) At a point between the two sheets: E = E1 + (−E2) = zero. (c) At a point to the right of the two parallel sheets: E = E1 + E2 = 2E.
10 Example: Two large metal plates face each other and carry charges with surface density +σ and −σ respectively on their inner surfaces, as shown in figure 4.24. What is E at points (i) to the left of the sheets, (ii) between them, and (iii) to the right of the sheets?
11 Example: A square plate of copper of sides 50 cm is placed in an extended electric field of 8×10⁴ N/C directed perpendicular to the plate. Find (a) the charge density of each face of the plate. Answer: E = 8×10⁴ N/C, A = 0.25 m², q = 0.17×10⁻⁶ C, σ = 0.68×10⁻⁶ C/m².
12 Example: An electric field of intensity 3.5×10³ N/C is applied along the x axis. Calculate the electric flux through a rectangular plane 0.35 m wide and 0.70 m long if (a) the plane is parallel to the yz plane, (b) the plane is parallel to the xy plane, and (c) the plane contains the y axis and its normal makes an angle of 40° with the x axis. (a) The plane is parallel to the yz plane: Φ = EA. (b) The plane is parallel to the xy plane: the angle between E and the normal is 90°, so Φ = 0. (c) The normal makes an angle of 40° with the x axis: Φ = EA cos 40°.
13 Example: A point charge of +5 μC is located at the center of a sphere with a radius of 12 cm. What is the electric flux through the surface of this sphere? Example: (a) Two charges of 8 μC and −5 μC are inside a cube of sides 0.45 m. What is the total electric flux through the cube? (b) Repeat (a) if the same two charges are inside a spherical shell of radius … m.
14 Example: A long, straight metal rod has a radius of 5 cm and a charge per unit length of 30 nC/m. Find the electric field at the following distances from the axis of the rod: (a) 3 cm, (b) 10 cm, (c) 100 cm. Use E = λ/(2πε₀r) for points outside the rod to find: (a) zero, (b) 5.4×10³ N/C, (c) 540 N/C.
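The answers in this example can be reproduced with a small sketch (a hedged illustration; the function name is mine). Inside the conducting rod E = 0; outside, E = λ/(2πε₀r):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def rod_field(r, radius, lam):
    """Field of a long charged conducting rod at distance r from its axis."""
    if r < radius:
        return 0.0                          # inside the conductor
    return lam / (2 * math.pi * EPS0 * r)   # line-charge field outside
```

With radius = 0.05 m and lam = 30e-9 C/m this gives 0 at 3 cm, about 5.4×10³ N/C at 10 cm, and about 540 N/C at 100 cm, matching the stated answers.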
15 Example: The electric field everywhere on the surface of a conducting hollow sphere of radius 0.75 m is measured to be 8.90×10² N/C and points radially toward the center of the sphere. What is the net charge within the surface? Φ = EA = E·4πr² = 6.3×10³ N·m²/C; Φ = q/ε₀, so |q| = 5.5×10⁻⁸ C. The charge is negative, since the field points inward.
16 (1) A closed surface encloses a net charge of 2.50 × 10⁻⁶ C. What is the net electric flux through the surface? In what direction is this net flux? Ans: … × 10⁵ N·m²/C; outward. (2) What is the charge per unit area, in coulombs per square meter, of an infinite sheet of charge if the electric field produced by the sheet of charge has a magnitude of 4.50 N/C? Ans: … C/m². (3) The electric field in the region between a pair of oppositely charged plane parallel conducting plates, each 100 cm² in area, is 7.20 × 10³ N/C. What is the charge on each plate? Neglect edge effects. Ans: … C.
17 [6] A conducting sphere carrying a charge ‘q’ has a radius ‘a’. It is inside a concentric hollow conducting sphere of inner radius ‘b’ and outer radius ‘c’. The hollow sphere has no net charge. Calculate the electric field for: (a) r < a; (b) a < r < b; (c) b < r < c; (d) r > c; (e) What is the charge on the inner surface of the hollow sphere? (f) What is the charge on the outer surface? [7] A small sphere whose mass is 0.60 g carries a charge of 3.0 × 10⁻⁹ C and is attached to one end of a silk fiber 8.00 cm long. The other end of the fiber is attached to a large vertical conducting plate, which has a surface charge of … C/m² on each side. Find the angle the fiber makes with the vertical plate when the sphere is in equilibrium.
|
{"url":"http://slideplayer.com/slide/3619984/","timestamp":"2024-11-03T23:08:58Z","content_type":"text/html","content_length":"196596","record_id":"<urn:uuid:b05d6b67-2996-4c57-88a7-6f9f2133eeac>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00322.warc.gz"}
|
Common Mistakes to Avoid When Converting VFD Frequency to RPM in context of vfd frequency to rpm
01 Sep 2024
Common Mistakes to Avoid When Converting VFD Frequency to RPM
Variable Frequency Drives (VFDs) are widely used in industrial applications to control the speed and torque of motors. One common task when working with VFDs is converting the frequency output by the
drive to the corresponding motor RPM. However, this conversion requires careful attention to avoid errors that can lead to inaccurate calculations or even damage to the motor. In this article, we
will discuss the common mistakes to avoid when converting VFD frequency to RPM.
Mistake #1: Ignoring the Motor Type
Different motors have different characteristics that affect the relationship between frequency and RPM. Synchronous motors lock exactly to the supply frequency, while induction motors run slightly below synchronous speed because of load-dependent slip. Failing to consider the motor type can lead to inaccurate conversions. To avoid this mistake, always consult the motor’s specifications or manufacturer documentation to determine its type.
Formula: None
Mistake #2: Not Accounting for Pole Pairs
The number of pole pairs in a motor affects the conversion factor between frequency and RPM. Failing to account for pole pairs can result in significant errors. To avoid this mistake, ensure you know
the number of pole pairs for your motor.
Formula: RPM = (60 x Frequency) / Pole Pairs
• RPM is the synchronous motor speed in revolutions per minute
• Frequency is the VFD output frequency in Hz
• Pole Pairs is the number of pole pairs in the motor (a 4-pole motor has 2 pole pairs, so an equivalent form is RPM = (120 x Frequency) / Number of Poles)
Note that an induction motor under load runs slightly below this synchronous speed because of slip.
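As a hedged sketch (the function name and default are illustrative, not from any vendor's API), the standard synchronous-speed conversion can be coded directly, with an optional slip fraction for induction motors under load:

```python
def vfd_rpm(frequency_hz, pole_pairs, slip=0.0):
    """Motor speed from VFD output frequency.

    Synchronous speed is 60 * f / pole_pairs (equivalently 120 * f / poles);
    slip is the fraction (e.g. 0.03 for 3%) by which an induction motor
    lags synchronous speed under load."""
    sync_rpm = 60.0 * frequency_hz / pole_pairs
    return sync_rpm * (1.0 - slip)
```

For a 4-pole (2 pole-pair) motor at 60 Hz this gives 1800 RPM synchronous, or about 1746 RPM at 3% slip.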
Mistake #3: Using an Incorrect Conversion Factor
Some VFDs provide a conversion factor between frequency and RPM, but this factor may not be accurate for all motors. Failing to verify the accuracy of the conversion factor can lead to errors. To
avoid this mistake, consult the motor’s specifications or manufacturer documentation to determine the correct conversion factor.
Formula: None
Mistake #4: Not Considering Load Conditions
The load conditions on the motor can affect its speed and torque characteristics. Failing to consider these conditions can result in inaccurate conversions. To avoid this mistake, ensure you know the
load conditions (e.g., full load, partial load, or no-load) and adjust your calculations accordingly.
Formula: None
Mistake #5: Not Using a Consistent Unit System
When converting between frequency and RPM, it is essential to use a consistent unit system. Failing to do so can lead to errors. To avoid this mistake, ensure you are using either Hz (frequency) or
RPM as your primary unit.
Formula: None
Converting VFD frequency to RPM requires attention to detail and careful consideration of the motor type, pole pairs, conversion factor, load conditions, and unit system. By avoiding these common
mistakes, you can ensure accurate calculations and reliable operation of your motor. Remember to consult the motor’s specifications or manufacturer documentation for specific guidance on converting
between frequency and RPM.
Additional Tips
• Always verify the accuracy of the VFD output frequency and motor speed measurements.
• Consider using a VFD with built-in conversion capabilities to simplify the process.
• Consult with experienced professionals or seek additional training if you are unsure about the conversion process.
Related articles for ‘vfd frequency to rpm’ :
• Reading: Common Mistakes to Avoid When Converting VFD Frequency to RPM in context of vfd frequency to rpm
Calculators for ‘vfd frequency to rpm’
|
{"url":"https://blog.truegeometry.com/tutorials/education/a138c0037d64e0cd96e2779e80e63fd8/JSON_TO_ARTCL_Common_Mistakes_to_Avoid_When_Converting_VFD_Frequency_to_RPM_in_c.html","timestamp":"2024-11-06T07:28:43Z","content_type":"text/html","content_length":"17864","record_id":"<urn:uuid:93fb424e-2450-4867-b574-65ade2555297>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00837.warc.gz"}
|
Implication - (Formal Logic II) - Vocab, Definition, Explanations | Fiveable
from class:
Formal Logic II
Implication is a logical relationship between two propositions where the truth of one proposition (the antecedent) guarantees the truth of another proposition (the consequent). This concept is
essential in understanding how statements relate to one another, especially in terms of cause and effect, as well as reasoning processes.
5 Must Know Facts For Your Next Test
1. In propositional logic, an implication is often symbolized as 'A → B', where A is the antecedent and B is the consequent.
2. An implication is considered false only when the antecedent is true and the consequent is false; in all other cases, it is true.
3. Implications can be constructed in first-order logic using predicates and quantifiers to express relationships between objects.
4. Understanding implications is crucial for constructing valid arguments in formal proofs, where you derive conclusions based on premises.
5. In intuitionistic logic, implications take on a more constructive interpretation, meaning that to prove 'A → B', one must provide a method to transform any proof of A into a proof of B.
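Fact 2 (the truth conditions of material implication) can be checked mechanically; a minimal Python sketch (names are illustrative):

```python
def implies(a: bool, b: bool) -> bool:
    """Material implication A -> B: equivalent to (not A) or B."""
    return (not a) or b

# full truth table: false only in the (True, False) row
table = {(a, b): implies(a, b) for a in (True, False) for b in (True, False)}
```

Only the entry for a true antecedent and a false consequent is False; the other three rows are True, matching Fact 2.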
Review Questions
• How does the concept of implication enhance our understanding of basic propositional logic?
□ Implication serves as a foundational element in propositional logic by establishing how two statements relate. When we analyze implications, we see that the truth of one statement can lead to
another, which helps in evaluating logical arguments. By recognizing this relationship, we can construct more complex logical statements and understand their truth conditions, thus enhancing
our ability to reason effectively.
• Discuss how implications are represented in first-order logic and why this representation matters.
□ In first-order logic, implications are represented through predicates and quantifiers, allowing for more nuanced relationships among objects. This representation matters because it enables us
to express complex statements about specific entities and their properties. For instance, using quantifiers like 'for all' or 'there exists' along with implications allows us to capture
universal truths or specific conditions that link different predicates together, which is crucial for rigorous reasoning.
• Evaluate the differences between classical and intuitionistic interpretations of implication and their implications for logical reasoning.
□ Classical logic treats implications as a material relationship where 'A → B' is false only when A is true and B is false. However, intuitionistic logic interprets implications constructively;
one must provide a method to convert a proof of A into a proof of B. This difference means that while classical logic allows for non-constructive proofs (like proofs by contradiction),
intuitionistic logic emphasizes constructive methods. This shift affects how we approach proofs and reasoning in contexts like mathematics and computer science, where constructive validity is
often required.
|
{"url":"https://library.fiveable.me/key-terms/formal-logic-ii/implication","timestamp":"2024-11-13T08:52:23Z","content_type":"text/html","content_length":"154020","record_id":"<urn:uuid:322ad0bc-0e99-4ff7-b408-dd9ce0f1d4b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00331.warc.gz"}
|
Infinite Powers
From preeminent math personality and author of The Joy of x, a brilliant and endlessly appealing explanation of calculus—how it works and why it makes our lives immeasurably better.
Without calculus, we wouldn’t have cell phones, TV, GPS, or ultrasound. We wouldn’t have unraveled DNA or discovered Neptune or figured out how to put 5,000 songs in your pocket.
Though many of us were scared away from this essential, engrossing subject in high school and college, Steven Strogatz’s brilliantly creative, down-to-earth history shows that calculus is not about
complexity; it’s about simplicity. It harnesses an unreal number—infinity—to tackle real-world problems, breaking them down into easier ones and then reassembling the answers into solutions that feel
Infinite Powers recounts how calculus tantalized and thrilled its inventors, starting with its first glimmers in ancient Greece and bringing us right up to the discovery of gravitational waves (a
phenomenon predicted by calculus). Strogatz reveals how this form of math rose to the challenges of each age: how to determine the area of a circle with only sand and a stick; how to explain why Mars
goes “backwards” sometimes; how to make electricity with magnets; how to ensure your rocket doesn’t miss the moon; how to turn the tide in the fight against AIDS.
As Strogatz proves, calculus is truly the language of the universe. By unveiling the principles of that language, Infinite Powers makes us marvel at the world anew.
|
{"url":"http://unity3d.heurist.com:8083/book/9374","timestamp":"2024-11-10T12:12:55Z","content_type":"text/html","content_length":"20240","record_id":"<urn:uuid:2380254c-ec1c-43a2-9f65-a94a57ca18ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00393.warc.gz"}
|
You guys are GREAT!! It has been 20 years since I have even thought about Algebra, now with my daughter I want to be able to help her. The step-by-step approach is wonderful!!!
S.D., Oregon
This is great, finishing homework much faster!
Jessie James, AK
I am very much relieved to see my son doing well in Algebra. He was always making mistakes, as his concepts of arithmetic were not at all clear. Then I decided to buy Algebrator. I was amazed to see
the change in him. Now he enjoys his mathematics and the mistakes are considerably reduced.
Paola Randy, IN
I recommend the Algebrator to students who need help with fractions, equations and algebra. The program is a great tool! Not only does it give you the answers but it also shows you how and why you come up with those answers. I've shown my students how to use the program during some of our lessons. A couple of them even bought the program to help them out with their algebra homework.
C.P., Massachusetts
|
{"url":"https://factoring-polynomials.com/adding-polynomials/angle-complements/best-algebra-learning.html","timestamp":"2024-11-07T17:17:21Z","content_type":"text/html","content_length":"82661","record_id":"<urn:uuid:917db35f-4d9f-4ab8-af5a-040cab028ae1>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00050.warc.gz"}
|
Title: Coalgebra and Modal Logic ABSTRACT: In recent years, Universal Coalgebra has emerged as a general framework for modelling various kinds of state-based evolving systems. Whereas algebras have operations for constructing new elements from old, coalgebras provide means to observe or unfold objects. Thus coalgebras are remarkably well tailored to model the concept of state-based dynamics, where typically, a 'state of affairs' can be observed and modified. Of key importance in this area is the concept of behavior, together with related notions such as invariance and observational indistinguishability.
The generality of the concept enables one to build into the type of a coalgebra many different features like input, output, nondeterminism, probability distributions, etc. Thus many fundamental
phenomena in computer science (data streams, automata, transition systems), logic (Kripke models and frames) and mathematics (non-well-founded sets, power series) have in fact a very natural
coalgebraic modelling.
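As an illustrative sketch (not part of the abstract; names are mine), the stream example works like this: a coalgebra for the functor F(X) = A × X maps each state to an observation and a successor state, and "unfolding" it yields the observable behavior:

```python
def nats(state):
    """A coalgebra for F(X) = A x X: observe the state, step to the next."""
    return state, state + 1  # (observation, next state)

def unfold(coalg, state, n):
    """Observe the first n outputs of the stream the coalgebra defines."""
    out = []
    for _ in range(n):
        obs, state = coalg(state)
        out.append(obs)
    return out
```

Here unfold(nats, 0, 5) yields the stream prefix [0, 1, 2, 3, 4]; two states are behaviorally equivalent (bisimilar) exactly when they unfold to the same stream.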
The talk will have two parts. We start with a gentle introduction to the theory of coalgebra, concentrating on the concept of observational indistinguishability (or bisimulation). In the second part
of the talk we discuss the role of modal logic in the theory of coalgebra. We will argue that (a suitably generalized version of) modal logic is the right language for specifying and reasoning about
coalgebraic behavior. We will finish with a discussion of a fundamental dynamic distributive law, which has applications in areas as diverse as automata theory, game theory, and topology. (The talk
does not presuppose any previous exposure to coalgebra.)
Time: 3:00 - 4:30 pm, June 18, 2008. Place: Xinzhai Room 353, Tsinghua University.
|
{"url":"https://corpora.tika.apache.org/base/docs/commoncrawl3/PO/POUQFTVGP6DQONFVPBHE46LOYGGELSAV","timestamp":"2024-11-02T22:46:12Z","content_type":"text/plain","content_length":"2032","record_id":"<urn:uuid:397a9da5-0585-4013-abcf-38b112543102>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00492.warc.gz"}
|