Wilcox Archives - The small S scientist
Comparing the quantiles of two groups provides information that is lost by simply looking at means or medians. This post shows how to do that.
Traditionally, the comparison of two groups focuses on comparing means or medians. But, as Wilcox (2012) explains, there are many more features of the distributions of two groups that we may compare in order to shed light on how the groups differ. An interesting approach is to estimate the difference between the quantiles of the two groups. Wilcox (2012, pp. 138-150) shows us an approach that is based on the shift function. The procedure boils down to estimating the quantiles of both groups and plotting the quantiles of the first group against the differences between the corresponding quantiles of the two groups.
In order to aid in comparing the quantiles of the groups, I’ve created a function for R that can be used for plotting the comparison between the two groups. The function uses the ggplot2 package and the WRS package (available here: WRS: A package of R.R. Wilcox’ robust statistics functions, version 0.24, on R-Forge (rdrr.io); see also: Installation of WRS package (Wilcox’ Robust Statistics) | R-bloggers (r-bloggers.com)).
library(ggplot2)

plotSband <- function(x, y, x.name = "Control") {
  x <- sort(x[!is.na(x)])
  y <- sort(y[!is.na(y)])
  # Empirical quantiles of x and the matching order statistics of y
  qhat <- 1:length(x) / length(x)
  idx.y <- floor(qhat * length(y) + .5)
  idx.y[idx.y <= 0] <- 1
  idx.y[idx.y > length(y)] <- length(y)
  delta <- y[idx.y] - x
  # Simultaneous confidence band limits from Wilcox' sband
  cis <- WRS::sband(x, y, plotit = FALSE)$m[, c(2, 3)]
  check.missing <- apply(cis, 2, function(x) sum(is.na(x)))
  if (sum(check.missing == length(x)) > 1) {
    stop("All CI limits equal to - or + Infinity")
  }
  ylims <- c(min(cis[!is.na(cis[, 1]), 1]) - .50,
             max(cis[!is.na(cis[, 2]), 2]) + .50)
  # Push undefined (infinite) limits far outside the plotting region
  cis[is.na(cis[, 1]), 1] <- ylims[1] * 5
  cis[is.na(cis[, 2]), 2] <- ylims[2] * 5
  thePlot <- ggplot(mapping = aes(x)) +
    xlab(x.name) +
    geom_smooth(aes(x = x, y = delta), se = FALSE, col = "blue") +
    ylab("Delta") +
    geom_point(aes(x = quantile(x, c(.25, .50, .75)),
                   y = rep(ylims[1], 3)), pch = c(3, 2, 3), size = 2) +
    geom_ribbon(aes(ymin = cis[, 1], ymax = cis[, 2]), alpha = .20) +
    coord_cartesian(ylim = ylims)
  thePlot
}
Let’s look at an example. Figure 1 presents data from an experiment investigating the persuasive effect of narratives on intentions of adopting a healthy lifestyle (see Boeijinga, Hoeken, and Sanders (2017) for details). The plotted data are the differences in intention between the quantiles of a group of participants who read a narrative focusing on risk perception (detailing the risks of unhealthy behavior) and a group of participants who read a narrative focusing on action planning (here called the control group), which details how the healthy behavior may actually be implemented by the participant.
Figure 1. Output from the plotSband-function
Figure 1 shows the following. The triangle is the median of the data in the control group, and the plusses are the .25 and .75 quantiles. The shaded regions define the simultaneous 95% confidence intervals for the differences between the quantiles of the two groups. Here, these regions appear quite ragged because of the discrete nature of the data. For values below 2.5 and above 3.5, the lower and upper limits of the 95% CIs, respectively, equal infinity, so these regions extend beyond the limits of the y-axis. (The sband function returns NA for these limits.) The smoothed regression line should help in interpreting the general trend.
How can we interpret Figure 1? First of all, if you think it is important to look at statistical significance, note that none of the 95% intervals exclude zero, so none of the differences reach traditional significance at the .05 level. But none of them exclude differences as large as -0.50 either, so we should not be tempted to conclude that because zero is in the interval, we should adopt zero as the point estimate. For instance, at x = 2.5 the 95% CI equals [-1.5, 0.0]: the value zero is included in the interval, but so is the value -1.5. It would be illogical to conclude that zero is our best estimate when so many other values are included in the interval.
The loess regression line suggests that the difference in quantiles between the two groups is relatively steady for the lower quantiles of the distribution (up to x = 3.0 or so; or at least below the median), but for quantiles larger than the median the effect gets smaller and smaller, until the regression line crosses zero at x = 3.75. This value is approximately the .88 quantile of the distribution of the scores in the control condition (not shown in the graph).
The values on the y-axis are the differences between the quantiles. A negative delta means that the quantile of the control condition has a larger value than the corresponding quantile in the
experimental condition. The results therefore suggest that participants in the control condition with a relatively low intention score, would have scored even lower in the other condition. To give
some perspective: expressed in the number of standard deviations of the intention scores in the control group a delta of -0.50 corresponds to a 0.8 SD difference.
Note, however, that due to the limited number of observations in the experiment, the uncertainty about the direction of the effect is very large, especially in the tails of the distribution (roughly below the .25 and above the .75 quantile). So, even though the data suggest that Action Planning leads to more positive intentions, especially for the lower quantiles but still considerably for the .75 quantile, a (much) larger dataset is needed to obtain more convincing evidence for this pattern.
Related Rates
An application of implicit differentiation is through finding related rates. Finding a related rate means finding the rates of change of two or more related variables that are changing with respect
to time.
Let’s take an imaginary, inverted cone with a height of h and a radius of r. Assuming that our cone is filled with liquid that drains at a constant rate of x m^3/s, we can figure out the rate at which the liquid’s height diminishes in relation to the rate of the cone’s decreasing volume. However, we do need a few constants if we do not want our answers in terms of variables.
Example 1: Suppose you have an inverted cone (funnel) of radius 8 and height 6, which is filled to the brim with water. Through a small hole at the bottom of this funnel, water will leak out at a constant 3 m^3/s. At what rate is the height of the water changing when the water is half as high as it started?
We start this problem off by finding an equation that relates volume to height and radius for a conical shape. For this problem, we will use the equation V = (1/3)πr^2h.
Because we want to find the rate of change of the height, and because we have no use for the radius, we can eliminate a variable by expressing the radius in terms of the height. When the radius is 8, the height is 6, so we can figure out that r = 4h/3. We can then substitute this back into the original volume equation to get V = (16π/27)h^3.
Now, all we have to do is plug in our given numbers from the initial problem. We can put in 3 m^3/s for dV/dt, the rate of volume change. We can put in 3 for h, because we know that h, at the instant we want our rate of height change, is half the initial height of h = 6. dh/dt will be what we are looking for, as that is the rate of change of h with respect to time.
The rate of the change in height is 3/[16 π] m/s when the height of the water is half as high as when it started.
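For reference, the steps above can be written out in full (treating the 3 m^3/s drain rate as a magnitude, as the problem statement does):

```latex
V = \frac{16\pi}{27}h^{3}
\;\Rightarrow\;
\frac{dV}{dt} = \frac{16\pi}{9}h^{2}\,\frac{dh}{dt},
\qquad
3 = \frac{16\pi}{9}(3)^{2}\,\frac{dh}{dt} = 16\pi\,\frac{dh}{dt}
\;\Rightarrow\;
\frac{dh}{dt} = \frac{3}{16\pi}\ \text{m/s}.
```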
Example 2: Suppose you have a 6-foot-tall man walking away from a light post that is 14 feet tall. The man walks at a constant 3 ft/s. How fast is the tip of his shadow moving along the ground?
The man casts a shadow of length “x[1]” when he is a horizontal distance “x[2]” away from the light post. We can say that the distance between the shadow tip and the light post is “x” which is equal
to x[1] + x[2].
By looking at the example, it seems that what we are solving for is dx[1]/dt. However, this is a common mistake. Note that the rate at which the tip of the shadow is moving along the ground is equal
to the rate at which the distance between the tip of the shadow and the light post is increasing. Thus, we are actually solving for dx/dt.
The height of the man is 6 feet, so the distance between the top of the man’s head to the top of the light post is 14 - 6 = 8 feet. Now we have two triangles. One has legs of lengths 14 and x (also
known as x[1] + x[2]), and the other has legs of lengths 8 and x[2].
We can see that these are similar right triangles, and we can set up the following equation:
(x[1] + x[2])/14 = x[2]/8
x/14 = x[2]/8
x = (7/4)x[2]
Now, we can implicitly differentiate.
dx/dt = (7/4)dx[2]/dt
We were given that the man is walking away from the light post at 3 ft/s. We can substitute this for dx[2]/dt, because the rate at which he walks is equal to the rate at which the distance between him and the light post is increasing.
dx/dt = (7/4)(3)
dx/dt = 21/4 ft/s
The tip of the shadow is moving at a rate of 21/4 ft/s.
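As a quick numeric check, the similar-triangle relation can be verified directly (the variable names here are illustrative, not from the original problem statement):

```python
# Geometry of the shadow problem: the line from the top of the 14 ft post to
# the shadow tip drops 14 ft over a horizontal distance x, and drops
# 14 - 6 = 8 ft over the distance x2 from the post to the man.
post_height = 14.0
man_height = 6.0
walk_rate = 3.0  # ft/s, the man's speed away from the post (dx2/dt)

# Similar triangles: x / 14 = x2 / 8, so x = (14/8) * x2 = (7/4) * x2.
scale = post_height / (post_height - man_height)  # 7/4

# Differentiating x = (7/4) * x2 gives dx/dt = (7/4) * dx2/dt.
shadow_tip_rate = scale * walk_rate

print(scale)            # 1.75
print(shadow_tip_rate)  # 5.25
```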
Rebar Cost Calculator
Rebar, or reinforcing steel, is a critical component of most construction projects that require concrete reinforcement. The total cost of rebar depends on the quantity, length, and price per unit.
This calculator helps you estimate the total cost, taking into account any potential wastage.
How to Calculate Rebar Cost?
1. Enter the total number of rebar units you need.
2. Provide the length of each rebar unit in feet.
3. Input the cost per rebar unit.
4. (Optional) Enter the expected wastage percentage if you want to account for any extra material that might be required.
5. Click 'Calculate Rebar Cost' to get the estimated total cost.
Example Calculation
If you need 50 rebar units, each 20 feet long, at $5 per unit, and expect a 5% wastage, the total cost would be:
Total Cost = (Quantity * Length * Cost Per Unit) * (1 + Wastage) = (50 * 20 * 5) * 1.05 = $5,250
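A minimal sketch of the calculation as the page describes it (the function and parameter names are my own, not from the calculator itself):

```python
def rebar_total_cost(quantity, length_ft, cost_per_unit, wastage_pct=0.0):
    """Estimated total cost; wastage_pct is a fraction, e.g. 0.05 for 5%."""
    base = quantity * length_ft * cost_per_unit
    return base * (1.0 + wastage_pct)

# The worked example above: 50 units, 20 ft each, $5 per unit, 5% wastage.
print(round(rebar_total_cost(50, 20, 5, 0.05), 2))  # 5250.0
```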
Why This Calculator Is Useful
Knowing the total cost of rebar ahead of time can help you plan your construction budget more accurately and avoid unexpected material shortages or cost overruns. The wastage percentage feature
ensures that you account for additional material that might be required during construction.
math - Zodi Games
Help the zombie ride the farm animals by quickly answering math questions! Click or tap to answer math questions
Addictive math game where you can test your addition skills. To play this game, select the balls that will add up to the number shown at the bottom of the screen. When the selected balls’ values add up to the target value, the selected balls are removed from the game. The aim of the … Read more
2048 Cuteness Edition is a HTML5 Skill Game.Enjoy this cute 2048 version! Use arrow keys to move the tiles. When two identical animals touch, they’ll merge into one.Will you get to the 2048 tile? Use
arrow keys or Swipe to move numbers.
Wordoku is a HTML5 Logic Game.Enjoy this original variant of the classic Sudoku with letters and words.3 different difficulty level:EasyMediumHard Click on a cell to place a Letter.
Lovely puzzle-logic game, ideal for kids and adults! Train your mind by solving puzzles, fill board with right tower’s heights as fast as you can and improve your logic thinking!Available for desktop
and mobile browsers. Tap/click on options and towers.Your task is to build a tower on every square, in such a way that:*Each row/column … Read more
How quick are you to do simple additions? For Touch Devices: – Tap Play to start- Tap 1, 2 or 3 to answerFor Desktop Devices:- Click Play/Press space to start- Click 1, 2 or 3 to answer
Emoji Math, is developed to agility fast calculation in simple mathematical operations. With this entertaining game, both children and adults will be able in a few days to perform fast calculations,
only with the help of addition, subtraction and multiplication, we must reach the number target that we are.Do not miss this opportunity to train … Read more
Mathematics for kids, is developed to agility fast calculation in simple mathematical operations. With this entertaining game, both children and adults will be able in a few days to perform fast
calculations, only with the help of addition, subtraction and multiplication, we must reach the number target that we are.Do not miss this opportunity to … Read more
Fun and addictive game who bring together match 3 game style with math. Connect numbers until match result shown in big circle. You can use hint button to help you in scoring.
Solve the problem at top of ‘x’ and then tap the fish containing that value. Solve the problem at top of ‘x’ and then tap the fish containing that value.
Drop the objects with the number aligned to the number you want to remove diagonally, vertically or horizontally Drop the objects with the number aligned to the number you want to remove diagonally,
vertically or horizontally
Drag the square towards the circles such that the number of the square became zero. Drag the square towards the circles such that the number of the square became zero.
Drag the squirrel towards the numbers that sums the numbers and equals to the number on top of the hole. Drag the squirrel towards the numbers that sums the numbers and equals to the number on top of
the hole.
How fast are you at solving equations? Test your skills by playing this game. The answer is either 1, 2 or 3, but it is not as easy as it looks. Good luck! Use tap or mouse to select the correct option
SpaceMandala is a new logic game with an original and relaxing game play.The game consists of 42 levels divided into 3 difficulty modes.Relax, think and play ! Fill the empty forms to get to the next
level.Click on the button to make the empty forms turn.Click on a form to select it then, click on … Read more
The game is optimised for all mobile and desktop browsersProcedural generation of game situationsAddition, substraction and multiplication are supported Count colored shapes and choose the right
Do you like math or logic games? Thanks to this amusing number connecting you will start to love it! Match the same adjacent numbers to convert them to a higher number. Match further to reach 10 and
beyond! Easy to play, hard to master! Reach the highest number by matching the same adjacent numbers. The … Read more
Your goal is to arrange the 3 discs such that each column adds to the same number. The challenge is that you don’t know what that number should be.Comlete all three levels and become the winner.
Arrange the discs and click “Control” to check combination of numbers.If the solution is correct, you’ll go to next … Read more
How good and fast are your mathematics skills? Select the correct answer among 3 options and reach the highest score.
Rullo is a simple math puzzle where you have a board full of numbers. The goal is to make the sum of numbers in each row and column be equal to the answer in the box. What you have to do is to remove
some numbers from the equation by clicking on them.The board sizes … Read more
Math Test is a fun HTML5 multiplication game for kids and adults. You have got 60 seconds to answer correctly as many questions as possible. Time is not on your side, so think fast and do your best. Have fun. Mouse or tap to play.
Math Test 2 is an online game that you can play for free. Math Test 2 is a game of multiplication, addition and subtraction. You only have 60 seconds to answer as many questions as you can. Your
brain needs to think quickly to find the right answer and get more points. Have a good … Read more
This is fun Mathematical Skills game. In this wonderful educational game you have all the basic operations of mathematics. You have only 3 second to choose correct answer. Turn on your mind and enjoy
the game! Use mouse or tap to screen to play!
Quick Math is an addictive game with simple graphics. Tap the right calculation. Sounds easy? Believe me, it is not! It takes superhuman reflexes to master it! Try to get as high a score as possible. Share your best score with your friends and have fun. I think they will grow jealous of your success. Just … Read more
QuickMath is a simple Math game.You are only challenged to answer available Mathematics questions.There are 5 categories of Mathematics that you can play.Addition, Reduction, Multiplication,
Distribution and Combination. Choose the correct answer.
A Christmas match 3 game with a mathematical twist. You’ll need to figure out if the rounding statement is true or false to crunch Christmas items. Crunch the right items to make matches and clear
the board. Clear the board of all of the gray boxes surrounding each Christmas item to complete a level. A … Read more
Ten Blocks — an exciting and fun mix of digital puzzles such as Sudoku and falling block games that you know and love. Drag the cubes onto the board, make 10 in any row or column and score points!
LMB – drag and drop block.
A fun mix of bubble shooting and 2048! Merge the numbers together as you try to keep less then 12 pucks on the stage at a time. Click or tap to shoot pucks
Complete all 20 holes in this golf game that helps you practice / reinforce your addition skills the fun way. Choose one of three modes based on your addition skills: Beginner, Intermediate and
Expert. Completing holes will award bonus shots. Collect up to 3 bonus shots to help you reduce your shots taken on holes … Read more
Best HTML5 sudoku game online.Mutiple game modes and grid sizes for beginner to expert.Complete Sudoku tutorial and Sudoku user guide. Click a cell and select the right number
Math Test Challenge is a fun and challenging game suitable for all ages. If you think that you are good at math, then try this game and make a high score. Five seconds for every question, so think fast. Have fun playing. Mouse or tap to play.
Kids Math Challenge is a fun and challenging game suitable for all ages. If you think that you are good at math, then try this game and make a high score. This time we only use plus and minus, so what are you waiting for? Start playing :). Have fun! Mouse or tap to play.
EG True and False is a casual game in which math questions appear quickly on the page, so you must also respond quickly to get points; otherwise, you’ll lose. Gradually, the speed of the game increases and makes you confused. This game is very interesting. Ecaps Games with tons of games for all … Read more
Multiplication Math Challenge is an online game that you can play for free. Multiplication Math Challenge is a game of multiplication. You have 10 seconds for each questions. Your brain needs to
think quickly to find the right answer and get points. Have a good time. Mouse or tap to play.
Subtraction Math Challenge is fun math game suitable for all ages. I hope you like math and this game is just to test your math skills. Ten seconds to answer each quastion. Have fun playing. Mouse or
tap to play.
Clear out the baddies by identifying and sending a missile to their location on the coordinate grid. Miss and they’ll fire back taking down your shield some. Choose to play timed or untimed and on
Quadrant I or on all 4 quadrants. Use the on-screen number pad to plug in the X Y coordinate of … Read more
Count the animals on the screen before the time runs out and pick the right answer. How long can you last? Tap the correct number at the bottom of the screen, equal to the number of avatars on the scene, before the time runs out.
Easy Math is fun html5 game that you can play online for free. Are you good at math? Well, test your math skills in this free game and have fun. Do your best and make a high score. Mouse or tap to
Mathematics for Kids! – is a fun and beautiful mathematical game for children, the main purpose of the game is to give the correct answer to the mathematical example for three seconds, and get the
most surprising number of correct answers! And how much can you gain ?, a very interesting game, you need to … Read more
Try to match four of the same numbers together to combine them. Keep the board as empty as possible as you try to get a high score. Click or tap to move blocks
What Is Drift Velocity? - GetdailyTech
Are you curious to know what drift velocity is? You have come to the right place, as I am going to tell you everything about drift velocity in a very simple explanation. Without further discussion, let’s begin.
Drift velocity is a fundamental concept in physics, particularly in the study of electricity and magnetism. In this article, we delve into the intricacies of drift velocity, examining its
significance, formulas, and applications. Whether you’re a Class 10 student curious about basic physics principles or a Class 12 student seeking in-depth knowledge, this guide is designed to
illuminate the concept.
What Is Drift Velocity?
Drift velocity is the average velocity attained by charged particles, such as electrons in a conductor, due to the application of an electric field. In simpler terms, it represents the net flow of
charge carriers in a particular direction under the influence of an electric force.
What Is Drift Velocity In Physics Class 12: Advanced Concepts
For Class 12 physics students, understanding drift velocity involves exploring advanced concepts related to current electricity. It is an essential topic in the curriculum, linking theoretical
knowledge with practical applications in electrical circuits and conductors.
What Is Drift Velocity In Physics Class 11: Laying The Foundation
In Class 11, students encounter the concept of drift velocity as part of their foundational physics education. It serves as a precursor to more advanced discussions in Class 12, providing a stepping
stone to comprehend the behavior of charged particles in conductors.
Drift Velocity Formula: Unveiling The Mathematical Expression
The drift velocity formula is expressed as v_d = I / (n * A * e), where:
• v_d is the drift velocity,
• I is the current,
• n is the number density of charge carriers,
• A is the cross-sectional area of the conductor,
• e is the elementary charge.
Understanding this formula is crucial for calculating drift velocity in different scenarios involving electric currents.
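The formula is easy to apply numerically. The sketch below plugs in textbook values for a copper wire; the carrier density and wire dimensions are illustrative assumptions, not taken from this article:

```python
# Drift velocity v_d = I / (n * A * e), for a copper wire.
# The carrier density n below is an approximate textbook value for copper.
I = 1.0        # current in amperes
A = 1.0e-6     # cross-sectional area in m^2 (a 1 mm^2 wire)
n = 8.5e28     # free-electron number density of copper, per m^3 (approximate)
e = 1.602e-19  # elementary charge in coulombs

v_d = I / (n * A * e)
print(f"{v_d:.2e} m/s")  # on the order of 1e-4 m/s: drift is very slow
```

Even for a full ampere through a thin wire, the drift speed comes out below a tenth of a millimeter per second, which is why drift velocity is often described as surprisingly slow.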
Drift Velocity Derivation: Exploring The Mathematical Framework
The derivation of drift velocity involves delving into the principles of electromagnetism. By understanding the underlying mathematical framework, students gain insights into how electric fields
influence the motion of charged particles, leading to the development of the drift velocity formula.
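A compact version of that derivation (a sketch, with symbols as defined in the formula above): in time Δt, every carrier within a length v_d·Δt of a cross-section passes through it, so

```latex
\Delta Q = n\,(A\,v_d\,\Delta t)\,e
\quad\Rightarrow\quad
I = \frac{\Delta Q}{\Delta t} = n\,A\,e\,v_d
\quad\Rightarrow\quad
v_d = \frac{I}{n\,A\,e}.
```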
What Is Drift Velocity Class 11: Building Blocks Of Understanding
In Class 11, students encounter the foundational aspects of drift velocity. They learn about the basics of current electricity, electron mobility, and the factors influencing drift velocity. This
knowledge sets the stage for more in-depth exploration in subsequent classes.
What Is Drift Velocity Class 10: Introduction To Electrical Concepts
For Class 10 students, the concept of drift velocity introduces them to the world of electricity. It provides a glimpse into the behavior of electrons in conductors, laying the groundwork for a more
comprehensive understanding in higher classes.
Drift Velocity Formula Class 12: Applying Advanced Mathematics
In Class 12, students not only use the drift velocity formula but also delve into its applications in complex electrical circuits. This advanced application of the formula enhances problem-solving
skills and reinforces the link between theoretical concepts and practical scenarios.
In conclusion, drift velocity is a crucial concept in physics that spans across various academic levels. From the foundational understanding in Class 10 to the advanced applications in Class 12, the
exploration of drift velocity enriches students’ comprehension of electrical phenomena. Whether you’re pondering the drift velocity formula or engaging in its derivation, the journey through this
physics concept opens doors to a deeper understanding of the fundamental principles governing charged particle motion in conductors.
What Is Meant By Drift Velocity?
Drift velocity is the average velocity with which electrons ‘drift’ in the presence of an electric field. It’s the drift velocity (or drift speed) that contributes to the electric current. In
contrast, thermal velocity causes random motion resulting in collisions with metal ions.
What Is Drift Velocity Of Electron Class 10?
Drift velocity is the average velocity gained by the free electrons of a conductor, with which the electrons get drifted under the influence of an electric field applied externally across the conductor.
The net velocity with which these electrons drift is known as drift velocity. The order of magnitude of drift velocity is 10^-4 m/s. The velocity with which these free electrons move in random directions is known as random velocity; its order of magnitude is 10^5 m/s.
Is Drift Velocity Fast?
While the drift velocity is relatively slow, it is still an important concept in understanding the behavior of electric currents in conductors.
Generate Data Options
Generate Synthetic Data Options
The following options appear on the Generate Data tabs, Data and Parameters.
Generate Synthetic Data dialog, Data tab
The following options appear on the Data tab of the Generate Synthetic Data dialog. These options pertain to the data source and the variables included.
All variables in the data source data range are listed in this field. If the first row in the dataset contains headings, select First Row Contains Headers.
Selected Variables
Select a variable(s) in the Variables field, then click > to move the variable(s) to the Selected Variables field. Synthetic data will be generated for the variables appearing in this field.
Generate Synthetic Data dialog, Parameters tab
The following options appear on the Parameters tab of the Generate Synthetic Data dialog. These options pertain to the Distribution Terms, Correlation Fitting and available output.
Metalog Terms
• If Fixed is selected, Analytic Solver will attempt to fit and use the Metalog distribution with the specified number of terms entered into the # Terms column. (Only 1 distribution will be fit.)
If Fixed is selected, Metalog Selection Test is disabled.
• If Auto is selected, Analytic Solver will attempt to fit all possible Metalog distributions, up to the entered value for Max Terms, and select and utilize the best Metalog distribution according
to the goodness-of-fit test selected in the Metalog Selection Test menu.
Click the down arrow on the right of Fitting Options to enter either the maximum number of terms (if Auto is selected) or the exact number of terms (if Fixed is selected) for each variable, as well as a lower and/or upper bound. By default, the lower and upper bounds are set to the variable’s minimum and maximum values, respectively. If no lower or upper bound is entered, Analytic Solver will fit a semi-bounded (with one bound present) or unbounded (with no bounds present) Metalog function.
Distribution Fitting section of the Generate Data dialog
Metalog Selection Test
Click the down arrow to select the desired Goodness-of-Fit test used by Analytic Solver. The Goodness of Fit test is used to select the best Metalog form for each data variable among the candidate
distributions containing a different number of terms, from 2 to the value entered for Max Terms. The default Goodness-of-Fit test is Anderson-Darling.
Goodness of Fit Tests:
• Chi Square – Uses the chi-square statistic and distribution to rank the distributions. Sample data is first divided into intervals of equal probability; then the number of points that fall into each interval is compared with the expected number of points in each interval. The null hypothesis is rejected, at the 90% significance level, if the chi-squared test statistic is greater than the critical value statistic. Note: The Chi Square test is used indirectly in continuous fitting as a support in the AIC test. The AIC test must succeed in fitting, as this is a necessary condition, as well as the fitting of at least one of the tests: Chi Squared, Kolmogorov-Smirnov, or Anderson-Darling.
• Kolmogorov-Smirnov – This test computes the difference (D) between the fitted cumulative distribution function (CDF) and the empirical cumulative distribution function (ECDF). The null hypothesis is rejected if, at the 90% significance level, D is larger than the critical value statistic.
• Anderson-Darling (Default) – Ranks the fitted distributions using the Anderson-Darling statistic, A2. The null hypothesis is rejected, at the 90% significance level, if A2 is larger than the critical value statistic. This test awards more weight to the distribution tails than the Kolmogorov-Smirnov test does.
• AIC – The AIC test is a Chi Squared test corrected for the number of distribution parameters and sample size. AIC = 2 * p - 2 * ln(L), where p is the number of distribution parameters, n is the fitted sample size (number of data points), and ln(L) is the log-likelihood function computed on the fitted data.
• AICc –When the sample size is small, there is a significant chance that the AIC test will select a model with a large number of parameters. In other words, AIC will overfit the data. AICc was
developed to reduce the possibility of overfitting by applying a penalty to the number of parameters. Assuming that the model is univariate, is linear in the parameters and has
normally-distributed residuals, the formula for AICc is: AICc = AIC + 2 * p *(p + 1) / (n - p − 1) where n = sample size, p = # of parameters. As the sample size approaches infinity, the penalty
on the number of parameters converges to 0 resulting in AICc converging to AIC.
• BIC – The Bayesian information criterion (BIC) is defined as: BIC = ln(n) * p - 2 * ln(L) where p is the number of distribution parameters, n is the fitted sample size (number of data points) and
ln(L) is the log-likelihood function computed on the fitted data.
• BICc – The BICc is an alternative version of BIC, corrected for sample size: BICc = BIC + 2 * p * (p + 1) / (n – p - 1).
• Maximum Likelihood – The (negated) raw value of the estimated maximum log likelihood utilized in tests described above.
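These criteria reduce to simple arithmetic on the fitted log-likelihood. A minimal sketch using the standard definitions (the function name and inputs are illustrative, not part of Analytic Solver's API):

```python
import math

def information_criteria(log_l, p, n):
    """Compute AIC, AICc, BIC and BICc from a maximised log-likelihood.

    log_l : log-likelihood at the fitted parameters, ln(L)
    p     : number of distribution parameters
    n     : fitted sample size (number of data points)
    """
    aic = 2 * p - 2 * log_l
    aicc = aic + 2 * p * (p + 1) / (n - p - 1)   # small-sample penalty
    bic = math.log(n) * p - 2 * log_l
    bicc = bic + 2 * p * (p + 1) / (n - p - 1)
    return aic, aicc, bic, bicc

aic, aicc, bic, bicc = information_criteria(log_l=-120.0, p=2, n=50)
```

As the sample size grows, the correction term 2p(p+1)/(n−p−1) shrinks toward zero, so AICc converges to AIC as stated above.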
Fit Correlation
Select Fit Correlation to fit a correlation between the variables. If this option is left unchecked, correlation fitting will not be performed.
• If Rank is selected Analytic Solver will use the Spearman rank order correlation coefficient to compute a correlation matrix that includes all included variables.
• Selecting Copula opens the Copula Options dialog where you can select and drag five types of copulas into a desired order of priority.
Correlation Fitting section of the Generate Data dialog
Generate Sample
Select Generate Sample to generate synthetic data for each selected variable. Use the Sample Size field to increase the size of the sample generated.
If this option is left unchecked, variable data will be fitted to a Metalog distribution and also correlations, if Fit Correlation is selected, but no synthetic data will be generated.
Click Advanced to open the Sampling Options dialog.
Sampling Options dialog
From this dialog, users can set the Random Seed, Random Generator, Sampling Method and Random Streams.
Random Seed
Setting the random number seed to a nonzero value (any number of your choice is OK) ensures that the same sequence of random numbers is used for each simulation. When the seed is zero or the field
is left empty, the random number generator is initialized from the system clock, so the sequence of random numbers will be different in each simulation. If you need the results from one simulation
to another to be strictly comparable, you should set the seed. To do this, simply type the desired number into the box. (Default Value = 12345)
Random Generator
Use this menu to select a random number generation algorithm. Analytic Solver Data Science includes an advanced set of random number generation capabilities.
Computer-generated numbers are never truly “random,” since they are always computed by an algorithm – they are called pseudorandom numbers. A random number generator is designed to quickly generate
sequences of numbers that are as close to statistically independent as possible. Eventually, an algorithm will generate the same number seen sometime earlier in the sequence, and at this point the
sequence will begin to repeat. The period of the random number generator is the number of values it can generate before repeating.
A long period is desirable, but there is a tradeoff between the length of the period and the degree of statistical independence achieved within the period. Hence, Analytic Solver Data Science offers
a choice of four random number generators:
• Park-Miller “Minimal” Generator with Bays-Durham shuffle and safeguards. This generator has a period of 2^31 - 2. Its properties are good, but the following choices are usually better.
• Combined Multiple Recursive Generator of L’Ecuyer (L’Ecuyer-CMRG). This generator has a period of 2^191, and excellent statistical independence of samples within the period.
• Well Equidistributed Long-period Linear (WELL) generator of Panneton, L’Ecuyer and Matsumoto. This generator combines a long period of 2^1024 with very good statistical independence.
• Mersenne Twister (default setting) generator of Matsumoto and Nishimura. This generator has the longest period of 2^19937 - 1, but the samples are not as “equidistributed” as for the WELL and L’Ecuyer-CMRG generators.
• HDR Random Number Generator, designed by Doug Hubbard. Permits data generation running on various computer platforms to generate identical or independent streams of random numbers.
Sampling Method
Use this option group to select Monte Carlo, Latin Hypercube, or Sobol RQMC sampling.
• Monte Carlo: In standard Monte Carlo sampling, numbers generated by the chosen random number generator are used directly to obtain sample values. With this method, the variance or estimation error in computed samples is inversely proportional to the square root of the number of trials (controlled by the Sample Size); hence to cut the error in half, four times as many trials are needed.
Analytic Solver Data Science provides two other sampling methods that can significantly improve the ‘coverage’ of the sample space, and thus reduce the variance in computed samples. This means that
you can achieve a given level of accuracy (low variance or error) with fewer trials.
• Latin Hypercube (default): Latin Hypercube sampling begins with a stratified sample in each dimension (one for each selected variable), which constrains the random numbers drawn to lie in a set
of subintervals from 0 to 1. Then these one-dimensional samples are combined and randomly permuted so that they ‘cover’ a unit hypercube in a stratified manner.
• Sobol RQMC (Randomized QMC). Sobol numbers are an example of so-called “Quasi Monte Carlo” or “low-discrepancy numbers,” which are generated with a goal of coverage of the sample space rather
than “randomness” and statistical independence. Analytic Solver Data Science adds a “random shift” to Sobol numbers, which improves their statistical independence.
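The stratification idea behind Latin Hypercube sampling can be sketched in a few lines (a simplified illustration, not Analytic Solver's implementation):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Draw a Latin Hypercube sample on the unit hypercube.

    Each dimension is split into n_samples equal strata; exactly one
    point lands in each stratum, and the strata are paired across
    dimensions by random permutation.
    """
    u = rng.random((n_samples, n_dims))            # one draw inside each stratum
    pts = (np.arange(n_samples)[:, None] + u) / n_samples
    for d in range(n_dims):                        # shuffle strata per dimension
        pts[:, d] = rng.permutation(pts[:, d])
    return pts

sample = latin_hypercube(10, 2, np.random.default_rng(0))
```

Every column of `sample` has exactly one value in each interval [k/10, (k+1)/10), which is the ‘coverage’ property described above.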
Random Streams
Use this option group to select a Single Stream for each variable or an Independent Stream (the default) for each variable.
If Single Stream is selected, a single sequence of random numbers is generated. Values are taken consecutively from this sequence to obtain samples for each selected variable. This introduces a
subtle dependence between the samples for all distributions in one trial. In many applications, the effect is too small to make a difference – but in some cases, better results are obtained if
independent random number sequences (streams) are used for each distribution in the model. Analytic Solver Data Science offers this capability for Monte Carlo sampling and Latin Hypercube sampling;
it does not apply to Sobol numbers.
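The single-stream versus independent-streams distinction can be illustrated with NumPy generators (an analogy only; Analytic Solver's internals are not exposed):

```python
import numpy as np

# Single stream: one sequence feeds both variables in turn, so the
# draws for variable A and variable B are interleaved and depend on
# the order in which variables are sampled.
single = np.random.default_rng(12345)
a_single = single.random(5)
b_single = single.random(5)

# Independent streams: spawn one child generator per variable from a
# common seed, so each variable gets its own full sequence regardless
# of how many draws any other variable takes.
children = np.random.SeedSequence(12345).spawn(2)
a_stream = np.random.default_rng(children[0]).random(5)
b_stream = np.random.default_rng(children[1]).random(5)
```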
Reports and Charts
Distribution Fitting Report: Included on the SyntheticData_Output worksheet; includes the number of terms, the coefficients for each term, the lower and upper bounds, and the goodness of fit statistics used when fitting each Metalog distribution.
Correlation Fitting Report: Displays the correlation matrix on the SyntheticData_Output worksheet
Frequency Charts: Displays the multivariate chart produced by the Analyze Data feature. Double click each chart to view an interactive chart and detailed data (statistics, percentiles and six sigma
indices) about each variable included in the analysis. For more information on the Analyze Data feature included in the latest version of Analytic Solver Data Science, see the Exploring Data chapter
that appears later in this guide.
Metalog Curves: Select this Chart option to add Metalog distribution curves to each variable displayed in the multivariate chart and interactive charts described above for Frequency Charts.
(Aktu Btech) Control System Important Unit-2 Time Response Analysis - Bachelor Exam
Important Questions For Control System:
Q1. Explain various standard test signals, and also find relation between them.
Ans. A. Standard test signals:
1. Unit step signal:
• i. Signals which start at time t = 0 and have a magnitude of unity are called unit step signals.
• ii. They are represented by the unit step function u(t).
• iii. They are defined mathematically as: u(t) = 1 for t ≥ 0, and u(t) = 0 for t < 0.
2. Unit ramp signal:
• i. Signals which start from zero and increase linearly with unit slope are called unit ramp signals.
• ii. They are represented by the unit ramp function r(t).
• iii. They are defined mathematically as: r(t) = t for t ≥ 0, and r(t) = 0 for t < 0.
3. Unit impulse signal:
• i. Signals which act for a very small time but have large amplitude are called unit impulse signals.
• ii. They are represented by δ(t).
• iii. They are defined mathematically as: δ(t) = 0 for t ≠ 0, with ∫ δ(t) dt = 1.
4. Unit parabolic signal: The continuous-time unit parabolic function p(t), also called the acceleration signal, starts at t = 0 and is defined as: p(t) = t²/2 for t ≥ 0, and p(t) = 0 for t < 0.
B. Relation between standard test signals:
1. Relation between impulse and step signal: δ(t) = du(t)/dt.
2. Relation between step and ramp signal: u(t) = dr(t)/dt.
3. Relation between ramp and parabolic signal: r(t) = dp(t)/dt.
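These integral/derivative relations are easy to verify numerically: integrating the unit step gives the unit ramp, and integrating the unit ramp gives the unit parabolic signal (the impulse itself cannot be represented on a finite grid, so the check runs in the integral direction):

```python
import numpy as np

t = np.linspace(0, 5, 501)
dt = t[1] - t[0]

u = np.where(t >= 0, 1.0, 0.0)        # unit step u(t)
r = np.where(t >= 0, t, 0.0)          # unit ramp r(t)
p = np.where(t >= 0, t**2 / 2, 0.0)   # unit parabolic p(t)

# cumulative trapezoidal integration of u gives r, and of r gives p
r_from_u = np.concatenate([[0.0], np.cumsum((u[1:] + u[:-1]) / 2) * dt])
p_from_r = np.concatenate([[0.0], np.cumsum((r[1:] + r[:-1]) / 2) * dt])
```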
Q2. Derive the time response of first order system.
Ans. 1. Consider a first order system with unity feedback; its closed loop transfer function is C(s)/R(s) = 1/(Ts + 1), where T is the time constant.
2. For any input R(s), the response is C(s) = R(s)/(Ts + 1).
3. Response to unit step input: C(s) = 1/[s(Ts + 1)]. Taking the inverse Laplace transform, we have c(t) = 1 − e^(−t/T), t ≥ 0.
4. Response to unit impulse input: C(s) = 1/(Ts + 1), so c(t) = (1/T) e^(−t/T), t ≥ 0.
5. Response to unit ramp input: C(s) = 1/[s²(Ts + 1)]. Taking the inverse Laplace transform, we have c(t) = t − T + T e^(−t/T), t ≥ 0.
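The three responses can be cross-checked numerically for a sample time constant (here T = 2 is an arbitrary choice): the impulse response is the derivative of the step response, and the step response is the derivative of the ramp response.

```python
import numpy as np

T = 2.0                               # time constant of 1/(T*s + 1)
t = np.linspace(0, 10, 1001)

c_step = 1 - np.exp(-t / T)           # response to unit step
c_impulse = np.exp(-t / T) / T        # response to unit impulse
c_ramp = t - T + T * np.exp(-t / T)   # response to unit ramp

dc_step = np.gradient(c_step, t)      # numerical derivatives
dc_ramp = np.gradient(c_ramp, t)
```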
Q3. The open loop transfer function of a unity feedback control system is given by G(s) = 9/s (s + 3). Find the natural frequency of response, damping ratio, damped frequency and time constant.
Ans. 1. Transfer function of the closed loop system: C(s)/R(s) = G(s)/[1 + G(s)] = 9/(s² + 3s + 9).
2. Comparing with the standard second order characteristic equation s² + 2ζω[n]s + ω[n]² = 0: ω[n]² = 9, so the natural frequency ω[n] = 3 rad/s; and 2ζω[n] = 3, so the damping ratio ζ = 0.5.
3. Time constant: T = 1/(ζω[n]) = 1/1.5 = 0.667 s.
4. Damped frequency: ω[d] = ω[n]√(1 − ζ²) = 3 × √0.75 = 2.598 rad/s.
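The same numbers can be reproduced in a couple of lines from the standard second-order relations:

```python
import math

# closed loop: 9 / (s^2 + 3*s + 9), so wn^2 = 9 and 2*zeta*wn = 3
wn = math.sqrt(9)                    # natural frequency, rad/s
zeta = 3 / (2 * wn)                  # damping ratio
wd = wn * math.sqrt(1 - zeta**2)     # damped frequency, rad/s
tau = 1 / (zeta * wn)                # time constant, s
```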
Q4. Define the following term:
i. Rise time
ii. Peak time
iii. Peak overshoot
iv. Settling time
Ans. 1. Delay time (t[d]): It is the time required for the response to reach 50% of the final value for the first time.
2. Rise time (t[r]): It is the time required for the response to rise from 10% to 90 % of its final value for overdamped system and 0 to 100 % for underdamped systems.
3. Peak time (t[p]): The peak time is the time required for the response to reach the first peak of the time response or first peak overshoot.
4. Maximum overshoot (M[p]): It is the normalized difference between the peak of the time response and the steady state output. The maximum percent overshoot is defined as M[p] = e^(−ζπ/√(1 − ζ²)) × 100%.
5. Settling time (t[s]): The settling time is the time required for the response to reach and stay within a specified range (2% to 5%) of its final value.
Q5. The unity feedback system is characterized by an open loop transfer function G(s) = K/[s(s + 20)]. Determine the gain K so that the system will have a damping ratio of 0.6. For this value of K, determine the unit step response and the time domain specifications: settling time (2% criterion), peak overshoot, rise time, peak time, and delay time for a unit-step input.
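Q5 can be worked through with the standard underdamped second-order formulas (the delay-time expression below is the usual textbook approximation):

```python
import math

zeta = 0.6
# characteristic equation: s^2 + 20*s + K = 0, so 2*zeta*wn = 20, K = wn^2
wn = 20 / (2 * zeta)                          # natural frequency, rad/s
K = wn**2                                     # required gain
wd = wn * math.sqrt(1 - zeta**2)              # damped frequency

beta = math.atan(math.sqrt(1 - zeta**2) / zeta)
t_rise = (math.pi - beta) / wd                # rise time, s
t_peak = math.pi / wd                         # peak time, s
Mp = 100 * math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))  # % overshoot
t_settle = 4 / (zeta * wn)                    # settling time (2% criterion), s
t_delay = (1 + 0.7 * zeta) / wn               # delay time (approximation), s
```

This gives K ≈ 277.8, about 9.5% peak overshoot, a peak time near 0.236 s and a 2% settling time of 0.4 s.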
Q6. Write a short note on proportional control. Also write its advantages and disadvantages.
Ans. A. Proportional control:
• 1. In proportional control, the error signal serves as the actuating signal for the control action in the control system.
• 2. The error signal is the difference between the feedback signal acquired from the output and the reference input signal.
• 3. The system described in Fig. is a proportional control system because the actuating signal is proportional to the error signal.
• 4. Consider a second order system where the error itself serves as the controller input and the proportional constant is K = 1.
B. Advantages:
• 1. Steady state error is reduced hence the system becomes more stable.
• 2. Easy to implement.
• 3. Relative stability is improved.
C. Disadvantages:
• 1. Due to the presence of these controllers, we get some offsets in the system.
• 2. Proportional controllers also increase the maximum overshoot of the system.
Dual time point estimation of K[i]
Net influx rate (K[i]) of a PET tracer is usually estimated from a dynamic PET data using either Patlak plot or compartmental model, and it requires the measurement of arterial plasma curve starting
from the tracer administration until the end of the PET scan. Blood sampling is not necessary, if the input function can be measured from the dynamic PET image. An estimate of K[i], FUR, does not
require dynamic imaging, but can be calculated from a static late-scan image; however, FUR calculation still needs the input function (integral) from the whole time, starting from the tracer
administration until the end of the PET scan, and late-scan image can not provide that. If the concentration in the blood can be measured during the late-scan, then tissue-to-blood ratio can be
calculated, and it is superior to SUV as a surrogate parameter of K[i] (van den Hoff et al., 2013).
If two static late-scans can be performed, and input function can be measured from the PET images, then K[i] (and subsequently metabolic rate) can be calculated from that dual time point (DTP) data
alone (van den Hoff et al., 2013), assuming that
1. there is an irreversible compartment in tissue in which the tracer is trapped (Patlak plot and FUR methods have this same requirement), and
2. input function follows mono-exponential decay between the two PET scans, and
3. a population-based estimate of the y axis intercept of Patlak plot is available (this would also improve the FUR method).
If the tracer concentration in plasma and tissue at time t are represented by C[P](t) and C[T](t), respectively, and the PET scan times are t[1] and t[2], then the estimate of K[i] can be calculated
from equation:
, where Ī[r] is the population-based estimate of the y axis intercept of the Patlak plot.
Alternatively, if the time course of the input function is assumed to follow an inverse power law (hyperbola) for t ≳ 1-2 min p.i.,
, then an even simpler equation for an estimate of K[i] can be derived (Hofheinz et al., 2016):
, but usage of this method requires that the inter- and intra-individual variability of b can be ignored and a population average of b has been calculated.
If input function is available from the injection time, dual-time point K[i] can be calculated from equation
, from which the intercept of the Patlak plot has been cancelled out (Wu et al., 2021). If input function is not measured, but blood radioactivity concentration can be assessed from the two PET
scans, then those values could be used to scale population-based input function to individual level (Wu et al., 2021).
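The intercept-cancelling idea can be illustrated with the Patlak coordinates x(t) = ∫C[P]dτ / C[P](t) and y(t) = C[T](t)/C[P](t): since y = K[i]·x + intercept, two late measurements recover the slope without knowing the intercept. The numbers below are made up for illustration only and do not come from any of the cited papers:

```python
# Patlak coordinates at the two scan times (hypothetical values)
x1, y1 = 40.0, 1.30   # first late scan: normalised plasma integral (min), C_T/C_P
x2, y2 = 70.0, 1.90   # second late scan

# slope of the Patlak plot; the y-axis intercept cancels out
Ki = (y2 - y1) / (x2 - x1)   # net influx rate, 1/min
```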
If arterial plasma data at the times of the two PET scans is not available, not even from the images, but a reference region can be found in the images, then it can be used to calculate a surrogate
parameter for K[i]^ref, that in case of dynamic imaging could be calculated using Patlak plot with reference tissue input. Alves et al (2017) have applied this method to [^18F]FDOPA studies, using
occipital cortex as reference region:
C[R] represents the radioactivity concentration in the reference tissue as a function of time. RPT (reference Patlak time) is the area-under-curve (AUC) of the reference tissue from tracer
administration time (t=0) to the scan time (t[1] or t[2]) divided by the concentration in the reference region at the corresponding scan time. For the method to be useful, the RPT(t[1]) must be
assumed to have sufficiently low inter- and intra-individual variability, so that a population average of RPT(t[1]) can be calculated and used in the previous equation, and in calculation of RPT(t
[2]) from equation
See also:
Alves LI, Meles SK, Willemsen AT, Dierckx RA, Marques da Silva AM, Leenders KL, Koole M. Dual time point method for the quantification of irreversible tracer kinetics: A reference tissue approach
applied to [^18F]-FDOPA brain PET. J Cereb Blood Flow Metab. 2017; 37(9): 3124-3134. doi: 10.1177/0271678X16684137.
van den Hoff J, Hofheinz F, Oehme L, Schramm G, Langner J, Beuthien-Baumann B, Steinbach J, Kotzerke J. Dual time point based quantification of metabolic uptake rates in ^18F-FDG PET. EJNMMI Res.
2013; 3: 16. doi: 10.1186/2191-219X-3-16.
Hofheinz F, van den Hoff J, Steffen IG, Lougovski A, Ego K, Amthauer H, Apostolova I. Comparative evaluation of SUV, tumor-to-blood standard uptake ratio (SUR), and dual time point measurements for
assessment of the metabolic uptake rate in FDG PET. EJNMMI Res. 2016; 6(1): 53. doi: 10.1186/s13550-016-0208-5.
Logan J. Graphical analysis of PET data applied to reversible and irreversible tracers. Nucl Med Biol. 2000; 27: 661-670. doi: 10.1016/S0969-8051(00)00137-2.
Patlak CS, Blasberg RG. Graphical evaluation of blood-to-brain transfer constants from multiple-time uptake data. Generalizations. J Cereb Blood Flow Metab. 1985; 5: 584-590. doi: 10.1038/
Tags: Patlak plot, Ki, Late-scan
Updated at: 2022-01-14
Created at: 2018-02-14
Written by: Vesa Oikonen
Bresenham Circle VS others - matalog - 07-04-2021 01:10 AM
On the thoughts of Bresenham circles drawn in BASIC languages, I have taken the opportunity to create a VS here, and there is not a lot of difference.
What led me to show this, was my post on FILLPOLY_P(). Although, I suspect that Bresenham's method can't even help the FILLPOLY algorithm, because that algorithm seems to be flawed in some ways,
especially for shapes described in more than a rough description of their outline.
Fillpoly() seems to need a distance farther than 2 horizontal or vertical pixels, while still connected by pixels, or it does not work correctly.
EXPORT BRES2()
BEGIN
LOCAL XC:=160, YC:=120, R:=100;
LOCAL X:=0, Y:=R, D:=3-2*R;
RECT();
REPEAT
WHILE Y>=X DO
// plot the eight symmetric octant points
PIXON_P(XC+X,YC+Y); PIXON_P(XC-X,YC+Y); PIXON_P(XC+X,YC-Y); PIXON_P(XC-X,YC-Y);
PIXON_P(XC+Y,YC+X); PIXON_P(XC-Y,YC+X); PIXON_P(XC+Y,YC-X); PIXON_P(XC-Y,YC-X);
IF D>0 THEN Y:=Y-1; D:=D+4*(X-Y)+10; ELSE D:=D+4*X+6; END;
X:=X+1;
WAIT(-1); // pause each step to show the drawing order
IF ISKEYDOWN(4) THEN BREAK(2); END; // key exits
END;
UNTIL 0;
END;
Obviously the Bresenham method has been slowed down here to show the beautiful way that it draws.
RE: Bresenham Circle VS others - C.Ret - 07-04-2021 06:41 AM
Nice implementation indeed; you now have an efficient way of filling your circles without loss of detail or fill gaps!
EXPORT FCIRCLE_P(XC,YC,R,Col)
BEGIN
LOCAL D:=3-2*R, X:=0, Y:=R;
WHILE Y>=X DO
IF D>0 THEN Y:=Y-1; D:=D+4*(X-Y)+10 ELSE D:=D+4*X+6 END;
LINE_P(XC+X,YC+Y,XC+X,YC-Y,Col); LINE_P(XC-X,YC+Y,XC-X,YC-Y,Col);
LINE_P(XC+Y,YC+X,XC+Y,YC-X,Col); LINE_P(XC-Y,YC+X,XC-Y,YC-X,Col);
X:=X+1;
END;
END;
But you might have explained a bit more the trick you are using with the indicator variable D. Is the definition correct for any value of radius R, or is there any hidden beast lurking?
It is also a bit of a shame that you didn't post this in your previous thread, since it is closely related; that would have helped readers and subscribers to follow your progress more easily.
RE: Bresenham Circle VS others - Liamtoh Resu - 07-04-2021 12:17 PM
I find these graphics programs fascinating.
I had to grok m's code to figure out how to get an output from c's code.
(eg. 160,120,100,255) gave me a blue circle.
Explanations of bresenham's algorithm tend be very detailed.
Meanwhile I found this link for the bitmap/midpoint circle algorithm:
Thanks again.
RE: Bresenham Circle VS others - C.Ret - 07-04-2021 03:05 PM
(07-04-2021 12:17 PM)Liamtoh Resu Wrote: [...]I had to grok m's code to figure out how to get an output from c's code.
(eg. 160,120,100,255) gave me a blue circle.[...]
Sorry, I don't have commented the code in my previous post. The Col variable is expecting regular color code for the HP Prime.
The best way is to use the RGB(r,b,g,α) function or a valid structured #rrggbbαα binary integer.
Beware that in this code, I produce the method used by matalog and I am not sure that the variable D is correctly set.
RE: Bresenham Circle VS others - Mark Power - 07-07-2021 09:13 PM
The true beauty of Bresenham’s algorithm is that the calculations are limited to integer addition, integer subtraction and integer x4 which can be implemented as shift left by 2 bits. All of which
means the algorithm can be implemented in assembly language/machine code for stunning speed on the most basic of processors.
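For readers without a Prime, the same integer-only loop transliterates directly to Python (pixels collected in a set rather than plotted; left shifts replace the ×4 multiplications, as described above):

```python
def bresenham_circle(xc, yc, r):
    """Return the set of pixels on a circle of radius r centred at (xc, yc),
    using only integer addition, subtraction and left shifts."""
    pixels = set()
    x, y, d = 0, r, 3 - 2 * r
    while y >= x:
        # plot the eight symmetric octant points
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pixels.add((xc + px, yc + py))
        if d > 0:
            y -= 1
            d += ((x - y) << 2) + 10
        else:
            d += (x << 2) + 6
        x += 1
    return pixels
```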
If you want to see an HP48 implementation in assembly language have a look at glib.
The relationship between ATR and standard deviation
Let's begin this post with a gross generalisation:
Professional traders tend to measure risk and target risk using standard deviation. Amateur traders tend to use a funky little number called the ATR: 'Average True Range'.
Both try and achieve the same aim: summarise the typical movement in the price of something using a single number. However they are calculated differently. Can we reconcile the two measures? This is
an important thing to do - it will help us understand the pros and cons of each estimator, and help people using different measures to communicate with each other. It will also help ameliorate the
image of ATR as a poor mans volatility measure, and the standard deviation as some kind of quant witchcraft unsuited to trading in the real world.
A quick primer on the standard deviation (SD)
Measuring standard deviation is relatively easy. We start off with some returns at some time frequency. Most commonly these are daily returns, taken off closing prices. We then plug them into the
usual standard deviation formula:
sqrt( 1/N . sum{[x - x*]^2} ), where x* is the mean return
A more professional method is to use an exponentially weighted moving average; this gives a smoother transition between volatility shifts which is very useful if you're scaling your position according to vol (and you should!).
Some interesting points can be made here.
How many points should one use? All of history, or just last week? Broadly speaking using the last few weeks of standard deviation gives the best forecast for future standard deviation.
We don't get closing prices over weekends. To measure a calendar day volatility rather than a business day volatility I'd need to multiply the value by sqrt(7/X), where X is the number of business days. There is a standard assumption in doing any time scaling of volatility, which is that returns are independent. A more subtle assumption that we're making is
that the market price is about as volatile over the weekend as during the week. If for example we assumed that nothing happened at the weekend then no adjustment would be required.
We could use less frequent prices, weekly or monthly, or even annual. However it's not obvious why you'd want to do that - it will give you less data.
We could, in theory, use more frequent prices; for example hourly, minute or even second by second prices. Note that at some point the volatility of the price would be dominated by 'bid-ask bounce' (even if the mid
price doesn't shift, a series of buys and offers in the market will create apparent movement) and you'd have an overestimate of volatility. When you reach that point depends on the liquidity of the
market, and the ratio of the volatility to the tick size.
If we use more frequent prices then we'd need to scale them up, eg to go from hourly volatility to calendar day volatility we'd do something like multiply by sqrt(Y). But what should Y be? If there are 8 hours of market open time then should we multiply by 8? That assumes that there is no volatility overnight, something we know isn't true. Should we multiply by
24? That assumes that we are as likely to see market moving action at 3am as we are when the non farm payroll comes out in the afternoon (UK market time).
[Note: Even in a market that trades 24 hours a day like the OTC spot FX market there is still an issue... although we have hourly prices it's still unclear whether we should treat them all as
contributing equally to volatility.]
This is analogous to our problem with rescaling business day vol - when the market is closed the vol is unobservable; we don't know what the vol is like when the market is closed versus when it is
open. This is a key insight which will be important later.
A quick primer on the Average True Range (ATR)
First we calculate the true range for each day:
max[(high-low), abs(high - close_prev), abs(low-close_prev)]
Then we take a moving average of the true ranges, over some number of data points n (the averaging is effectively an exponential weighting, with a fixed weight on the most recent value of 1/n). As
with volatility the usual practice involves using daily data, but I guess you could be dumb and use less frequent data, or try and use more frequent data.
Using more data is probably better, although one obvious disadvantage is that the high and the low could be spurious noise or one off spikes. I suppose one could argue that closing prices are subject
to being pushed around as well.
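The calculation just described can be written directly (a simple moving average of the true range; Wilder's original ATR uses a slightly different recursive smoothing):

```python
import numpy as np

def atr(high, low, close, n=14):
    """Average True Range: rolling n-day mean of the true range."""
    prev_close = np.concatenate([[close[0]], close[:-1]])
    tr = np.maximum.reduce([high - low,
                            np.abs(high - prev_close),
                            np.abs(low - prev_close)])
    out = np.full(len(tr), np.nan)     # undefined until n points seen
    for i in range(n - 1, len(tr)):
        out[i] = tr[i - n + 1:i + 1].mean()
    return out
```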
Comparing SD and ATR
So the key differences here are:
• standard deviations are normalised versus the average return; the ATR is not
• There is a square, average, then square root in the calculation of standard deviations; this will upweight larger returns versus smaller returns. The ATR calculation is just an average of
absolute changes
• the standard deviation is calculated using just a daily return close - close_prev, so doesn't use any intraday data unlike the ATR. Also the true range will always be equal to or greater than the
daily change.
The first point isn't too important since over daily data the average return is going to be relatively small compared to the volatility of the price. However it does mean that in a trending market
the ATR will be biased upwards compared to SD. The second point is quite interesting, and it means for example that just after a large market move the ATR will be understated compared to the SD. The
third point is the most interesting of all and I'll spend most of the post discussing it.
Mapping of SD and ATR: the easy bit
Let's start with the easy bit; these two points:
• There is a square, average, then square root in the calculation of standard deviations; this will upweight larger returns versus smaller returns. The ATR calculation is just an average.
• standard deviations are normalised versus the average return; the ATR is not
As I've already mentioned we can pretty much ignore the second point; certainly if we assume prices don't have any drift then on average it will cancel to zero. The effect of the first point will
depend on the underlying distribution of returns. Let's assume for the moment they are Gaussian, then using a simple simulation you could do in a spreadsheet the empirical effect comes out at a ratio
of about 1.255.
In other words focusing purely on the difference between square,average,square root and the mean absolute return, to go from ATR to SD you'd multiply by 1.255. In real life there are likely to be
more jumps, which suggests this number would be a little bigger.
I leave the analytical calculation of this figure as an exercise for the reader. I wish it was a much cooler number, but it isn't.
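The simulation is only a few lines; for Gaussian returns the answer to the exercise is sqrt(pi/2) ≈ 1.2533, the ratio of the standard deviation to the mean absolute deviation:

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0, 1, 1_000_000)   # simulated Gaussian daily returns

# ratio of standard deviation to mean absolute change
ratio = returns.std() / np.abs(returns).mean()
```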
Mapping of SD and ATR: the hard bit
That 'just' leaves us with this difference:
• the standard deviation is calculated using just a daily return close - close_prev, so doesn't use any intraday data unlike the ATR. Also the true range will always be equal to or greater than the
daily change.
So to go from SD to ATR we'd need to multiply the SD by Y, where Y>=1. But here we can't just do some trivial adjustment. Let me explain why. Consider the following price action:
The blue line shows when the market was closed; the red line shows when it was open. The x-axis is the hour count, so the market is open between 8am and 5pm daily. The return for standard deviation
purposes is the difference between the closing price on the second day (taken at 5pm) and the first day (also at 5pm):
close - close_prev = 109.48 - 103.89 = 5.59
But the true range for the second day shown will be:
max[(high-low), abs(high - close_prev), abs(low-close_prev)]
= max[(109.48 - 107.09), abs(109.48-103.89), abs(107.09-103.89)] = 5.59
In this example the two terms are equivalent. However this won't always be the case; and it's easy to construct examples where the differences aren't the same. To come up with a rule of thumb like I
did before I would need to generate loads of these experiments.
[I could also do this experiment in reverse by taking the closing prices and using something like a Brownian bridge to interpolate the missing values]
However in generating the plot above I've made a key assumption which is that the volatility is the same throughout the 24 hours - and as already alluded to we can't easily make that assumption. It's
more likely that the vol is lower. Here for example is another plot, but this time I've assumed that when the market is closed the vol is 1/5 of the normal value when the market isn't trading.
This is all very well, but what should the ratio of open:close volatility be? By definition it's unobservable. We could try and infer it from tracking option prices which are close to expiry, but
that could well be a biased measure (people are likely to demand higher implied vol when the market is closed, since the option isn't hedgeable).
[This problem would also apply if we tried to use the brownian bridge approach]
Mapping of SD and ATR: an empirical approach
We've got as far as we can running experiments; the only thing left to do is look at some real data. Here is some pysystemtrade code. This will acquire contract level data (with HLOC prices) from quandl, and set up some futures data in mongodb, as described previously. It will then measure the ATR and standard deviation, and compare the two.
I ran this over a whole bunch of futures contracts, and the magic number I got was 0.875. Basically if you have an ATR and you want to convert it to a daily standard deviation, then you multiply the
ATR by 0.875. If you prefer your standard deviations annualised then multiply the ATR by 14.
It's worth thinking about how this number relates to our previous findings:
"focusing purely on the difference between square,average,square root and the mean absolute return, to go from ATR to SD you'd multiply by 1.255. In real life there are likely to be more jumps, which
suggests this number would be a little bigger."
"[to account for the difference between range and true range] So to go from SD to ATR we'd need to multiply the SD by Y, where Y>=1."
In other words:
SD = ATR*0.875 [empirical]
ATR = SD*Y and SD = ATR*1.255 -> SD = ATR*1.255/Y [theory]
This suggests Y is around 1.43. I note in passing this is damn close to the square root of 2. Does this mean anything? I don't know but it's much cooler than the somewhat arbitrary 1.255.
There is no best measure of volatility. SD and ATR are just different, each with their own strengths and weaknesses. If you happen to have an ATR measure but want to use standard deviation as a risk
measure then you can multiply it by 0.875. The reverse is also possible.
Incidentally I'll be using this result in my new book; to be published next year.
18 comments:
1. Any clue on the title/subject of the new book?
1. The title will probably have the word 'trading' in it.
2. "very useful if you're scaling your position according to vol (and you should!)." Or perhaps not, at least not in a linear manner. The interesting stuff is not in the comparisons, but in the resulting distributions, which exhibit non-intuitive properties. Hint: These series are not mean reverting. Good trading!
3. lots of use for both approaches for entries as well as stops. My experience is the current market structure gives the best stop points
4. I am now interested on how this "current market structure" is defined. Just to make an objective comparison with Robert's work. Thank you.
5. If you use back-adjusted futures data for backtesting there is a good chance that your historical prices will go negative at some point in the past. Usually the daily return calculations use the
ratio of Close/Close_Prev, which becomes meaningless with negative prices, which in turn prevents you from using stddev. That's the reason I prefer ATR as my vol measure. Stddev practitioners
never discuss negative historical prices and maybe that's not an issue with stocks, but for futures traders it's a reality.
I noticed that you use the difference (Close - Close_Prev), not the ratio, which I suppose gets around the issue of negative prices.
Either way, very good discussion.
1. Yes when using adjusted prices I calculate any percentage standard deviation as (close - close_prev) / (close_prev_specific_contract) where specific_contract is NOT the adjusted price but the
price of the current contract traded.
Though mostly I don't use percentages... as you've noticed.
So this isn't a valid reason to prefer ATR or standard deviation IMHO, since it's easily fixed.
6. The way I see, ATR is a measure related to "plain" volatility, i.e. the squared STD, which would connect it to the (more robust) Kelly criterion based position sizing while STD is used for the
7. Very interesting article. I used to measure targets and stops in ATR units. Recently I've been working on a system that uses Z Scores and moving averages of Z Scores. With this system stops and
targets are more suitable in Z units or standard deviations. However not all trading platforms can easily calculate stdev for stops & targets or not as easily available as with ATR. Based on my
understanding of your article, a target of 1Z = 0.875 X ATR assuming we are using the same period in our Z & ATR calculations. Correct?
1. Not sure. What is your calculation for this 'Z score'?
8. ZScore is calculated with formula (Close-SimpleMovingAverage)/StandardDeviation. A Z Score of lets say 1.0 tells us price is above 1 x deviation from the mean. The period to calculate the moving
average & standard deviation must be the same. The resulted ZScore obviously applies to that moving average only. A spreadsheet can easily be created to calculate the ZScore of most popular
averages (20,50,100,253) and have a clear macroview of price action in reference to short, medium & long term averages.
9. Oh man, can you give a hint of how to get 1.255 ? Are you using a pure random walk with ATR=1 and no drift so with 0 average, then, Sigma should also be 1, no ?
1. No. For example
Returns-0.5, +0.5, -1.5, +1.5
Avg(abs(x)) = 1.0
Std dev = sqrt(.25+.25+2.25+2.25)= 2.24
The exact ratio will depend on the distribution
2. >>> pandas.Series([-0.5, 0.5, -1.5, 1.5]).std()
>>> numpy.sqrt((0.5**2 + 0.5**2 + 1.5**2 + 1.5**2)/(4 - 1))
So why 2.24? Why do you use the average *absolute* value of x?
3. I actually forgot to divide by 3, so it should be sqrt(5/3) = 1.29 :-)
"Why do you use the average *absolute* value of x?" I didn't; that would be 1.333
4. If you didn't use the average absolute value of x, why did you write "Avg(abs(x)) = 1.0"?
5. Oh sorry misunderstood. Avg(abs(1.0)) is an ATR calculation.
10. SD is 0.8 ATR for a normal distribution
Comments are moderated. So there will be a delay before they are published. Don't bother with spam, it wastes your time and mine.
Free Math Mystery Picture Worksheets Multiplication
Math, and multiplication in particular, forms the cornerstone of countless academic disciplines and real-world applications. Yet, for many students, mastering multiplication can present a challenge. To address this hurdle, educators and parents have adopted an effective tool: Free Math Mystery Picture Worksheets Multiplication.
Intro to Free Math Mystery Picture Worksheets Multiplication
Mystery Picture FREEBIE: Differentiated Mystery Pictures are an exciting way to review multiplication. Students solve the multiplication problem in each box and then color it in according to the key. Your students will love watching the picture unfold before their eyes. This dinosaur picture comes
Math Mystery Picture Worksheets: Basic addition, subtraction, multiplication, and division fact worksheets. Mystery picture worksheets require students to answer basic facts and color according to the code. For coordinate grid graph art pictures, please jump over to Graph Art Mystery Pictures.
The Value of Multiplication Practice: Understanding multiplication is essential, laying a solid foundation for advanced mathematical concepts. Free Math Mystery Picture Worksheets Multiplication offer structured and targeted practice, fostering a deeper comprehension of this fundamental arithmetic operation.
Development of Free Math Mystery Picture Worksheets Multiplication
Multiplication Mystery Worksheets Free Printable
They have fun coloring and discovering the mystery picture and stay engaged for the whole activity. This free multiplication mystery picture of a spooky house is great for October, especially during Halloween week. Students will complete the single digit multiplication problems, then use the answers as the code to create the picture.
Multiplication Color By Number Mystery Picture: Mystery Pictures is my favorite way to teach the math. This freebie is a MUST HAVE. Solve the math problem, look at the color next to it, and then color in ALL of the squares that have that answer. Keywords: math, basic operations, 2nd grade, 3rd grade, 4th grade, homeschooler, worksheets.
From traditional pen-and-paper exercises to digitized interactive formats, Free Math Mystery Picture Worksheets Multiplication have evolved, accommodating diverse learning styles and preferences.
Kinds Of Free Math Mystery Picture Worksheets Multiplication
Basic Multiplication Sheets: Simple exercises focusing on multiplication tables, helping learners build a solid math foundation.
Word Problem Worksheets
Real-life scenarios integrated into problems, enhancing critical thinking and application skills.
Timed Multiplication Drills: Tests designed to improve speed and accuracy, aiding quick mental math.
Advantages of Using Free Math Mystery Picture Worksheets Multiplication
Free Math Mystery Picture Worksheets Multiplication John Moon s
Math Mystery Pictures: Practice basic addition, subtraction, multiplication, and division facts the fun way with Super Teacher Worksheets Math Mystery Pictures. Simply solve each problem and color according to the key at the bottom to reveal an interesting picture. We have over 20 mystery pictures available for each basic math operation.
2nd Grade Math Mystery Pictures Coloring Worksheets: Your 2nd graders will surely be interested to work through the math problems in this collection so they can solve the mystery pictures. The hidden images include pictures of animals, kids, vehicles, fruits, vegetables, monsters, and a whole lot more.
Improved Mathematical Skills
Consistent practice builds multiplication proficiency, enhancing overall math abilities.
Enhanced Problem-Solving Abilities
Word problems in worksheets develop analytical thinking and strategy application.
Self-Paced Learning Benefits
Worksheets accommodate individual learning paces, promoting a comfortable and flexible learning environment.
How to Create Engaging Free Math Mystery Picture Worksheets Multiplication
Incorporating Visuals and Colors: Lively visuals and colors capture attention, making worksheets visually appealing and engaging.
Incorporating Real-Life Scenarios
Relating multiplication to everyday situations adds relevance and practicality to exercises.
Tailoring Worksheets to Various Skill Levels
Customizing worksheets based on varying proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games: Technology-based resources provide interactive learning experiences, making multiplication engaging and enjoyable. Interactive Websites and Applications: Online platforms provide diverse and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Various Learning Styles
Visual Learners: Visual aids and diagrams aid comprehension for students inclined toward visual learning. Auditory Learners: Verbal multiplication problems or mnemonics cater to students who grasp concepts through auditory means. Kinesthetic Learners: Hands-on tasks and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Implementation in Learning
Consistency in Practice: Regular practice reinforces multiplication skills, promoting retention and fluency. Balancing Repetition and Variety: A mix of repetitive exercises and varied problem formats maintains interest and understanding. Providing Constructive Feedback: Feedback helps identify areas for improvement, encouraging continued progress.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Challenges: Monotonous drills can lead to disinterest; innovative approaches can reignite motivation. Overcoming Fear of Math: Negative perceptions around mathematics can hinder progress; creating a positive learning environment is vital.
Impact of Free Math Mystery Picture Worksheets Multiplication on Academic Performance
Studies and Research Findings: Research shows a positive correlation between regular worksheet use and improved mathematics performance.
Free Math Mystery Picture Worksheets Multiplication emerge as versatile tools, promoting mathematical proficiency in learners while accommodating diverse learning styles. From basic drills to interactive online resources, these worksheets not only improve multiplication skills but also promote critical thinking and problem-solving abilities.
Multiplication Mystery Picture Worksheets Mystery Graph Pictures
Multiplication Worksheets Mystery Picture PrintableMultiplication
Check more of Free Math Mystery Picture Worksheets Multiplication below
Free Printable Mystery Picture Worksheets
Free Printable Math Mystery Picture Worksheets Printable Worksheets
Free Printable Math Mystery Picture Worksheets Free Printable
Math Mysteries Printable Free Printable Word Searches
Emoji Multiplication Mystery Pictures Ford s Board
Math Mystery Picture Worksheets Super Teacher Worksheets
Free Digital Math Activities for Multiplication 35 Mystery pictures
Digital math activities can be so much more than just entering an answer in a text box. In fact, digital math activities can be hands-on, just like math centers. This post will share one of my new favorite digital math activities: Mystery Pictures, with free basic multiplication practice ones to use right away.
Math Mystery Puzzle Worksheet
Frequently Asked Questions (FAQs)
Are Free Math Mystery Picture Worksheets Multiplication suitable for all age groups?
Yes, worksheets can be tailored to different ages and ability levels, making them adaptable for many learners.
How often should students practice using Free Math Mystery Picture Worksheets Multiplication?
Consistent practice is key. Regular sessions, ideally a few times a week, can produce significant improvement.
Can worksheets alone improve math skills?
Worksheets are a valuable tool but should be supplemented with varied learning methods for comprehensive skill development.
Are there online platforms offering free Free Math Mystery Picture Worksheets Multiplication?
Yes, many educational websites offer free access to a wide range of Free Math Mystery Picture Worksheets Multiplication.
How can parents support their children's multiplication practice at home?
Encouraging consistent practice, providing assistance, and creating a positive learning environment are beneficial steps.
how to calculate degeneracy of energy levels
Degeneracy means that a system can be in multiple, physically distinct states (labelled by different quantum numbers) that all yield the same energy. Degeneracy typically arises due to underlying symmetries in the Hamiltonian: the eigenfunctions corresponding to an n-fold degenerate eigenvalue form a basis for an n-dimensional irreducible representation of the symmetry group of the Hamiltonian.
For the hydrogen atom (where the repulsive forces due to electrons are absent), the energy depends only on the principal quantum number n, so the degeneracy of level n is n^2. The ground state, n = 1, has degeneracy n^2 = 1 (which makes sense because l, and therefore m, can only equal zero for this state); for n = 2, you have a degeneracy of 4. The degeneracy with respect to m_l is an essential degeneracy which is present for any central potential, and arises from the absence of a preferred spatial direction. A perturbation can break such a degeneracy; this is particularly important for the hydrogen ground state.
For the three-dimensional harmonic oscillator the energy levels are E_{n_x, n_y, n_z} = (n_x + n_y + n_z + 3/2) ħω, so all triples (n_x, n_y, n_z) with the same sum are degenerate.
In a degenerate electron gas, the fraction of electrons that we "transfer" to higher energies is of order k_B T / E_F, and the energy increase for these electrons is of order k_B T.
Work-Energy Theorem | AP Physics C: Mechanics Class Notes | Fiveable
Energy is one of the biggest concepts in physics, and you can see it in every unit we've covered in the past and will cover in the future. A tip given to me by a wise physics teacher was that almost
every FRQ can be at least partially tackled with energy!
Force Interactions
• Why does a stretched rubber band return to its original length?
• Why is it easier to walk up a flight of steps, rather than run, when the gravitational potential energy of the system is the same?
Force Interactions:
Why is no work done when you push against a wall, but work is done when you coast down a hill?
Unit 3 will cover approximately 14%-17% of the exam and should take around 10 to 20 45-minute class periods to cover. The AP Classroom personal progress check has 20 multiple choice questions and 1 free response question for you to practice on.
The net work done on a (point-like) object is equal to the object’s change in the kinetic energy.
The work-energy theorem is a fundamental principle in physics that relates the work done on an object to the change in its kinetic energy. Here are some key concepts and implications of the theorem:
• Work is defined as the dot product of force and displacement. Mathematically, it can be represented as W = Fdcosθ, where F is the force, d is the displacement, and θ is the angle between the
force and displacement vectors.
• Kinetic energy is the energy an object possesses due to its motion. It is given by the formula KE = 1/2mv^2, where m is the mass of the object and v is its velocity.
• The work-energy theorem states that the work done on an object is equal to the change in its kinetic energy. So if W is the work done on an object and ΔK is the change in its kinetic energy, then
W = ΔK.
• This means that any work done on an object will cause a change in its kinetic energy. For example, if you apply a force to an object and make it move, the work you do will increase the object's
kinetic energy. Conversely, if you apply a force in the opposite direction to slow down an object, the work you do will decrease its kinetic energy.
• The theorem applies to both conservative and non-conservative forces. Conservative forces are forces that depend only on the object's position, such as gravitational and spring forces.
Non-conservative forces are forces that depend on the object's velocity, such as friction and air resistance. The work-energy theorem states that the work done by any force, whether conservative
or non-conservative, will cause a change in the object's kinetic energy.
• The theorem also applies to systems where multiple forces are acting on the object. For example, if you have an object being pushed by two different forces, the work done by the forces will be
added together to determine the change in kinetic energy of the object.
• The theorem can be used to calculate the work done by a force and the change in kinetic energy of an object. For example, if you know the force and displacement of an object, you can use the
work-energy theorem to calculate the change in its kinetic energy. Or, if you know the initial and final kinetic energy of an object, you can use the theorem to calculate the work done on it.
• The theorem is valid only for the case of constant mass. If the mass of an object changes, the theorem is no longer valid.
In equation form, the Work-Energy Theorem looks like this: W_net = ΔK
In which W is work and K is kinetic energy.
Kinetic energy is typically defined as: KE = 1/2mv^2
where m is mass and v is velocity.
Here is the derivation of the Work-Energy Theorem:
Start from Newton's second law, F = m(dv/dt), then use the chain rule: m(dv/dt) = m(dv/dx)(dx/dt) = mv(dv/dx)
And we know that the equation for work is W = ∫F dx, so:
W = ∫mv dv = m[1/2(v^2)] evaluated from Vo to Vf = 1/2mVf^2 - 1/2mVo^2 = ΔK
Work done by a variable force is the area under a force vs. position plot! This can be seen in your formula chart as: W = ∫F·dr
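As a sketch of the area-under-the-curve idea, here is a numerical integration of an assumed force profile F(x) = 3x^2 N over 0 to 2 m (this example is not from the notes; the exact work is x^3 evaluated over the interval, i.e. 8 J):

```python
import numpy as np

x = np.linspace(0.0, 2.0, 10001)
F = 3.0 * x**2                                        # assumed force profile, in N
W = float(np.sum((F[1:] + F[:-1]) * np.diff(x)) / 2)  # trapezoidal area under F(x)
print(f"W = {W:.4f} J")                               # close to the exact value, 8 J
```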
⚠️Wait...what is work?
Work is done when a force exerted on an object causes the object to be displaced. Work is a scalar that can be negative or positive, depending on whether energy is put into or taken out of the system.
If you know about vectors, you should be aware that work is the scalar product between force and displacement. Only the force parallel to the direction of motion is included.
Here's the most popular formula for work that is not calculus based: W = Fdcosθ
Practice Questions:
1. (a) Calculate the work done on a 1500-kg elevator car by its cable to lift it 40.0 m at constant speed, assuming friction averages 100 N.
(b) What is the work done on the lift by the gravitational force in this process?
(c) What is the total work done on the lift? (Taken from Lumen Learning)
(a) Start by drawing a free body diagram, with the force of tension, the friction force, and the gravitational force.
(c) The lift moves at constant speed, so its kinetic energy does not change; by the work-energy theorem, the net work done on the lift (by tension, gravity, and friction together) is zero.
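A quick numeric check of this problem, taking g = 9.80 m/s^2 as an assumption:

```python
m, d, f, g = 1500.0, 40.0, 100.0, 9.80
T = m * g + f                  # constant speed: tension balances weight + friction
W_tension = T * d              # (a) work done by the cable, about 592 kJ
W_gravity = -m * g * d         # (b) gravity opposes the displacement, about -588 kJ
W_friction = -f * d            # friction also opposes the displacement
W_net = W_tension + W_gravity + W_friction   # (c) zero, since delta K = 0
print(W_tension, W_gravity, W_net)
```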
2. (a) Using energy considerations, calculate the average force a 60.0-kg sprinter exerts backward on the track to accelerate from 2.00 to 8.00 m/s in a distance of 25.0 m, if he encounters a
headwind that exerts an average force of 30.0 N against him. (Taken from Lumen Learning)
Always start by drawing your free body diagram!
Let's start off with a tried and true classic: Newton's Second Law
We're looking for the force that the sprinter is exerting but we don't know his acceleration!
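That is exactly where the energy approach pays off: the net work equals the change in kinetic energy, with no acceleration needed. A sketch of the computation:

```python
m, v0, vf, d, f_wind = 60.0, 2.0, 8.0, 25.0, 30.0
delta_K = 0.5 * m * (vf**2 - v0**2)   # change in kinetic energy, 1800 J
# Work-energy theorem: W_net = (F_track - f_wind) * d = delta_K, so:
F_track = delta_K / d + f_wind
print(F_track)                        # 102.0 N
```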
How to use TensorFlow to perform sentiment analysis?
Performing sentiment analysis using TensorFlow can be accomplished through the following steps:
1. Data Preprocessing: The first step is to preprocess the data. This includes tasks such as cleaning the data, tokenizing it, and converting it to a numerical representation. The data should also
be split into training and testing datasets.
2. Building the Model: The next step is to build a neural network model. One common approach is to use a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) cells. The input layer of
the network will take in the preprocessed data, followed by one or more hidden layers, and an output layer. The output layer will produce a probability distribution over the sentiment classes
(e.g., positive, negative, neutral).
3. Training the Model: Once the model is built, it needs to be trained on the training dataset. During training, the model will learn to minimize a loss function (e.g., cross-entropy) by adjusting
the weights and biases of the neural network.
4. Evaluating the Model: After training, the model needs to be evaluated on the testing dataset. This will give an estimate of the model's performance on new, unseen data.
5. Making Predictions: Finally, the trained model can be used to make predictions on new, unseen data. To make a prediction, the preprocessed data is fed into the model, and the output layer
produces a probability distribution over the sentiment classes.
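Step 1 above can be sketched without any particular framework; the toy corpus, helper names, and padding length below are illustrative assumptions, and the resulting fixed-length integer sequences are what would be fed into the TensorFlow model:

```python
import random
import re

# Tiny illustrative corpus of (text, label) pairs: 1 = positive, 0 = negative.
corpus = [("great movie, loved it", 1), ("terrible plot", 0),
          ("loved the acting", 1), ("boring and terrible", 0)]

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

# Build a vocabulary; id 0 is reserved for padding.
vocab = {"<pad>": 0}
for text, _ in corpus:
    for tok in tokenize(text):
        vocab.setdefault(tok, len(vocab))

def encode(text, maxlen=6):
    # Unknown words fall back to the padding id here; real preprocessing
    # would usually reserve a dedicated <unk> id instead.
    ids = [vocab.get(t, 0) for t in tokenize(text)][:maxlen]
    return ids + [0] * (maxlen - len(ids))   # pad to a fixed length

data = [(encode(t), y) for t, y in corpus]
random.seed(0)
random.shuffle(data)
split = int(0.75 * len(data))
train, test = data[:split], data[split:]   # train/test split
print(len(train), len(test))
```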
In summary, performing sentiment analysis using TensorFlow involves preprocessing the data, building a neural network model, training the model, evaluating its performance, and using it to make
predictions on new data.
Price per Bottle Calculator
The Price per Bottle Calculator can calculate the price for each bottle based on the total price of all the bottles.
To calculate the price per bottle, we divide the total price of all the bottles by the number of bottles.
Please enter the total price of all the bottles and the number of bottles so we can calculate the price per bottle:
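The calculation behind this page is a single division; as a sketch (the function name is illustrative):

```python
def price_per_bottle(total_price, num_bottles):
    # Price per bottle = total price of all the bottles / number of bottles.
    if num_bottles <= 0:
        raise ValueError("number of bottles must be positive")
    return total_price / num_bottles

print(round(price_per_bottle(23.94, 6), 2))  # 3.99
```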
Price per Brick Calculator
Here is a similar calculator you may find interesting.
Math Counterexamples
Continuity versus uniform continuity
We consider real-valued functions.
A real-valued function \(f : I \to \mathbb R\) (where \(I \subseteq \mathbb R\) is an interval) is continuous at \(x_0 \in I\) when: \[(\forall \epsilon > 0) (\exists \delta > 0)(\forall x \in I)(\vert x- x_0 \vert \le \delta \Rightarrow \vert f(x)- f(x_0) \vert \le \epsilon).\] When \(f\) is continuous at all \(x \in I\), we say that \(f\) is continuous on \(I\).
\(f : I \to \mathbb R\) is said to be uniformly continuous on \(I\) if \[(\forall \epsilon > 0) (\exists \delta > 0)(\forall x,y \in I)(\vert x- y \vert \le \delta \Rightarrow \vert f(x)- f(y) \vert \le \epsilon).\]
Obviously, a function which is uniformly continuous on \(I\) is continuous on \(I\). Is the converse true? The answer is negative.
An (unbounded) continuous function which is not uniformly continuous
The map \[
f : & \mathbb R & \longrightarrow & \mathbb R \\
& x & \longmapsto & x^2 \end{array}\] is continuous. Let’s prove that it is not uniform continuous. For \(0 < x < y\) we have \[\vert f(x)-f(y) \vert = y^2-x^2 = (y-x)(y+x) \ge 2x (y-x)\] Hence for \
(y-x= \delta >0\) and \(x = \frac{1}{\delta}\) we get
\[\vert f(x) -f(y) \vert \ge 2x (y-x) =2 > 1\] which means that the definition of uniform continuity is not fulfilled for \(\epsilon = 1\).
For this example, the function is unbounded as \(\lim\limits_{x \to \infty} x^2 = \infty\).
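A quick numeric illustration of the proof: with \(y - x = \delta\) and \(x = 1/\delta\), the gap \(\vert f(x) - f(y) \vert = 2 + \delta^2\) never drops below \(\epsilon = 1\), no matter how small \(\delta\) is:

```python
def f(x):
    return x * x

for delta in (1.0, 0.1, 0.01, 0.001):
    x = 1.0 / delta
    y = x + delta
    gap = abs(f(y) - f(x))   # equals 2*x*delta + delta**2 = 2 + delta**2
    print(delta, gap)
    assert gap > 1           # so epsilon = 1 can never be met
```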
Around binary relations on sets
We are considering here binary relations on a set \(A\). Let’s recall that a binary relation \(R\) on \(A\) is a subset of the cartesian product \(R \subseteq A \times A\). The statement \((x,y) \in
R\) is read as \(x\) is \(R\)-related to \(y\) and also denoted by \(x R y \).
Some important properties of a binary relation \(R\) are:
Reflexivity: for all \(x \in A\) it holds \(x R x\)
Irreflexivity: for all \(x \in A\) it holds not \(x R x\)
Symmetry: for all \(x,y \in A\) it holds that if \(x R y\) then \(y R x\)
Antisymmetry: for all \(x,y \in A\) if \(x R y\) and \(y R x\) then \(x=y\)
Transitivity: for all \(x,y,z \in A\) it holds that if \(x R y\) and \(y R z\) then \(x R z\)
A relation that is reflexive, symmetric and transitive is called an equivalence relation. Let’s see that being reflexive, symmetric and transitive are independent properties.
Symmetric and transitive but not reflexive
We provide two examples of such relations. For the first one, we take for \(A\) the set of the real numbers \(\mathbb R\) and the relation \[R = \{(x,y) \in \mathbb R^2 \, | \, xy >0\}.\] \(R\) is
symmetric as the multiplication is also symmetric. \(R\) is also transitive as if \(xy > 0\) and \(yz > 0\) you get \(xy^2 z >0\). And as \(y^2 > 0\), we have \(xz > 0\) which means that \(x R z\).
However, \(R\) is not reflexive as \(0 R 0\) doesn’t hold.
For our second example, we take \(A= \mathbb N\) and \(R=\{(1,1)\}\). It is easy to verify that \(R\) is symmetric and transitive. However \(R\) is not reflexive as \(n R n\) doesn’t hold for \(n \neq 1\).
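The second example can be checked by brute force on a finite stand-in for \(\mathbb N\) (the helper names below are illustrative):

```python
def is_reflexive(R, A):
    return all((a, a) in R for a in A)

def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

def is_transitive(R):
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

A = {1, 2, 3}          # finite stand-in; reflexivity already fails here
R = {(1, 1)}
print(is_reflexive(R, A), is_symmetric(R), is_transitive(R))  # False True True
```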
A proper subspace without an orthogonal complement
We consider an inner product space \(V\) over the field of real numbers \(\mathbb R\). The inner product is denoted by \(\langle \cdot , \cdot \rangle\).
When \(V\) is a finite dimensional space, every proper subspace \(F \subset V\) has an orthogonal complement \(F^\perp\) with \(V = F \oplus F^\perp\). This is no longer true for infinite dimensional spaces and we present here an example.
Consider the space \(V=\mathcal C([0,1],\mathbb R)\) of the continuous real functions defined on the segment \([0,1]\). The bilinear map
\langle \cdot , \cdot \rangle : & V \times V & \longrightarrow & \mathbb R \\
& (f,g) & \longmapsto & \langle f , g \rangle = \displaystyle \int_0^1 f(t)g(t) \, dt \end{array}
\] is an inner product on \(V\).
Let’s consider the proper subspace \(H = \{f \in V \, ; \, f(0)=0\}\). \(H\) is an hyperplane of \(V\) as \(H\) is the kernel of the linear form \(\varphi : f \mapsto f(0)\) defined on \(V\). \(H\)
is a proper subspace as \(\varphi\) is not always vanishing. Let’s prove that \(H^\perp = \{0\}\).
Take \(g \in H^\perp\). By definition of \(H^\perp\) we have \(\int_0^1 f(t) g(t) \, dt = 0\) for all \(f \in H\). In particular the function \(h : t \mapsto t g(t)\) belongs to \(H\). Hence
\[0 = \langle h , g \rangle = \displaystyle \int_0^1 t g(t)g(t) \, dt\] The map \(t \mapsto t g^2(t)\) is continuous, non-negative on \([0,1]\) and its integral on this segment vanishes. Hence \(t g^
2(t)\) is always vanishing on \([0,1]\), and \(g\) is always vanishing on \((0,1]\). As \(g\) is continuous, we finally get that \(g = 0\).
\(H\) doesn’t have an orthogonal complement.
Moreover we have
\[(H^\perp)^\perp = \{0\}^\perp = V \neq H\]
A non-compact closed ball
Consider a normed vector space \((X, \Vert \cdot \Vert)\). If \(X\) is finite-dimensional, then a subset \(Y \subset X\) is compact if and only if it is closed and bounded. In particular a closed
ball \(B_r[a] = \{x \in X \, ; \, \Vert x – a \Vert \le r\}\) is always compact if \(X\) is finite-dimensional.
What about infinite-dimensional spaces?
The space \(A=C([0,1],\mathbb R)\)
Consider the space \(A=C([0,1],\mathbb R)\) of the real continuous functions defined on the interval \([0,1]\) endowed with the sup norm:
\[\Vert f \Vert = \sup\limits_{x \in [0,1]} \vert f(x) \vert\]
Is the closed unit ball \(B_1[0]\) compact? The answer is negative and we provide two proofs.
The first one is based on open covers. For \(n \ge 1\), we denote by \(f_n\) the piecewise linear map defined by \[\begin{cases}
f_n(\frac{1}{2^n})=1 \\
f_n(x)=0 & \text{for } x \in [0,1] \setminus (\frac{1}{2^n}-\frac{1}{2^{n+2}}, \frac{1}{2^n}+\frac{1}{2^{n+2}})
\end{cases}\] All the \(f_n\) belong to \(B_1[0]\). Moreover for \(1 \le n < m\) we have \(\frac{1}{2^m}+\frac{1}{2^{m+2}} < \frac{1}{2^n}-\frac{1}{2^{n+2}}\). Hence the supports of the \(f_n\) are disjoint and \(\Vert f_n - f_m \Vert = 1\).
Now consider the open cover \(\mathcal U=\{B_{\frac{1}{2}}(x) \, ; \, x \in B_1[0]\}\). For \(x \in B_1[0]\) and \(u,v \in B_{\frac{1}{2}}(x)\), \(\Vert u -v \Vert < 1\). Therefore, each \(B_{\frac{1}{2}}(x)\) contains at most one \(f_n\), so a finite subcover of \(\mathcal U\) would miss infinitely many \(f_n\), proving that \(B_1[0]\) is not compact.
Second proof, based on convergent subsequences. As \(A\) is a metric space, it is enough to prove that \(B_1[0]\) is not sequentially compact. Consider the sequence of functions \(g_n : x \mapsto x^n\). The sequence is bounded as for all \(n \in \mathbb N\), \(\Vert g_n \Vert = 1\). If \((g_n)\) had a convergent subsequence, the subsequence would converge pointwise to the function equal to \(0\) on \([0,1)\) and to \(1\) at \(1\). As this function is not continuous, \((g_n)\) cannot have a subsequence converging to a map \(g \in A\).
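As a quick numerical illustration (outside the proof), the \(g_n\) stay uniformly separated in the sup norm: for \(m=2n\), \(\Vert g_n - g_m \Vert = 1/4\), attained where \(x^n = 1/2\). A short Python sketch (function name is ours):

```python
def sup_distance(n, m, steps=100_000):
    """Grid approximation of sup_{x in [0,1]} |x**n - x**m|."""
    return max(abs((i / steps) ** n - (i / steps) ** m) for i in range(steps + 1))

# For m = 2n, writing u = x**n the distance is the maximum of u - u**2, i.e. 1/4,
# so no subsequence of (g_n) can be Cauchy in the sup norm.
for n in (1, 3, 10):
    print(n, round(sup_distance(n, 2 * n), 4))
```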
Riesz’s theorem
The non-compactness of \(A=C([0,1],\mathbb R)\) is not so strange. Based on Riesz’s lemma one can show that the closed unit ball of an infinite-dimensional normed space \(X\) is never compact. This is sometimes known as Riesz’s theorem.
The non-compactness of \(A=C([0,1],\mathbb R)\) is just standard for infinite-dimensional normed vector spaces!
Integrable Days 65th birthday celebration for Alexander P. Veselov
• M. Feigin , Glasgow
• E. Ferapontov , Loughborough
• V. Novikov , Loughborough
This is the webpage for the Integrable Days, a two-day workshop to celebrate the 65th birthday of Alexander Veselov.
Integrable Days are a part of the ‘Classical and Quantum Integrability’ collaborative workshop series, involving the universities of Glasgow, Edinburgh, Heriot-Watt, Leeds, Loughborough and Northumbria, and supported by the London Mathematical Society.
This year’s meeting will be an occasion to celebrate the 65th birthday of our friend, colleague and teacher Alexander P. Veselov, who initiated the Integrable Days at Loughborough University, which have run annually in November since 1996.
Links to recordings can be found here.
This event was kindly funded by the London Mathematical Society.
Sophie Morier-Genoud (Paris) - q-analogues of real numbers
In recent joint work with Valentin Ovsienko we defined q-analogues of real numbers. Our construction is based on a q-deformation of the Farey graph. I will explain the construction and give the main
properties. In particular I will mention links with the combinatorics of posets, cluster algebras, Jones polynomials...
This talk was not recorded
Giovanni Felder (Zurich) - The integrable Boltzmann system
Ludwig Boltzmann, in his search for an example of a chaotic dynamical system, studied the planar motion of a particle subject to a central force bouncing elastically at a line. As recently noticed by
Gallavotti and Jauslin, the system is actually integrable if the force has an inverse-square law. I will review the construction of the second integral of motion and present the results: the orbits
of the Poincaré map are periodic or quasi-periodic and anisochronous, so that KAM perturbation theory (Moser's theorem) applies, implying that for small perturbations of the inverse-square law the
system is still not chaotic. The proof relies on mapping the Poincaré map to a translation by an element of an elliptic curve. A corollary is the Poncelet property: if an orbit is periodic for given
generic values of the integrals of motions then all orbits for these values are periodic.
Rod Halburd (UCL) - Variants of the Painlevé property and integrable subsystems
We use global results about functions that are meromorphic in regions of the plane to find individual solutions of differential, difference and delay-differential equations whose only movable
singularities are poles. We also allow for simple global branching. In this way we can find or describe subsets of solutions of equations that are in general non-integrable.
Vsevolod Adler (Moscow) - Stationary solutions of non-autonomous symmetries of integrable equations
The talk is about some recent results on Painlevé-type reductions for integrable equations. My first example is a reduction obtained as a stationary equation for the master-symmetry of the KdV equation. It is equivalent to some fourth-order ODE, and numerical experiments show that some of its special solutions may be related to the Gurevich–Pitaevskii problem on the decay of an initial discontinuity. The second example is about non-Abelian Volterra lattices. Here we study several low-order reductions and demonstrate their relation with non-Abelian analogues of discrete and continuous Painlevé equations.
Anna Felikson (Durham) - Mutations of non-integer quivers: finite mutation type
Given a skew-symmetric non-integer (real) matrix, one can construct a quiver with non-integer weights of arrows. Such a quiver can be mutated according to usual rules of quiver mutation introduced
within the theory of cluster algebras by Fomin and Zelevinsky. We classify non-integer quivers of finite mutation type and prove that all of them admit some geometric interpretation (either related
to orbifolds or to reflection groups). In particular, the reflection group construction gives rise to the notion of non-integer quivers of finite and affine types. We also study exchange graphs of
quivers of finite and affine types in rank 3. The talk is based on joint works with Pavel Tumarkin and Philipp Lampe.
Oleg Chalykh (Leeds) - Twisted Ruijsenaars model
The quantum Ruijsenaars model is a q-analogue of the Calogero—Moser model, described by n commuting partial difference operators (quantum hamiltonians) h_1, …, h_n. As it turns out, for each natural
number l>1 there exists another integrable system whose quantum hamiltonians have the same leading terms as the l-th powers of h_1, …, h_n. I will discuss several ways of arriving at this
Two Illustrations of "Is g the same as f?"
Here are two illustrations for how one might want to check to see if g is the same as f. The attached file is a Maple 10 worksheet.
=== Content of Worksheet follows [MaplePrimes moderator]
For the original question, see this forum posting.
Hello, Tom. Newbie? I would hate to pass myself off as an old hand in the august company of these Maple experts! On the other hand, I have been doing this a long time, haven't I?
Here is what my question was all about. I will give an easy example first.
Suppose a newbie (even to me) is asked to solve the simple ordinary differential equation y' + Pi*y = 0, with y(0) = 1. He offers this as a solution:
> y:=t->exp(-3.14*t);
But, having been admonished to always check his work, he does this:
> diff(y(t),t)+Pi*y(t);
Of course, a newbie that is an aspiring engineer might say that this is industrial grade zero.
Upon seeing this response, among other things, I would ask that the graph of this last be drawn.
> plot(-3.14*exp(-3.14*t)+Pi*exp(-3.14*t),t=0..1);
At least now, there is an estimate of how much error there is over the interval [0,1].
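For readers without Maple, the same error estimate can be reproduced in a few lines of Python (our own sketch, not part of the original worksheet):

```python
import math

# Residual of the approximate solution y(t) = exp(-3.14 t) plugged into
# y' + pi*y = 0; it equals (pi - 3.14) * exp(-3.14 t), largest at t = 0.
def residual(t):
    return -3.14 * math.exp(-3.14 * t) + math.pi * math.exp(-3.14 * t)

max_res = max(residual(i / 1000) for i in range(1001))
print(max_res)  # about 0.0016 -- "industrial grade zero"
```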
> restart;
A year or so later, and with much more experience, this potential engineer takes my PDE class. First thing you know, there is a discussion of the heat equation on a disk. If the solutions are independent of the angular variable, Bessel's equation comes up. The student might go looking for the zeros of the Bessel functions. (From the perspective of the mathematics involved, this is comparable to approximating with 3.14 in the
above simple problem.) For simplicity, let's compute only 10 of the Bessel zeros.
> for n from 1 to 10 do
If the initial distribution was 1-r^2, the coefficients A[n] would be computed.
> for n from 1 to 10 do
Then, a potential solution for the equation would be created.
> u:=(t,r)->sum(A[n]*BesselJ(0,zero[n]*r)*exp(-zero[n]^2*t),n=1..10):
Having been told to check and check and check solutions, the student does.
First, the initial value can be checked. HERE IS THE ISSUE: is u(0,r) the same as 1-r^2?
I draw graphs, off-setting u(0,r) a little so that you can tell them apart.
> plot([1-r^2,u(0,r)+0.01],r=0..1,color=[red,green]);
HERE IS THE ISSUE, again. Is the following difference the same as zero? This is a check of the PDE.
> simplify(diff(r*diff(u(t,r),r),r)/r-diff(u(t,r),t));
I think no amount of simplify, combine, collect, ... will make that zero and I think I know why. So, I draw a graph of this difference.
> plot3d(%,r=0..1,t=0..1,axes=normal,orientation=[-35,70]);
Even I agree that this graph is a graph of industrial grade zero!
Of course, if the student had used the exact zeros of the Bessel functions, things would have worked differently.
> v:=(t,r)->sum(A[n]*BesselJ(0,BesselJZeros(0,n)*r)*
> simplify(diff(r*diff(v(t,r),r),r)/r-diff(v(t,r),t));
And, the graph of the solution could be drawn and animated.
> plots[animate3d]([r,theta,v(t,r)],r=0..1,theta=-Pi..Pi,t=0..1,
Download the original worksheet
Lahar events in the last 2000 years from Vesuvius eruptions – Part 3: Hazard assessment over the Campanian Plain
Articles | Volume 15, issue 4
© Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.
In this study we present a novel general methodology for probabilistic volcanic hazard assessment (PVHA) for lahars. We apply the methodology to perform a probabilistic assessment in the Campanian
Plain (southern Italy), focusing on syn-eruptive lahars from a reference size eruption from Somma–Vesuvius. We take advantage of new field data relative to volcaniclastic flow deposits in the target
region (Di Vito et al., 2024b) and recent improvements in modelling lahars (de' Michieli Vitturi et al., 2024). The former allowed defining proper probability density functions for the parameters
related to the flow initial conditions, and the latter allowed computationally faster model runs. In this way, we are able to explore the effects of uncertainty in the initial flow conditions on the
invasion of lahars in the target area by sampling coherent sets of values for the input model parameters and performing a large number of simulations. We also account for the uncertainty in the
position of lahar generation by running the analysis on 11 different catchments threatening the Campanian Plain. The post-processing of the simulation outputs led to the production of hazard curves
for the maximum flow thickness reached on a grid of points covering the Campanian Plain. By cutting the hazard curves at selected threshold values, we produce a portfolio of hazard maps and
probability maps for the maximum flow thickness. We also produce hazard surface and probability maps for the simultaneous exceeding of pairs of thresholds in flow thickness and dynamic pressure. The
latter hazard products represent, on one hand, a novel product in PVHA for lahars and, on the other hand, a useful means of impact assessment by assigning a probability to the occurrence of lahars
that simultaneously have a relevant flow thickness and large dynamic pressure.
Received: 19 Jun 2023 – Discussion started: 18 Jul 2023 – Revised: 14 Nov 2023 – Accepted: 26 Dec 2023 – Published: 03 Apr 2024
Lahars are flows of water and entrained sediments that originate from the remobilization of volcaniclastic deposits by water, either from rain, melting ice or snow, or a sudden release by a crater
lake (see the companion paper by de' Michieli Vitturi et al., 2024, and references therein). They represent one of the processes causing the highest death toll among volcanic phenomena. According to
the analysis by Auker et al. (2013), syn-eruptive lahars are responsible for about 14% of the fatalities in their database, whereas post-eruptive or inter-eruptive lahars cause another 3%. Among
the most tragic episodes of lahar impact, we recall the lahar generated from Nevado del Ruiz, which buried the city of Armero, killing about 23,000 people, making it the second-worst volcanic
disaster of the 20th century (Pierson et al., 1990; Voight et al., 2013; Parra and Cepeda, 1990). Other examples include the series of lahars that hit the surroundings of the Pinatubo volcano in the
years after the 1991 eruption (Pierson et al., 1996; Umbal and Rodolfo, 1996; Rodolfo et al., 1996), the Tangiwai disaster at Ruapehu (Manville, 2004) and the lahars from the 2008–2009 Chaitén
eruption (Pierson et al., 2013).
A simplified method commonly used so far to describe lahar impact is the LAHARZ model (Schilling, 1998). It is based on the statistical correlation between the inundated area and the mass flow volume
inferred from past events. However, in recent years, a few examples of probabilistic hazard assessment for lahars based on more robust statistical treatments, like statistical surrogates or emulation
approaches, have been proposed for different volcanoes worldwide, such as Mead and Magill (2017) on Ruapehu (New Zealand), Tierz et al. (2017) on Vesuvius (Italy), and Gattuso et al. (2021) on
Vulcano (Italy).
Hazard assessment for lahars needs to consider (i) the identification of potential source regions for volcanic material and for water, including snowcaps and glaciers; (ii) the potential magnitude
and characteristics of the flow; (iii) the topography between the source region and the potential targets at risk; (iv) the potential for modification of the flow properties along the path; (v) the
frequency of such events in the past; and (vi) the meteorological data at the source region and along the potential path of such flows, especially for extreme events.
As explained in the companion paper by de' Michieli Vitturi et al. (2024), lahars can change character downstream through processes of flow bulking and debulking, generating high variability in both
time (i.e. unsteadiness) and space (i.e. non-uniformity) for variables pivotal for hazard assessment, such as particle concentration, granulometry and componentry, bulk rheology, and velocity. The
full complexities associated with these processes prevent us from effectively modelling these flows for quantitative hazard assessment purposes. At present, even if we could describe all the
underlying physics, a full 3D simulation of all these phenomena would require prohibitive computational costs, and current numerical models describe only some of the observed phenomena or use
simplified approaches (see the companion paper by de' Michieli Vitturi et al., 2024). In terms of numerical modelling, a good compromise between the completeness of the physics behind these phenomena
and the computational feasibility is represented by the shallow water approach (de' Michieli Vitturi et al., 2024), where model complexity is reduced with a depth averaging of flow properties. This
approach approximates the original 3D problem with a 2D model, and it is the one we apply in this work.
In the Campania region, which is largely covered by fallout and pyroclastic density current (PDC) deposits from eruptions of Somma–Vesuvius, Campi Flegrei, and Ischia volcanoes, the signature of
several syn- and post-eruptive lahars has been found in the geological record (Di Vito et al., 2024b; Sulpizio et al., 2006; Zanchetta et al., 2004a, b). Furthermore, detailed lists of documented
lahars in the 20th century are available in the literature (Fiorillo and Wilson, 2004). Despite such evidence, up to now most of the probabilistic volcanic hazard assessments (PVHAs) for this region
have mainly focused on PDCs (e.g. Neri et al., 2008, 2015; Gurioli et al., 2010; Sandri et al., 2018; Tierz et al., 2018) and tephra fallout (e.g. Costa et al., 2009; Selva et al., 2010, 2018; Sandri
et al., 2016; Massaro et al., 2023), while systematic quantitative hazard assessments from lahars (see, for example, Jenkins et al., 2022) have been lacking. An exception is provided by Tierz et al.
(2017), who applied a Bayesian belief network to assess the effect of different factors (linked to rainfall intensity and volcanoclastic volume) on the probability of different initial volumes of
lahars. However, that study did not explore the variability in the hazard assessment related to the initial flow conditions (mostly linked to the flow volume, detachment area, and volumetric solid fraction).
PVHA for lahars requires performing a high number of simulations in order to enable a quantification of the uncertainty linked to model parameters.
Recent technical improvements (e.g. code parallelization) and generalizations (e.g. description of erosion and deposition during the flow) of lahar models, such as that implemented in the most recent
version of the IMEX_SfloW2D model described in the companion paper by de' Michieli Vitturi et al. (2024), permit hundreds of simulations from different catchments, with different initial and boundary
conditions in reasonable times, necessary for characterizing the intrinsic variability and for the production of hazard maps.
Following several surveying campaigns carried out to characterize lahar deposits in natural exposures, archaeological excavations, and ad hoc trenches in the plain surrounding the Vesuvius edifice
and along the Apennine valleys, Di Vito et al. (2024b) present the results of a multidisciplinary study which shows the presence of volcaniclastic deposits (mostly debris and mud flows but also from
hyperconcentrated flood flows) even in areas very far from both the Apennine hills and the valleys of Somma–Vesuvius, demonstrating the high mobility of these flows. In particular, Di Vito et al.
(2024b) focused on the analysis of the syn- and post-eruptive lahar deposits generated by the two sub-Plinian eruptions of Vesuvius in 472CE and 1631. Thicknesses, sedimentological features (lithic
content, pumice provenance, grain size, boulder entrainment), and vertical and/or lateral continuity of the deposits were reported during the campaigns in order to establish characteristic facies
(massive to structured, poorly sorted to better-sorted with respect to the primary pyroclastic deposits and topography) and general flow dynamics (velocity, dynamic pressure, thickness) of those
volcaniclastic systems. Results show that the inclusion of fine ash in the whole deposit distribution, the depositional mechanism of the primary pyroclastic deposits (fallout vs. current), and the
large-scale topographic effects (plain vs. valley) are the main geological features affecting the size and style of the remobilization that occurred for the two eruptions (Pollena and 1631).
In this work, we take advantage of these new field data analyses and recent improvements in modelling lahar flows (de' Michieli Vitturi et al., 2024) to explore the effect of uncertainty in the flow
initial conditions on the invasion of lahars in the Campanian Plain (Fig. 1a) by sampling coherent sets of values for the input model parameters and subsequently performing a considerable number of
lahar simulations (1100 in total) needed for the production of hazard maps.
We present a novel general methodology for PVHA for lahar flows but we focus on syn-eruptive lahars, conditional on the occurrence of a reference eruption for Somma–Vesuvius (the medium-magnitude
scenario by Cioni et al., 2008; Macedonio et al., 2008; Sandri et al., 2016).
In particular, we account for the following:
• 11 different catchments (Fig. 1b) where lahars could originate and impact the target area of the Campanian Plain from both the Somma–Vesuvius edifice and from the Apennines sectors to the east
and south;
• deposits from PDCs (mostly on the Somma–Vesuvius catchments) and from tephra fallout (on the Apennine catchments) from the reference eruption of Somma–Vesuvius;
• the maximum expected rainfall in a few days, taken to be of the order of 500 mm, as extracted from the rainfall record in the last 70 years in the Campania Region (Fiorillo and Wilson, 2004), as
well as coherence among the initial values of the flow (initial thickness, detachment area, and volumetric solid fraction), the deposit porosity (water-saturated), and the available water from
rainfall and volcaniclastic sediments from the reference eruption. In order to do so, we build up a strategy to sample the input model parameters, which will be illustrated in Sect. 3.1.
The first and last points, in particular, allow us to explore the uncertainty in the position of lahar generation and in the initial flow conditions (in terms of area of detachment, initial volume,
volumetric solid fraction).
The goal of the study is to quantify the conditional probability of invasion by at least one lahar originating from the remobilization of tephra deposits due to the reference eruption at
Somma–Vesuvius (Sandri et al., 2016). In order to provide useful results for future quantification of the impact of syn-eruptive lahars, we express our results in terms of exceedance probability for
selected thresholds of two variables pivotal for hazard assessment, such as maximum flow thickness and dynamic pressure. The domain of interest, covering the Campanian Plain (see Fig. 1a), was
discretized for computational reasons on a 50 m × 50 m grid. This resolution represents a good compromise between solution accuracy and computational time required for a simulation (see companion
paper, de' Michieli Vitturi et al., 2024). As we mentioned above, we also compute the exceedance probability of selected pairs of critical thresholds in dynamic pressure and flow thickness that
jointly are key parameters for lahar impact assessment (e.g. Wilson et al., 2014). In other words, we take into account the flow “history” in every target grid point, and we compute the probability
of the flow to simultaneously exceed (in at least one time step of the simulated flow) two values of flow thickness and dynamic pressure, repeating this for several pairs of values.
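As a schematic illustration of this joint-exceedance criterion (our own sketch, not the actual post-processing code of the paper), for a single grid point one checks whether both thresholds are exceeded at the same time step of the simulated flow:

```python
def jointly_exceeds(thickness, pressure, h_thr, p_thr):
    """True if both thresholds are exceeded at the same time step at least once."""
    return any(h > h_thr and p > p_thr for h, p in zip(thickness, pressure))

# Thickness peaks early, pressure peaks late: each threshold is exceeded
# separately, but never simultaneously for the pair (1.0, 1.0).
h = [0.2, 1.5, 0.8, 0.1]
p = [0.1, 0.3, 2.0, 0.5]
print(jointly_exceeds(h, p, 1.0, 1.0))  # False
print(jointly_exceeds(h, p, 0.5, 0.2))  # True
```

Repeating this check across simulations and threshold pairs yields the joint probability maps described above.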
The paper is organized as follows: first we very briefly summarize the geological information and the features of the model; second, we present the method used for the PVHA, and then we show the
results as maps. Finally, we present a discussion and conclusion in order to highlight the main achievements and the current limitations, which will be addressed in future works.
2 Field data and transport model
The code developed by generalizing the IMEX_SfloW2D model for describing lahar flows, described in the companion paper by de' Michieli Vitturi et al. (2024), was calibrated using the field data
presented by Di Vito et al. (2024b). The field data were used also to define the available initial deposit from a reference eruption from Somma–Vesuvius in the different catchments from PDC and
tephra fallout deposits.
2.1 Remobilizable PDCs and tephra deposits
In this study, for hazard assessment purposes we considered a reference eruption belonging to the medium-magnitude scenario (MMS; Sandri et al., 2016). The initial volumes that can be remobilized as
lahars come from PDC and tephra fallout deposits as well as from the available water (rain in our case). PDC deposits are more relevant for Somma–Vesuvius catchments, which are proximal to the
source; tephra fall deposits, dispersed by the local wind fields, are mostly relevant for the Apennine catchments, where PDCs from an MMS eruption do not leave any appreciable deposit.
As regards the PDC deposits, we used the field data from the most recent sub-Plinian eruptions, which are the 472 CE (Pollena; Sulpizio et al., 2005) and the 1631 CE (Rosi et al., 1993) eruptions.
Cautiously, in each grid cell of a given catchment we considered the maximum thickness between these two PDC events to be available PDC deposits, as mapped by Gurioli et al. (2010).
As regards tephra fall deposits, we rely on the results of the simulations presented in Sandri et al. (2016), where 1000 simulations were performed for the MMS considering variability of the
meteorological conditions and eruption source parameters (ESPs: total erupted mass, mass eruption rate, total grain size distribution). Specifically, we randomly sampled (without repetition) 100 fallout deposits from the 1000 available. Hence we used those deposit distributions for the simulations performed with the generalized IMEX_SfloW2D model.
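Drawing 100 of the 1000 available fallout deposits without repetition is a plain random draw without replacement; in Python, for instance (seed chosen arbitrarily):

```python
import random

# Illustrative only: draw 100 distinct deposit indices out of the 1000
# available MMS fallout simulations.
rng = random.Random(0)
chosen = rng.sample(range(1000), k=100)
print(len(chosen), len(set(chosen)))  # 100 distinct indices
```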
2.2 Lahar runout
A considerable set of lahar runout estimations inferred from the deposits in the Campanian Plain (see Fig. 2), associated with the 472 and 1631 eruptions (Di Vito et al., 2024b), was used to
calibrate the empirical parameters needed for friction, erosion, and deposition terms (de' Michieli Vitturi et al., 2024).
2.3 Grain size distribution
The grain size distribution (GSD) is another critical input for modelling lahar transport (de' Michieli Vitturi et al., 2024). For this study we used those reconstructed by Di Vito et al. (2024b)
based on the field data from the catchments around Somma–Vesuvius (catchments 1 to 6 in Fig. 1b) integrated with those from Pozzelle quarry (Sulpizio et al., 2006) and from Vallo di Lauro (at the
base of catchment 8 in Fig. 1b). In particular, both types of GSDs have been reconstructed by fitting a mixture of two Weibull distributions and then averaging the values of the local grain sizes
sampled in the field at the base of these catchments separately. The reconstructed GSDs are reported in Fig. 3: for the Apennine catchments (7–11 in Fig. 1b) we used the GSD reconstructed from the
field data taken in Vallo di Lauro (Fig. 3b), while for the Vesuvian catchments (1–6 in Fig. 1b) we obviously use the GSD reported in Fig. 3a and reconstructed from field data taken there.
Finally, in order to assess the effect of the uncertainty in the reconstructed GSDs we performed a sensitivity test (see Sect. S1 in the Supplement) of the weight of the coarse and fine populations
on the simulated deposits, showing that it is not very critical in terms of simulated deposit, maximum thickness, and dynamic pressure 24 h after the flow mobilization.
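A two-component Weibull mixture of the kind used to fit the GSDs can be sketched as follows (shape and scale values here are purely illustrative, not the fitted parameters of Di Vito et al., 2024b):

```python
import math

def weibull_pdf(x, shape, scale):
    """Weibull probability density for x > 0."""
    return (shape / scale) * (x / scale) ** (shape - 1.0) * math.exp(-((x / scale) ** shape))

def gsd_mixture(x, w, coarse, fine):
    """w = weight of the coarse population, (1 - w) of the fine one."""
    return w * weibull_pdf(x, *coarse) + (1.0 - w) * weibull_pdf(x, *fine)

# Crude check that the mixture integrates to ~1 (Riemann sum over (0, 50]).
dx = 0.001
total = sum(gsd_mixture(i * dx, 0.4, (2.0, 1.0), (1.5, 5.0)) * dx for i in range(1, 50_001))
print(round(total, 3))
```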
2.4 Digital elevation model
For a correct modelling of the areas invaded by lahars it is necessary to use a digital elevation model (DEM) as accurate as possible. To this end we used a digital terrain model (DTM) derived from
an airborne lidar survey of 2012 combined with the TIN Italy topography (Tarquini et al., 2007) in a portion of the sub-Apennine areas where the lidar data were not available. The lidar data were
provided by the Italian Ministero dell'Ambiente e della Tutela del Territorio e del Mare (MATTM) through a series of ASCII files storing the elevation data in the latitude–longitude WGS84 reference
system. The tiles, each covering 1 km², have been processed to create a single elevation matrix georeferenced in the WGS84-UTM-Zone 33N geodetic cartographic system and storing the elevation data at 32 bits. The obtained matrix, with a spatial resolution of 1 m and vertical accuracy <30 cm (Pizzimenti et al., 2016), was cleaned of residual anthropic or artificial features and subsequently resampled at a 10 m cell size in order to be combined with the TIN Italy model. The resulting matrix (4129 × 5088 cells), covering 1600 km², was used as the topography model for the area (Fig. 1c) for the simulations, discretized at a computational grid of 50 m × 50 m, which was tested by de' Michieli Vitturi et al. (2024) as a good resolution able to reproduce the main features of the flow in
reasonable computational times.
3.1 Sampling strategy
In order to explore the natural variability in the processes governing lahar initiation, we first identify the key independent parameters. Given a catchment, the key parameters are the initial flow volume and the initial solid volumetric fraction (α_s); the first depends on the initial area of lahar detachment, on the thickness of the available remobilizable deposits, and on the available water from both rain or other external water sources and the water content filling soil pores.
The initial value of α_s is very variable and hard to assess; thus, we sample it uniformly in the range [0.10–0.60], which represents general limits for a lahar, encompassing a wide range of flows from debris flows (solid concentration α_s > 0.5) to hyperconcentrated flows (α_s = 0.5–0.1) to muddy streamflows (α_s < 0.1) (Vallance and Iverson, 2015; Neglia et al., 2022).
Volcanoclastic deposits from Vesuvius are characterized by an effective porosity (α_d) that has been estimated as 0.37±0.10 (Di Vito et al., 2024b). Thus, α_d is sampled from a Gaussian distribution with a mean of 0.37 and a standard deviation of 0.03 so that over 99% of the sampled values fall within the above estimate.
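The two samplings just described can be written down directly (a minimal sketch; the function name and the seed are ours, parameter values from the text):

```python
import random

def sample_solid_fraction_and_porosity(rng):
    alpha_s = rng.uniform(0.10, 0.60)   # initial solid volumetric fraction, uniform
    alpha_d = rng.gauss(0.37, 0.03)     # deposit effective porosity, Gaussian
    return alpha_s, alpha_d

rng = random.Random(1)
draws = [sample_solid_fraction_and_porosity(rng) for _ in range(1000)]
# Nearly all porosity draws fall inside the field estimate 0.37 +/- 0.10.
inside = sum(1 for _, a_d in draws if abs(a_d - 0.37) <= 0.10) / len(draws)
print(inside)
```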
To identify the possible initial area of lahar formation in a given catchment, we first select the grid points (Fig. 1a, b) falling within the catchment. Then, we rely on three empirical assumptions
based on field evidence (Bisson et al., 2014): (i) only grid cells that are “steep enough”, i.e. having a slope larger than 20–30°, can be the site of remobilization. (ii) The steeper a cell, the
more likely its deposits will be remobilized; however, (iii) on very steep cells (slope>40°), deposits tend not to accumulate, and thus we assume no deposit is available for remobilization on such
grid cells.
In order to define the potential initial area of lahar generation accounting for points (i) to (iii) above, we assume that remobilization can occur in a grid cell if it is steeper than a given δ_min and less steep than 40°. Then we distinguish between Somma–Vesuvius catchments, where most of the deposits are fine-grained PDC deposits and are thus easier to remobilize, and Apennine catchments, where deposits are coarser-grained fallout layers, more difficult to remobilize. As pointed out by Pierson et al. (2013), the thickness of fine ash is also important because it can prolong low infiltration capacity and a high runoff rate. To reflect this feature, for each catchment and simulation, the slope threshold δ_min is sampled randomly from a triangular distribution, independently for Somma–Vesuvius and Apennine catchments. For Somma–Vesuvius we considered a lower-bound range for δ_min of [20–30]° and for the Apennines [20–35]°. In this way, steeper cells are much more likely to be the site of remobilization (to reflect point ii); remobilization can never occur at slopes lower than 20° (to reflect point i) and definitely occurs above 30 and 35°, respectively (to reflect the difference in the types of deposits). These values are in agreement with Pierson et al. (2013).
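A possible implementation of this threshold sampling (function names are ours; the text fixes only the ranges, so the symmetric-mode choice of the triangular distribution below is our assumption):

```python
import random

def sample_delta_min(rng, apennine):
    """Slope threshold (degrees) below which no remobilization occurs."""
    low, high = (20.0, 35.0) if apennine else (20.0, 30.0)
    return rng.triangular(low, high)  # mode defaults to the midpoint

def cell_can_remobilize(slope_deg, delta_min):
    # Remobilization only between delta_min and the 40-degree upper limit.
    return delta_min <= slope_deg < 40.0

rng = random.Random(7)
d = sample_delta_min(rng, apennine=False)
print(20.0 <= d <= 30.0, cell_can_remobilize(45.0, d))  # True False
```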
Having defined the initial solid volumetric fraction and the area of lahar initiation, we can univocally define the initial flow volume by taking into consideration some physical constraints given by
the availability of solid deposits and of water from pores and from rain in the domain of remobilization. The simulated volumes in each catchment (minimum, maximum, mean, and some percentiles) are
provided in Table 1. As typical conditions during lahar flows, we assume that, when the lahar is triggered, the deposits are already water-saturated and an extra amount of water from previous rain is available (e.g. Fiorillo and Wilson, 2004; Di Vito et al., 2024b). Following de' Michieli Vitturi et al. (2024), the thickness of available compacted deposit (i.e. devoid of the water filling its pores), $\overline{h}_{\mathrm{s}}$, can be bounded as

$$\overline{h}_{\mathrm{s}} \le \min\left\{ h_{\mathrm{d}}\left(1-\alpha_{\mathrm{d}}\right),\; \frac{\alpha_{\mathrm{s}}\left(1-\alpha_{\mathrm{d}}\right)}{\left(1-\alpha_{\mathrm{s}}-\alpha_{\mathrm{d}}\right)}\, h_{\mathrm{r}} \right\}, \quad (1)$$
where h[d] is the thickness of the water-saturated deposit, h[r] the column of available rainwater, α[d] the deposit effective porosity, and α[s] the initial solid volumetric fraction. Following the considerations reported in de' Michieli Vitturi et al. (2024), we set h[r] equal to 500mm, a conservative value corresponding to the maximum 2d accumulated rainfall over the Somma–Vesuvius area in the last 70 years, although in principle h[r] could be sampled from a suitable distribution (our fixed value amounts to assuming a Dirac delta distribution).
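Equation (1) translates directly into code; the sketch below (hypothetical function name, illustrative parameter values) returns the binding constraint, whether solid- or water-limited:

```python
def max_deposit_thickness(h_d, h_r, alpha_d, alpha_s):
    """Upper bound on the remobilizable compacted-deposit thickness, Eq. (1).

    h_d     : thickness of the water-saturated deposit [m]
    h_r     : column of available rainwater [m]
    alpha_d : deposit effective porosity
    alpha_s : initial solid volumetric fraction
    """
    solid_limit = h_d * (1.0 - alpha_d)
    water_limit = alpha_s * (1.0 - alpha_d) / (1.0 - alpha_s - alpha_d) * h_r
    return min(solid_limit, water_limit)

# illustrative values: 2 m of saturated deposit, h_r = 0.5 m (500 mm),
# porosity 0.4, solid fraction 0.3 -> here the water supply is the binding limit
h_s_bar = max_deposit_thickness(h_d=2.0, h_r=0.5, alpha_d=0.4, alpha_s=0.3)
```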
3.2 PVHA workflow and combination of the model simulation output
After the sampling of the relevant parameters, for our PVHA we follow the steps of the workflow illustrated in Fig. 4. For each catchment, we run N[s]=100 simulations with the generalized IMEX_SfloW2D model (see purple box in Fig. 4), each with a different set of initial values for the model parameters. The simulations are run over a sub-domain with a resolution of 50m×50m, cut in order to save computational time by avoiding the simulation of negligible flow thicknesses over grid points very distal from a given catchment.
For each source catchment i, we compute the probability (given N[s] simulations) of exceeding a given threshold h[j] in maximum flow thickness in a target grid point x as
$$p_{i,h_j}(x) = \frac{\sum_{k=1}^{N_{\mathrm{s}}} \theta_{ik}}{N_{\mathrm{s}}}, \quad (2)$$
where θ[ik] is 1 if, in the kth simulation from catchment i, the maximum simulated flow thickness in x was larger than h[j] or 0 otherwise. The set of thresholds in flow thickness used is composed of
21 values: from 0.1 to 1m at a step of 0.1m, from 1.5 to 4m at a step of 0.5m, from 6 to 10m at a step of 2m, and the last values at 15 and 20m.
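A vectorized sketch of Eq. (2) and of the threshold set (hypothetical names; assumes the model output has been reduced to the maximum thickness reached at each grid point per simulation):

```python
import numpy as np

# the 21 thresholds in maximum flow thickness [m]
H_THRESHOLDS = np.concatenate([
    np.arange(0.1, 1.05, 0.1),    # 0.1 to 1 m, step 0.1
    np.arange(1.5, 4.05, 0.5),    # 1.5 to 4 m, step 0.5
    np.arange(6.0, 10.05, 2.0),   # 6 to 10 m, step 2
    [15.0, 20.0],
])

def exceedance_prob(max_thickness, thresholds):
    """Eq. (2): per-threshold fraction of the N_s simulations from one
    catchment whose maximum flow thickness at each grid point exceeds h_j.

    max_thickness : (N_s, n_points) simulated maxima
    returns       : (n_thresholds, n_points) probabilities
    """
    theta = max_thickness[None, :, :] > thresholds[:, None, None]
    return theta.mean(axis=1)

# 4 toy simulations at a single grid point
sims = np.array([[0.05], [0.25], [0.45], [1.2]])
p = exceedance_prob(sims, H_THRESHOLDS)   # p[0, 0] = 0.75 for h_j = 0.1 m
```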
Similarly, for each source catchment i, we compute the probability of simultaneously exceeding a pair (h[j],P[l]) of threshold values in flow thickness and dynamic pressure in a target grid point x as

$$p_{i,h_j,P_l}(x) = \frac{\sum_{k=1}^{N_{\mathrm{s}}} \theta_{ik}}{N_{\mathrm{s}}}. \quad (3)$$
Here θ[ik] is 1 if, in the kth simulation for catchment i, the pair (h[j],P[l]) has been overcome in x at least once, or 0 otherwise. In this case the set of threshold pairs in flow thickness and dynamic pressure simultaneously overcome is composed of 36 pairs, consisting of all the possible combinations of the values 0, 0.1, 0.5, 1, 2, and 5m of thickness and 0, 0.5, 1, 2, 5, and 30kPa
for dynamic pressure. The value equal to 0 in one of the two target variables allows computing the probability maps of exceeding the six thresholds in the other variable. In other words, by selecting a threshold value equal to 0 in one variable and a given threshold t[i] in the other variable, we can visualize the probability of a maximum value larger than or equal to t[i] in the latter variable.
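The 36 threshold pairs and the zero-threshold trick can be sketched in the same spirit (hypothetical names; per-simulation maxima are used here as a proxy, whereas the model checks that both thresholds are overcome at the same time step):

```python
import numpy as np
from itertools import product

H_SET = [0.0, 0.1, 0.5, 1.0, 2.0, 5.0]     # thickness thresholds [m]
P_SET = [0.0, 0.5, 1.0, 2.0, 5.0, 30.0]    # dynamic-pressure thresholds [kPa]
PAIRS = list(product(H_SET, P_SET))        # all 36 (h_j, P_l) combinations

def joint_exceedance_prob(h_max, p_max, pairs):
    """Eq. (3) sketch: fraction of simulations in which both members of
    the pair (h_j, P_l) are overcome at a grid point."""
    h_max, p_max = np.asarray(h_max), np.asarray(p_max)
    return {(hj, pl): float(np.mean((h_max > hj) & (p_max > pl)))
            for hj, pl in pairs}

# 4 toy simulations at one grid point
probs = joint_exceedance_prob([0.6, 0.3, 1.5, 0.0], [1.2, 0.4, 6.0, 0.0], PAIRS)
# a zero threshold in one variable, e.g. (0.0, P_l), reduces Eq. (3) to the
# probability of exceeding P_l in dynamic pressure alone
```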
We then combine all the N[c]=11 catchments together by computing
$$p_{h_j}(x) = 1 - \prod_{i=1}^{N_{\mathrm{c}}} \left(1 - p_{i,h_j}(x)\right) \quad (4)$$

$$p_{h_j,P_l}(x) = 1 - \prod_{i=1}^{N_{\mathrm{c}}} \left(1 - p_{i,h_j,P_l}(x)\right), \quad (5)$$
which respectively yield the probability of exceeding the maximum flow thickness h[j] and the probability of simultaneously exceeding the pair (h[j],P[l]) in flow thickness and dynamic pressure in the target grid point x from at least one catchment, given the remobilization of the volcaniclastic deposits from a medium-sized eruption at Somma–Vesuvius.
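Equations (4) and (5) combine the per-catchment probabilities; the product form implicitly assumes independence between catchments. A one-line sketch (hypothetical name):

```python
import numpy as np

def combine_catchments(p_per_catchment):
    """Eqs. (4)-(5): probability of exceedance from at least one of the
    N_c catchments, given the per-catchment probabilities (independence
    between catchments is implicit in the product form)."""
    p = np.asarray(p_per_catchment, dtype=float)
    return 1.0 - np.prod(1.0 - p, axis=0)

# two catchments reaching the same grid point with p = 0.2 and 0.5:
p_any = combine_catchments([0.2, 0.5])   # 1 - 0.8 * 0.5 = 0.6
```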
The ranges in the initial volume of simulated lahars are shown in Table 1. We can see that some catchments are more prone to generating large initial volumes, specifically catchments 1 to 4 (on
Somma–Vesuvius) and 7, 8, and 9 (on the Apennines) which have large maximum and/or mean simulated volumes. The latter catchments are characterized by a large extension and the former by the
availability of a large thickness of deposits from PDCs. The catchments on Somma–Vesuvius and those in the northeastern Apennine section (numbers 7 and 8 in our case) were also identified in Tierz et
al. (2017) as those able to generate larger-initial-volume lahars in the case of a medium-sized eruption. We also notice that in some Apennine catchments (numbers 7 to 11) some simulations do not
have significant deposits from tephra fallout to be remobilized (null initial volumes, probably because we simulated deposits from eruptions under wind fields not directed towards those catchments).
Following the approach described in the previous section, for each target grid point we can build a hazard curve (e.g. Tonini et al., 2015) for the maximum flow thickness from each different
catchment (Eqs. 2 and 3) and one from at least one catchment (Eqs. 4 and 5). As an example, we show these results on the grid point located in San Marzano sul Sarno (S in Fig. 1a), an inhabited town
in the Campanian Plain, in Fig. 5: in Fig. 5b–e we show the hazard curves only relative to the catchments that are able to generate a lahar reaching this target point and do not show the hazard
curves for the other catchments, whereas in Fig. 5f we show the hazard curve from any catchment. These hazard curves can be cut at different thresholds in terms of exceedance probability or thickness
thresholds and mapped on the whole target domain to obtain hazard or probability maps, respectively, such as those in Fig. 6a–k (at 5% exceedance probability) and Fig. 7a–k (for a flow maximum
thickness of 1m).
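The two “cuts” of a hazard curve can be sketched as follows (hypothetical helper names, assuming piecewise-linear interpolation along the discrete hazard curve):

```python
import numpy as np

def probability_map_value(thresholds, p_curve, h):
    """Probability map: exceedance probability at a fixed intensity h
    (e.g. a maximum flow thickness of 1 m)."""
    return float(np.interp(h, thresholds, p_curve))

def hazard_map_value(thresholds, p_curve, p_exc):
    """Hazard map: intensity exceeded with a fixed probability p_exc
    (e.g. 5 %); the curve is non-increasing in h, so flip it for interp."""
    return float(np.interp(p_exc, p_curve[::-1], thresholds[::-1]))

# toy hazard curve at one grid point
h  = np.array([0.1, 0.5, 1.0, 2.0])     # thickness thresholds [m]
pc = np.array([1.0, 0.8, 0.4, 0.0])     # exceedance probabilities

p_at_1m   = probability_map_value(h, pc, 1.0)   # probability map value
h_at_40pc = hazard_map_value(h, pc, 0.4)        # hazard map value [m]
```

Evaluating either helper at every grid point yields the probability and hazard maps, respectively.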
In Figs. 6l and 7l we show the same hazard and probability maps for at least one lahar from any catchment, as derived from Eq. (5).
Similarly to hazard curves, we can think of a “hazard surface” in terms of pairs of threshold values in flow thickness and dynamic pressure simultaneously overcome at every grid point. These hazard surfaces aim at an easier visualization for impact assessment: to evaluate the impact of a lahar on the built environment, it is the joint effect of these two parameters that provides the more relevant measure.
An example for a Vesuvian location (the Casilli train station, point C) is shown in Fig. 8 from specific catchments (Fig. 8b and c) and from any catchment (Fig. 8d). This location has been selected
as it is impacted by more than one catchment, which is not common on the studied domain. By cutting these surfaces at specific pairs of values for the flow thickness and flow dynamic pressure, we
achieve maps showing the probability of simultaneously exceeding the pair of threshold values selected.
In Fig. 9 we show the example for 0.5m of flow thickness and 1kPa in flow dynamic pressure from each different catchment (Fig. 9a–k) and from any catchment, as derived from Eq. (5) (Fig. 9l).
Typically, the output of probabilistic hazard studies is a portfolio of scientific products (different hazard and/or probability maps at different thresholds in terms of exceedance probability and/or
intensity, respectively, corresponding to different average return times). In this respect, the present study is no different, and in Sect. S2 we provide the whole set (apart from those already given
in Figs. 6, 7, and 9) of the following:
• probability maps for the maximum flow thickness at the other 20 thickness thresholds considered (i.e. all except the one in Fig. 7 – Figs. S7 to S26);
• hazard maps for the maximum flow thickness at the exceedance probability thresholds of 1%, 2%, 10%, 50%, and 90% (Figs. S27 to S31); and
• probability maps for the simultaneous exceeding of flow thickness and dynamic pressure threshold pairs, at the other 35 pairs considered (i.e. all except the one in Fig. 9 – Figs. S32 to S65).
The visual inspection of the portfolio of probabilistic maps for lahar maximum thickness shows that, in the case of a reference size eruption at Vesuvius and heavy rain, flows of maximum thickness of
half a metre (e.g. Fig. S11) are to be expected in the southwestern, northwestern, northern, northeastern, and eastern sectors around Vesuvius, having conditional probabilities very close to 1 even
in valleys down to 10–15km from the volcano summit. In the southwestern sector they would likely reach the shoreline, in agreement with Tierz et al. (2017), with decimetric maximum thickness. In
such sectors, the hazard maps (e.g. Fig. S30) tell us that a flow of metric maximum thickness has a 50% chance to be exceeded in the case of a primary lahar from the reference eruption. In the areas
threatened by Apennine catchments, the highest conditional probability for such a scale in maximum flow thickness is found in valleys, especially in the Vallo di Lauro, Sarno, and Castellammare di
Stabia areas, with values up to 50%. This is in agreement with the findings by Tierz et al. (2017), who found metric flow depths when considering lahars triggered from Vallo di Lauro and Avella.
Flows of maximum thickness of the order of 1–2m (e.g. Figs. 7 and S17) have a conditional probability close to 1 only in the bottom of valleys from the northwest to southeast (clockwise) sectors
around Somma–Vesuvius, up to approximately 5km. In the other Vesuvian valleys around the volcano they have 10%–50% conditional probabilities, whereas in the Apennine valleys the maximum
conditional probability is about 30% for this order of flow thickness, again in the Vallo di Lauro, Sarno, and Castellammare di Stabia areas. Flows of maximum thickness of the order of 5m or more
are conditionally unlikely (probability less than 10%) and confined to valley bottoms, at least at the resolution used in this study.
When accounting for the simultaneous exceeding of conjoint thresholds in flow thickness and dynamic pressure, we can see that for the Vesuvian area the maximum conditional probability for flows of
0.5–1m and 0.5–5kPa is found downhill in valley bottoms (e.g. Figs. 9, 10, S45–S47, S50–S53), with the highest values located well below 500m altitude: these areas are densely populated and built up. There, these ranges of conjoint thresholds have a conditional probability close to 1 in the bottom of the valleys. High dynamic pressure values (e.g. 5kPa or larger – Figs. S41, S42,
S46, S47, S52, S53) are overcome only on steep slopes, where presumably the flow speeds up. Their conditional probability is quite low on steep Apennine flanks, where such dynamic pressure thresholds are overcome with very thin flows (of the order of 0.1m, Fig. S42). On steep flanks of Vesuvian catchments 1 to 4, high dynamic pressure flows (5kPa or larger) are more probable than in the
Apennines, still associated with flows of 0.1–0.5m (Figs. S42 and S47). In Di Vito et al. (2024b), a reverse engineering approach is used to invert the occurrence of external clasts (bricks, walls,
limestone fragments, etc.) into the volcaniclastic deposits to estimate the local flow dynamic pressure, velocity, and thickness. For flows with an estimated thickness of 0.5–1m, the most
characteristic range of dynamic pressure is 4–8kPa, corresponding to a representative range of velocity of 2–4ms^−1. This means that the higher the velocity (and dynamic pressure) the higher the
capability of the flow to entrain accidental clasts (or damage infrastructure). Comparing such estimates with the probabilistic maps from the present study, we see that the probability of overcoming these combinations of thresholds in thickness and dynamic pressure (0.5–1m; 2–30kPa, Figs. S46–S48, S52–S54) is non-negligible, being larger than 5% in several grid points. In
particular, the pairs of lower thresholds (e.g. 0.5m and 2kPa, Fig. S46) have a probability larger than 5% in steep valleys throughout the domain and in some locations at the mouth of the
narrowest and steepest valleys (e.g. the Vesuvian ones or in Vallo di Lauro). The pairs of higher thresholds (e.g. 1m and 5kPa, Fig. S53) have a probability larger than 5% only in the steepest
valleys on Vesuvius slopes. As a concluding remark, we can state that the estimated combined values of flow thickness and dynamic pressure from field data in Di Vito et al. (2024b) are well-captured
by our probabilistic maps and do not represent outliers or unlikely values with respect to them.
As mentioned above, the probability maps shown in Figs. S32 to S37 (where the threshold in maximum flow thickness is 0m) in practice show the probability of flows with maximum dynamic pressure
exceeding the six thresholds considered (see Sect. 3.2). It is important to highlight that where high values of dynamic pressure are overcome, it may be due to simulated flows of negligible
thickness: this is the reason why we decided to consider the simultaneous exceeding of non-zero thresholds in thickness and dynamic pressure (e.g. in Figs. 9 and S38–S65). We think that this type of
information could be of great importance when incorporating probabilistic hazard into impact and quantitative risk assessment (e.g. Zuccaro et al., 2008; Zuccaro and De Gregorio, 2013).
Comparing the different catchments, those threatening the largest areas (e.g. the area invaded by at least 10cm thick flows with exceedance probability of 10%, Fig. S29) are the Apennine
sectors 7 (Vallo di Avella), 8 (Vallo di Lauro), and 9 (upward of the towns of Nocera Inferiore, Gragnano, and Castellammare di Stabia, a very densely inhabited area). Overall, the Vesuvian sectors 2
(north), 3 (northeast), and 4 (east) show the largest maximum expected thickness in hazard maps, given an exceedance probability (e.g. Figs. 6, S27–S31), and the largest probability of given maximum
thickness in probability maps (e.g. Figs. 7, S7–S26). In short, the smaller conditional hazard ubiquitously found in the Apennine areas, compared to the Vesuvian ones, is due to the smaller quantity of available sediments to be remobilized (only medio-distal fallout, whereas on Somma–Vesuvius both proximal fallout and PDC deposits are available) and to their coarser grain size: in our simulation scheme, the latter feature is assumed to confer a higher resistance to remobilization.
In a few points (such as S and C, respectively shown in Figs. 5 and 8) the hazard is due to more than one catchment, but this appears to be the exception: overall, most of the inspected domain is threatened by a single catchment.
5 Discussion and conclusions
The present study aims at providing a probabilistic assessment of lahar hazard conditional on the occurrence of a medium-sized reference eruption from Somma–Vesuvius. For the first time, such a
hazard assessment has considered several sources of uncertainty previously overlooked, such as uncertainties in the initial volumes and initial detachment areas of lahars, as well as effects of
erosion and deposition processes. This has been possible thanks to both the model formulation (de' Michieli Vitturi et al., 2024) and the availability of relatively fast computational resources at
INGV. Both factors enabled the simulation, in a reasonable time, of 100 different scenarios from each of the 11 catchments examined here, totalling a larger-than-ever number of simulations for a
lahar hazard assessment.
Where possible, constraints on the range of parameter values, or on the probabilistic distribution describing relevant parameters, were obtained by comparison with field data (for example, the GSD or the sediment porosity α[d]). For other parameters, only loose constraints were possible, and a maximum-ignorance uniform probability distribution has been used to describe such parameters.
Under the hazard–risk separation principle (Jordan et al., 2014; Marzocchi et al., 2021), we remark that it is the role of volcanologists to quantify uncertain scientific information in a way that
can be used to mitigate risk.
The present study does not tackle the variability related to the different possible eruption sizes at Somma–Vesuvius, as we have focused on the medium-sized eruption only (Sandri et al., 2016), and
we provide the quantification of lahar hazard in the case of such an event. Future research will be devoted to extending the present analysis to other eruptive sizes, especially to larger ones (i.e.
Plinian events similar to Pompeii 79CE or Avellino and Mercato eruptions; see Gurioli et al., 2010), whose lahar hazard potential is significant (Tierz et al., 2017).
However, there are a number of papers in the literature quantifying the probability of a Somma–Vesuvius eruption of medium size similar to the one accounted for in this study (e.g. Marzocchi et al.,
2004; Selva et al., 2022). According to Selva et al. (2022), an estimate of the probability of at least one eruption from Somma–Vesuvius, given the current low-activity period, is about 34% in
50 years (Fig. 6 in Selva et al., 2022), and the conditional probability of a medium-sized event (i.e. a VEI=4) is about 20% (Fig. S7 in Selva et al., 2022). Consequently, a gross estimate of the
probability of an eruption of medium size at Vesuvius is about 7% in 50 years. To our knowledge, the probability of syn-eruptive lahar generation, given an eruption of medium size, has never been
quantified. It may be broadly estimated on the basis of the frequency of lahar triggering observed at analogue volcanoes (Tierz et al., 2019) with similar climates. Such development is beyond the
goals of this study and is foreseen as a future research project.
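The gross estimate quoted above is simply the product of the two probabilities (values from Selva et al., 2022, as cited in the text):

```python
p_eruption_50yr = 0.34          # P(at least one eruption in 50 yr | current low activity)
p_medium_given_eruption = 0.20  # P(medium size, VEI = 4 | eruption)

p_medium_50yr = p_eruption_50yr * p_medium_given_eruption
# about 0.07, i.e. the ~7 % in 50 years quoted in the text
```

Strictly, applying the per-eruption conditional size probability to the "at least one eruption" event is an approximation, which is why the text calls this a gross estimate.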
Finally, we remark that, in the present study, two limitations are related to (i) the number of simulations performed from each catchment (100) and (ii) the assumption of available rainfall, which we
have fixed at 50cm. As for the former limitation, it is mainly dictated by the availability of computational resources. Such a number has been chosen to carry out the simulations in a reasonable
time, and it may be increased in the future as better-performing codes and resources become available. At the same time, the exploration of lahar hazard in the case of other possible eruptive size classes
from Somma–Vesuvius (Sandri et al., 2016) could be examined in future developments of this work. As for the latter limitation, we have also inferred that possible rain due to condensation of magma-exsolved water vapour in the umbrella region, which we have roughly estimated at an extra 5 to 10cm of water, does not significantly increase such an upper limit. We analysed the data from the
rain gauge operating at the historical building of the Osservatorio Vesuviano on Mt. Vesuvius since 1940 (Ricciardi et al., 2007), finding that this amount represents approximately the maximum
recorded accumulated rain over 2d since about 1950 in the Campanian region. This value is similar to the maximum rainfall among the episodes of lahars reported by Fiorillo and Wilson (2004). In this
perspective, this limit has been taken as conservative. In fact, it implies that the simulated lahars have larger initial volumes, since more water is available, compared to cases with a smaller
amount of rainfall. However, due to climate change, the 2d intensity of rain may be larger over this area in the coming decades. Furthermore, the comparison of the 2d accumulated rainfall at
Vesuvius with the occurrence of lahar cases in Campania shows very little correlation in time (Cantelli, 2021). Following these considerations, we acknowledge that relaxing this assumption, on the
one hand, would allow simulating more frequent and smaller-sized lahars that appear to have occurred in the last 50 years even with smaller rainfall intensity (e.g. the Sarno event in 1998) and, on
the other hand, would allow accounting for potentially larger rainfall intensities expected in the future by ongoing climate change, which is the subject of future work.
The sources of information are reported in the reference list. The data used in this study are all published data. In particular, the data from Di Vito et al. (2024b) compared in the Results section
with our probability evaluations are available at https://doi.org/10.5281/zenodo.10814860 (Di Vito et al., 2024a).
The original data generated in this work (probability maps) are publicly available at https://doi.org/10.5281/zenodo.10794183 (Sandri, 2024).
Conceptualization: LS, AC, MdMV, MADV. Data curation: AC, LS, MdMV, MADV, DMD, IR, MB, RG, RS, SDV. Formal analysis: LS, AC, MdMV. Funding acquisition: MADV, AC, LS. Methodology: LS, AC, MdMV.
Manuscript curation: LS, AC.
The contact author has declared that none of the authors has any competing interests.
The paper does not necessarily represent DPC official opinion and policies.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation
in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors.
We thank the colleagues Daniela De Gregorio, Stefano Nardone, and Giulio Zuccaro from the PLINIVS Centre at Universita' di Napoli Federico II (Naples, Italy) for the helpful discussions on lahar
impact parameters, as well as Ylenia Cantelli.
We also wish to express gratitude for the invaluable work of two anonymous reviewers and of the editor Virginie Pinel that improved the quality of the paper.
This research has been supported by the 2012–2021 agreement between Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the Italian Presidenza del Consiglio dei Ministri, Dipartimento della
Protezione Civile (DPC), Convenzione B2.
This paper was edited by Virginie Pinel and reviewed by two anonymous referees.
Auker, M. R., Sparks, R. S. J., Siebert, L., Crosweller, H. S., and Ewert, J.: A statistical analysis of the global historical volcanic fatalities record, J. Appl. Volcanol., 2, 2, https://doi.org/
10.1186/2191-5040-2-2, 2013.
Bisson, M., Spinetti, C., and Sulpizio, R.: Volcaniclastic flow hazard zonation in the Sub-Apennine Vesuvian area using GIS and remote sensing, Geosphere, 10, 1419–1431, 2014.
Cantelli, Y.: Analisi statistica degli eventi di precipitazione associati alla generazione di lahar nell'area del Vesuvio, Master Thesis, Alma Mater Studiorum – Università di Bologna, 2021 (in Italian).
Cioni, R., Bertagnini, A., Santacroce, R., and Andronico, D.: Explosive activity and eruption scenarios at Somma-Vesuvius (Italy): towards a new classification scheme, J. Volcanol. Geoth. Res., 178,
331–346, https://doi.org/10.1016/j.jvolgeores.2008.04.024, 2008.
Costa, A., Dell'Erba, F., Di Vito, M. A., Isaia, R., Macedonio, G., Orsi, G., and Pfeiffer, T.: Tephra fallout hazard assessment at the Campi Flegrei caldera (Italy), Bull. Volcanol., 71, 259–273,
de' Michieli Vitturi, M.: demichie/IMEX_SfloW2D_v2, Zenodo [code], https://doi.org/10.5281/zenodo.10639237, 2024.
de' Michieli Vitturi, M., Costa, A., Di Vito, M. A., Sandri, L., and Doronzo, D. M.: Lahar events in the last 2000 years from Vesuvius eruptions – Part 2: Formulation and validation of a
computational model based on a shallow layer approach, Solid Earth, 15, 437–458, https://doi.org/10.5194/se-15-437-2024, 2024.
Di Vito, M. A., de Vita, S., Doronzo, D. M., Bisson, M., Di Vito, M. A., Rucco, I., and Zanella, E.: Field data collected from pyroclastic and lahar deposits of the 472 AD (Pollena) and 1631 Vesuvius
eruptions, Zenodo [data set], https://doi.org/10.5281/zenodo.10814860, 2024a.
Di Vito, M. A., Rucco, I., de Vita, S., Doronzo, D. M., Bisson, M., de' Michieli Vitturi, M., Rosi, M., Sandri, L., Zanchetta, G., Zanella, E., and Costa, A.: Lahar events in the last 2000 years from
Vesuvius eruptions – Part 1: Distribution and impact on densely inhabited territory estimated from field data analysis, Solid Earth, 15, 405–436, https://doi.org/10.5194/se-15-405-2024, 2024b.
Fiorillo, F. and Wilson, R. C.: Rainfall induced debris flows in pyroclastic deposits, Campania (southern Italy), Eng. Geol., 75, 263–289, 2004.
Gattuso, A., Bonadonna, C., Frischknecht, C., Cuomo, S., Baumann, V., Pistolesi, M., Biass, S., Arrowsmith, J. R., Moscariello, M., and Rosi, M.: Lahar risk assessment from source identification to
potential impact analysis: the case of Vulcano Island, Italy, J. Appl. Volcanol., 10, 9, https://doi.org/10.1186/s13617-021-00107-6, 2021.
Gurioli, L., Sulpizio, R., Cioni, R., Sbrana, A., Santacroce, R., Luperini, W., and Andronico, D.: Pyroclastic flow hazard assessment at Somma–Vesuvius based on the geological record, Bull.
Volcanol., 72, 1021–1038, https://doi.org/10.1007/s00445-010-0379-2, 2010.
Jenkins, S. F., Biass, S., Williams, G. T., Hayes, J. L., Tennant, E., Yang, Q., Burgos, V., Meredith, E. S., Lerner, G. A., Syarifuddin, M., and Verolino, A.: Evaluating and ranking Southeast Asia's
exposure to explosive volcanic hazards, Nat. Hazards Earth Syst. Sci., 22, 1233–1265, https://doi.org/10.5194/nhess-22-1233-2022, 2022.
Jordan, T. H., Marzocchi, W., Michael, A., and Gerstenberger, M.: Operational earthquake forecasting can enhance earthquake preparedness, Seismol. Res. Lett., 85, 955–959, 2014.
Macedonio, G., Costa, A., and Folch, A.: Ash fallout scenarios at Vesuvius: Numerical simulations and implications for hazard assessment, J. Volcanol. Geoth. Res., 178, 366–377, 2008.
Manville, V.: Palaeohydraulic analysis of the 1953 Tangiwai lahar: New Zealand's worst volcanic disaster, Acta Vulcanol., 16, 137–151, 2004.
Marzocchi, W., Papale, P., Sandri, L., and Selva, J.: Reducing the volcanic risk in the frame of the hazard/risk separation principle, in Forecasting and Planning for Volcanic Hazards, Risks, and
Disasters, edited by: Schroeder J. F. and Papale, P., Vol. 2, ISBN 978-0-12-818082-2, https://doi.org/10.1016/B978-0-12-818082-2.00014-7, 2021.
Massaro, S., Stocchi, M., Martínez Montesinos, B., Sandri, L., Selva, J., Sulpizio, R., Giaccio, B., Moscatelli, M., Peronace, E., Nocentini, M., Isaia, R., Titos Luzón, M., Dellino, P., Naso, G.,
and Costa, A.: Assessing long-term tephra fallout hazard in southern Italy from Neapolitan volcanoes, Nat. Hazards Earth Syst. Sci., 23, 2289–2311, https://doi.org/10.5194/nhess-23-2289-2023, 2023.
Mead, S. R. and Magill, C. R.: Probabilistic hazard modelling of rain-triggered lahars, J. Appl. Volcanol., 6, 8, https://doi.org/10.1186/s13617-017-0060-y, 2017.
Neglia, F., Dioguardi, F., Sulpizio, R., Ocone, R., and Sarocchi, D.: Computational fluid dynamic simulations of granular flows: Insights on the flow-wall interaction dynamics, Int. J. Multiphas.
Flow, 157, 104281, https://doi.org/10.1016/j.ijmultiphaseflow.2022.104281, 2022.
Neri, A., Aspinall, W. P., Cioni, R., Bertagnini, A., Baxter, P. J., Zuccaro, G., Andronico, D., Barsotti, S., Cole, P. D., Esposti Ongaro, T., Hincks, T. K., Macedonio, G., Papale, P., Rosi, M.,
Santacroce, R., and Woo, G.: Developing an event tree for probabilistic hazard and risk assessment at Vesuvius, J. Volcanol. Geoth. Res., 178, 397–415, https://doi.org/10.1016/
j.jvolgeores.2008.05.014, 2008.
Neri, A., Bevilacqua, A., Esposti Ongaro, T., Isaia, R., Aspinall, W. P., Bisson, M., Flandoli, F., Baxter, P. J., Bertagnini, A., Iannuzzi, E., Orsucci, S., Pistolesi, M., Rosi, M., and
Vitale, S.: Quantifying volcanic hazard at Campi Flegrei caldera (Italy) with uncertainty assessment: 2. Pyroclastic density current invasion maps, J. Geophys. Res.-Sol. Ea., 120, 2330–2349, 2015.
Parra, E. and Cepeda, H.: Volcanic hazard maps of the Nevado del Ruiz Volcano, Colombia, J. Volcanol. Geoth. Res., 42, 117–127, 1990.
Pierson, T. C., Janda, R. J., Thouret, J. C., and Borrero, C. A.: Perturbation and melting of snow and ice by the 13 November 1985 eruption of Nevado del Ruiz, Colombia, and consequent mobilization, flow and deposition of lahars, J. Volcanol. Geoth. Res., 41, 17–66, 1990.
Pierson, T. C., Daag, A. S., Delos Reyes, P. J., Regalado, M. T., Solidum, R. U., and Tubianosa B. S.: Flow and Deposition of Posteruption Hot Lahars on the East Side of Mount Pinatubo, July–October
1991, in: Fire and mud: eruptions and lahars of Mount Pinatubo, Philippines, edited by: Newhall, C. G. and Punongbayang, R. S., Philippine Institute of Volcanology and Seismology, Quezon City,
University of Washington Press, Seattle and London, https://pubs.usgs.gov/pinatubo/index.html (last access: 8 March 2024), 1996.
Pierson, T. C., Major, J. J., Amigo, Á., and Moreno, H.: Acute sedimentation response to rainfall following the explosive phase of the 2008–2009 eruption of Chaitén volcano, Chile, Bull. Volcanol.,
75, 1–17, https://doi.org/10.1007/s00445-013-0723-4, 2013.
Pizzimenti, L., Tadini, A., Gianardi, R., Spinetti, C., Bisson, M., and Brunori C. A.: Digital Elevation Models derived by ALS data: Sorrentina Peninsula test areas, Rapporto Tecnico INGV – No. 361,
https://doi.org/10.13140/RG.2.2.12436.50564, 2016.
Ricciardi, G. P., Siniscalchi, V., Cecere, G., and Macedonio, G.: Meteorologia Vesuviana dal 1864 al 2001, CD-Rom, 2007.
1458 -- Common Subsequence
Common Subsequence
Time Limit: 1000MS Memory Limit: 10000K
Total Submissions: 88616 Accepted: 37634
A subsequence of a given sequence is the given sequence with some elements (possible none) left out. Given a sequence X = < x1, x2, ..., xm > another sequence Z = < z1, z2, ..., zk > is a subsequence
of X if there exists a strictly increasing sequence < i1, i2, ..., ik > of indices of X such that for all j = 1,2,...,k, x[i[j]] = zj. For example, Z = < a, b, f, c > is a subsequence of X = < a, b,
c, f, b, c > with index sequence < 1, 2, 4, 6 >. Given two sequences X and Y the problem is to find the length of the maximum-length common subsequence of X and Y.
The program input is from the std input. Each data set in the input contains two strings representing the given sequences. The sequences are separated by any number of white spaces. The input data
are correct.
For each set of data, the program prints on the standard output, from the beginning of a separate line, the length of the maximum-length common subsequence.
Sample Input
abcfbc abfcab
programming contest
abcd mnp
Sample Output
4
2
0
Source: Southeastern Europe 2003
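A standard dynamic-programming solution computes the LCS length in O(mk) time and O(k) memory. The sketch below is in Python for clarity; it is illustrative rather than a submittable judge program (POJ accepts C/C++/Java):

```python
def lcs_length(x: str, y: str) -> int:
    """Length of the longest common subsequence of x and y (classic DP)."""
    m, n = len(x), len(y)
    prev = [0] * (n + 1)  # DP row for the prefix x[:i-1]
    for i in range(1, m + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                cur[j] = prev[j - 1] + 1          # extend a common subsequence
            else:
                cur[j] = max(prev[j], cur[j - 1])  # drop a character from x or y
        prev = cur
    return prev[n]

# The three sample data sets from above:
for a, b in [("abcfbc", "abfcab"), ("programming", "contest"), ("abcd", "mnp")]:
    print(lcs_length(a, b))  # prints 4, 2, 0
```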
The n-Category Café
May 28, 2010
The Quantum Whisky Club
Posted by John Baez
I’m live blogging from the Quantum Whisky Club. We’re in a dimly-lit office in the Computing Lab at Oxford University, listening to a rap song about Pythagoras composed by Richard Garner, who is here
along with a bunch of other folks who’ll be attending the Quantum Physics and Logic workshop.
Like who?
Posted at 10:11 PM UTC |
Followups (88)
May 27, 2010
nLab Digest
Posted by David Corfield
I wanted to give people an idea of what has been going on in recent weeks at the very active nLab and associated personal wikis, so I asked members to describe some of the important contributions of
late. Before I relay these, remember that anyone is welcome to participate in this great collaborative project, and also to join in discussions at nForum.
Let’s hear first from Urs Schreiber.
Posted at 10:18 AM UTC |
Followups (21)
May 26, 2010
Nonabelian Cohomology in Three (∞,1)-Toposes
Posted by Urs Schreiber
For $X$ a topological space and $A$ an ∞-groupoid, the standard way to define the nonabelian cohomology of $X$ with coefficients in $A$ is to define it as the intrinsic cohomology as seen in ∞Grpd: $H(X,A) := \pi_0 Top(X, |A|) \simeq \pi_0 \infty Func(Sing X, A)\,,$
where $|A|$ is the geometric realization of $A$ and $Sing X$ the fundamental ∞-groupoid of $X$.
But both $X$ and $A$ here naturally can be regarded, in several ways, as objects of (∞,1)-sheaf (∞,1)-toposes $\mathbf{H} = Sh_{(\infty,1)}(C)$ over nontrivial (∞,1)-sites $C$. The intrinsic
cohomology of such $\mathbf{H}$ is a nonabelian sheaf cohomology.
The following discusses two such choices for $\mathbf{H}$ such that the corresponding nonabelian sheaf cohomology coincides with $H(X,A)$ (for paracompact $X$).
Posted at 1:26 PM UTC |
Followups (4)
May 18, 2010
Squeezing Higher Categories out of Lower Categories
Posted by Mike Shulman
For a long time, homotopy theory involved the study of homotopy categories. More recently, people have started preferring to use $(\infty,1)$-categories, since homotopy categories lose a lot of
structure. But it’s worth remembering that homotopy categories do still contain a lot of information, and moreover they encapsulate it all in terms of the familar and easy-to-handle notions of
1-category theory. Want to know if two spaces are (weak) homotopy equivalent? Check whether they’re isomorphic in the homotopy category. Want to know the homotopy groups of a (pointed connected)
space? Look at maps out of spheres in the homotopy category. Want a representing object for cohomology theory? Apply Brown’s representability theorem in the homotopy category. Want to know the
homotopy type of a mapping space? You’re in luck: the homotopy category is closed symmetric monoidal.
Today I want to talk about ways we can use lower-categorical “homotopy categories” to squeeze even more information out of a higher category, without ever having to construct and work with that
higher category directly (thereby potentially avoiding a lot of combinatorial complexity). In fact, in good situations, we can actually squeeze out all the information in this way! (At least for a
suitable definition of “all.”)
Posted at 2:19 AM UTC |
Followups (4)
May 15, 2010
This Week’s Finds in Mathematical Physics (Week 298)
Posted by John Baez
In "week298" of This Week’s Finds, learn about finite subgroups of the unit quaternions, like the binary icosahedral group:
Then meet the finite subloops of the unit octonions. Get a tiny taste of how division algebras can be used to build Lie n-superalgebras that govern superstring and supermembrane theories. And meet
Duff and Ferrara’s ideas connecting exceptional groups to Cayley’s hyperdeterminants and entanglement in quantum information theory.
Posted at 6:06 PM UTC |
Followups (35)
May 10, 2010
Quinn on Higher-Dimensional Algebra
Posted by David Corfield
Frank Quinn kindly wrote to me to point out an essay he is working on – The Nature of Contemporary Core Mathematics (version 0.92). Quinn will be known to many readers here as a mathematician who has
worked in low-dimensional topology, and as one of the authors, with Arthur Jaffe, of “Theoretical mathematics”: Toward a cultural synthesis of mathematics and theoretical physics.
I crop up in the tenth section of the article, which is devoted to a discussion of “a few other accounts of mathematics”, including those of Barry Mazur, Jonathan Borwein, Keith Devlin, Michael
Stöltzner, and William Thurston.
One objective is to try to understand why such accounts are so diverse and mostly – it seems to me – irrelevant when they all ostensibly concern the same thing. The mainstream philosophy of
mathematics literature seems particularly irrelevant, and the reasons shallow and uninteresting, so only two are considered here. Essays by people with significant mathematical background often
have useful insights, and when they seem off-base to me the reasons are revealing. The essay by Mazur is not off-base. (p. 53)
I take it that “irrelevant” is being taken relative to Quinn’s interest in characterising ‘Core Mathematics’.
Posted at 1:46 PM UTC |
Followups (17)
This Week’s Finds in Mathematical Physics (Week 297)
Posted by John Baez
In week297 of This Week’s Finds, see some knot sculptures by Karel Vreeburg:
Read about special relativity in finance. Learn about lazulinos. Admire some peculiar infinite sums. Ponder a marvelous property of the number 12. Then: learn about the role of Dirichlet forms in
electrical circuit theory!
Posted at 4:38 AM UTC |
Followups (44)
May 8, 2010
The Scottish Category Theory Seminar
Posted by Tom Leinster
The Scottish Category Theory Seminar is a newish series of occasional afternoon meetings, at locations roaming around the country (though there’ll probably never be any on highland mountaintops or
remote misty lochs…). I’m very pleased to announce the second meeting of the seminar, in Edinburgh on Friday 21 May.
We’ve worked hard to get a good mixture of mathematicians on the one hand, and theoretical computer scientists on the other. Two of the speakers, Antony Maciocia and Peter Kropholler, are
mathematicians — Maciocia is in algebraic geometry, and Kropholler at the intersection of group theory and topology. The other two, Dirk Pattinson and Thorsten Altenkirch, are in theoretical computer
science — though their talks should definitely be interesting to mathematicians.
All the speakers have been firmly requested to make their talks broadly accessible to what we hope will be a very mixed audience. So I’m looking forward to an excellent afternoon.
Posted at 8:30 PM UTC |
Followups (3)
May 7, 2010
Back in Business
Posted by David Corfield
After a coolant leak knocked out the server which supports the Café, Golem IV rises from the ashes, allowing us all to get back to business. Our thanks go to Jacques Distler for his sterling effort.
[Update: I think comments aren’t working yet.]
[Update 2: Now they are.]
Posted at 12:52 PM UTC |
Followups (6)
3 Times Table Multiplication Worksheets
Mathematics, particularly multiplication, forms the foundation of numerous academic disciplines and real-world applications. Yet, for many students, mastering multiplication can pose a challenge. To address this difficulty, educators and parents have adopted a powerful tool: 3 Times Table Multiplication Worksheets.
Introduction to 3 Times Table Multiplication Worksheets
3 Times Table Multiplication Worksheets
Step 1a is to get familiar with the table, so view it, read it aloud, and repeat. If you think you remember them, it's time to test your knowledge at step 1b. Step 1b: In sequence, fill in your answers. Once you have entered all the answers, click on "Check" to see whether you have got them all right.
These free 3 multiplication table worksheets for printing or downloading in PDF format are specially aimed at primary school students You can also make a multiplication table worksheet yourself using
the worksheet generator
Relevance of Multiplication Practice
Understanding multiplication is pivotal, laying a strong foundation for advanced mathematical concepts. 3 Times Table Multiplication Worksheets offer structured and targeted practice, promoting a deeper understanding of this essential arithmetic operation.
Evolution of 3 Times Table Multiplication Worksheets
Math multiplication worksheets Free multiplication worksheets multiplication worksheets Free
Free 3rd grade multiplication worksheets including the meaning of multiplication multiplication facts and tables multiplying by whole tens and hundreds missing factor problems and multiplication in
columns No login required
From conventional pen-and-paper exercises to digitized interactive formats, 3 Times Table Multiplication Worksheets have evolved, catering to diverse learning styles and preferences.
Types of 3 Times Table Multiplication Worksheets
Standard Multiplication Sheets
Basic exercises focusing on multiplication tables, helping learners build a solid math foundation.
Word Problem Worksheets
Real-life scenarios integrated into problems, improving critical thinking and application skills.
Timed Multiplication Drills
Timed tests designed to boost speed and accuracy, aiding quick mental math.
Benefits of Using 3 Times Table Multiplication Worksheets
Times Table Worksheets 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 And
Learn the multiplication tables in an interactive way with the free math multiplication learning games for 2nd, 3rd, 4th, and 5th grade. The game element in the times tables games makes it even more fun to learn. Practice your multiplication tables. Here you can find additional information about practicing multiplication tables at primary school.
Multiplication Math Worksheets Math explained in easy language plus puzzles games quizzes videos and worksheets For K 12 kids teachers and parents Multiplication Worksheets Worksheets Multiplication
Mixed Tables Worksheets Individual Table Worksheets Worksheet Online 2 Times 3 Times 4 Times 5 Times 6 Times 7 Times 8 Times 9 Times
Improved Mathematical Abilities
Consistent practice sharpens multiplication proficiency, improving overall math abilities.
Improved Problem-Solving Abilities
Word problems in worksheets develop logical reasoning and strategy application.
Self-Paced Learning Advantages
Worksheets accommodate individual learning speeds, promoting a comfortable and flexible learning environment.
How to Develop Engaging 3 Times Table Multiplication Worksheets
Incorporating Visuals and Colors
Vivid visuals and colors capture attention, making worksheets visually appealing and engaging.
Including Real-Life Scenarios
Relating multiplication to everyday situations adds relevance and practicality to exercises.
Customizing Worksheets to Various Ability Levels
Customizing worksheets based on varying proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games
Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Applications
Online platforms provide varied and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Various Learning Styles
Visual Learners
Visual aids and diagrams support comprehension for learners inclined toward visual learning.
Auditory Learners
Verbal multiplication problems or mnemonics cater to learners who grasp concepts through auditory methods.
Kinesthetic Learners
Hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Implementation in Learning
Consistency in Practice
Regular practice reinforces multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety
A mix of repeated exercises and varied problem formats maintains interest and understanding.
Providing Constructive Feedback
Feedback helps identify areas for improvement, encouraging continued progress.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Challenges
Monotonous drills can lead to disinterest; creative approaches can reignite motivation.
Overcoming Fear of Math
Negative perceptions around math can hinder progress; creating a positive learning environment is vital.
Impact of 3 Times Table Multiplication Worksheets on Academic Performance
Research Studies and Findings
Research suggests a positive correlation between regular worksheet use and improved math performance.
3 Times Table Multiplication Worksheets emerge as versatile tools, fostering mathematical proficiency in students while accommodating diverse learning styles. From standard drills to interactive online resources, these worksheets not only improve multiplication skills but also promote critical thinking and problem-solving abilities.
Times Tables Worksheets PDF Multiplication table 1 10 Worksheet
3rd Grade Math Practice Multiplication Worksheets Printable Learning How To Read
Check more of 3 Times Table Multiplication Worksheets below
Multiplication Times Tables Worksheets 2 3 4 5 6 7 8 9 10 11 12 Times Tables
Multiplication Table Worksheets Grade 3
3 times table Chart 3 times Tables worksheets Pdf
3 Times Table
3 Times Table Multiplication Chart Exercise On 3 Times Table Table Of 3
Printable Times Table 3 Times Table Sheets
Free 3 times table worksheets at Timestables Multiplication Tables
These free 3 multiplication table worksheets for printing or downloading in PDF format are specially aimed at primary school students You can also make a multiplication table worksheet yourself using
the worksheet generator
Printable Times Table 3 Times Table Sheets Math Salamanders
Times Tables Learning Once you have understood what multiplication is you are then ready to start learning your tables One of the best ways to learn their tables is to follow these simple steps First
write down the times table you want to learn This is useful to see what the times table looks like
3 Times Table Multiplication Square
Kindergarten Worksheets Maths Worksheets Multiplication Worksheets Multi Times Table
Frequently Asked Questions (FAQs)
Are 3 Times Table Multiplication Worksheets suitable for all age groups?
Yes, worksheets can be tailored to different ages and ability levels, making them adaptable for various learners.
How often should students practice using 3 Times Table Multiplication Worksheets?
Consistent practice is key. Regular sessions, ideally a few times a week, can produce significant improvement.
Can worksheets alone improve math skills?
Worksheets are a valuable tool but should be supplemented with varied learning methods for comprehensive skill development.
Are there online platforms offering free 3 Times Table Multiplication Worksheets?
Yes, many educational websites provide free access to a wide variety of 3 Times Table Multiplication Worksheets.
How can parents support their children's multiplication practice at home?
Encouraging consistent practice, offering guidance, and creating a positive learning environment are valuable steps.
Adding fractions – Steps, Examples
The addition of fractions is a little different from adding other numbers. This is because a fraction is always written as
\frac{a}{b}
The top number "a" is called the numerator and the bottom number "b" is called the denominator. The addition of fractions simply means summing two or more fractions.
Steps in Adding Fractions
To add fractions follow the simple steps below
Step1: Check to make sure the denominators of the given fractions are the same.
Step2: when the denominators of the given fractions are the same, add the numerators of the given fractions and take them over the common denominator.
Step3: After adding the numerators and taking them over the common denominator, then simplify the fraction.
How to add like fractions
To add like fractions, since they have the same denominator or common denominator, simply add the numerators and pick a common denominator.
Example: add
\frac{1}{3} + \frac{2}{3}
Answer: since they have a common denominator add the numerators and simplify the fraction.
\frac{ 1 + 2 }{3} = \frac{3}{3} = 1
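The result can be checked with Python's built-in fractions module (a quick verification aid, not part of the pen-and-paper method):

```python
from fractions import Fraction

# Like fractions share a denominator, so only the numerators are added.
total = Fraction(1, 3) + Fraction(2, 3)
print(total)  # → 1
```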
How to add unlike fractions
Since unlike fractions have different denominators, first try to make the given fractions have a common denominator by using the following methods
Firstly: By using the LCM. Find the LCM (Lowest Common Multiple) of the denominators of the given fractions and then proceed to add the fractions.
Example: add
\frac{1}{4} + \frac{2}{3}
Answer: first find the LCM of the denominators, in this case, 4 and 3. The LCM of 4 and 3 is 12. Then follow the steps as shown below
\frac{ 12 × \frac{1}{4} + 12×\frac{2}{3}}{12}= \frac{(3×1) + (4×2)}{12} = \frac{3 +8}{12} = \frac{11}{12}
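The same LCM steps can be sketched in Python using math.lcm (available since Python 3.9); the variable names here are only illustrative:

```python
from fractions import Fraction
from math import lcm

a, b = Fraction(1, 4), Fraction(2, 3)
common = lcm(a.denominator, b.denominator)  # LCM of 4 and 3 is 12
numerator = (a.numerator * (common // a.denominator)
             + b.numerator * (common // b.denominator))  # 3*1 + 4*2 = 11
result = Fraction(numerator, common)
print(result)  # → 11/12
```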
Secondly: Using Equivalent fractions. Find the equivalent fractions of the given fractions and proceed as follows. Example add 2/5 + 3/4
\frac{2}{5} + \frac{3}{4} = \frac{4 ×2}{4 × 5} + \frac{5 ×3}{5 ×4} = \frac{8}{20} + \frac{15}{20} = \frac{8 +15}{20}=\frac{23}{20} = 1\frac{3}{20}
Adding a Fraction and a Whole number
When adding a fraction and a whole number, first multiply the whole number by the denominator of the fraction, then add the result to the numerator of the fraction, all divided by the denominator of the given fraction. Example: add 4 + 3/5
4 + \frac{3}{5} = \frac{(4 × 5) + 3}{5} = \frac{20 + 3}{5} = \frac{23}{5} = 4\frac{3}{5}
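Again, Python's fractions module reproduces the same arithmetic (a verification sketch only):

```python
from fractions import Fraction

total = 4 + Fraction(3, 5)  # same as (4 * 5 + 3) / 5
print(total)  # → 23/5
```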
Lane Detection System
In this chapter, I will guide you to build a simple but effective lane detection pipeline and apply it to images captured in the Carla Simulator. The pipeline takes an image as input and yields a
mathematical model of the lane boundaries as an output. The image is captured by a dashcam - a camera that is fixated behind the windshield of a vehicle. The lane boundary model is a polynomial
\[ y(x)=c_0+c_1 x+c_2 x^2 +c_3 x^3 \]
Here, both \(x\) and \(y\) are measured in meters. They define a coordinate system on the road as shown in Fig. 1.
The pipeline consists of two steps
• Using a neural network, detect those pixels in an image that are lane boundaries
• Associate the lane boundary pixels to points on the road, \((x_i,y_i), i=0,1,2\dots\). Then fit a polynomial.
The approach is inspired by the “baseline” method described in Ref. [GBN+19], which performs close to state-of-the-art lane-detection methods.
Lane Boundary Segmentation - Deep Learning#
In the chapter Lane Boundary Segmentation, we will train a neural network, which takes an image and estimates for each pixel the probability that it belongs to the left lane boundary, the probability
that it belongs to the right lane boundary, and the probability that it belongs to neither. As you might know, a neural network needs data to learn. Luckily, it is easy to gather this data using the
Carla simulator: We are going to create a vehicle on the Carla map and attach an rgb camera sensor to it. Then we will move the vehicle to different positions on the map and capture images with our
camera. The 3d world coordinates of the lane boundaries are obtained from the Carla simulator’s high definition map and can be projected into the image using the pinhole camera model.
For each simulation step, we save two separate images:
• The image captured by the dashcam
• A label image that only consists of the projected lane boundaries
You will learn how to create the label images in the chapter Basics of image formation.
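Projecting lane-boundary points into the image with the pinhole model can be sketched as follows. This is a simplified illustration that assumes the points are already expressed in camera coordinates and that K is the camera's 3×3 intrinsic matrix; the function name and the toy intrinsics are mine, not the book's:

```python
import numpy as np

def project_to_image(points_cam, K):
    """Pinhole projection of 3d points (camera coordinates) to pixel coordinates."""
    pts = np.asarray(points_cam, dtype=float)
    uvw = (K @ pts.T).T              # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # divide by depth to get (u, v)

# Toy intrinsics: focal length 100 px, principal point (100, 100)
K = np.array([[100.0,   0.0, 100.0],
              [  0.0, 100.0, 100.0],
              [  0.0,   0.0,   1.0]])
print(project_to_image([[0.0, 1.5, 3.0]], K))  # → [[100. 150.]]
```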
From pixels to meters - Inverse Perspective Mapping
A camera maps the three-dimensional world into a two-dimensional image plane. In general, it is not possible to take a single image and to reconstruct the three-dimensional coordinates of the objects
depicted in that image. Using the pinhole camera model, we can reconstruct from which direction the light ray came, that was scattered off the depicted object, but not how many meters it has
traveled. This is different for the light that was scattered from the road into our camera sensor. Using the assumption that the road is flat, and our knowledge of the camera height and orientation
with respect to the road, it is a basic geometry problem to compute the \(x,y,z\) position of each “road pixel” (\(x,y,z\) in meters!). This computation is known as inverse perspective mapping and
you will learn about it in the chapter From Pixels to Meters.
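Under the flat-road assumption, this back-projection can be sketched for a single pixel as below. The conventions are assumptions of this sketch, not the book's implementation: camera axes x right, y down, z forward, with the camera looking parallel to a flat road from cam_height meters above it:

```python
import numpy as np

def pixel_to_road(u, v, K, cam_height):
    """Flat-road inverse perspective mapping for one pixel.

    Assumes camera axes x right, y down, z forward, looking parallel
    to a flat road, mounted cam_height meters above it.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing-ray direction
    if ray[1] <= 0:
        raise ValueError("pixel at or above the horizon never hits the road")
    scale = cam_height / ray[1]  # stretch the ray until it reaches the ground
    X, Y, Z = scale * ray        # intersection point in camera coordinates
    return float(Z), float(-X)   # road frame: x forward, y to the left

K = np.array([[100.0,   0.0, 100.0],
              [  0.0, 100.0, 100.0],
              [  0.0,   0.0,   1.0]])
print(pixel_to_road(100, 150, K, cam_height=1.5))  # → (3.0, -0.0)
```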
From our deep learning model we have a list of probabilities \(p_i(\textrm{left boundary}), i=0,1,2, \dots\) for all the pixels. Using inverse perspective mapping we can even write down a list of
tuples \((x_i,y_i,p_i(\textrm{left boundary})), i=0,1,2, \dots\), since we know the road coordinates \((x_i,y_i)\) of each pixel.
We can now filter this list and throw away all tuples where \(p_i(\textrm{left boundary})\) is small. The filtered list of \((x_i,y_i)\) can be fed into a method for polynomial fitting, which will
result in a polynomial \(y(x)=c_0+c_1 x+c_2 x^2 +c_3 x^3\) describing the left lane boundary. The same procedure is repeated for the right boundary.
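The filtering-and-fitting step can be sketched with NumPy as follows. The function name and the probability threshold are illustrative choices of this sketch; the book's LaneDetector may differ:

```python
import numpy as np

def fit_boundary(x_m, y_m, probs, threshold=0.3):
    """Fit y(x) = c0 + c1*x + c2*x^2 + c3*x^3 to probable boundary points."""
    mask = probs > threshold                          # discard unlikely pixels
    coeffs = np.polyfit(x_m[mask], y_m[mask], deg=3)  # highest power first
    return np.poly1d(coeffs)

# Synthetic boundary points lying on y = 1 + 0.1 x + 0.01 x^2:
x = np.linspace(5.0, 30.0, 20)
y = 1.0 + 0.1 * x + 0.01 * x**2
left_poly = fit_boundary(x, y, probs=np.full(20, 0.9))
print(round(left_poly(10.0), 6))  # → 3.0
```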
Once you have finished all exercises of this chapter, you will have implemented a python class LaneDetector that can yield lane boundary polynomials \(y_l(x), y_r(x)\) for the left and right lane
boundary, given an image from a dashcam in the Carla simulator. In the next chapter, we will write a lane-keeping system for a vehicle in the Carla simulator. This lane-keeping system needs the
desired speed and a reference path as inputs. You can take the centerline between the lane boundaries that your LaneDetector class computes, and feed that into your lane-keeping system.
Wouter Van Gansbeke, Bert De Brabandere, Davy Neven, Marc Proesmans, and Luc Van Gool. End-to-end lane detection through differentiable least-squares fitting. 2019. arXiv:1902.00293.
Luis Ferroni: Many valuations for many matroids
Time: Wed 2022-10-12 15.15 - 16.15
Location: 3721
Participating: Luis Ferroni (KTH)
Abstract: Invariants are pervasive in matroid theory. Examples of invariants are the Tutte polynomial of the matroid, the f-vector of its Bergman Complex, the Ehrhart polynomial of its base polytope,
or the Hilbert series of its Chow ring. In this talk I want to address two facts:
1) Why all of the above invariants (and many more!) are valuative under matroid polytope subdivisions.
2) How one can use the preceding fact to actually provide a fast way for computing arbitrary valuative invariants for a huge class of matroids called "split matroids".
In particular, I will show how one can use this general framework to attack conjectures on matroid theory in both ways: either to prove a conjecture for this large class of matroids (and hence,
support it in general) or to build a counterexample. As applications one can build counterexamples to the Ehrhart positivity conjecture of De Loera et al., or support other conjectures in
Kazhdan-Lusztig theory by Proudfoot et al.
This is joint work with Benjamin Schröter.
Prime Factorization Calculator
Enter the value for which you want to calculate the prime factorization.
Math Calculators
General Math
Algebra Calculators
Prime Factorization Calculator
Understanding the Core of Numbers
Our Prime Factorization Calculator at EasyUnitConverter.com is an invaluable tool for mathematicians, educators, students, and anyone interested in the fundamental nature of numbers. It's designed to
deconstruct any number into its prime components with ease and precision.
Features of the Calculator
• Versatile Number Input: Accepts any number between 2 and 9,007,199,254,740,991 for prime factorization.
• Dual Display Options: Presents prime factors both with exponents and without, for clear understanding.
• Prime Factor Tree Visualization: Offers an insightful visual representation of the prime factorization process.
What is Prime Factorization?
• Prime factorization, or integer factorization, is the process of breaking down a number into a set of prime numbers that, when multiplied together, produce the original number.
Real-World Uses and Users
• Educational Applications: A crucial tool for teaching fundamental concepts in number theory and mathematics.
• Cryptography: Used in developing and cracking cryptographic codes.
• Data Analysis: Assists in algorithm development and computational tasks that require prime number understanding.
How It Works
• Enter a number into the calculator, and it will display the prime factors. This breakdown can be viewed in a simple list format or as a factor tree, providing insights into the structure of the number.
Advantages of Using Our Prime Factorization Calculator
• Speed and Simplicity: Quickly get the prime factors of any number, large or small.
• Educational Insights: A great tool for learning and teaching the concepts of prime numbers and factorization.
• Versatile Utility: Useful for various mathematical and practical applications. | {"url":"https://www.easyunitconverter.com/prime-factorization-calculator","timestamp":"2024-11-11T21:42:25Z","content_type":"text/html","content_length":"189866","record_id":"<urn:uuid:921b51d7-c402-45c4-a195-2bdc02971054>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00809.warc.gz"} |
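The factorization process described above can be sketched with simple trial division in Python (a generic illustration of the algorithm, not the site's actual implementation):

```python
def prime_factors(n):
    """Return the prime factorization of n (n >= 2) as a list of primes with repetition."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:  # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:              # any remainder greater than 1 is itself prime
        factors.append(n)
    return factors

print(prime_factors(360))  # → [2, 2, 2, 3, 3, 5], i.e. 2^3 · 3^2 · 5
```

Trial division up to the square root is ample for small inputs; for values near the calculator's stated upper limit of 9,007,199,254,740,991, faster methods such as Pollard's rho would be preferable.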
American Mathematical Society
In this paper we consider the distribution $G(x) = {F^{-1}}\int_0^x (\Gamma(t))^{-1}\,dt$. The aim of the investigation is twofold: first, to find numerical values of characteristics such as moments, variance, skewness, kurtosis, etc.; second, to study analytically and numerically the moment generating function $\varphi(t) = \int_0^\infty e^{-tx}/\Gamma(x)\,dx$. Furthermore, we also make a generalization of the reciprocal gamma distribution, and study some of its properties.
References
A. Erdélyi, W. Magnus, F. Oberhettinger & F. G. Tricomi, Higher Transcendental Functions, Vol. III, McGraw-Hill, New York, 1955.
G. H. Hardy, Ramanujan—Twelve Lectures on Subjects Suggested by His Life and Work (reprinted), Chelsea, New York, 1959.
Collected papers of G. H. Hardy, Vols. I-VII (especially Vol. IV, pp. 544-548), Oxford at the Clarendon Press, 1969.
W. A. Johnson, Private communication, 1982.
A. Lindhagen, Studier öfver Gamma-Funktionen och Några Beslägtade Transcendenter (Studies of the gamma function and of some related transcendentals), Doctoral Thesis, B. Almqvist & J. Wiksell's boktryckeri, Upsala, 1887.
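The moment generating function $\varphi(t) = \int_0^\infty e^{-tx}/\Gamma(x)\,dx$ studied in the paper is easy to explore numerically: since $1/\Gamma(x)$ decays faster than exponentially, truncating the integral at a moderate upper limit loses almost nothing. A rough sketch using composite Simpson's rule (the truncation point and step count are arbitrary demonstration choices, not values from the paper):

```python
import math

def phi(t, upper=60.0, n=60000):
    """Approximate phi(t) = integral from 0 to infinity of exp(-t*x)/Gamma(x) dx,
    truncated at `upper`, using composite Simpson's rule with n subintervals."""
    def f(x):
        if x <= 0.0:
            return 0.0   # integrand -> 0 as x -> 0+, since Gamma(x) -> infinity
        return math.exp(-t * x) / math.gamma(x)
    h = upper / n
    s = f(0.0) + f(upper)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return s * h / 3.0

print(phi(0.0))  # ≈ 2.80777..., the Fransén-Robinson constant ∫ dx/Γ(x)
```

At t = 0 this recovers the normalizing constant F of the reciprocal gamma distribution; increasing t shrinks the value monotonically, as expected for a Laplace transform of a non-negative integrand.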
Additional Information
• © Copyright 1984 American Mathematical Society
• Journal: Math. Comp. 42 (1984), 601-616
• MSC: Primary 65D20; Secondary 60E10, 62E15, 65U05
• DOI: https://doi.org/10.1090/S0025-5718-1984-0736456-3
• MathSciNet review: 736456 | {"url":"https://www.ams.org/journals/mcom/1984-42-166/S0025-5718-1984-0736456-3/?active=current","timestamp":"2024-11-03T16:31:43Z","content_type":"text/html","content_length":"66290","record_id":"<urn:uuid:63d2c18f-ec1a-416a-b8ae-dba34b5971dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00678.warc.gz"} |
How Can You Reduce Your Anxiety with Math Homework?
It is actually very sad that anxiety keeps you away from things and activities which you could otherwise have enjoyed. Walking away is basically running from your worst nightmare and ending up at a dead end. Sooner or later, you have to come to terms with it. So, why not now? Before you start reading about the solutions that relieve math anxiety, you need to know what exactly it is.
What is math anxiety that makes students petrified of this subject?
If you are wondering whether math anxiety is the name of a new disease, I would like to make clear that it is not. It can be simply described as dread, panic, or a state of utter helplessness when you open a math textbook. A spike in stress level is a common math anxiety phenomenon among students who feel the pressure of performance anxiety.
Through many studies, it has been found out that this rise in apprehension while completing math homework is due to certain factors. Some of the major factors include the risk of public embarrassment (such as being ridiculed by fellow classmates for making many mistakes), surprise tests that students expect to fail, and an inferiority complex.
Now as you have a brief understanding of math anxiety, let’s hop into the steps to overcome this issue.
9 Steps to Reduce Anxiety Level while Executing Math Homework
1. Start with Stress Management Techniques
Anxiety can only be beaten with a calm and composed mind. Before you start your math homework, sit and meditate to clear your mind. There are some methods which can be combined with meditation to reduce your anxiety level.
• Try to clear your mind, and focus your concentration on an imaginary object. It may be a dot or a box.
• Visualise your mind as a room full of drawers. Now you can categorise your feelings and store them in those individual drawers. You can store happy feelings in one drawer and anxiousness in
another one.
• Recalling happy memories from the past and anticipating the satisfaction of getting good marks can also reduce your stress level.
2. Stave off Negative Thinking
One of the key factors that becomes a huge hurdle between you and your math homework is negative thinking. "It is difficult, and I cannot do it" is a thought which gives rise to anxiety. Even if you are a good student, capable of solving your math homework, a lack of confidence can discourage you from completing it.
3. Take your Time and Understand
You are anxious only when you are trying to memorise the theories and equations of math. You cannot compare math to other subjects whose homework can be completed just by learning the facts and regurgitating them onto the page.
You need to understand that math is not just a specific set of rules. You have to understand the reasons for those rules. If your concepts are clear, you won't feel any nervousness while working on your math homework.
4. Work it off
Another of the best ways to reduce your math anxiety is to channel it in other directions. If you are feeling panicky before starting your homework, you can do some physical activity.
Jogging, going for a run, or exercising can free you from some of the stress. Squeezing a stress ball can also help reduce your stress level. If you like to dance, you can even do that.
The main objective is to combat this anxious feeling, so feel free to use any of these options.
5. Believe in Succeedingand Work for it
"If you can think it, you can achieve it."
Before you can build the confidence to complete your homework, you need to believe that you can do it. If you remind yourself that solving math homework is not the same as fighting a war, you can easily relax your mind.
Take baby steps by solving the problems you find easier and then move on to the tougher ones. This belief in success, and the confidence it brings, can definitely diminish homework anxiousness.
6. Early Preparation is Equivalent to Stress-less Homework Completion
It is very important that you plan how you are going to complete your homework. Channelling your fear into proper preparation can reduce your anxiety level to a certain extent. Cramming the topics in your mathematics homework will never do the job. It will only make you forget the concepts and equations.
Before you finally sit down to complete your homework, it is advisable to go through an example and practice a sum or two. If you do this, you will benefit in two ways.
• You will get relieved from your anxiety issue
• You can complete your math homework confidently, without any hyperventilation
7. Let the countdown begin
One of the best methods popularly used for anxiety control is counting. What you need to do is close your eyes and concentrate all your attention on counting your breaths. This focused counting will divert your mind, clearing it of the rising tension of math homework.
8. Test yourself
If you do not know how to ride a bike, you cannot expect to travel on one. Similarly, if you don't start practising early, you cannot expect to tackle your math homework with a calm and clear mind. Prior practice in the form of frequent tests can alleviate your anxiety about this subject.
Dedicating an hour every day to a self-administered math test can make you confident about solving the problems. With every passing day, it will help you realise that math is not as difficult as it seems to be. This confidence and ease will take the edge off when you write your math homework.
9. Take help from available online source
Taking help from a teacher, parents, or friends is also a good way to reduce the trepidation of doing math homework. It is like your mind knowing that help will always be available whenever you need it. Your mind can stay calm even when the pressure to complete the work is high.
If you feel at any point that you are stuck on a problem, you can approach any one of them for guidance. There are many professional academic websites which can provide you with suggestions, guidance, and deeper insight into a math problem. You just need to keep your fear of math under control, because it is always fear that stops you from finishing your work.
So these were some of the ways how you could overcome your anxiety issues when you are working on your math homework.
If you expect to win over math homework anxiety in a day, my dear friend, you are mistaken. Regular practice, a positive attitude, and a confident approach will help you get rid of anxiety while doing your math homework!
WARNING: DRAFT -- LIABLE TO CHANGE
(Comments and criticism welcome: Send to a.sloman[at]cs.bham.ac.uk)
Notes on Mathematics, Metaphysics, Evolution
Written after the final session of
the Birmingham FramePhys conference
Friday 11 Jan 2019
Contents below
NOTE: A later document (April 2019) is intended to subsume and replace this one.
15 Jan 2019
Last updated:
16 Jan 2019; 29 Jan 2019; 30 Jan 2019; 9 Feb 2019; 18 Apr 2019
This paper is http://www.cs.bham.ac.uk/research/projects/cogaff/misc/evo-framephys.html http://www.cs.bham.ac.uk/research/projects/cogaff/misc/evo-framephys.pdf
It is an extension of the Meta-Morphogenesis project, pointing out some of the connections between products of biological evolution and varieties of dynamic metaphysical grounding.
A partial index of discussion notes in this directory is in
Closely related:
-- Immanuel Kant's views on mathematics (1781)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/kant-maths.html or pdf
-- Alan Turing (1938) on mathematical intuition vs mathematical ingenuity.
He thought only the latter could be implemented on digital computers Turing(1938) -- discussed in:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/turing-intuition.html or pdf
-- The multiple roles of compositionality in biology
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/sloman-compositionality.html (or pdf)
-- Jane Austen's concept of information (Not Claude Shannon's)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.html (or pdf)
-- The Chemical Basis of Emergence in a Physical Universe (triggered by a later FramePhys workshop)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/emergent-physics.html (also PDF).
-- The Meta-Morphogenesis project
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html (Also pdf)
CONTENTS
Introduction: Two projects, a conference and Meta-Morphogenesis
Connections at a meeting of two projects
Questions about possibilities
Theme A. Mathematics, evolution, metaphysics
A hint from Turing
Another hint from Turing
Why chemistry? A metaphysical exercise
Evolution's use of mathematics
Why are abstractions biologically useful?
From structures to processes manipulating structures
Evolution's repeated creation of new forms of information processing
Theme B. The timelessness of mathematics (And spurious counterfactuals.)
No mathematical counterfactuals
Evolutionary boot-strapping and mathematical creativity
Unwitting mathematical competences
Evolved construction kits
Biological
A recent collection of ideas about evolution's (multiple) uses of compositionality.
Meta-morphogenesis: A related project
Wilson on Metaphysical Grounding as Metaphysical Causation
Evolution's mathematical creativity
This Document
REFERENCES AND LINKS
APPENDIX 1. Historical interlude: recent developments in computing
APPENDIX 2. Background to the M-M project: evolution's metaphysical creativity
Creative Commons License
Introduction: Two projects, a conference and Meta-Morphogenesis
On 10-11 January 2019 the FraMEPhys project and the Metaphysical Explanation project hosted a conference on Metaphysical Explanation in Science, at the Ikon Gallery, Birmingham. The programme is
I was able to attend only the final session and, as expected, found connections, and disagreements, with some of my own work, the Meta-Morphogenesis (M-M) project, as explained below.
The FraMEPhys Project -- A Framework for Metaphysical Explanation in Physics -- is a five year research project, led by Alastair Wilson at the University of Birmingham. It aims to develop a new
account of the contribution of metaphysics to how physics explains our world.
An aspect of his proposal is construing grounding, a key notion in recent metaphysical debates, as a form of causation: grounding = metaphysical causation (or G=MC) Wilson(2018). I have been
assembling evidence that various examples of metaphysical creativity evident in biological evolution can be understood as a form of dynamic metaphysical grounding, involving dynamic instantiation of
timeless metaphysical types, including mathematical types. This is an example of metaphysical causation that is relevant to scientific understanding of many products of evolution.
In other words, evolution produces and uses instances of ever more complex mathematical structures in the designs it produces, i.e. structure-instances that instantiate timeless metaphysical types,
including mathematical types and relationships. Some well known examples involving readily observable mathematical features of biological structures and processes were documented by Thompson(1917/
1992), among others. Those are just the tip of an iceberg of mathematical creativity in the processes and products of evolution, including evolved forms of information processing.
Most biologists who, like D'Arcy Thompson, noticed some of evolution's uses of mathematical structures, apparently did not also notice or discuss (like Kant) the ability of some organisms (most
obviously humans) to detect, and in some sense understand, necessary connections between aspects of those mathematical instances, e.g. the necessary connection between the number of sides, edges and
vertices of a convex polyhedron. Such abilities require more than merely instantiation of the mathematical types. They require instantiation of mathematical reasoning capabilities, that currently are
neither explained by neuroscience nor modelled in AI, although there are logic-based AI theorem provers that do something different, e.g. Shang-Ching Chou, et al. (1994).
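The polyhedron example mentioned above is the standard one: for any convex polyhedron the numbers of vertices V, edges E and faces F are necessarily related by Euler's formula V - E + F = 2. The snippet below merely checks instances (the Platonic solids); it obviously does not capture the grasp of necessity that is at issue here:

```python
# (vertices, edges, faces) for the five Platonic solids
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (v, e, f) in solids.items():
    assert v - e + f == 2, name   # Euler's polyhedron formula holds in each case

print("V - E + F = 2 for all five solids")
```

Such a check exemplifies what Turing called ingenuity applied to instances; seeing why the relation must hold for every convex polyhedron is the kind of intuition the mechanisms discussed here would need to explain.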
I'll return to connections between the projects, and the link with mathematics, below.
The Metaphysical Explanation Project at the University of Gothenburg, led by Anna-Sofia Maurin, investigates the nature of metaphysical explanation https://metaphysicalexplanation.wordpress.com/. I
suspect there are overlaps with the M-M project, but have not yet looked closely enough.
I was able to attend only the final session of the conference, on Friday afternoon 11th Jan. My main interest was in the last two talks:
14:55 - Atoosa Kasirzadeh (Toronto), "Can Mathematics Really Make a Difference?"
16:30 - Samuel Baron (Western Australia), "Counterfactual Scheming".
In discussing uses of mathematics in scientific explanations, they included material relevant to the M-M project, which refers both to mathematical aspects of the metaphysical content of biological
evolution and to mathematical aspects of the products of evolution, especially the waves of increasingly complex information-processing mechanisms that enable, and extend, spatial reasoning
competences, including the competences that led to ancient mathematical discoveries by Euclid and others.
A theme of the final session was discussion of mathematical counterfactuals, e.g. what would the consequences have been if 5 + 3 had been equal to 9, or if 17 had not been a prime number. I'll
suggest below that counterfactual questions about whether a mathematical truth might have been a falsehood and the consequences thereof do not make sense, although questions about how what evolved
might have been different do make sense: e.g. the ability to make mathematical discoveries such as 5 + 3 = 8 exists, but it doesn't involve any change of truth-value of mathematical propositions, only changes in the mathematical competences and knowledge of evolved organisms.
Connections at a meeting of two projects
The remainder of this document expands on some connections between topics discussed at the conference and collections of ideas (inspired by Kant and Turing) that I've been working on, including ideas
about explaining possibilities, i.e. attempting to answer Kantian questions of the form "How is X possible?" or "What makes X possible?" where X refers to some type of entity, process or state.
In particular I claim that the most important discoveries in science are discoveries of possibilities and explanations of possibilities, not discoveries of laws. Laws are of secondary importance,
specifying restrictions of possibilities (i.e. impossibilities) as explained in Chapter 2 of Sloman(1978). A recent example was confirmation of the possibility of gravitational waves, predicted by
the general theory of relativity. That now raises new questions, e.g. whether gravitational waves can be harnessed to transmit useful information.
(Updated 30 Jan 2019)
One of those questions is: what makes it possible for products of evolution to make and use deep mathematical discoveries? This is, in part, a question about evolvable kinds of minds and what makes
them possible (grounds their possibility), and what made it possible for an ancient subset of such minds to make deep, still widely used, mathematical discoveries, including discoveries presented in
Euclid's Elements, long before the development of modern, symbolic, logic-based, formal reasoning.
The answers may be different for different discoveries. For example, many animals, and pre-verbal human children, can discover and use possibilities that are expressed in terms of partial orderings,
not precise metrics: e.g. it is possible for me to walk through some gaps in walls. For others it is impossible unless I walk sideways, though a young child may not need to turn sideways. Knowing
such things does not require use of a metric for length (or distance), merely the ability to compare instances. A metric (up to a certain level of precision) can be developed later by adopting
standard objects against which others can be compared, and then repeatedly subdividing those standard objects to provide standard objects with smaller differences between them.
I believe the answers to the "What makes it possible...?" questions are connected with evolution's production of spatial reasoning mechanisms in humans and many other species, including the deep
spatial reasoning abilities of pre-verbal human toddlers. Both illustrate the ability to do mathematical reasoning without being aware of doing so, discussed briefly below.
These questions of the form "What makes X possible?" or "How is X possible?" seems to require answers that exemplify Wilson's ideas on grounding as metaphysical causation, referred to briefly below.
There are also related questions about what grounds more widely spread spatial reasoning abilities in a variety of species, e.g. squirrels, nest building birds, apes, hunting mammals, and cetaceans
as well as humans. (I've argued elsewhere that nothing in current (early 2019) AI or neuroscience, can explain or model those abilities.)
I append some notes related to the above and also to themes in the presentations I heard. If anyone has time/interest to spare, I'll welcome comments and criticisms. Feel free to copy to others who
may be interested - or likely to criticise.
The next two sections (re-)introduce the main themes driving my comments. Further details come later, but can be ignored (or postponed).
The table of contents may help readers decide what to leave out!
(A) Main theme: linking mathematics, biological evolution, and metaphysics. (Expanding the above remarks.)
(B) Subsidiary theme: on the timelessness of mathematics, and resource limits.
B includes an objection to the idea of counterfactual conditionals in which the condition negates some mathematical fact. I raised this after Atoosa's talk, but I suspect it was also relevant to
Sam's. (I did not keep detailed notes of reactions.)
After the summary of those two themes, the remainder of the paper discusses background and implications.
Theme A. Mathematics, evolution, metaphysics
In 2011 my thinking was disrupted by being asked to contribute to Elsevier's Turing centenary volume, including commenting on his paper on morphogenesis. For nearly half a century I had been trying
to combine ideas from AI, computer science, mathematics, philosophy, psychology and biology, to expand and defend Kant's philosophy of mathematics -- the subject of my 1962 DPhil thesis, written
before I knew anything about computers or AI. Reading the morphogenesis paper made me wonder whether Turing had been thinking about the possibility of new, chemistry-based (sub-neural in humans),
forms of computation, combining discrete and continuous processes.
A hint from Turing
Part IV of the centenary volume was entitled: "The Mathematics of Emergence: The Mysteries of Morphogenesis". It included Turing's amazing, and very surprising, 1952 paper: "The chemical basis of
morphogenesis", now his most cited paper (according to google) but generally ignored by philosophers and AI researchers. Thinking about it, and what Turing might have done later if he had not died in
1954, changed my way of thinking about philosophy, mathematics, AI and evolution.
For non-mathematicians, Philip Ball has a very readable (and much shorter!) summary of Turing's paper:
Added 30 Jan 2019: There is also a video interview here:
Why did Turing write that paper on how complex patterns could emerge from fairly simple processes involving two substances spreading through a liquid yet forming patterns of varying complexity when
they interact? His work on that paper may have overlapped in time with his work on the Mind 1950 paper, but there's only one (mysterious) sentence on chemistry in the 1950 paper, and no discussion of
intelligence in the 1952 paper, whose final paragraph is:
"It must be admitted that the biological examples which it has been possible to give in the present paper are very limited. This can be ascribed quite simply to the fact that biological phenomena
are usually very complicated. Taking this in combination with the relatively elementary mathematics used in this paper one could hardly expect to find that many observed biological phenomena
would be covered. It is thought, however, that the imaginary biological systems which have been treated, and the principles which have been discussed, should be of some help in interpreting real
biological forms."
What would he have gone on to do in the next few decades, if he had not died two years later?
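The two-substance process described above can be caricatured in a few lines of code. The sketch below is an illustrative assumption on my part, not Turing's own equations: it uses a Gierer-Meinhardt-style activator-inhibitor reaction with invented parameters, but it shows the structural ingredients Turing identified -- two substances reacting locally while diffusing at different rates, with the inhibitor spreading faster than the activator.

```python
# A minimal 1-D sketch of a two-substance reaction-diffusion process.
# Reaction terms and parameters are illustrative assumptions, not
# Turing's 1952 equations.

def diffuse(c, rate):
    """One explicit diffusion step on a periodic 1-D grid."""
    n = len(c)
    return [c[i] + rate * (c[(i - 1) % n] - 2 * c[i] + c[(i + 1) % n])
            for i in range(n)]

def step(u, v, du=0.05, dv=0.4, feed=0.01):
    """Diffuse both substances (the inhibitor v spreads faster than the
    activator u -- the asymmetry Turing showed can destabilise a uniform
    state), then apply a simple local activator-inhibitor reaction."""
    u, v = diffuse(u, du), diffuse(v, dv)
    u = [ui + feed * (ui * ui / (1e-9 + vi) - ui) for ui, vi in zip(u, v)]
    v = [vi + feed * (ui * ui - vi) for ui, vi in zip(u, v)]
    return u, v

# Start from a uniform state with one tiny perturbation.
u = [1.0] * 64
v = [1.0] * 64
u[32] += 0.01
for _ in range(100):
    u, v = step(u, v)
```

Note that the diffusion step only redistributes each substance (total mass is conserved on the periodic grid); all creation and destruction happens in the local reaction terms -- the interplay between the two is what can turn a uniform state into a pattern.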
Another hint from Turing
There may be a connection with the distinction between mathematical ingenuity and mathematical intuition, in Turing's PhD thesis (published 1938), where he claimed that computers could replicate only
mathematical ingenuity, not (human) mathematical intuition.
I.e. he (implicitly) rejected what was later called the generalised Church-Turing thesis, and appears to have reached a conclusion about mathematical knowledge partly similar to Kant's claims
about mathematics, as explained here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/turing-intuition.html (and PDF).
All of Euclid's axioms were important *discoveries* not arbitrary starting points to feed into theorem provers. He was, to some extent, retrospectively imposing a new structure (including the
distinction between axioms/postulates and theorems) on a relatively unstructured collection of mathematical discoveries originally based on spatial reasoning, not logical reasoning. I don't know of
any good mechanistic models or theories explaining how such spatial reasoning works.
There have been many automated (AI) geometry theorem provers based on logical forms of expression, and using modern logical forms of reasoning, many of them inspired by Hilbert's axiomatisation of
the geometry presented in Euclid's Elements Hilbert(1899). I suspect Turing would argue that all those theorem provers illustrate what he meant by "ingenuity", especially symbol manipulating
competences. As far as I know nobody has built (or knows how to build) an AI theorem prover that models the spatial reasoning (Turing's "mathematical intuition") leading to the original discoveries
in geometry and topology, which some readers will have re-lived at school, though it now seems that in the UK, and many other countries, children are deprived of that experience, for bad educational,
and sometimes bad meta-mathematical, reasons. (The Foreword by Robert Boyer in Shang-Ching Chou, et al. (1994) is very interesting in this context.)
Further evidence that no axiomatisation of Euclid's Elements can exhaustively account for our spatial mathematical knowledge comes from geometrical discoveries that are not derivable from (e.g.)
Hilbert's axiomatisation of geometry, such as the "neusis" construction, known to ancient mathematicians, which makes it easy to trisect an arbitrary angle -- which has been proved impossible in pure
Euclidean geometry. (For anyone curious: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/trisect.html also PDF)
Why chemistry?
The invitation to comment on the morphogenesis paper, plus my knowledge that after about 56 years, nothing in AI had so far modelled or replicated the ancient mathematical thinking leading up to and
beyond Euclid's Elements, left me wondering why Turing had turned from studying digital (discrete) computational processes to modelling continuous processes that can produce discrete spatial structures.
A conjectured answer: he might have been thinking about ways in which chemistry-based (sub-neural) computing mechanisms, with their mixture of discrete and continuous processing (emphasised later by
Schrödinger in (1944)), might fill gaps in our knowledge of how to model and explain ancient human mathematical intelligence (and also powerful spatial reasoning in many intelligent animals?).
That conjecture triggered a new project that I suggested Turing might have worked on had he lived several more decades, the "Meta-Morphogenesis" project: an attempt to characterise developments in
biological *information processing* since the earliest (chemistry-based) stages of evolution, in particular developments that made it possible to start with
-- a metaphysically relatively sparse planet (or universe), containing only lifeless physical materials, structures, and processes
and eventually
-- without any external intervention, apart from steady supplies of energy and occasional disruptions by asteroids, climate changes, volcanic eruptions, tidal forces, etc.:
-- populate a planet with an enormous and constantly changing variety of life forms, whose reproduction, development, and interaction with their environments depend on unwitting use of mathematical
structures, including structures required for use of information processing mechanisms that, in turn, generate new useful mathematical structures instantiated in mechanisms of reproduction, growth,
development, control, and interaction with diverse environments with differing mathematical structures,
-- thereby repeatedly adding new kinds of matter, structure, process, relationship, function, interaction, involving increasingly complex new metaphysical kinds, including needs, resources, growth,
feeding, disposing of waste, various kinds of reproduction, controlling (pressure, temperature, chemical levels, location, orientation, relations to other things, ....), perception (of various
kinds), recording/remembering, goals, goal formation, goal comparison, and many more!
Are there reasons for denying that these are metaphysical novelties, when they first occur?
Among the "many more" are kinds of perception, reasoning and action control, appropriate to interacting with 3D structures in spatial relationships with one another and with the perceivers/actors.
Try walking through a forest, or a thickly planted garden, or a cluttered room, attending to all the changes in visibility of surfaces, relative distances, angles, portions of surfaces visible,
angular sizes, and optical flow (texture flow), and other changing information fragments in different visible parts of the scene. As far as I know, there is nothing in current robotics, or
current neuroscience, that explains how such information is processed and used (consciously or unconsciously) in brains of intelligent animals, including pre-verbal toddlers, as illustrated in
this video:
There are many species with such capabilities but apparently only one developed an additional set of meta-cognitive capabilities enabling them to make discoveries about the spatial structures and
relationships involved in their percepts and actions that eventually (through steps that are still unknown) led to the mathematical (especially geometrical) discoveries assembled by Euclid and many others.
A metaphysical exercise
Describing this (mostly still unknown) history of (some of) this planet involves in part a metaphysical exercise: characterising evolution's production of new kinds of entity (including new
properties, relationships, structures, mechanisms, processes, capabilities, types of control, etc.) in which mathematical structures and relationships are involved.
Evolution's designs depend on mathematical constraints (necessities, impossibilities) as much as the designs of engineers and architects do.
Since the M-M project began, I've been trying to develop a multi-faceted theory of how, starting with the fundamental construction kit (FCK) provided by the physical universe and over time using
increasingly complex derived construction kits (DCKs) of many kinds evolution might have produced increasingly complex life forms, with increasingly complex physical bodies, using increasingly
complex forms of information processing in sensing and acting, then later in reflective thinking (e.g. mathematical thinking). If anyone is interested in any of these sub-topics, feel free to ask me
questions or send me things you have written.
Evolution's use of mathematics
One of the themes that emerged in the M-M project was the importance of mathematics: evolution constantly "discovered and used" mathematical structures, including giving them causal roles in
increasingly complex control systems, from control of motion through a gradient in a chemical soup to planning, designing, constructing, using, and maintaining increasingly complex products of human
engineering. For example, there are multiple biological uses of negative feedback control (homeostatic control) of temperature, pressure, chemical concentrations, direction of movement, etc., long
before the Watt governor was invented for controlling speed.
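The negative-feedback pattern mentioned above can be sketched concretely. The thermostat model below is an invented illustration (the gains and the heat-leak term are my assumptions, not a model of any particular biological mechanism), showing the core idea: corrective effort proportional to the error drives the system towards a stable equilibrium.

```python
# A minimal sketch of negative feedback (homeostatic) control of
# temperature. Plant model and gains are illustrative assumptions.

def thermostat_step(temp, target, gain=0.3, ambient=10.0, leak=0.1):
    """One control step: heating effort is proportional to the error
    (negative feedback), while heat leaks towards the ambient level."""
    error = target - temp
    heating = max(0.0, gain * error)  # only heat, never cool
    return temp + heating - leak * (temp - ambient)

temp = 12.0
for _ in range(200):
    temp = thermostat_step(temp, target=20.0)
# temp settles where heating balances the leak (17.5 with these gains)
```

Note the residual offset: with purely proportional feedback the system settles at 17.5, not 20.0, because some steady error is needed to sustain the heating that balances the leak -- a classic property of this simplest form of negative feedback.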
(These discoveries are sometimes misdescribed as uses of metaphors. Abstraction is more powerful and more general than use of metaphor, e.g. since it does not require constant reference back to
particular exemplars, and it is more amenable to creative extensions -- a topic for another time.)
Why are abstractions biologically useful?
One reason for this increasing (implicit) use of newly discovered mathematical abstractions is that explicit encoding for all possible shapes, sizes, forces to be used, and sensory configurations to
be responded to in members of a particular species at various stages of development and in various "adult" contexts, would generate a combinatorially explosive amount of genetic material, and would
require very much larger (exponentially larger, relative to the age and complexity of adults) search spaces to find the genetic configurations required. Genomes (and DNA sequences) would then have to
be very much larger, and evolution would be drastically slowed down, by comparison with what has actually occurred. (This claim needs to be made more precise, and evidence assembled.)
This phenomenon can be compared with the differences between writing a computer program using machine code and writing a program with the same functionality in a high level, e.g. object-oriented,
programming language. For simple designs machine code may be perfectly adequate (though not portable). For more complex designs the ability to use multiple previously designed abstractions that can
be instantiated as needed can lead to much faster development and fewer design errors. It also simplifies modifying old programs to cope with new problems by making use of the function/parameter
structure: old functions can be used with new parameters. In some cases dealing with novel problems leads to the discovery of new, more powerful, abstractions, with many new potential applications.
Software engineers (repeatedly) discovered all this in the twentieth century. I suggest that evolution made a similar discovery much earlier. (This needs a more precise argument, and assembling a lot
of evidence.)
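The function/parameter point can be made concrete. The converters below are invented examples: one parametrised abstraction, instantiated with different parameters, replaces what would otherwise be separately hand-coded special cases -- "old functions can be used with new parameters".

```python
# One re-usable parametrised design replacing many special cases.
# The example names are illustrative, not from the original text.

def make_scaler(factor, offset=0.0):
    """A parametrised abstraction: returns a specialised function."""
    def scale(x):
        return factor * x + offset
    return scale

# The same abstraction, instantiated twice with new parameters:
metres_to_cm = make_scaler(100)
celsius_to_fahrenheit = make_scaler(9 / 5, offset=32)
```

Each instantiation would otherwise be a separate hand-written routine; with the abstraction, adding a new converter costs only a pair of parameters, which is the analogue of the genome encoding a parametrised design rather than every adult configuration explicitly.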
Finding powerful re-usable mathematical abstractions that can be deployed in special cases by supplying parameters, can change an intractable problem into a solvable one, and it seems that evolution
did that many times. A very complex special case is the collection of abstractions (or more precisely, collection of collections of abstractions dealing with different developmental stages) acquired
in human evolution that have already been able to generate several thousand different human languages (including spoken, written, and signed languages).
A structured organism with multiple body functions and body parts needs complex control mechanisms, for coordinated growth of internal and externally visible body-parts, for controlling temperature,
fluid content, distribution of chemicals around the body, repair of damaged tissues, and for grasping, biting, walking, running etc., all continuing to function while the individual changes in size
and body-mass distribution, and types of action performed. For all the structural details at all stages of development to be specified in a genetically inherited collection of parameters would be
biologically impossible. (Perhaps I'll explain why in a later document, but it should be obvious.)
Using a collection of parametrised designs to solve the problem of specifying development and future possible behaviours requires evolution to be able to discover and use a collection of mathematical
abstractions/structures, long before thinkers evolved who could make such discoveries, and reflect on them, teach them, use them etc.
The combination of biological evolution and the operations of its products, in generating increasingly complex novel structures and control mechanisms requires a vast amount of creative metaphysical
causation, grounding increasingly complex and diverse forms of mathematically structured metaphysical novelty during evolution and use of biological designs, and in individual processes of
perceiving, learning, and acting.
Having a large and growing collection of abstractions available allows increasingly rapid creation of new more complex abstractions, a phenomenon that has been demonstrated dramatically in the
history of (metaphysically highly creative) human engineering. Moreover, software abstractions needed for controlling complex behaviours can be more rapidly recombined than hardware abstractions. I
suspect evolution "discovered" that long before we did.
The power of mathematical abstractions in engineering contexts originally discovered and used by biological evolution was re-discovered much later by humans, which is why so much education of
engineers now includes mathematics, and not just training in physical construction and manipulation (as may have sufficed for early makers of tools, clothing, dwellings, etc.).
In both biological evolution and recent engineering developments there has been a steady shift from increased physical complexity (e.g. larger cathedrals, bridges, cranes, etc.) to increased
metaphysical complexity adding new "layers" of structure, function, process, causation, control and types of information used and produced.
From structures to processes manipulating structures
Until recently, most of the mathematics used by humans in engineering (including architecture) was concerned with specifying structures that met physical requirements of size, strength, durability,
constrained and permitted relative motion of parts, etc. However, there has also been a steady increase in the use of machines that can replace humans in controlling processes that occur when
changeable structures (i.e. mechanisms) are used. Examples include music boxes, piano-roll players, clocks of various sorts, self-orienting windmills, Jacquard looms for weaving cloth with different
patterns, numeric calculators and punched card machines used for processing business data and controlling a wider range of machinery, long before the development of electronic computers.
All of that was a pale shadow of the mathematical sophistication in the discoveries made and used by biological evolution, without there being any intelligent agent observing, correcting, selecting,
planning, and using the results. Many previously invisible biological examples have been made visible as a result of using high powered optical magnifiers and more recently very high speed cameras,
e.g. to observe insect flight control processes, among many others.
If it were not for the many mathematical features and relationships constraining and supporting what is and is not possible for organisms, or parts of organisms (including microscopic and
sub-microscopic parts of organisms) and controlling selection among possibilities (size, shape, speed, molecular structure, etc.) evolution could not have produced so many life forms with such
diverse behaviours and diverse types of information processing.
A subset of human scientists and engineers now understand a lot more about what is possible in both natural and artificial control systems than was known a century ago, some of it documented in Boden.
There have been past attempts to identify transitions produced by biological evolution (Maynard Smith and Szathmary, 1995). However, most of evolution's transitions related to types of information,
types of information processing, and types of information processing mechanism, were not included in their list of transitions, partly, I suspect (recalling interactions with MS when I was at Sussex
University), because they were not well tuned-in to recent developments in computer science, software engineering and AI, especially symbolic, non-numerical, AI (e.g. planning, plan-debugging,
theorem proving, problem-solving, and computational linguistics), which accelerated soon after the development of electronic digital computers. (Maynard Smith had been an aeronautical engineer before
he became a biologist, so he had done a lot of numerical programming, which was extended in his simulated evolutionary experiments with changing numbers of predators and prey.)
Evolution's repeated creation of new forms of information processing
One of the most important aspects of these developments was the constantly expanding need for new more complex forms of information processing, involving new types of information used in more complex
ways, including types of information with rich mathematical structures. The need for such developments in biological evolution can be compared with growing, increasingly complex, requirements for new
kinds of information, based on new computational machines, languages, and tools, in human commerce, industry and engineering, including architectural developments going back thousands of years.
Compositionality is important in both contexts, biological evolution and human engineering (etc.), as discussed below in: Biological compositionality.
Attempting to document some of the most important examples of such transitions is a major goal of the M-M project. Unfortunately, I don't think the forms of information processing discovered so far
by human scientists, engineers, and mathematicians are sufficiently rich and varied. Turing's comment that mathematical intuition (unlike mathematical ingenuity) is beyond the scope of computing
machines can be construed as pointing out the need for research into novel forms of information processing, perhaps including the kinds of reasoning about continuous spatial structures and processes
that led up to Euclid's Elements.
I have been collecting many examples and further developments of these ideas (all presented in online papers at various levels of incompleteness(!)), including these:
Meta-Morphogenesis: Evolution of Information-Processing Machinery
The original proposal, published in the 2013 Elsevier collection:
Alan Turing - His Work and Impact
The current top level project overview (much expanded since the original):
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html (also pdf)
Inconclusive discussion of Turing's comments on mathematical intuition mentioned above:
Theme B. The timelessness of mathematics
(And spurious counterfactuals.)
This section is a reaction to questions raised during the conference about the possibility of counterfactual statements based on the possibility of some mathematical truths being falsehoods -- e.g.
what difference would it have made if 17 had not been a prime number? I'll try to explain why this form of question is based on false assumptions about mathematics.
In what follows I'll refer to the mathematical creativity of biological evolution, insofar as evolution's products reflect mathematical discoveries that are used in increasingly complex products of
evolution. This includes the development of perceptual and action control mechanisms that depend on new uses of mathematical properties of spatial structures and processes. But although we can ask
what would have happened if a particular mathematical structure had not been discovered and used, we cannot sensibly ask what would have happened if some true mathematical proposition had been false.
The relevant mathematical structures (e.g. geometric, topological, arithmetical, and logical structures) are not products of evolution, and evolution's products cannot alter those structures,
although the uses that are made of them can change.
A consequence of this is that it makes no sense to raise counterfactual questions about what might have happened if some mathematical truth had been false, e.g. if 5 + 3 had not been equal to 8, or
if there had been a largest natural number or if the spatial intersection of a circle and a tetrahedron had been a cube.
No mathematical counterfactuals
I think it makes no sense to refer to the possibility of a mathematical truth being false, as happened more than once during presentations, though I don't recall exactly who said what -- Sorry!
There are no coherent mathematical counterfactual conditionals, though there are different mathematical structures (infinitely many of them, only a subset studied by humans, or instantiated in
physical mechanisms, up to any time). The following two cases should therefore be distinguished:
B1: If a change is considered regarding a feature of mathematical structure M1, and that change is inconsistent with the remaining features of M1, then the proposed change is incoherent and cannot be
used to identify a context for counterfactual or any other kind of reasoning.
B2: If a change is considered regarding a feature of mathematical structure M1, and that change is not inconsistent with the remaining features of M1, then the change will simply identify another
structure M2, whose existence is not dependent on something *in M1* becoming false. However, insofar as such structures have complex combinations of features we can ask questions about which
structures would be identified by various changes to the specifications, e.g. adding, removing or modifying axioms, provided no contradiction is generated.
For example, the parallel axiom in Euclidean geometry can be replaced with alternatives that define various non-Euclidean geometries, e.g. elliptical, hyperbolic and projective geometries. So asking
what would be true in Euclidean geometry if the parallel axiom were false makes no sense. Instead we can ask: in which geometries is the parallel axiom false?
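One concrete fact about such an alternative structure: in spherical (elliptical) geometry, where the parallel axiom fails, a triangle's angle sum exceeds two right angles, and the excess measures its area (Girard's theorem). A small sketch using that standard formula:

```python
# Girard's theorem for spherical triangles: area equals the angle
# excess (a + b + c - pi) times R^2. A fact about spherical geometry,
# a different structure -- not a counterfactual about Euclidean geometry.
import math

def spherical_triangle_area(a, b, c, radius=1.0):
    """Area from the angle excess, for angles a, b, c in radians."""
    return (a + b + c - math.pi) * radius ** 2

# The octant triangle, with three right angles, covers 1/8 of the sphere:
octant = spherical_triangle_area(math.pi / 2, math.pi / 2, math.pi / 2)
```

On the unit sphere the octant's area comes out as pi/2, exactly one eighth of the total surface area 4*pi, confirming that the angle sum 3*(pi/2) genuinely exceeds pi in this geometry.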
Likewise there are different number systems with different arithmetic properties, including: the natural numbers (characterised by Peano axioms), integers (positive and negative numbers), modulo
arithmetic (integers modulo N, for each N), rationals, reals, transfinite ordinals, and various "strange" extensions, such as Abraham Robinson's non-standard arithmetic (which includes infinitely
many numbers > 0 and smaller than EVERY rational number > 0. These "infinitesimals" can be (suggestively) modelled by "Horn shaped" angles between a circle and a tangent: as a circle grows larger the
angle with the tangent grows smaller, but all such angles are smaller than every angle between two straight lines).
(The capitalised EVERY above prevents both rational and real numbers from satisfying Robinson's definition.)
Thanks to Google, I've just learnt that C.S. Peirce had yet another notion of infinitesimal.
So mathematical structures are not the sorts of things about which we can discuss counterfactual questions. If structure M1 is changed in such a way to produce supposed structure M2, then different
cases can be found, e.g.
-- If new features in M2 are inconsistent with the remaining unchanged features, then there is no mathematical structure M2 about which there are some new truths,
-- If there is no inconsistency (e.g. when a new axiom is added, consistent with all the original axioms in M1, or something is removed from M1 and replaced with a new axiom, without any
contradiction), then the resulting structure M2 is a different structure from M1, and the discussion will be about a new structure not about the old one with a hypothesised new feature.
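The B1/B2 distinction can be shown in a tiny concrete case: "5 + 3 = 8" cannot be false in ordinary arithmetic, but arithmetic modulo 7 is a different structure (case B2) in which addition simply behaves differently -- no truth of ordinary arithmetic has been overturned.

```python
# Case B2 in miniature: modular arithmetic is another structure,
# not a modified version of ordinary integer arithmetic.

def add_mod(a, b, n):
    """Addition in the integers modulo n."""
    return (a + b) % n

ordinary = 5 + 3          # 8, necessarily, in ordinary arithmetic
modular = add_mod(5, 3, 7)  # 1, in the integers modulo 7
```

Asking "what if 5 + 3 were not 8?" identifies no structure at all (case B1); asking "in which structure does adding 5 and 3 yield 1?" identifies the integers modulo 7 (case B2).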
There are many examples in the history of mathematics where mathematicians at first failed to notice the existence of alternatives to the structures they knew about.
E.g. Imre Lakatos (1976) discussed the history of Euler's proof that:
if a polyhedron has V vertices, E edges and F faces, then V - E + F = 2
Lakatos showed how discovery of various flaws in proofs led to discovery of a variety of previously unnoticed mathematical structures, and unnoticed subdivisions in old structures. E.g. does your
concept of a polyhedron allow a polyhedron to have a triangular tunnel through it?
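Both Euler's formula and Lakatos's point can be checked with standard counts. The "picture frame" polyhedron below (a square ring with square cross-section, i.e. a polyhedron with a tunnel through it) is a standard counterexample with V - E + F = 0 rather than 2; the counts used are the usual ones for those two solids.

```python
# Euler's formula V - E + F = 2 holds for simple polyhedra like the
# cube, but fails for a polyhedron with a tunnel through it -- the
# distinction Lakatos's history turns on.

def euler_characteristic(v, e, f):
    return v - e + f

cube = euler_characteristic(8, 12, 6)             # simple polyhedron
picture_frame = euler_characteristic(16, 32, 16)  # has a tunnel
```

The discrepancy is not a flaw in arithmetic but a sign that "polyhedron" covered more than one previously unnoticed mathematical structure, exactly the kind of subdivision Lakatos documents.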
Evolutionary boot-strapping and mathematical creativity
During biological evolution, the machines in use at any time have essential roles in increasing the complexity and capabilities, especially the information processing capabilities, of their
successors. Such mechanisms and their products have rich mathematical complexity. Different kinds of mathematics are relevant to different evolutionary layers. E.g. mathematical structures of
syntactic forms used in human languages were causally irrelevant to microbes, insects, plants, and most other life forms for millions of years.
However, the possibility of those mechanisms existed in an inert, causally empty, way from the very beginning. Such possibilities could not have been realised micro-seconds after the big bang. Some
possibilities have to "hang-around" (?) for a long time before they can do anything. Of course, that suggests that there are many possibilities whose time has not yet come, and perhaps many more
(infinitely many?) whose time will never come. But the M-M project is concerned only with a tiny subset, of great importance to biology, science, engineering, and philosophy as we know them. At the
core of that subset is a continually expanding variety of forms of information processing.
Among the realised possible information processing mechanisms, the M-M project seeks to identify especially key features of virtual machines running on the brain mechanisms produced by evolution
(enhanced by other factors including cultures and various feedback loops) that made it possible for ancient humans, who knew nothing about modern logic, to make, discuss and document mathematical
discoveries of great depth and power. Many of those ancient discoveries (e.g. concerning Pythagoras' theorem, and properties of prime numbers) are still in daily use world-wide, yet remain beyond the
power/scope of current AI mechanisms, and beyond what current neuroscience can explain.
The original discoveries could not have used modern formal axiom systems, or the Cartesian representation of geometry in arithmetic. I suspect the mechanisms grew out of mechanisms for spatial
reasoning shared with other species, including mechanisms for recognition and use of spatial necessities and impossibilities. But we may never know the exact history. Some examples of non-formal
reasoning about impossibilities and kinds of necessity can be found in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html (also pdf).
Unwitting mathematical competences
Careful observations, e.g. by Konrad Lorenz, Jean Piaget, James Gibson, Philippe Rochat, Annette Karmiloff-Smith, observant parents (often helped by video cameras) and very many others, provide clues
regarding closely related but unarticulated discoveries made by pre-verbal humans and other spatially intelligent species. (An example of a pre-verbal toddler making and using a non-trivial discovery
in 3-D topology is available in this short video with commentary. What sort of language could her brain have used to represent the possibilities, her intentions, the constraints, the problems, and
the solution? Does any neuroscientist have any ideas? Compare the spatial reasoning abilities of squirrels defeating "squirrel proof" bird-feeders in online videos.)
Different levels of evolutionary contribution to new metaphysical realisation
Clearly the spatial reasoning abilities in squirrels, humans and other apes are not found in microbes or insects, and once did not exist on this planet. At least two products of evolution need to be
explained: the evolution of biological information-processing mechanisms that "blindly" create and use new control mechanisms (e.g. negative feedback control), and later the evolution of types of
organism that can explicitly select some of those types of mechanism when they create new control mechanisms, e.g. humans designing self-orienting windmills.
An example of the first is evolution producing mechanisms for controlling temperature, osmotic pressure, motion (e.g. of roots) in the direction of increasing density of some needed chemical. An
example of the second is evolution producing organisms (only humans so far?) that are also able to invent control mechanisms using negative feedback.
The first case involves implicit mathematical discovery by biological evolution in creating new species, and the second involves explicit mathematical discovery by individual organisms (humans, and
possibly some other species) who think about and create mechanisms or select actions that are guided by negative feedback.
In the first case, the discoveries are implicitly encoded in genetic specifications and/or intermediate products of gene expression. In the second case the discoveries include more or less explicit
information structures specifying what has been discovered -- as in a child or non-human animal noticing a solution to a practical problem, such as preceding a translation with a rotation.
A third level, that depends on evolution, or development, of additional reflective (meta-cognitive) capabilities is achieved when individual organisms become aware of what they are doing and why it
is useful in some situations but not others. This requires not only solutions to be encoded but also analyses or explanations of properties of the solutions, encoded in thought processes, and
documents, diagrams, and other devices used by humans as thinking aids or for communication.
These different abilities require different mechanisms providing different levels of sophistication -- a topic to be expanded elsewhere. Several AI researchers have worked on providing machines with
related metacognitive debugging and learning mechanisms (one of the earliest being Sussman's program HACKER (1975)). But I don't know of any that come close to matching the spatial metacognitive
abilities required for ancient mathematical discoveries, or a child's understanding of the fact that it is impossible to separate two solid linked rings (without any hidden slits), or to link two
unlinked solid rings, without physical damage.
Yet more sophistication can occur in organisms that are able not only to reflect on their own successes and failures, but also to communicate what they have learnt to others, and engage in
collaborative problem solving, debugging, ontology extension, etc. In some species the appearance of communication may be simply a genetically selected capability that is not understood by the
individual with the capability, such as some of the unwitting teaching behaviours of carnivore parents feeding offspring.
There are different varieties of communication of acquired meta-knowledge across species, but as far as I know no other species comes close to the human achievements. Not all humans do, either
because of individual genetic deficiencies, dysfunctional cultures, poor environmental support, or other forms of bad luck!
These are all cases where evolution directly or indirectly brings into existence instances of new types of mechanism with new types of capability. They are not merely temporally new, but also involve
production of instances of types of information content that are not definable in terms of previously used types of information content. This requires "blind" creation by evolution of new more
powerful forms of representation with associated mechanisms that implicitly assign new kinds of semantics to the representations.
The ability to use negative feedback is much simpler than, and requires simpler mechanisms than, the ability to manipulate and use information about causes and effects of negative feedback.
Likewise, temporally extended forms of meta-cognition producing new insights are required for the ability to reflect on and compare different cases of use, and to understand why some are successful
but not others.
Yet another evolutionary change produces forms of communication between individuals that allow such discoveries to be communicated explicitly, thereby enormously speeding up and amplifying the
effects of evolution. (Human teachers vary enormously in their abilities to communicate or install such competences in their pupils.)
So the M-M project addresses metaphysical questions about evolution's ability to produce new types of mechanism, with new types of power, acquiring and using new types of competence and knowledge.
Those evolutionary discoveries of useful mathematical structures were replicated and extended more recently by (usually) professional (usually) highly educated mathematicians.
Many such humans extend mathematics as a largely introspective discipline driven by specially evolved motivational mechanisms, and not directly driven by practical needs, as discussed in Sloman
These human discoveries build on but are different from evolution's implicit mathematical discoveries shaped by deep practical (biological) engineering design problems, building on powers provided by
the physical universe, then later building on powers produced by evolution itself. (An example of positive feedback.)
One of the best known examples of evolution's powerful use of mathematical properties of the physical universe comes from Erwin Schrödinger's little book on the chemical basis of life (1944), which
shows how aspects of quantum physics, but not pre-quantum (e.g. Newtonian) physics, can explain how it is possible for chemical structures based on aperiodic mostly, but not completely, stable
molecular chains to meet requirements for a reliable reproductive encoding mechanism. (It was published a few years before Shannon's major publication and the discovery of the structure of DNA).
Our educational system does not normally produce thinkers able to move between different levels of analysis of function, type of mechanism and type of implementation, but this is a recurring
requirement in the M-M project.
Their measures then violate the Archimedean axiom for arithmetic:
for any natural numbers N1 and N2, if N1 < N2 there is some natural number K such that K×N1 > N2.
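The axiom can be checked computationally for particular cases. A small illustrative sketch (the function name is mine, not from the text):

```python
def archimedean_k(n1, n2):
    """Smallest natural number K with K * n1 > n2, for positive n1, n2.
    The Archimedean axiom guarantees such a K always exists."""
    if n1 <= 0 or n2 <= 0:
        raise ValueError("expects positive quantities")
    k = 1
    while k * n1 <= n2:
        k += 1
    return k
```

For instance, `archimedean_k(3, 10)` returns 4, since 4 × 3 = 12 > 10. A system of "measures" violating the axiom is one for which no such K exists, so this loop would never terminate.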
A great deal of mathematical discovery leading to new mathematical domains/structures has been triggered by "What if" questions. But these are not counterfactual questions of the kind used in mathematical premises of biological explanations, discussed below.
Evolved construction kits
The (still developing) theory of evolved *construction kits*, including fundamental and derived construction kits of many sorts, is discussed in Sloman (2014-18). These are not normally thought of as products of evolution, but they are essential for the production of the living things that are thought of as products, and biological construction kits
(sometimes called toolkits) are produced during evolution of their products.
Biological compositionality
A recent collection of ideas about evolution's (multiple) uses of compositionality.
Compositionality is often thought of only in the context of linguistic structure (e.g. in the Stanford Encyclopedia of Philosophy (2018), https://plato.stanford.edu/entries/compositionality/),
though its importance has also been recognized in computer science, in software designs and computer programs. What isn't often noticed is that compositionality is also essential for evolving,
developing, biological designs. That idea was triggered (in me) by having a short paper accepted for a conference on compositionality in September 2018 after which I expanded the paper in several
directions, including a section on Metaphysical compositionality:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/sloman-compositionality.html (also pdf)
Biologically Evolved Forms of Compositionality
Structural relations and constraints vs Statistical correlations and probabilities
Although Wilson neither proposed, nor has since endorsed, these applications of his ideas, I think that (unless I am subject to wishful hallucination) his notion of metaphysical causation is
naturally suited for use in evolutionary contexts (with appropriate extensions to match some of the details).
Moreover the evolutionary examples of *dynamic*, temporally extended, metaphysical grounding are partly analogous to his examples of grounding in a game of cricket, which can also be highly dynamic,
with changing structures and relationships e.g. changing fortunes in a tense final innings. I can't recall whether he also pointed out the possibilities for further metaphysical creativity when a
type of game interacts with a richer context, e.g. multi-game competitions, national pride, records being broken, and even (alas) spawning new kinds of criminality!
But a much larger variety of types of metaphysical creativity (including creating the possibility of mathematical minds) has occurred in our biosphere over millions of years than any philosopher has
so far documented, although I think Kant (unwittingly) contributed key ideas to this project in his (often misunderstood) philosophy of mathematics. (Defended against standard criticisms in my 1962
DPhil thesis.)
Turing seems to have reached related, but less well developed, ideas in 1938 when he distinguished mathematical intuition from mathematical ingenuity and suggested (but did not argue) that computers
can only achieve the latter. I've tried to tease out what he might have been thinking here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/turing-intuition.html (also pdf)
And finally all of this makes essential use of the notion of information, about which there are many theories, debates, confusions, and conflicting intuitions.
There are two central notions of information:
-- The fairly new, mainly syntactic, concept developed by Claude Shannon, to address a collection of technical (engineering) problems emerging from the telephonic/telegraphic services provided by
Bell Labs
-- The much older concept of information as *semantic content* clearly understood and used by Jane Austen in her novels, written more than 100 years before Shannon's paper, documented in this paper
using extracts from her novel "Pride and Prejudice": http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.html (also pdf)
[I am not the first person to criticise misuses of Shannon's brilliant work (he wasn't confused about this) but I don't know if anyone else has invoked Jane Austen as a key witness. If so, please let
me know.]
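The contrast between the two notions can be made concrete: Shannon's measure depends only on symbol statistics, so rearranging a sentence's characters destroys its semantic content while leaving its entropy unchanged. A small illustrative sketch:

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(text):
    """Shannon entropy of the empirical symbol distribution of `text`."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * log2(c / n) for c in counts.values())

sentence = "it is a truth universally acknowledged"
scrambled = "".join(sorted(sentence))  # same symbols, meaning destroyed

# Identical symbol statistics => identical Shannon entropy, even though
# only one of the two strings carries Austen-style semantic content.
assert abs(entropy_bits_per_symbol(sentence)
           - entropy_bits_per_symbol(scrambled)) < 1e-9
```

This is the sense in which Shannon's concept characterises properties of information *vehicles*, not information *contents*.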
All this is a complex story with a lot of different interlocking strands. I don't really expect anyone to have time to look at all of it, but I'll be very pleased to receive criticisms and
suggestions for improvement of any of it.
Meta-morphogenesis: A related project:
The Meta-Morphogenesis (M-M) Project
is a combined philosophy+science, Kant- and Turing-inspired, project, using ideas from several scientific and philosophical disciplines, combined with mathematics and AI, aiming to explain how it is
possible for a purely physical universe to produce, over time, a huge variety of life forms with increasingly sophisticated information processing capabilities, including some with abilities to make
topological and geometrical discoveries like those in Euclid's Elements. Some other species with considerable spatial intelligence may be using a previously evolved subset of those mechanisms.
The axioms and postulates listed by Euclid were not arbitrary (unjustified) assumed starting points for reasoning, as is permitted in modern formal systems. Rather they were products of deep
evolutionary discoveries, probably built on evolved spatial reasoning capabilities shared with other intelligent species.
It is still not known what the spatial reasoning mechanisms are that humans and other animals deploy. The mechanisms enabling humans to discover those axioms and postulates were products of complex,
largely unknown, mechanisms that did not exist when this planet first formed, but were later brought into existence (caused to exist?) by a mixture of physical, chemical, and later biological
mechanisms -- a metaphysical bootstrapping process that currently surpasses all other known, or human-designed, bootstrapping processes.
(For a brief survey of varieties of bootstrapping see https://en.wikipedia.org/wiki/Bootstrapping.)
Those evolved mechanisms and their products all included instances of abstract mathematical structures, that were discovered and used by evolution, but not created or modified by evolution.
Mathematical structures cannot change, though different structures can be instantiated at different times and places, including different geometrical structures, such as Euclidean and non-Euclidean
geometries, and different topologies.
The detailed operation of evolutionary mechanisms and patterns of reproduction were not discussed in the portion of the conference I attended, although there was some discussion of counterfactual
mathematical premises in biological explanations mentioned below. However, the above remarks about mathematical structures imply that it makes no sense to ask counterfactual questions about such
mathematical structures, such as "What would have happened if spatial containment had not been transitive, or if a certain number had not been a prime, or if a triangle had had four sides, or if the
set of natural numbers had been finite?"
We can ask what would have happened if some portion of the universe, or some product of evolution, had instantiated a different mathematical structure, but not what would have happened if the
structure had been different. E.g. we can ask what would have happened if the reproductive cycle of a certain species had not used a prime number of years, but we cannot ask what would have happened
if 17 had not been a prime number, or if 17 and 19 had not been co-prime.
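The reproductive-cycle example can be made concrete: if a prey species' emergence cycle and a predator's cycle are coprime, co-emergences are maximally rare, since their interval is the least common multiple of the two cycle lengths. A small sketch (the species numbers are illustrative; the 17-year periodical cicada is the usual example):

```python
from math import gcd

def co_emergence_interval(cycle_a, cycle_b):
    """Years between simultaneous emergences of two periodic species:
    the least common multiple of their cycle lengths."""
    return cycle_a * cycle_b // gcd(cycle_a, cycle_b)

# Coprime cycles (e.g. a 17-year cicada vs a 5-year predator) make
# co-emergences as rare as possible:
assert co_emergence_interval(17, 5) == 85
# A shared factor collapses the interval:
assert co_emergence_interval(16, 4) == 16
```

The counterfactual question that makes sense is whether a species could have instantiated a different (non-prime) cycle length, not whether 17 could have failed to be prime.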
This is not because the mathematical truths are all trivial matters of definition -- "analytic" in Kant's terminology -- since more than matters of definition are at stake. (I think Kant's philosophy of mathematics was basically correct, but needed some clarification and qualification, e.g. to allow that mathematical discovery processes are not infallible, even if they lead to necessary truths.)
(For more on Kant's view of mathematics see links above.)
More difficult questions in the M-M project are concerned with how information about mathematical structures came to be encoded in genomes and used in evolutionary designs, and how those encodings
influence processes of individual development, and what the consequences for individual organisms are. A (tentative, still incomplete) answer (co-developed with Jackie Chappell) refers to a complex
multi-layered process of gene-expression alluded to in the label "Meta-Configured Genome" Sloman&Chappell (2007-2018) used below. This involves quite different theories of learning and development
from those used in current neuroscience and AI.
Wilson on Metaphysical Grounding as Metaphysical Causation
I have found it useful to express some aspects of the M-M theory using ideas borrowed from Alastair Wilson linking grounding and causation, e.g. in
"Metaphysical Causation", Nous 52(4):723-751, 2018.
Also available here:
Around 2014 I first heard him talk about metaphysical causation and the slogan "Grounding = metaphysical causation" (G=MC),
and felt that that idea could usefully summarise ideas I had begun to develop about how biological evolution introduces astonishing novelty into a physical universe.
Metaphysics is often construed as a study of timeless truths, whereas Wilson discussed grounding in processes, as in a cricket match in which the rules of the game plus events on the field produce
new states: i.e. Wilson's metaphysics is not timeless. This is not a claim that human metaphysical theories change, but that what kinds of things exist, and can exist in particular times and places
can change, which calls for an explanation of what makes such metaphysical changes possible, in contrast with scientific explanations of how some specific type actually came to have instances.
This can be contrasted with P.F. Strawson's ideas on "Descriptive Metaphysics", (1959). If I recall correctly, he defines descriptive metaphysics as the study of a fixed collection of beliefs
common to all human beings at all times, which he contrasts with products of "Revisionary Metaphysics" which seeks to criticise and replace common ideas about what exists. I have some partly
critical comments on Strawson's ideas here:
(This is written from memory of reading Strawson a long time ago. Corrections and criticisms welcome.)
In contrast with Strawson, Wilson explicitly discusses metaphysical processes of change (e.g. processes in a cricket match). These ideas can be extended to metaphysical changes of far greater
significance (e.g. metaphysical changes in the processes of biological evolution). This is not a matter of "revisionary metaphysics" in which human metaphysical claims change. Rather Wilson
acknowledges that there are metaphysical processes in which new kinds of entity come to exist -- unless I have misunderstood.
I have been using (not, I hope, mis-using!) his ideas in discussing the metaphysical creativity of biological evolution: the most creative mechanism known in our universe. Although mechanisms of
biological evolution cannot change the variety of possibilities (possible organisms, behaviours, products of organisms, etc.) supported by the physical universe, it can have a massive impact on the
reachability of some of them, or the time required to reach them. When the earliest life forms were (somehow) produced on this planet the variety of new life forms that were reachable by physical
chemical processes within a century was minuscule compared with the variety of new life forms reachable within a century now, which includes not only new species capable of being produced by natural
processes in ecologically rich portions of the planet, but also new species capable of being produced by intended or unintended human intervention.
Soon after the earliest life forms appeared, only very simple life forms could be reached in a short time, whereas now the variety of new forms that could be reached within (e.g. a century) includes
new forms with many more degrees of complexity, including new plants and animals. This is partly because of the existence of many evolved biological construction kits.
Evolution constantly uses aspects of a range of possibilities supported by the universe at a particular time to extend the range of possibilities supported -- a positive feedback loop that enables
the rate of evolutionary change to be substantially speeded up, using a steadily increasing supply of new construction kits, as discussed in Sloman (2014-18).
Insofar as biological information processing mechanisms use chemical processes, in which there are both discrete changes (e.g. formation and unlocking of chemical bonds) and continuous changes, e.g.
folding, twisting, moving together, moving apart, the biological machinery is not restricted to discrete processes as Turing machines and digital computers are, unlike so-called analog computers,
most of which have been displaced by approximately equivalent but much cheaper and more reliable digital computers, where exact equivalence is not required. For examples and discussion see https://
Evolution's mathematical creativity
Modified 9 Feb 2019
Biological evolution exhibits mathematical creativity, insofar as evolution produces and uses instances of increasingly complex mathematical structures and also produces increasingly complex
mechanisms for discovering and using new mathematical structures and relationships. In that way, biological evolution has mostly been far in advance of human engineers. (I am not suggesting that
Wilson agrees with any of this: however, these examples of evolution producing new types of organism, with new types of capability, seem to me to illustrate his notion of metaphysical causation, at
least as well as his examples of metaphysical causation during a cricket match.)
N.B. I am not claiming that evolution produces any mathematical structures. Mathematical structures exist "timelessly". However, from time to time, evolution does produce new instances of
mathematical structures that had not previously been instantiated in this universe (or on this planet).
It also produces new information processing mechanisms capable of discovering and using some of those mathematical structures. We can loosely think of an eternally available "cloud" of eternally
enduring mathematical structures, from which evolution selects and instantiates increasingly complex specimens. That process continually extends evolution's abilities to find and use new mathematical
structures. (More precisely, it continually extends the collection of evolutionary mechanisms that have been instantiated and used.) Is this what Plato was trying to say about mathematics???
N.B. I hope my constant reference to what evolution does is not taken to imply that I regard biological evolution as a kind of conscious agent performing intentional actions. Evolutionary processes
do wonderful things in the same sense of "do" as tornadoes do dreadful things. However, many products of evolution can perform intentional actions (e.g. mate selection) and that can greatly extend
what evolution achieves.
Neither should it be assumed that all the information processing done by products of evolution is done entirely inside the organisms. As remarked in chapters 6 and 8 of Sloman(1978), there are many
contexts in which humans and other intelligent animals use information stored and manipulated outside their bodies, e.g. in doing mathematics with the aid of diagrams, or aligning objects in order to
compare sizes, angles or shapes. (Unfortunately this discovery has provoked some researchers into denying the existence or use of internal information processing!)
Eventually (and surprisingly?) evolution (assisted by its previous products and aspects of the physical world) even instantiates designs for mathematical minds that can make mathematical discoveries
at far greater speeds than evolution itself can. It does this partly on the basis of mechanisms that select generic available design patterns during individual development on the basis of results of
previously instantiated selections. (The most obvious example is linguistic development, also discussed by Karmiloff-Smith (1992).)
This activity of a "meta-configured" genome Sloman&Chappell(in progress) allows an evolved specification for a collection of sub-genome patterns to be instantiated step-wise, partially under the
influence of the environment and its effects so far, a far more powerful form of individual development than any so far discovered in psychology, neuroscience or AI (partly identified by Annette Karmiloff-Smith).
Another illustration of this idea (also noted in the work of Noam Chomsky and other theoretical linguists) is the fact that language-generating aspects of the human genome can produce thousands of
very different languages, that differ in far more complex ways than height, limb-length ratios, eye-colour, muscular strength, etc. Currently investigated "deep learning" neural mechanisms, using a
multi-layered learning tower that exists from the start in each individual, lack the required mathematical creativity of a meta-configured genome which does not allow "higher levels of the genome" to
be instantiated until results of earlier learning are available to influence the instantiation of the higher level abstractions, which then seek novel structures using previously instantiated
patterns combined with new and old data. (This needs more detailed discussion. People familiar with research on language development and/or ideas of Karmiloff Smith may recognise some of what is
being claimed.)
A partially related technique in AI is Genetic Programming, which replaces a linear inherited genome with an inherited collection of tree-structured patterns of varying complexity, where sub-trees
represent more or less complex design fragments found useful in previous evolutionary discoveries, as summarised in
However, evolution's use of chemical structures that can be combined in richer ways during epigenesis may have significant consequences not yet worked out.
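The tree-structured representation behind Genetic Programming can be sketched minimally: expressions are trees, and crossover swaps subtrees (reusable design fragments) between parents. The encoding below, using nested tuples, is an illustrative sketch, not any particular GP system's API:

```python
import random

# An expression tree: ("add", left, right), ("mul", left, right),
# the variable "x", or a numeric constant.
def eval_tree(tree, x):
    if not isinstance(tree, tuple):
        return x if tree == "x" else tree
    op, left, right = tree
    a, b = eval_tree(left, x), eval_tree(right, x)
    return a + b if op == "add" else a * b

def subtrees(tree, path=()):
    """Yield (path, subtree) pairs -- candidate crossover points."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace(tree, path, new):
    """Return a copy of `tree` with the subtree at `path` replaced."""
    if not path:
        return new
    parts = list(tree)
    parts[path[0]] = replace(parts[path[0]], path[1:], new)
    return tuple(parts)

def crossover(parent1, parent2, rng):
    """Graft a random subtree of parent2 onto a random point of parent1."""
    p1, _ = rng.choice(list(subtrees(parent1)))
    _, s2 = rng.choice(list(subtrees(parent2)))
    return replace(parent1, p1, s2)

rng = random.Random(1)
t1 = ("add", "x", ("mul", "x", 2))   # x + 2x
t2 = ("mul", ("add", "x", 1), 3)     # (x + 1) * 3
child = crossover(t1, t2, rng)       # a new, well-formed expression tree
```

The point of the comparison above is that subtrees act like inherited design fragments: useful pieces can be recombined whole, rather than rediscovered mutation by mutation.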
This is part of a general research strategy of replacing hypothesised natural selection mechanisms that can only passively select between alternatives that happen to have turned up through random
mutations, with more powerful mechanisms that can generate and combine new sub-types of previous types of mechanism far more rapidly during individual development. E.g. instances of designs for
predators interact with instances of designs for their prey, enhancing already instantiated designs in members of both species.
That can be preceded by interactions between immature instances of the same design (through "play") in ways that help select instantiation options for a generic genome, in preparation for later
interaction with "the real thing" (real prey or predator). The development of language in humans also uses such meta-configured genomes, but in an even more complex, multi-stage process, that doesn't
involve risks of being eaten, etc. Sloman(2015).
This Document
This document presents some aspects of the M-M project that I think are closely related to the two projects represented at the conference, and could lead to extensions and/or corrections of some of
the ideas emerging from those projects and related work presented at the conference, especially links between physics, biology, mathematics, and varieties of consciousness, just as attending the
conference has helped me clarify the metaphysical presuppositions and implications of the M-M project. I welcome comments, criticisms and suggestions,
APPENDIX 1. Historical interlude: recent developments in computing
APPENDIX 2. Background to the M-M project: evolution's metaphysical creativity
A. Sloman (2018-Kant) Key Aspects of Immanuel Kant's Philosophy of Mathematics ignored by most psychologists and neuroscientists studying mathematical competences.
Margaret A. Boden, (2006), Mind As Machine: A history of Cognitive Science (Vols 1--2)
Shang-Ching Chou, Xiao-Shan Gao and Jing-Zhong Zhang (1994), Machine Proofs In Geometry: Automated Production of Readable Proofs for Geometry Theorems, World Scientific, Singapore,
Tibor Ganti, 2003. The Principles of Life,
Eds. E. Szathmáry, & J. Griesemer, (Translation of the 1971 Hungarian edition), OUP, New York.
See the very useful summary/review of this book by Gert Korthof:
J. J. Gibson, 1966, The Senses Considered as Perceptual Systems, Houghton Mifflin, Boston.
J. J. Gibson, The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, MA, 1979,
David Hilbert, (1899), The Foundations of Geometry,
Translated 1902 by E.J. Townsend, from 1899 German edition. Available at Project Gutenberg, Salt Lake City,
Immanuel Kant, 1781, Critique of Pure Reason Translated (1929) by Norman Kemp Smith, London, Macmillan.
Annette Karmiloff-Smith, 1992,
Beyond Modularity,
A Developmental Perspective on Cognitive Science,
MIT Press (1992) -- My informal, unfinished, review:
Imre Lakatos, 1976, Proofs and Refutations, Cambridge University Press, Cambridge, UK,
George Lakoff and Rafael E. Nunez, 2000, Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being, Basic Books, New York, NY,
C. Maley and G. Piccinini, 2013, Get the Latest Upgrade: Functionalism 6.3.1. Philosophia Scientiae, 17(2), 1-15.
J. Maynard Smith and E. Szathmary, 1995, The Major Transitions in Evolution Oxford University Press, Oxford, England:,
Otto Mayr (1970) The Origins Of Feedback Control, MIT Press, ISBN 9780262130677
Erwin Schrödinger (1944) What is life? CUP, Cambridge,
I have an annotated version of part of this book here
A. Sloman [1978, revised], The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind.
A. Sloman, 2009-2019, Architecture-Based Motivation vs Reward-Based Motivation, Originally published in Newsletter on Philosophy and Computers,
American Philosophical Association, 09, 1, pp. 10--13, Newark, DE,
Online version much revised and extended
A. Sloman, (2013) Virtual Machine Functionalism (The only form of functionalism worth taking seriously in Philosophy of Mind and theories of Consciousness),
A. Sloman (2015), Evolution of minds and languages. What evolved first and develops first in children: Languages for communicating, or languages for thinking (Generalised Languages: GLs)?, Online
presentation, School of Computer Science, University of Birmingham, UK,
A. Sloman (2016-creative), The Creative Universe (Draft discussion paper).
Aaron Sloman and Jackie Chappell (2007-2018) The Meta-Configured Genome (work in progress)
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-configured-genome.html (also pdf)
Aaron Sloman (2014-18), Construction kits for evolving life
[In progress. Begun Nov 2014]
P. F. Strawson, 1959, Individuals: An essay in descriptive metaphysics, Methuen, London,
Galen Strawson, 2019, A hundred years of consciousness: a long training in absurdity, Estudios de Filosofia, 59, pp. 9--43, https://aprendeenlinea.udea.edu.co/revistas/index.php/estudios_de_filosofia
Gerald J. Sussman, (1975) A computational model of skill acquisition, American Elsevier, San Francisco, CA,
D'Arcy Wentworth Thompson (1917/1992) On Growth and Form.
The Complete Revised Edition (Dover Books on Biology) Originally published 1917. http://www.amazon.com/On-Growth-Form-Complete-Revised/dp/0486671356/ref=cm_cr_pr_orig_subj
A. M. Turing (1938), (Published 1939) Systems of Logic Based on Ordinals, Proc. London Mathematical Society, pp. 161-228, https://doi.org/10.1112/plms/s2-45.1.161
A. M. Turing, (1950) Computing machinery and intelligence, Mind, 59, pp. 433--460, 1950,
(reprinted in many collections, e.g. E.A. Feigenbaum and J. Feldman (eds) Computers and Thought McGraw-Hill, New York, 1963, 11--35),
WARNING: some of the online and published copies of this paper have errors, including claiming that computers will have 109 rather than 10^9 bits of memory. Anyone who blindly copies that error
cannot be trusted as a commentator.
A. M. Turing, (1952), 'The Chemical Basis Of Morphogenesis', in
Phil. Trans. R. Soc. London B 237, 237, pp. 37--72.
(Also reprinted(with commentaries) in S. B. Cooper and J. van Leeuwen, EDs (2013)). A useful summary for non-mathematicians is
Philip Ball, 2015, Forging patterns and making waves from biology to geology: a commentary on Turing (1952) `The chemical basis of morphogenesis', Royal Society Philosophical Transactions B, http://
Alastair Wilson(2018) "Metaphysical Causation", Nous 52(4):723-751
Appendix 1. Historical interlude: recent developments in computing
In the early days of electronic computing, most computer programming done by scientists and engineers (though not Turing) was numerical, using languages like Fortran, whereas the last 70 years or so
has seen a steady increase in non-numerical forms of computation including manipulation of symbols, syntactic forms for complex symbolic structures, increasingly complex forms of computer programs
for specifying and manipulating structures (including program-generating programs, i.e. compilers), management of networks of such systems (e.g. internet-based email, reservation systems, marketing
systems, educational systems, game-playing programs, program-generating systems, and many more).
One of the early developments in that direction, inspired by the work of Grace Hopper, was development of COBOL an English-like language for business applications, while around that time symbolic
programming languages for AI were being developed, e.g. LISP in the USA and POP2 in Edinburgh, followed by a flood of different languages soon after.
The recent (apparent) successes of "deep learning" techniques involved a shift in the reverse direction (back to number-crunching), but I expect its deep limitations will become increasingly evident
in the next decade or so, e.g. because statistical reasoning cannot lead to conclusions about what is impossible, or necessarily the case, which underlie ancient mathematical discoveries and many
aspects of spatial reasoning in humans and other animals. (as Kant pointed out: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/kant-maths.html .)
Evolution's symbolic specification languages (still mostly unknown) were far more powerful and made possible, among other things, the later development of thousands of (non-numerical) human languages
for thinking, communicating, and reasoning, followed (later?) by processes of mathematical discovery leading up to Euclid's Elements, using information processing mechanisms that have not yet been
replicated in AI (or explained by neuroscience).
NB No human language could be innate, not even French. Moreover, human languages cannot be acquired solely by learning from competent speakers, otherwise the process could never have started.
Therefore evolution produced language creation mechanisms not language learning mechanisms. But the creation is collaborative and the majority will normally be ahead of newcomers. A stunning example
was the creation of a new sign language by deaf children in Nicaragua, where there was no pre-existing community of experts.
A short video report is here (BBC?):
I know of no theory/model of learning that can explain or replicate such processes. The metaphysical creativity of AI researchers and cognitive scientists still lags far behind the creativity of biological evolution.
APPENDIX 2. Background to the M-M project: evolution's metaphysical creativity
Ancient humans made sophisticated mathematical discoveries (with properties partly identified by Kant, as discussed in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/kant-maths.html (also pdf
). Many of those mathematical structures had not been instantiated when life forms first emerged. The ancient discoveries in geometry, topology and number theory are still in use thousands of years
later. Although mathematical structures and truths are timeless (as discussed below), the information-processing mechanisms that have been instantiated, and the mathematical structures used, can
change, steadily increasing the information processing powers available to evolved species.
NOTE: "Information" is here being used in the sense of Jane Austen, rather than Claude Shannon, as explained in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.html (also pdf).
Shannon information refers merely to properties of information vehicles, not information contents. There are complex relationships between them.
"Information-processing" here refers not to manipulation of bit-patterns, which are merely useful vehicles for information storage, transmission and use in a subset of current artefacts produced by
humans. Contrast the rich, rapidly changing, three-dimensional information used by a bird flying through branches and foliage to land safely on its nest, using spatial intelligence unmatched by
anything in current AI, and unexplained by current neuroscience. Biological information processing is much older, much richer, and more varied in its forms, contents and mechanisms -- many not yet
understood (e.g. sub-neural, chemistry-based, brain mechanisms). (James Gibson began to characterise some of this complexity in his work on perception of affordances, (1979), but barely scratched the
surface of the topic.)
Biological information processing mechanisms include not only physical/chemical machinery but relatively recently evolved virtual machinery that is still barely understood by most philosophers and
scientists studying minds and brains. A partial account of what's missing is in Maley & Piccinini(2013), extended in Sloman(2013).
A feature of the M-M project is the increasingly complex roles of multiple layers of concurrently acting biological virtual machinery performing information-processing tasks that repeatedly produce
new types of states and processes with new causal powers, involving new mathematical structures. Recently evolved virtual machines are so different from physical and chemical processes that they seem
to many thinkers to be incapable of being explained or modelled in physical machines. The scathing attack on modern philosophers of mind by Galen Strawson (2019) seems to be based on a dim
appreciation of some of the features of virtual machines, though he never mentions them or acknowledges that they can exist in complex running computer-based systems, e.g. the internet.
There's a lot more to be said about unnoticed features, including metaphysical features, of virtual machines, ignored by most philosophers because they underestimate the complexity and indirectness
of the relations between physical machines and the most complex virtual machines (e.g. the internet-based email virtual machine that now links minds, brains, phones, computers and a vast amount of
networking technology). Maley & Piccinini(2013) go further than most in the right direction, but even they miss some of the important points that emerge during personal experience of designing,
implementing, testing, debugging and extending complex virtual machines in which causation can go "upwards" from physical through virtual machinery, "sideways" within and between VMs and also
"downwards", e.g. from decisions in virtual machines to changes in physical behaviours.
This is clearly a type of metaphysical novelty/creativity/causation as worthy of philosophical study as any other, and more important than most, because of its deep potential engagement with hard
unsolved scientific (and metaphysical) problems about the nature and diversity of minds and how evolution was able to produce them.
Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham | {"url":"https://www.cs.bham.ac.uk/research/projects/cogaff/misc/evo-framephys.html","timestamp":"2024-11-13T02:25:43Z","content_type":"text/html","content_length":"98828","record_id":"<urn:uuid:dfa4d825-71de-4f73-8238-ef8188cbe1b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00158.warc.gz"} |
Circumcircle of a regular polygon
This online calculator determines the radius and area of the circumcircle of a regular polygon
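The underlying geometry is a one-line formula: for a regular polygon with n sides of length s, the circumradius is R = s / (2·sin(π/n)), and the circumcircle's area is πR². A minimal sketch in Python (the function name is mine, not from the calculator's source):

```python
import math

def circumcircle(n_sides, side):
    """Circumradius and circumcircle area of a regular polygon.

    R = s / (2 * sin(pi / n)); area = pi * R^2.
    """
    r = side / (2 * math.sin(math.pi / n_sides))
    return r, math.pi * r ** 2

# A regular hexagon with side 1 has circumradius exactly 1.
r, area = circumcircle(6, 1.0)
```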
This content is licensed under Creative Commons Attribution/Share-Alike License 3.0 (Unported). That means you may freely redistribute or modify this content under the same license conditions and
must attribute the original author by placing a hyperlink from your site to this work https://planetcalc.com/1054/. Also, please do not modify any references to the original work (if any) contained
in this content.

PLANETCALC, Circumcircle of a regular polygon | {"url":"https://planetcalc.com/1054/?license=1","timestamp":"2024-11-03T20:31:16Z","content_type":"text/html","content_length":"34385","record_id":"<urn:uuid:9d599318-c452-49f0-8ac9-3c75500ec5a2>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00688.warc.gz"} |
new way to make quadratic equations easy
08-04-2021, 04:57 AM
Post: #9
Benjer Posts: 51
Member Joined: Apr 2017
RE: new way to make quadratic equations easy
I was surprised to learn that roots for quadratic equations could be solved using a slide rule, described in the manual for the Post Versalog:
Quote:If any quadratic equation is transformed into the form x^2 + Ax + B = 0, the roots or values of the unknown x may be determined by a simple method, using the slide rule scales. We let the
correct roots be -x_1 and -x_2. By factoring, (x + x_1)(x + x_2) = 0. The terms -x_1 and -x_2 will be the correct values of x providing the sum x_1 + x_2 = A and the product of x_1 * x_2 = B. An
index of the CI scale may be set opposite the number B on the D scale. With the slide in this position, no matter where the hairline is set, the product of simultaneous CI and D scale readings or
of simultaneous CIF and DF scale readings is equal to B. Therefore it is only necessary to move the hairline to a position such that the sum of the simultaneous CI and D scale readings, or the
sum of the simultaneous CIF and DF scale readings, is equal to the number A.
As an example, the equation x^2 + 10x + 15 = 0 will be used. We set the left index of CI opposite the number 15 on the D scale. We then move the hairline until the sum of CI and D scale readings,
at the hairline, is equal to 10. This occurs when the hairline is set at 1.84 on D, the simultaneous reading on CI being 8.15. The sum x_1 + x_2 = 1.84 + 8.15 = 9.99, sufficiently close to 10 for
slide rule accuracy. Roots or values of x are therefore -x_1 = -1.84 and -x_2 = -8.15. Obviously the values of x solving the equation x^2 – 10x + 15 = 0 will be +1.84 and +8.15 since in this case
A is negative, equal to -10.
As a second example the equation x^2 – 12.2x - 17.2 = 0 will be used. The left index of CI is set on 17.2 on the D scale. Since this number is actually negative, -17.2, and since it is the
product of x_1 and x_2, obviously one root must be positive, the other negative. Also the sum of x_1 and x_2 must equal -12.2. We therefore move the hairline until the sum of simultaneous scale
readings is equal to -12.2. This occurs when the hairline is set on 13.5 on the DF scale, the simultaneous reading at the hairline on CIF being 1.275. x_1 is therefore -13.5 and x_2 is 1.275,
since x_1 + x_2 = -13.5 +1.275 = -12.225, sufficiently close to -12.2 for slide rule accuracy. The values of x solving the equation are therefore -x_1 = 13.5 and -x_2 = -1.275.
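The slide-rule procedure above is a mechanical search for two numbers with a fixed product B and sum A. The same pair can be found directly, since x_1 and x_2 are the roots of t² − At + B = 0; a quick sketch in Python (rather than on a slide rule):

```python
import math

def factor_pair(a, b):
    """Find x1, x2 with x1 + x2 = a and x1 * x2 = b.

    These are the roots of t^2 - a*t + b = 0, so the solutions of
    x^2 + a*x + b = 0 are -x1 and -x2.
    """
    disc = a * a - 4 * b
    root = math.sqrt(disc)
    return (a + root) / 2, (a - root) / 2

# x^2 + 10x + 15 = 0: the manual reads roughly 8.15 and 1.84 on the scales.
x1, x2 = factor_pair(10, 15)
```

The manual's readings of 8.15 and 1.84 agree with these values to slide-rule accuracy.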
Messages In This Thread
RE: new way to make quadratic equations easy - Benjer - 08-04-2021 04:57 AM
User(s) browsing this thread: 1 Guest(s) | {"url":"https://www.hpmuseum.org/forum/showthread.php?mode=threaded&tid=17296&pid=150697","timestamp":"2024-11-04T17:33:40Z","content_type":"application/xhtml+xml","content_length":"21661","record_id":"<urn:uuid:b2153f91-5ff7-46ae-8cf7-385afc006559>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00432.warc.gz"} |
Homogeneous Linear System of Coupled Differential Equations
This Demonstration shows the solution paths, critical point, eigenvalues, and eigenvectors for a system of homogeneous first-order coupled equations of the form dx/dt = ax + by, dy/dt = cx + dy.
The origin is the critical point of the system, where dx/dt = 0 and dy/dt = 0. You can track the path of the solution passing through a point by dragging the locator. This is not a plot in time like a typical vector path; rather it follows the x and y solutions. A variety of behaviors is possible, including that the solutions converge to the origin, diverge from it, or spiral around it. | {"url":"https://www.wolframcloud.com/objects/demonstrations/HomogeneousLinearSystemOfCoupledDifferentialEquations-source.nb","timestamp":"2024-11-05T09:27:04Z","content_type":"text/html","content_length":"224056","record_id":"<urn:uuid:e64bade5-a3a0-417e-acbe-7e396e3f9470>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00860.warc.gz"} |
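The classification the Demonstration illustrates comes straight from the eigenvalues of the coefficient matrix: both real parts negative means the paths converge to the origin, both positive means they diverge, and a nonzero imaginary part gives spirals. A sketch with an assumed example system (the coefficients are illustrative, not taken from the Demonstration itself):

```python
import cmath

# dx/dt = a*x + b*y, dy/dt = c*x + d*y for an assumed coefficient matrix.
a, b, c, d = 0.0, 1.0, -2.0, -3.0

# Eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
# lambda^2 - (a + d)*lambda + (a*d - b*c) = 0.
trace, det = a + d, a * d - b * c
disc = cmath.sqrt(trace * trace - 4 * det)
lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2

# Both real parts negative: every path converges to the origin (a stable node).
stable = lam1.real < 0 and lam2.real < 0
```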
Wind Turbine Grid Tied Divert Load Sizing
He we discuss how to choose a proper divert (dump) load set-up for a wind turbine grid tie system. Different grid ties divert/dump at different voltages and the dump loads must be matched to this
voltage. Here we break down simple and easy ways to design the right dump load system for your rig!
The purpose of a divert load in a grid tied system is twofold. First, it provides a method to prevent a turbine from having its RPM run away unrestrained if the grid ever goes down. Secondly, all
grid tie systems have a maximum DC input voltage they can tolerate, above which a load needs to be activated to prevent the turbine’s RPM from overspeeding.
It can be a bit complicated to determine what components to buy for a given combination of divert voltage, turbine amps, and divert power. If done incorrectly the divert function may not perform as
required; it can overheat and fail and/or it can actually damage the turbine.
These three aspects need to be known:
Divert Voltage
This is the maximum Grid Tie Inverter (GTI) operating voltage. For instance a Sun-G 22-60V GTI would activate its divert relay at 60 volts.
Turbine Amps
This is the maximum sustained amps at which the turbine can operate at continuous duty. This value can be obtained from the supplier. For the Windy Nation WindTura 750 PMA for instance, this is 40 amps.
Divert Power
This is the maximum sustained watts that the turbine can operate at continuously. This value can be obtained from the supplier. For the WindTura 750 PMA, this is 1100 watts.
With these values, we can calculate the load resistance in Ohms that is needed as well as the wattage level required for the components.
For example, if a GTI has a 60 volt divert voltage and the Divert Power is 1100 watts, then using Ohm’s law we can solve for Ohms.
P = V^2/R, therefore R=V^2/P, so R= 60^2/1100 = 3.27 ohms.
Please note that the letters above represent the following:
• P is Power in Watts
• V is Voltage in Volts
• R is Resistance in Ohms
• I is Current in Amps
We also can check to see if this size resistor will cause more amps to flow than what the PMA can tolerate. Again, using Ohm’s law; I = V/R therefore I = 60/3.27 = 18.33 amps. Since a WindTura 750
PMA can tolerate 40 amps, this 18.33 amp level is very safe to use.
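Both calculations are one-liners from Ohm's law; a sketch using the WindTura 750 numbers from the example above:

```python
def divert_sizing(divert_voltage, divert_power):
    """Divert load resistance (ohms) and resulting current (amps).

    R = V^2 / P from P = V^2 / R, then I = V / R.
    """
    r = divert_voltage ** 2 / divert_power
    return r, divert_voltage / r

# 60 V grid tie inverter, 1100 W maximum sustained turbine power.
ohms, amps = divert_sizing(60, 1100)
```

The 18.33 amp result should then be checked against the turbine's 40 amp limit, exactly as in the text.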
The current value in amps that we just calculated (18.33 Amps) also indicates what the divert relay, circuit, and wiring will have to safely operate at. Check to make sure your divert system can
operate well above this level of current for extended periods. Use the National Electrical Code to select the correct wire size.
The following tables illustrate the divert resistor size in ohms and the related amps for various Divert Voltage and Divert Power levels. See if you can find the value for ohms and amps discussed in
the above example.
Divert Resistance
(Example: Suppose your GTI diverts at 60 volts and the maximum sustained power your wind turbine can produce is 1100 Watts. Then you can use the chart below to see that you need a 3.27 Ohm resistor.)
Divert Current
(not to exceed turbine maximum sustained amps)
(Example: Suppose your GTI diverts at 60 volts and the maximum sustained power your wind turbine can produce is 1100 Watts. Then you can see if you use a 3.27 ohm resistor that the divert current
(the current traveling through the dump load) is 18.33 amps. You must make sure that this 18.33 amps is well below the maximum sustained current your wind turbine can handle.)
Now that we know the resistance and current levels involved, we then need to configure a set of resistors that can provide the target resistance (Ohms) and power dissipation (Watts) levels. Let’s
first concentrate on getting the Ohms right. There are 3 situations:
Resistors in series
In this case, the ohm values are simply added together.
R(total) = R[1] + R[2] + R[3]
Resistors in parallel
This is a bit more complicated, but the basic formula involves adding up the reciprocal of each value and then taking the reciprocal of the sum:
1/ R[total] = 1/R[1] + 1/R[2] + 1/R[3] +1/R[4]
For instance a 1 ohm, 2 ohm, and 3 ohm resistor wired in parallel = 0.5454 ohms
Resistors in series and parallel
This is simply a combination of the above 2 configurations.
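Both rules are easy to sketch in code, and composing them handles the series-and-parallel case:

```python
def series(*resistors):
    """Total resistance of resistors in series: the simple sum."""
    return sum(resistors)

def parallel(*resistors):
    """Total resistance in parallel: reciprocal of the sum of reciprocals."""
    return 1 / sum(1 / r for r in resistors)

# The example from the text: 1, 2 and 3 ohm resistors in parallel.
r_par = parallel(1, 2, 3)          # about 0.5454 ohms
# Mixed case: that parallel bank wired in series with a 2.9 ohm resistor.
r_mixed = series(r_par, 2.9)
```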
It is important to get the total resistance value within 10% of the target value, and it can take a combination of off-the-shelf power resistors to accomplish this.
For instance, WindyNation sells three power resistor values; 0.73 ohms, 2.9 ohms, and 10.4 ohms. Depending on the target ohm value, one or more of these may be required.
If we take the 1100 Watt example of the WindTura 750 PMA operating at 60V and 18.33 amps, we know we need a resistor configuration of 3.27 ohms. Clearly there are multiple mathematical ways to get
close using the three resistors mentioned above. For example, two 0.73 ohm resistors wired in parallel which are then wired in series with one 2.9 ohm resistor. The 2 parallel resistors equal 0.365
ohms plus the 2.9 ohms creates 3.265 ohms of resistance. This seems great for the ohms part but we now have a huge new problem -- the power dissipation is inadequate.
Let’s do the math. The two parallel resistors have the capability to dissipate 600 watts, but Ohm's law indicates they will only dissipate P = I^2 x R, therefore P = 18.33^2 x 0.365 = 122 Watts, which is 61
Watts each, while the single 2.9 ohm resistor will have to then dissipate 18.33^2 x 2.9 = 974 Watts! Yes, the total is about 1100 watts but the 2.9 ohm resistor will burn up. Yikes!
The point is that we have to solve a small math puzzle to get the resistance to come within 10% of our goal, the total divert power to be met, and the power dissipation of each resistor to be safely
within its capabilities.
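That sanity check — does each resistor stay within its wattage rating at the divert current — can be automated. A sketch of the failing configuration above (two 0.73 ohm resistors in parallel, in series with one 2.9 ohm resistor, all rated 300 W, at 18.33 A):

```python
def dissipation(resistance, current):
    """Power dissipated by one resistance at a given current: P = I^2 * R."""
    return current ** 2 * resistance

current = 18.33                      # divert current from the 60 V / 1100 W example
bank = 1 / (1 / 0.73 + 1 / 0.73)     # two 0.73 ohm resistors in parallel = 0.365 ohm

p_bank = dissipation(bank, current)      # shared evenly by the parallel pair
p_each = p_bank / 2
p_single = dissipation(2.9, current)     # the lone 2.9 ohm resistor

# The 2.9 ohm resistor would have to shed far more than its 300 W rating.
overloaded = p_single > 300
```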
There are numerous power resistors on the market that can be used. Just follow the steps above.
We sell one type of power resistor that is available in 3 values. These are 300 Watt power resistors. The following resistor combination table has been developed for the 30 volt and the 60 volt
divert voltage cases using these resistors. Type A = 0.73 Ohm, Type B = 2.9 Ohm, and Type C = 10.4 Ohm.
(Example: Suppose you are using the 30V GTI and the maximum sustained power your wind turbine can produce is 600 Watts. Then you can use the chart below to see two B resistors (two 2.9 Ohm resistors)
wired in parallel will be an adequate divert load for your wind turbine.)
Learn more about WindyNation Dump Loads and Resistors! | {"url":"https://www.windynation.com/blogs/articles/wind-turbine-grid-tied-divert-load-sizing","timestamp":"2024-11-02T21:44:05Z","content_type":"text/html","content_length":"210840","record_id":"<urn:uuid:be82f512-3bb1-4301-b99f-da68d064d66b>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00022.warc.gz"} |
The Plot Browser :: MATLAB Plotting Tools (Graphics)
The Plot Browser provides a legend of all the graphs in the figure. It lists each axes and the objects (lines, surfaces, etc.) used to create the graph.
For example, suppose you plot an 11-by-11 matrix z. The plot function creates one line for each column in z.
When you set the DisplayName property, the Plot Browser indicates which line corresponds to which column.
If you want to set the properties of an individual line, double-click on the line in the Plot Browser. Its properties are displayed in the Property Editor, which opens on the bottom of the figure.
You can select a line in the graph, and the corresponding entry in the Plot Browser is highlighted, enabling you to see which column in the variable produced the line.
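A minimal sketch of that workflow (it assumes an 11-by-11 matrix z is already in the workspace; the loop and label text are illustrative):

```matlab
% Plot each column of z as its own line, then label the lines so the
% Plot Browser lists them by column rather than as anonymous handles.
h = plot(z);
for k = 1:numel(h)
    set(h(k), 'DisplayName', sprintf('Column %d', k));
end
```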
Controlling Object Visibility
The check box next to each item in the Plot Browser controls the object's visibility. For example, suppose you want to plot only certain columns of data in z, perhaps the positive values. You can
uncheck the columns you do not want to display. The graph updates as you uncheck each box and rescales the axes as required.
Deleting Objects
You can delete any selected item in the Plot Browser by selecting Delete from the right-click context menu.
Adding Data to Axes
The Plot Browser provides the mechanism by which you add data to axes. The procedure is as follows:
1. After creating the axes, select it in the Plot Browser panel to enable the Add Data button at the bottom of the panel.
2. Click the Add Data button to display the Add Data to Axes dialog.
The Add Data to Axes dialog enables you to select a plot type and specify the workspace variables to pass to the plotting function. You can also specify a MATLAB expression, which is evaluated to
produce the data to plot.
Selecting Workspace Variables to Create a Graph. Suppose you want to create a surface graph from three workspace variables defining the XData, YData, and ZData (see the surf function for more
information on this type of graph).
In the workspace you have defined three variables, x, y, and z. To create the graph, configure the Add Data to Axes dialog as shown in the following picture.
Using a MATLAB Expression to Create a Graph. The following picture shows the Add Data to Axes dialog specifying a workspace variable x for the plot's x data and a MATLAB expression (x.^2 + 3*x + 5)
for the y data.
You can use the default X Data value of index if you do not want to specify x data. In this case, MATLAB plots the y data vs. the index of the y data value, which is equivalent to calling the plot
command with only one argument.
© 1994-2005 The MathWorks, Inc. | {"url":"http://matlab.izmiran.ru/help/techdoc/creating_plots/plot_to5.html","timestamp":"2024-11-11T10:21:17Z","content_type":"text/html","content_length":"5982","record_id":"<urn:uuid:a804cf05-c0f7-4786-9acf-603c1cb159b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00229.warc.gz"} |
JSOSolvers.jl documentation · JSOSolvers.jl
This package provides a few optimization solvers curated by the JuliaSmoothOptimizers organization.
All solvers here are JSO-Compliant, in the sense that they accept NLPModels and return GenericExecutionStats. This makes it easy to benchmark them.
All solvers can be called like the following:
stats = solver_name(nlp; kwargs...)
where nlp is an AbstractNLPModel or some specialization, such as an AbstractNLSModel, and the following keyword arguments are supported:
• x is the starting point (default: nlp.meta.x0);
• atol is the absolute stopping tolerance (default: atol = √ϵ);
• rtol is the relative stopping tolerance (default: rtol = √ϵ);
• max_eval is the maximum number of objective and constraints function evaluations (default: -1, which means no limit);
• max_time is the maximum allowed elapsed time (default: 30.0);
• stats is a SolverTools.GenericExecutionStats with the output of the solver.
See the full list of Solvers. | {"url":"https://jso.dev/JSOSolvers.jl/stable/","timestamp":"2024-11-13T00:45:56Z","content_type":"text/html","content_length":"8939","record_id":"<urn:uuid:1ac1bec9-356d-4c9d-8e10-ad1b054fe3d8>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00609.warc.gz"} |
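For example (a sketch — it assumes the ADNLPModels package for building the model, and picks the lbfgs solver from this package):

```julia
using ADNLPModels, JSOSolvers

# Rosenbrock's function as an unconstrained AbstractNLPModel.
nlp = ADNLPModel(x -> (x[1] - 1)^2 + 100 * (x[2] - x[1]^2)^2, [-1.2; 1.0])

# Any solver in the package follows the same calling convention.
stats = lbfgs(nlp; atol = 1e-8, max_time = 30.0)
println(stats.status)
```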
2006 AMC 12B Problems
2006 AMC 12B (Answer Key)
Printable versions: • AoPS Resources • PDF
1. This is a 25-question, multiple choice test. Each question is followed by answers marked A, B, C, D and E. Only one of these is correct.
2. You will receive 6 points for each correct answer, 2.5 points for each problem left unanswered if the year is before 2006, 1.5 points for each problem left unanswered if the year is after 2006,
and 0 points for each incorrect answer.
3. No aids are permitted other than scratch paper, graph paper, ruler, compass, protractor and erasers (and calculators that are accepted for use on the test if before 2006. No problems on the
test will require the use of a calculator).
4. Figures are not necessarily drawn to scale.
5. You will have 75 minutes working time to complete the test.
Problem 1
What is $( - 1)^1 + ( - 1)^2 + \cdots + ( - 1)^{2006}$?
$\text {(A) } - 2006 \qquad \text {(B) } - 1 \qquad \text {(C) } 0 \qquad \text {(D) } 1 \qquad \text {(E) } 2006$
Problem 2
For real numbers $x$ and $y$, define $x\spadesuit y = (x + y)(x - y)$. What is $3\spadesuit(4\spadesuit 5)$?
$\text {(A) } - 72 \qquad \text {(B) } - 27 \qquad \text {(C) } - 24 \qquad \text {(D) } 24 \qquad \text {(E) } 72$
Problem 3
A football game was played between two teams, the Cougars and the Panthers. The two teams scored a total of 34 points, and the Cougars won by a margin of 14 points. How many points did the Panthers score?
$\text {(A) } 10 \qquad \text {(B) } 14 \qquad \text {(C) } 17 \qquad \text {(D) } 20 \qquad \text {(E) } 24$
Problem 4
Mary is about to pay for five items at the grocery store. The prices of the items are $\textdollar7.99$, $\textdollar4.99$, $\textdollar2.99$, $\textdollar1.99$, and $\textdollar0.99$. Mary will pay
with a twenty-dollar bill. Which of the following is closest to the percentage of the $\textdollar20.00$ that she will receive in change?
$\text {(A) } 5 \qquad \text {(B) } 10 \qquad \text {(C) } 15 \qquad \text {(D) } 20 \qquad \text {(E) } 25$
Problem 5
John is walking east at a speed of 3 miles per hour, while Bob is also walking east, but at a speed of 5 miles per hour. If Bob is now 1 mile west of John, how many minutes will it take for Bob to
catch up to John?
$\text {(A) } 30 \qquad \text {(B) } 50 \qquad \text {(C) } 60 \qquad \text {(D) } 90 \qquad \text {(E) } 120$
Problem 6
Francesca uses 100 grams of lemon juice, 100 grams of sugar, and 400 grams of water to make lemonade. There are 25 calories in 100 grams of lemon juice and 386 calories in 100 grams of sugar. Water
contains no calories. How many calories are in 200 grams of her lemonade?
$\text {(A) } 129 \qquad \text {(B) } 137 \qquad \text {(C) } 174 \qquad \text {(D) } 223 \qquad \text {(E) } 411$
Problem 7
Mr. and Mrs. Lopez have two children. When they get into their family car, two people sit in the front, and the other two sit in the back. Either Mr. Lopez or Mrs. Lopez must sit in the driver's
seat. How many seating arrangements are possible?
$\text {(A) } 4 \qquad \text {(B) } 12 \qquad \text {(C) } 16 \qquad \text {(D) } 24 \qquad \text {(E) } 48$
Problem 8
The lines $x = \frac 14y + a$ and $y = \frac 14x + b$ intersect at the point $(1,2)$. What is $a + b$?
$\text {(A) } 0 \qquad \text {(B) } \frac 34 \qquad \text {(C) } 1 \qquad \text {(D) } 2 \qquad \text {(E) } \frac 94$
Problem 9
How many even three-digit integers have the property that their digits, read left to right, are in strictly increasing order?
$\text {(A) } 21 \qquad \text {(B) } 34 \qquad \text {(C) } 51 \qquad \text {(D) } 72 \qquad \text {(E) } 150$
Problem 10
In a triangle with integer side lengths, one side is three times as long as a second side, and the length of the third side is 15. What is the greatest possible perimeter of the triangle?
$\text {(A) } 43 \qquad \text {(B) } 44 \qquad \text {(C) } 45 \qquad \text {(D) } 46 \qquad \text {(E) } 47$
Problem 11
Joe and JoAnn each bought 12 ounces of coffee in a 16-ounce cup. Joe drank 2 ounces of his coffee and then added 2 ounces of cream. JoAnn added 2 ounces of cream, stirred the coffee well, and then
drank 2 ounces. What is the resulting ratio of the amount of cream in Joe's coffee to that in JoAnn's coffee?
$\text {(A) } \frac 67 \qquad \text {(B) } \frac {13}{14} \qquad \text {(C) } 1 \qquad \text {(D) } \frac {14}{13} \qquad \text {(E) } \frac 76$
Problem 12
The parabola $y=ax^2+bx+c$ has vertex $(p,p)$ and $y$-intercept $(0,-p)$, where $p\ne 0$. What is $b$?
$\text {(A) } -p \qquad \text {(B) } 0 \qquad \text {(C) } 2 \qquad \text {(D) } 4 \qquad \text {(E) } p$
Problem 13
Rhombus $ABCD$ is similar to rhombus $BFDE$. The area of rhombus $ABCD$ is 24, and $\angle BAD = 60^\circ$. What is the area of rhombus $BFDE$?
$[asy] defaultpen(linewidth(0.7)+fontsize(11)); unitsize(2cm); pair A=origin, B=(2,0), C=(3, sqrt(3)), D=(1, sqrt(3)), E=(1, 1/sqrt(3)), F=(2, 2/sqrt(3)); pair point=(3/2, sqrt(3)/2); draw
(B--C--D--A--B--F--D--E--B); label("A", A, dir(point--A)); label("B", B, dir(point--B)); label("C", C, dir(point--C)); label("D", D, dir(point--D)); label("E", E, dir(point--E)); label("F", F, dir
(point--F)); [/asy]$
$\textrm{(A) } 6 \qquad \textrm{(B) } 4\sqrt {3} \qquad \textrm{(C) } 8 \qquad \textrm{(D) } 9 \qquad \textrm{(E) } 6\sqrt {3}$
Problem 14
Elmo makes $N$ sandwiches for a fundraiser. For each sandwich he uses $B$ globs of peanut butter at $4$ cents per glob and $J$ blobs of jam at $5$ cents per blob. The cost of the peanut butter and
jam to make all the sandwiches is $\textdollar 2.53$. Assume that $B$, $J$ and $N$ are all positive integers with $N>1$. What is the cost of the jam Elmo uses to make the sandwiches?
$\mathrm{(A)}\ 1.05 \qquad \mathrm{(B)}\ 1.25 \qquad \mathrm{(C)}\ 1.45 \qquad \mathrm{(D)}\ 1.65 \qquad \mathrm{(E)}\ 1.85$
Problem 15
Circles with centers $O$ and $P$ have radii 2 and 4, respectively, and are externally tangent. Points $A$ and $B$ are on the circle centered at $O$, and points $C$ and $D$ are on the circle centered
at $P$, such that $\overline{AD}$ and $\overline{BC}$ are common external tangents to the circles. What is the area of hexagon $AOBCPD$?
$[asy] unitsize(0.4 cm); defaultpen(linewidth(0.7) + fontsize(11)); pair A, B, C, D; pair[] O; O[1] = (6,0); O[2] = (12,0); A = (32/6,8*sqrt(2)/6); B = (32/6,-8*sqrt(2)/6); C = 2*B; D = 2*A; draw
(Circle(O[1],2)); draw(Circle(O[2],4)); draw((0.7*A)--(1.2*D)); draw((0.7*B)--(1.2*C)); draw(O[1]--O[2]); draw(A--O[1]); draw(B--O[1]); draw(C--O[2]); draw(D--O[2]); label("A", A, NW); label("B", B,
SW); label("C", C, SW); label("D", D, NW); dot("O", O[1], SE); dot("P", O[2], SE); label("2", (A + O[1])/2, E); label("4", (D + O[2])/2, E);[/asy]$
$\textbf{(A) } 18\sqrt {3} \qquad \textbf{(B) } 24\sqrt {2} \qquad \textbf{(C) } 36 \qquad \textbf{(D) } 24\sqrt {3} \qquad \textbf{(E) } 32\sqrt {2}$
Problem 16
Regular hexagon $ABCDEF$ has vertices $A$ and $C$ at $(0,0)$ and $(7,1)$, respectively. What is its area?
$\mathrm{(A)}\ 20\sqrt {3} \qquad \mathrm{(B)}\ 22\sqrt {3} \qquad \mathrm{(C)}\ 25\sqrt {3} \qquad \mathrm{(D)}\ 27\sqrt {3} \qquad \mathrm{(E)}\ 50$
Problem 17
For a particular peculiar pair of dice, the probabilities of rolling $1$, $2$, $3$, $4$, $5$ and $6$ on each die are in the ratio $1:2:3:4:5:6$. What is the probability of rolling a total of $7$ on
the two dice?
$\mathrm{(A)}\ \frac 4{63} \qquad \mathrm{(B)}\ \frac 18 \qquad \mathrm{(C)}\ \frac 8{63} \qquad \mathrm{(D)}\ \frac 16 \qquad \mathrm{(E)}\ \frac 27$
Problem 18
An object in the plane moves from one lattice point to another. At each step, the object may move one unit to the right, one unit to the left, one unit up, or one unit down. If the object starts at
the origin and takes a ten-step path, how many different points could be the final point?
$\mathrm{(A)}\ 120 \qquad \mathrm{(B)}\ 121 \qquad \mathrm{(C)}\ 221 \qquad \mathrm{(D)}\ 230 \qquad \mathrm{(E)}\ 231$
Problem 19
Mr. Jones has eight children of different ages. On a family trip his oldest child, who is 9, spots a license plate with a 4-digit number in which each of two digits appears two times. "Look, daddy!"
she exclaims. "That number is evenly divisible by the age of each of us kids!" "That's right," replies Mr. Jones, "and the last two digits just happen to be my age." Which of the following is not the
age of one of Mr. Jones's children?
$\mathrm{(A)}\ 4 \qquad \mathrm{(B)}\ 5 \qquad \mathrm{(C)}\ 6 \qquad \mathrm{(D)}\ 7 \qquad \mathrm{(E)}\ 8$
Problem 20
Let $x$ be chosen at random from the interval $(0,1)$. What is the probability that $\lfloor\log_{10}4x\rfloor - \lfloor\log_{10}x\rfloor = 0$? Here $\lfloor x\rfloor$ denotes the greatest integer
that is less than or equal to $x$.
$\mathrm{(A)}\ \frac 18 \qquad \mathrm{(B)}\ \frac 3{20} \qquad \mathrm{(C)}\ \frac 16 \qquad \mathrm{(D)}\ \frac 15 \qquad \mathrm{(E)}\ \frac 14$
Problem 21
Rectangle $ABCD$ has area $2006$. An ellipse with area $2006\pi$ passes through $A$ and $C$ and has foci at $B$ and $D$. What is the perimeter of the rectangle? (The area of an ellipse is $ab\pi$
where $2a$ and $2b$ are the lengths of the axes.)
$\mathrm{(A)}\ \frac {16\sqrt {2006}}{\pi} \qquad \mathrm{(B)}\ \frac {1003}4 \qquad \mathrm{(C)}\ 8\sqrt {1003} \qquad \mathrm{(D)}\ 6\sqrt {2006} \qquad \mathrm{(E)}\ \frac {32\sqrt {1003}}\pi$
Problem 22
Suppose $a$, $b$ and $c$ are positive integers with $a+b+c=2006$, and $a!b!c!=m\cdot 10^n$, where $m$ and $n$ are integers and $m$ is not divisible by $10$. What is the smallest possible value of $n$?
$\mathrm{(A)}\ 489 \qquad \mathrm{(B)}\ 492 \qquad \mathrm{(C)}\ 495 \qquad \mathrm{(D)}\ 498 \qquad \mathrm{(E)}\ 501$
Problem 23
Isosceles $\triangle ABC$ has a right angle at $C$. Point $P$ is inside $\triangle ABC$, such that $PA=11$, $PB=7$, and $PC=6$. Legs $\overline{AC}$ and $\overline{BC}$ have length $s=\sqrt{a+b\sqrt{2}}$, where $a$ and $b$ are positive integers. What is $a+b$?
$[asy] pathpen = linewidth(0.7); pointpen = black; pen f = fontsize(10); size(5cm); pair B = (0,sqrt(85+42*sqrt(2))); pair A = (B.y,0); pair C = (0,0); pair P = IP(arc(B,7,180,360),arc(C,6,0,90)); D
(A--B--C--cycle); D(P--A); D(P--B); D(P--C); MP("A",D(A),plain.E,f); MP("B",D(B),plain.N,f); MP("C",D(C),plain.SW,f); MP("P",D(P),plain.NE,f); [/asy]$
$\mathrm{(A)}\ 85 \qquad \mathrm{(B)}\ 91 \qquad \mathrm{(C)}\ 108 \qquad \mathrm{(D)}\ 121 \qquad \mathrm{(E)}\ 127$
Problem 24
Let $S$ be the set of all points $(x,y)$ in the coordinate plane such that $0\leq x\leq \frac\pi 2$ and $0\leq y\leq \frac\pi 2$. What is the area of the subset of $S$ for which $\sin^2 x - \sin x\sin y + \sin^2 y\le \frac 34$?
$\mathrm{(A)}\ \frac {\pi^2}9 \qquad \mathrm{(B)}\ \frac {\pi^2}8 \qquad \mathrm{(C)}\ \frac {\pi^2}6 \qquad \mathrm{(D)}\ \frac {3\pi^2}{16} \qquad \mathrm{(E)}\ \frac {2\pi^2}9$
Problem 25
A sequence $a_1,a_2,\dots$ of non-negative integers is defined by the rule $a_{n+2}=|a_{n+1}-a_n|$ for $n\geq 1$. If $a_1=999$, $a_2<999$ and $a_{2006}=1$, how many different values of $a_2$ are possible?
$\mathrm{(A)}\ 165 \qquad \mathrm{(B)}\ 324 \qquad \mathrm{(C)}\ 495 \qquad \mathrm{(D)}\ 499 \qquad \mathrm{(E)}\ 660$
See also
The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions. | {"url":"https://artofproblemsolving.com/wiki/index.php?title=2006_AMC_12B_Problems&oldid=195233","timestamp":"2024-11-14T07:42:36Z","content_type":"text/html","content_length":"87770","record_id":"<urn:uuid:cab598ff-2e47-4817-88c7-5a7888843c39>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00511.warc.gz"} |
Condensate Temperature Calculator - Savvy Calculator
Condensate Temperature Calculator
About Condensate Temperature Calculator (Formula)
The condensate temperature is a crucial factor in various industrial processes, especially in heating and cooling systems. Accurate measurement and calculation of this temperature can significantly
impact efficiency, safety, and performance in applications such as steam systems, HVAC, and power plants. The Condensate Temperature Calculator simplifies this process, allowing users to quickly
determine the temperature of the condensate based on the heat added and the mass flow rate. Understanding how to effectively use this calculator is essential for engineers and technicians working in
thermal management.
The formula for calculating the condensate temperature (CT) is:
CT = Q / L
In this formula:
• CT represents the condensate temperature.
• Q stands for the total heat added to the condensate.
• L signifies the mass flow rate of the condensate.
How to Use
Using the Condensate Temperature Calculator is a straightforward process. Follow these steps to obtain the condensate temperature:
1. Determine the Total Heat (Q): Measure the total amount of heat added to the condensate, usually expressed in joules or kilojoules.
2. Measure the Mass Flow Rate (L): Calculate or measure the mass flow rate of the condensate, typically in kilograms per second or another mass unit.
3. Input Values: Enter the values for Q and L into the calculator.
4. Calculate: The calculator will compute the condensate temperature (CT) based on the provided inputs.
Let’s illustrate how to use the Condensate Temperature Calculator with a practical example:
Suppose you have the following measurements:
• Total Heat Added (Q) = 5000 joules
• Mass Flow Rate (L) = 2 kg/s
Using the Formula:
Now plug in the values:
CT = Q / L
CT = 5000 / 2
CT = 2500 °C
Thus, the condensate temperature is 2500 °C, indicating the temperature at which the condensate exists after heat is added.
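The calculation is a single division; a minimal sketch of the formula exactly as the article states it (the function name is mine):

```python
def condensate_temperature(total_heat, mass_flow_rate):
    """CT = Q / L, with Q and L as defined in the article."""
    return total_heat / mass_flow_rate

# The worked example: 5000 joules added, mass flow rate of 2.
ct = condensate_temperature(5000, 2)
```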
1. What is condensate temperature?
Condensate temperature is the temperature of steam or vapor that has cooled and condensed back into liquid form.
2. Why is calculating condensate temperature important?
Accurate condensate temperature calculations are vital for optimizing energy efficiency and ensuring the safe operation of heating systems.
3. What units are used for Q and L?
Q is usually expressed in joules, and L is typically in kilograms per second, but other units can be used based on the context.
4. What factors can affect condensate temperature?
Factors include pressure, heat transfer efficiency, and the specific properties of the working fluid.
5. Can the condensate temperature be lower than the boiling point?
Yes, if the pressure is sufficiently low, the condensate can exist at a temperature below the boiling point of the liquid.
6. Is there a specific range for typical condensate temperatures?
Typical condensate temperatures vary widely based on the application, but they are usually below the boiling point of the working fluid at the given pressure.
7. How can I improve the accuracy of my measurements?
Ensure proper calibration of measuring instruments and minimize heat losses during measurement.
8. What happens if the condensate temperature is too high?
High condensate temperatures can lead to inefficient heat transfer and potential damage to system components.
9. What is the impact of pressure on condensate temperature?
Higher pressure typically raises the boiling point, affecting the condensate temperature.
10. How can I calculate condensate temperature in a steam system?
Use the total heat added to the condensate and the mass flow rate in the formula CT = Q / L.
11. Is there a difference between condensate temperature and steam temperature?
Yes, condensate temperature refers to the liquid phase, while steam temperature refers to the gaseous phase.
12. Can this calculator be used for other fluids?
Yes, as long as you have the appropriate heat and mass flow rate values for the specific fluid.
13. What should I do if I get unexpected results?
Check your input values for accuracy and ensure that you are using the correct units.
14. Is there a relationship between condensate temperature and thermal efficiency?
Yes, lower condensate temperatures can lead to higher thermal efficiency in heating systems.
15. How often should I calculate condensate temperature?
It is advisable to calculate it whenever there are changes in system operation or when troubleshooting performance issues.
16. Can I use the calculator for batch processes?
Yes, just ensure to input the total heat and mass flow rate for the entire batch.
17. Are there any software tools available for more complex calculations?
Yes, many engineering software tools can handle complex thermal calculations involving multiple variables.
18. What is the role of condensate return systems?
They help recover heat and water, improving overall system efficiency and reducing waste.
19. How does condensate temperature affect boiler efficiency?
Lower condensate temperatures can reduce the thermal efficiency of the boiler, as more energy is required to convert water back to steam.
20. Can I use this formula in different industrial applications?
Absolutely! This formula is applicable in various industries, including power generation, HVAC, and chemical processing.
The Condensate Temperature Calculator is an essential tool for professionals in various fields dealing with thermal systems. By accurately calculating the condensate temperature, users can optimize
their operations, improve energy efficiency, and ensure system safety. Regular use of this calculator can lead to better management of heat transfer processes, benefiting both operational
effectiveness and sustainability.
Should I bootstrap Spline term construction?
Dear Professor Harrell,
I wonder if we should bootstrap the construction of smoothing splines?
I am modelling a logistic model with a smoothing term in brms (and mgcv). When performing the bootstrap, conventional resampling of the dataset with replacement can lead to a destructive error in the smooth
construction when an insufficient number of distinct data points is provided.
I am therefore thinking of a Bayesian-style bootstrap (mimicking the implementation of the package bayesboot) where, rather than resampling the dataset, I sample a weight for each observation from
a Dirichlet distribution and feed it to brms (or mgcv with a quasibinomial family). However, looking at the generated code makes me wonder. As the construction of the smoothing term effectively still uses
the full training feature space and not a surrogate one, would that cause an underestimation of overoptimism? On the flip side, judging from the mgcv code, I think (!!) the construction of s(x) does not
depend on the outcome y (the smooth coefficients do, the basis functions don't). Therefore, maybe it indeed makes more sense not to bootstrap it? But I'm not sure.
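The weight-sampling scheme described above can be sketched outside R as well; here is a minimal NumPy version (the function name is mine; brms or mgcv would receive these values as case weights):

```python
import numpy as np

def bayes_boot_weights(n, rng=None):
    """One draw of Bayesian-bootstrap observation weights.

    Weights come from a flat Dirichlet(1, ..., 1) and are scaled by n so
    they average 1, the usual form passed as case weights to a fitter.
    Every observation keeps an (almost surely positive) weight, so
    knot/basis construction still sees the full feature space.
    """
    if rng is None:
        rng = np.random.default_rng()
    return n * rng.dirichlet(np.ones(n))

w = bayes_boot_weights(100, np.random.default_rng(0))
```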
May I ask how this situation is handled in rms? Did you come across instability in the smooth term construction?
Splines are a means to an end and the basis functions (spline terms in the regression) are quite arbitrary. Coefficients of basis functions can fly around even though the predicted values don't.
Consider looking at a set of say 10 predicted values to see how they vary over samples.
Thank you. If I get it correctly, the constructed spline terms are not directly translatable to predictions per se. As they are just some "numbers", I guess it is not harmful for splines (rcs or
s) to be constructed before the bootstrap process, even better given the extra stability?
FWIW, here’s a Bayesian GAM R package that should give you credible intervals for the smooth splines, and also does variable selection:
Thanks. Sorry I did not make it clear. I was doing the bootstrap to estimate bias in the c-index and calibration. rms has its own function but gam doesn't. I tried two strategies:
1. Bootstrap → mgcv::gam on each bootstrap dataset. Neither the smooth construction nor the model fitter was aware of the left-out observations. c-index bias was about 0.1.
2. Construct the s() before the bootstrap. This is a bit trickier, done by sampling the observation weights and feeding them into mgcv::gam. Something like:
i <- sample(nrow(real_df), size = nrow(real_df), replace = TRUE)
bootdf <- real_df[i, ]
# Get the bootstrap weight (resampling count) for each observation
weights <- bootdf |>
  dplyr::summarise(W = dplyr::n(), .by = id) |>
  tidyr::complete(id = real_df$id, fill = list(W = 0))
real_df <- dplyr::left_join(real_df, weights, by = "id")
boot_fit <- mgcv::gam(y ~ s(x), weights = W, data = real_df)
# This call removes observations with weight 0 from the model fitter,
# but still includes them in the smooth construction
This gets me bias = 0.01.
Not clear if this is a problem for you but be sure to favor the Efron-Gong optimism bootstrap and not some out-of-bag variant.
Right you can construct the spline basis functions using unsupervised learning as a one-time thing up front.
Spline terms need to be treated as connected always.
Thank you for pointing it out. This made me realise that my number of parameters p might be a bit large compared to the effective sample size n, so the optimism bootstrap has started breaking down,
leading to such unstable behaviour. If out-of-bag (I think you mean cross-validation) is not preferred, please do you have any advice?
But this might be a good case study; the convenient implicit knots in mgcv caught me off guard.
Good observation. If “out of bag” applies to what you did, this is a common misunderstanding of the Efron-Gong optimism estimator which does not involve any out of bag operations. It compares
model performance from the bootstrap sample to performance on the whole sample.
Ah no. I did not do anything strange. I honestly thought out-of-bag was related to cross-validation (let's pretend I did not come across the term out-of-bootstrap when researching this).
This was what I did, following your books: for any metric, the expected bias between the bootstrap dataset and the whole sample/real dataset approximates the expected bias between the whole sample and the population.
This method has been quite good for most statistical models I have built. This is the first time it failed silently.
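The estimator under discussion can be sketched generically in Python (this is an illustration, not the rms implementation; `fit` and `score` stand in for your model and metric):

```python
import numpy as np

def optimism_corrected_score(X, y, fit, score, n_boot=200, seed=0):
    """Efron-Gong optimism bootstrap (sketch).

    `fit(X, y)` returns a fitted model; `score(model, X, y)` returns a
    performance metric (e.g. c-index, R^2).  No out-of-bag samples are
    used: each bootstrap model is scored on its own bootstrap sample and
    on the whole original sample, and the average difference (the
    optimism) is subtracted from the apparent performance.
    """
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)
    apparent = score(fit(X, y), X, y)
    n = len(y)
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # bootstrap resample
        m = fit(X[idx], y[idx])
        optimism.append(score(m, X[idx], y[idx]) - score(m, X, y))
    return apparent - np.mean(optimism)

# Demo on noiseless linear data: apparent and corrected R^2 are both ~1
fit = lambda X, y: np.polyfit(X, y, 1)
score = lambda m, X, y: 1 - np.sum((y - np.polyval(m, X)) ** 2) / np.sum((y - y.mean()) ** 2)
X = np.arange(10.0)
y = 2 * X + 1
corrected = optimism_corrected_score(X, y, fit, score, n_boot=20, seed=1)
```

rms::validate implements this idea for its own model classes; the sketch above only shows the structure being discussed.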
The main occasion when I’ve seen the optimism bootstrap fail is in the p >> N case where the bootstrap tells you you have really bad overfitting and repeated cross-validation properly tells you you
have tragic overfitting.
Best Math Facts App Comparison: Rocket Math vs. XtraMath
Posted on by Dr. Don
A math facts app is a great tool to use for your students. There are plenty of math facts apps out there that let students practice math facts they have already learned. Few apps actually teach math
facts. But apps from Rocket Math and XtraMath are exceptions. While both apps teach students math facts, one is more effective and fun.
The two best apps for actually teaching math facts
Math facts apps from Rocket Math and XtraMath effectively teach math facts because they have five essential characteristics:
1. Both math facts apps require students to demonstrate fluency with facts. Fluency means a student can quickly answer math fact questions from recall. This is the opposite of letting a student
“figure it out” slowly. Neither app considers a fact mastered until a student can answer a fact consistently within 3 seconds.
2. Both math facts apps zero in on teaching (and bringing to mastery) a small number of facts at a time. This is the only way to teach math fact fluency. It’s impossible for students to learn and
memorize a large number of facts all at once.
3. Both math facts apps are responsive. Apps simply do not teach if they randomly present facts or do not respond differently when students take a long time to answer a fact.
4. Both math facts apps only allow students to work for a few minutes (15 minutes or less) before taking a break. Teachers and parents may want to keep students busy practicing math facts for an
hour, but students will come to hate the app if they have long sessions. A few minutes of practice in each session is the best way to learn and to avoid student burnout.
5. Both math facts apps re-teach the fact if a student makes an error. While both Rocket Math and XtraMath re-teach facts, they re-teach them differently.
While both apps contain these important features and teach math facts, there are a few vital elements that make an effective app like Rocket Math stand out.
An effective math facts app gives a student a sense of accomplishment
The difficult thing about learning math facts is persevering. There are so many to learn! It takes a while, and students have to persevere through boring memorization tasks. The best way to help
students learn their math facts is to give them a clear sense of accomplishment as they move through each task.
How XtraMath monitors progress
To develop a sense of accomplishment among its app users, XtraMath displays math facts on a grid. XtraMath tests the student and marks the ones that are answered quickly (within 3 seconds) with
smiley faces. It takes a couple of sessions to determine what has been mastered and what hasn’t, so there isn’t a sense of accomplishment at first. This grid is displayed and explained, but it’s not
easy to monitor progress. Over time, there are fewer squares with facts to learn, but there isn’t clear feedback on what’s being accomplished as students work.
How Rocket Math monitors progress
Conversely, Rocket Math begins recognizing student progress immediately and continues to celebrate progress at every step. The Rocket Math app begins with Set A and progresses up to Set Z. Each
lettered set has three phases: Take-Off, Orbit, and the Universe. That means there are 78 milestones celebrated in the process of moving from Set A to Set Z.
The Take-Off phase has only 4 problems to learn. They are repeated until the student gets 12 correct in a row. When the student does that, the doors close (with appropriate sound effects) to show
“Mission Accomplished.” They also are congratulated by one of a cast of voices. Something along the lines of, “Mission Control here. You did it! Mission Accomplished! You took off with Set A! Go for
Orbit if you dare!” With this type of consistent (and fun!) recognition, students clearly understand that they are progressing, and they get the chance to keep learning “if they dare!”
In addition to the three phases, students progress through the sets from A to Z. Each time a student masters a set, by going through all three phases, the student gets congratulated and taken to
their rocket picture, as shown above. When a level is completed, the tile for that level explodes (with appropriate sound effects) and drops off the picture, gradually revealing more of the picture
as tiles are demolished.
In the picture above, the tile for “N” has just exploded. After the explosion, a student is congratulated for passing Level N and encouraged to go for Level O if they dare. When you talk to students
about Rocket Math, they always tell you what level they have achieved. “I’m on Level K!” a student will announce with pride. That sense of accomplishment is important for them to keep chugging along.
Rocket Math also has Learning Track certificates available from Dr. Don and the teacher for completing all levels through Level Z.
An effective math facts app corrects errors—correctly
Neither of these math fact apps allows errors to go uncorrected. Students will never learn math facts from an app that does not correct errors. That puts these two apps head and shoulders above the
competition. However, these two apps correct errors very differently.
How XtraMath corrects errors
On the left, you can see the XtraMath correction is visual. If a student enters the wrong answer, the app crosses the incorrect answer out in red and displays the correct answer in gray. A student
then has to enter the correct answer that they see. This is a major mistake. In this case, students don’t have to remember the answer. They just have to enter the numbers in gray.
How Rocket Math corrects errors
Rocket Math, however, requires the student to remember the answer. When a student answers incorrectly, the screen turns orange, and Mission Control displays and recites the correct problem and
answer. In the pictured situation, Mission Control says, “Seven times nine is sixty-three. Go again.” Then the answer clears, and the game waits for the student to enter the correct answer. Under
these conditions, the student has to listen to the correction and remember the answer, so they can enter it correctly.
Once the student correctly answers that target problem, the app presents the problem again. Then it presents it twice more interspersed with other problems.
If the student answers the previously missed problem correctly within the three seconds, the game notes the error, and the student continues through the phase. If the student fails to answer the
problem correctly again, the correction process repeats until the student answers correctly. Having to listen to and remember the answer, rather than just copy the answer, helps students learn.
An effective math facts app gives meaningful feedback
Without feedback, students can’t learn efficiently and get frustrated. But the feedback cannot be generic. It has to dynamically respond to different student behavior.
How XtraMath’s app gives feedback
XtraMath’s charming “Mr. C” narrates all of the transitions between parts of each day’s lesson. He welcomes students, says he is happy to see them, and updates students on their progress. He gives
gentle, generic feedback about how you’re getting better and to remember to try to recall the facts instead of figuring them out. However, his feedback remains the same no matter how you do. In
short, it is non-contingent feedback, which may not be very meaningful to students.
How Rocket Math’s app gives feedback
Differing from XtraMath, Rocket Math offers students a lot of feedback that is contingent. Contingent feedback means that students will receive different types of feedback depending on their performance.
The Rocket Math app gives positive feedback for all the 78 accomplishments noted above. It also doles out corrective feedback when the student isn’t doing well.
As noted above, students receive corrective feedback on all errors. They get feedback when they take longer than three seconds to answer too. The “Time’s Up” screen on the right pops up, and Mission
Control says, “Ya’ gotta be faster! Wait. Listen for the answer.” And then the problem and the correct answer are given. Students get a chance to answer that fact again soon and redeem
themselves–proving they can answer it in 3 seconds.
The app tracks errors, and three errors mean the student needs more practice on this part. The doors close (with appropriate sound effects). The student is given encouragement that they have defeated
three hard problems and a challenge to “Keep Trying” if they are tough enough. At that point, the “go” screen appears, and the student has to hit “go” to open the doors (with appropriate sound
effects) and try it again.
When it comes to recognizing a student's success, the Rocket Math app holds nothing back. After a student completes a phase, one of the cast of voices gives enthusiastic congratulations, as noted above.
Typically, students don’t have to “Keep Trying” more than once or twice in a phase, but they still feel a real sense of accomplishment when they do complete the phase. The feedback students get from
Rocket Math matters because they have to work hard to earn it.
How much does an effective math facts app cost?
It is hard to beat the price of XtraMath, which has a free version. It is $2 per student to have a few more options and $500 per school. XtraMath is run by a non-profit based in Seattle. They have a
staff of six folks in Seattle, and they do accept donations. Their product is great, and they are able to give it away.
Rocket Math is run by one person, Dr. Don, with part-time help from two friends. He supports the app, its development, and himself with the proceeds. He answers his own phone and is happy to talk
with teachers about math facts. The Rocket Math Online Game is a good value at $3 a year per seat (when ordering 100 or more seats). Twenty to 99 seats are $4 each. And fewer than 20 seats cost $5
each per year. As one principal customer of Rocket Math said, “We used to have XtraMath. We’d rather pay a little bit more for Rocket Math because the kids like it a lot better.” Another teacher
reported, “My students are loving this program. I was using Xtra math, but now they are in love with Rocket Math!”
Commit 2021-04-06 01:49 13f7910b
View on Github →
feat(category_theory/limits/kan_extension): Right and Left Kan extensions of a functor (#6820) This PR adds the left and right Kan extensions of a functor, and constructs the associated adjunction.
Coauthored by @b-mehta. A follow-up PR should prove that the adjunctions in this file are (co)reflective when \iota is fully faithful. The current PR proves certain objects are initial/terminal, which
will be useful for this.
Estimated changes
An undervalued Math Problem
As most of you know there are 7 problems worth $1,000,000 each (see the Clay Mathematics Institute's Millennium Prize Problems). It may be just 6 since Poincaré's conjecture has probably been solved. Why are these problems worth that much money? There are other open problems that are worth far less money. What determines
how much money a problem is worth?
When Erdos offered money for a problem (from 10 to 3000 dollars) I suspect that the amount of money depended on (1) how hard Erdos thought the problem was, (2) how much Erdos cared about the problem,
(3) how much money Erdos had when he offered the prize, and (4) inflation. (If anyone can find a pointer to the list of open Erdos Problems please comment and I'll add it here.)
Here is a problem that I have heard is hard and deep, yet it is only worth $3000 (Erdos proposed it). I think that it should be worth more.
BACKGROUND: Szemerédi's theorem: Let A ⊆ ℕ. If the limit as n goes to infinity of |A ∩ {1,...,n}|/n is bounded below by a positive constant, then A has arbitrarily long arithmetic progressions.
Intuition: if a set is dense then it has arbitrarily long arithmetic progressions. The CONJECTURE below uses a different notion of dense.
CONJECTURE: Let A ⊆ ℕ. If ∑_{a ∈ A} 1/a diverges, then A has arbitrarily long arithmetic progressions.
KNOWN: It's known that if A is the set of all primes (note that ∑_{a ∈ A} 1/a diverges) then A has arbitrarily long arithmetic progressions. Nothing else is known! The conjecture for 3-APs isn't even known!
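Restated in LaTeX (using the standard upper-density form of Szemerédi's theorem), the two density hypotheses read:

```latex
% Szemerédi's theorem (positive upper density):
\limsup_{n\to\infty}\frac{|A\cap\{1,\dots,n\}|}{n}>0
\;\Longrightarrow\; A \text{ contains arbitrarily long arithmetic progressions.}

% Erdős' conjecture (divergent reciprocal sum):
\sum_{a\in A}\frac{1}{a}=\infty
\;\Longrightarrow\; A \text{ contains arbitrarily long arithmetic progressions.}
```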
Is this a good problem? If it is solved quickly (very unlikely) then NO. If absolutely no progress is made on it and no interesting mathematics comes out of the attempts then NO. It has to be just right.
9 comments:
1. the available space for the comment is not sufficient to contain a full proof for this conjecture..
2. Neither money nor politics should mix with mathematics.
3. "Neither money nor politics should mix with mathematics."
Yes! Also, I should have a pony.
4. There are a few lists of Erdős problems, but I don't know if anyone's compiled a comprehensive list. I've read that Erdős liked to throw out problems and prizes offhandedly, in talks and
lectures, so a few might not be well-documented.
Fan Chung has compiled a list of Erdős problems in graph theory:
Here's another partial list:
It's interesting to ask whether the practice of offering cash prizes for problems has proven effective in getting problems solved more quickly.
In the case of Erdős, I imagine it's the prestige (and pleasure) of cracking an Erdős problem that draws people in, not the cash. I don't know if small bounties by someone less prolific would
prove as attractive!
5. If it is solved quickly (very unlikely) then NO.
In general important insights do not come by easily, but you should not confuse correlation with causality.
For example, Frey's conjecture on Fermat's Last Theorem was based on an embarrassingly simple observation. It was surprising no one else noticed it before, but it was the key link connecting FLT
to a rich area of mathematics.
6. To be fair, I'm not sure that Erdos knew how much harder that problem was than, for instance, Szemeredi's theorem. Glancing at the history, it seems to have been conjectured roughly
contemporaneously with Szemeredi's proof, and I suspect that part of the reason the bounty's so high is that Erdos saw that the "incremental" arguments of Roth and Szemeredi wouldn't generalize
to this case, and so the prize was offered to find different approaches to arithmetic Ramsey theorems. But even Furstenberg's ergodic methods aren't nearly strong enough to approach this
conjecture -- indeed, I think Green and Tao's transference principle is the only thing we have (so far!) that allows us to do density Ramsey theory in a set of density zero.
7. I solved all problems but feel a little too tired and selfish to share them here.
8. I think that when the Clay problems were chosen - the problem was not considered important in the "Hamming sense" - let me quote: "`important problem' must be phrased carefully. The three
outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs... We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not
important problems because we do not have an attack." Things have changed after Green/Tao and the transference principle.
9. If the conjecture is true, it would imply:
Whenever both A and B lack arbitrarily large A.P.s, then so does their union.
This would generalize Gleason's theorem (consider the contrapositive, and apply to a finite union which is all of N) but somehow, it doesn't seem particularly plausible.
Private Mortgage Insurance (PMI) in context of boq calculator
28 Aug 2024
Title: An Examination of Private Mortgage Insurance (PMI) in the Context of a BOQ Calculator: A Mathematical Analysis
Private Mortgage Insurance (PMI) is a crucial component in the mortgage industry, providing lenders with protection against default risk. This article delves into the mathematical underpinnings of
PMI and its integration with a Building Owners and Managers Association (BOQ) calculator. We present a comprehensive analysis of PMI’s impact on loan calculations, utilizing formulas and examples to
illustrate key concepts.
PMI is a type of insurance that protects lenders against losses in the event of borrower default. The premium for this insurance is typically paid by the borrower as part of their monthly mortgage
payment. In this article, we will explore the mathematical framework underlying PMI and its relationship with a BOQ calculator.
BOQ Calculator:
A BOQ calculator is a tool used to estimate the costs associated with building ownership. It takes into account various expenses such as property taxes, insurance, and maintenance. For the purpose of
this analysis, we will focus on the calculation of monthly mortgage payments, which includes PMI premiums.
PMI Formula:
The PMI premium (P) can be calculated using the following formula:
P = 0.00125 * LTV * Annual Premium
where: LTV = Loan-to-Value ratio (expressed as a decimal) Annual Premium = The annual cost of PMI insurance (in dollars)
Since the expression involves only multiplication, the BODMAS (Brackets, Orders, Division, Multiplication, Addition, and Subtraction) order of operations does not change the result; the formula can equivalently be written as:
P = 0.00125 × LTV × Annual Premium
Suppose a borrower purchases a property with a loan amount of $200,000 and a down payment of 20% ($40,000). The lender requires PMI insurance to protect against default risk. Using the BOQ
calculator, we can estimate the monthly mortgage payment as follows:
1. Calculate the LTV ratio: LTV = (Loan Amount - Down Payment) / Loan Amount = ($200,000 - $40,000) / $200,000 = 0.8 or 80%
2. Determine the Annual Premium: Assuming an annual premium of $800 for this loan.
3. Calculate the PMI premium: P = 0.00125 × LTV × Annual Premium = 0.00125 × 0.8 × $800 = $0.80 per year
4. Calculate the monthly mortgage payment: Using the BOQ calculator, we can estimate the monthly mortgage payment as:
Mortgage Payment = Principal + Interest + Taxes + Insurance + PMI Premium
where: Principal = Loan Amount / Number of Payments (e.g., 360 months for a 30-year loan) Interest = Monthly interest rate × Principal Taxes = Property taxes per year ÷ Number of Payments Insurance =
Annual insurance premium ÷ Number of Payments PMI Premium = P ÷ Number of Payments
For this example, let’s assume the following: Loan Amount: $200,000 Number of Payments: 360 months (30-year loan) Annual interest rate: 4.25% (monthly rate ≈ 0.3542%) Property taxes per year: $3,600 Annual insurance
premium: $800 PMI Premium: $0.80 per year
Using these values, we can calculate the monthly mortgage payment as:
Mortgage Payment = Principal + Interest + Taxes + Insurance + PMI Premium = ($200,000 / 360) + (0.3542% × $200,000) + ($3,600 ÷ 360) + ($800 ÷ 360) + ($0.80 ÷ 360) ≈ $555.56 + $708.33 + $10.00 + $2.22 + $0.00 ≈ $1,276.11
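Note that the article's printed figures do not all follow from its own formula: 0.00125 × 0.8 × $800 is $0.80 per year (not $10), and a 4.25% rate applied monthly would dwarf the quoted payment. The sketch below (my transcription, using the annual rate divided by 12 and the article's naive additive formula rather than a standard amortization schedule) recomputes the components:

```python
loan, n_payments = 200_000, 360
annual_rate = 0.0425
ltv = 0.8
annual_premium = 800            # annual PMI insurance cost, dollars
taxes_per_year = 3_600

principal = loan / n_payments                  # article's definition, not amortized
interest = (annual_rate / 12) * loan           # monthly interest on the full balance
taxes = taxes_per_year / n_payments
insurance = annual_premium / n_payments
pmi = (0.00125 * ltv * annual_premium) / n_payments   # $0.80/yr spread monthly

payment = principal + interest + taxes + insurance + pmi
print(round(payment, 2))  # 1276.11
```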
This article has demonstrated the mathematical underpinnings of PMI and its integration with a BOQ calculator. By understanding the formula for calculating PMI premiums and incorporating this into
the BOQ calculator, lenders can better estimate monthly mortgage payments and provide borrowers with more accurate financial projections.
ASCII Art:
/ \
/ \
| PMI |
| |
| LTV |
| |
| Annual |
| Premium |
| |
| Monthly |
| Mortgage |
| Payment |
Note: The ASCII art is a simple representation of the PMI formula and its relationship with the BOQ calculator.
How to Do A 3D Circle In Matplotlib?
To create a 3D circle in Matplotlib, you can use the Axes3D class from the mpl_toolkits.mplot3d toolkit (or simply pass projection='3d' to add_subplot). By defining a range of angles and using
trigonometric functions, you can compute points on a circle and place them in 3D space. You can then use the plot function to connect these points and draw the circle's outline (plot_surface is only
needed if you want a filled disk). Additionally, you can customize the appearance of the circle by setting parameters such as the radius, color, and transparency. Overall, by leveraging the
capabilities of Matplotlib's 3D plotting functionality, you can easily generate and visualize a 3D circle in your Python code.
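A minimal runnable sketch along those lines (the Agg backend is chosen here so the script also works headless; the radius and the z = 0 plane are arbitrary choices):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line for interactive use
import matplotlib.pyplot as plt

theta = np.linspace(0.0, 2.0 * np.pi, 200)
r = 2.0
x = r * np.cos(theta)
y = r * np.sin(theta)
z = np.zeros_like(theta)            # circle lying in the z = 0 plane

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot(x, y, z, color="tab:blue", lw=2)   # connect the points: circle outline
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")
ax.set_title("3D circle of radius 2")
fig.savefig("circle3d.png")
```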
What is the importance of the rcParams module in matplotlib?
The rcParams module in matplotlib allows users to customize the appearance of plots and charts by setting various parameters. It provides a way to globally customize default settings for plots, such
as colors, line styles, fonts, and more. This can save time and effort by allowing users to easily apply consistent styling to their plots without having to set individual parameters for each plot.
Additionally, the rcParams module allows users to create customized stylesheets that can be applied to plots, providing a way to create visually appealing and consistent plots across multiple
projects. This can be particularly useful for creating reports, presentations, or publications with a uniform and professional look.
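For instance, a few global defaults can be set once near the top of a script (the particular values here are arbitrary examples; any key listed in matplotlib.rcParams works the same way):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

# Set global defaults once; every subsequent plot picks them up
plt.rcParams["lines.linewidth"] = 2.5
plt.rcParams["font.size"] = 12
plt.rcParams["axes.grid"] = True

fig, ax = plt.subplots()
(line,) = ax.plot([0, 1], [0, 1])
print(line.get_linewidth())   # 2.5, inherited from rcParams
```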
Overall, the rcParams module in matplotlib is an important tool for customizing the appearance of plots and charts, making it easier for users to create visually appealing and consistent
What is the role of the matplotlib.axes module?
The matplotlib.axes module is responsible for creating the axes of a plot in Matplotlib. Axes are the main component of a plot and are used to set the limits of the plot, labels for the x and y-axes,
and other formatting options. The axes module provides methods for creating different types of plots, such as line plots, scatter plots, histograms, and more. It allows users to customize the
appearance of the plot by setting properties such as colors, line styles, markers, and labels. Overall, the matplotlib.axes module plays a crucial role in creating and customizing plots in
What is the difference between a 2D and a 3D plot in matplotlib?
A 2D plot in matplotlib represents data points on a two-dimensional plane, typically with a combination of x and y axes. This creates a flat representation of the data, suitable for visualizing
relationships between two variables.
On the other hand, a 3D plot in matplotlib adds an extra dimension by including a z-axis, allowing for the representation of data points in three-dimensional space. This provides a more comprehensive
view of the data, allowing for the visualization of relationships between three variables.
In summary, the main difference between a 2D and a 3D plot in matplotlib is the number of dimensions in which the data is represented. A 2D plot shows data in two dimensions, while a 3D plot shows
data in three dimensions.
How to install matplotlib?
To install matplotlib, you can use pip, the Python package installer. Simply open a terminal or command prompt, and run the following command:
pip install matplotlib
This will download and install the matplotlib library and its dependencies on your system. Alternatively, you can also install matplotlib using Anaconda by running the following command in your
Anaconda prompt:
conda install matplotlib
After installation, you can import matplotlib in your Python code using:
import matplotlib.pyplot as plt
Make sure you have the required dependencies installed on your system before installing matplotlib, such as NumPy and Python itself.
How to create a 3D plot in matplotlib?
To create a 3D plot in matplotlib, you will need to use the Axes3D class from the mpl_toolkits.mplot3d module. Here is an example of how you can create a simple 3D plot:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np

# Create a figure and a 3D axes
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

# Generate some random data
x = np.random.standard_normal(100)
y = np.random.standard_normal(100)
z = np.random.standard_normal(100)

# Scatter plot
ax.scatter(x, y, z)

# Set labels and title
ax.set_xlabel('X-axis')
ax.set_ylabel('Y-axis')
ax.set_zlabel('Z-axis')
ax.set_title('3D Scatter Plot')

# Show the plot
plt.show()
This code snippet creates a 3D scatter plot with random data. You can customize the plot further by changing the data or adding different types of 3D plots like lines or surfaces.
We are partnering with RISC Zero to bring NEBRA UPA-powered RISC Zero zkVM proofs onchain. RISC Zero developers can enjoy proof verification costs on Ethereum and its L2s that are 10x or more cheaper. Additionally, RISC Zero zkVM developers can compose RISC Zero zkVM proofs with external proofs such as World ID.
How can NEBRA reduce RISC Zero developers' onchain costs by 10x?
The RISC Zero zkVM can prove the correct execution of arbitrary code, allowing developers to build ZK applications in mature languages like Rust and C++. The release of the RISC Zero zkVM marked a
major breakthrough in enabling ZK software development: the zkVM made it possible to build a ZK application without having to build a circuit and without writing in a custom language.
Notably, over 70% of the top 1000 Rust crates work out-of-the-box in the RISC Zero zkVM. Being able to import Rust crates is a game changer for the ZK software world: projects that would take months or years to build on other platforms can be solved trivially on this one.
The last mile of the RISC Zero developers’ journey is settling the proof onchain. Today, the onchain proof verification cost is prohibitive: developers need to spend more than 250,000 gas, which is
equivalent to 20 US dollars on Ethereum or 2 dollars on Ethereum L2s.
NEBRA UPA is the first Universal Proof Aggregation protocol that scales and composes zero-knowledge proof verification on Ethereum/EVM chains. NEBRA UPA achieves this using recursive SNARKs with
cryptographic security.
There are 3 key properties of NEBRA UPA:
• Universality: NEBRA UPA can aggregate proofs from any circuit. This means that in the same batch, NEBRA UPA aggregates proofs from different sources, such as zkEVMs, zkDIDs, and zkCoprocessors.
Universality brings an "economy of scale" to NEBRA users, allowing them to enjoy cheap amortized verification costs without needing to generate a huge number of proofs.
• Permissionless: NEBRA UPA is an onchain protocol, meaning that anyone can submit proofs to NEBRA.
• Censorship resistance: NEBRA UPA is censorship resistant by adopting a forced-inclusion design similar to Ethereum L2s. You can trigger a forced inclusion to include your proofs if our off-chain
worker refuses to put your proof in the aggregated proof. Additionally, we would be slashed if a forced inclusion occurs.
By using NEBRA UPA, RISC Zero developers can lower their proof verification costs from 250,000 gas to roughly 18,000 gas, translating to approximately 10x onchain verification cost savings.
Why RISC Zero?
RISC Zero is the pioneer of RISC-V zkVMs, and its zkVM is high-performance. As one of the most mature zkVMs, it backs a number of applications and primitives showcasing RISC Zero's capabilities, including:
In addition, RISC Zero developers can enjoy the upcoming Bonsai prover network to streamline the development process. NEBRA UPA will support both developers deploying the stand-alone RISC Zero zkVM and those using the Bonsai Prover Network.
Additional Resources:
Factoring and Prime Numbers
Read the Headline Story to the students. Encourage them to create problems that can be solved using information from the story.
Headline Story
A group of students shared 120 cookies. What can you say about the number of students and the number of cookies they each got?
Possible student responses
• If there were 3 students, they could each have 40 cookies.
• If there were 4 students, they would each get only 30.
• If there were 12 students, they would each get 10.
• The more students there are, the fewer cookies they each get.
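For checking responses quickly, here is a small illustrative sketch in Python that lists every whole-number way to share the 120 cookies evenly:

```python
# Every divisor of 120 is a possible number of students;
# the matching quotient is how many cookies each one gets.
total = 120
ways = [(students, total // students)
        for students in range(1, total + 1)
        if total % students == 0]

for students, cookies in ways:
    print(f"{students} students -> {cookies} cookies each")

print(len(ways), "ways in total")  # 120 has 16 divisors
```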
Latin Squares
A Latin square of order n is an array of n symbols in which each symbol occurs exactly once in each row and exactly once in each column. See the interactivity Teddy Town for some examples of Latin squares.
Construct some Latin squares for yourself and see how many different arrangements you can find for each value of n. Two Latin squares are essentially the same, the mathematical term is isomorphic, if
one can be transformed into the other by re-naming the elements or by interchanging rows or interchanging columns.
For example the coloured discs in this illustration form a Latin square of order 3. This Latin square is isomorphic to the square with the symbols B, R and G and to the square with symbols 1, 2 and 3.
An interactivity can be found in http://www.cut-the-knot.org/arithmetic/latin.shtml .
Latin squares of all orders $m> 2$ can be constructed using modular arithmetic as in this example for $m=5$. The entry $S_{i,j}$ in row $i$ column $j$ is given by $S_{i,j}=i+j$ (mod $5$), where this
is defined to be the remainder when the sum $i+j$ is divided by $5$.
For example the entry in the 4th row and 3rd column is given by $S_{4,3}=4+3$ (mod $5$) $=7$ (mod $5$) $=2$.
The same arrays can be found by simply cycling the elements so this method becomes more useful when solving problems involving combinations of two Latin squares.
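The modular construction is easy to verify by machine. Here is a short illustrative sketch in Python (rows and columns indexed from 0, matching the arithmetic above) that builds the order-5 square and checks the Latin property:

```python
m = 5
square = [[(i + j) % m for j in range(m)] for i in range(m)]

for row in square:
    print(row)

# Latin property: every row and every column is a permutation of 0..m-1
symbols = set(range(m))
assert all(set(row) == symbols for row in square)
assert all({square[i][j] for i in range(m)} == symbols for j in range(m))

print(square[4][3])  # (4 + 3) mod 5 = 2, as in the worked example
```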
Two Latin squares are said to be orthogonal if they can be combined cell by cell so that each cell consists of a different pair of symbols.
The two Latin squares of order 3, given in paragraph three above are not orthogonal because in the first row and first column the combination is B1 and the same combination occurs in the second row
and second column. However the Latin squares on the right are orthogonal.
You can construct orthogonal Latin squares $S_{i,j}$ and $T_{i,j}$ of prime order $m$, where $S_{i,j}=s i+j$ (mod $m$) and $T_{i,j}=t i+j$ (mod $m$), with $s$ and $t$ distinct and both non-zero (mod $m$). For example here are orthogonal Latin squares of order $7$ taking $s=1$ and $t=2$.
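Orthogonality can be checked the same way. An illustrative Python sketch (0-indexed) builds both order-7 squares and confirms that the 49 cell-by-cell pairs are all distinct:

```python
m, s, t = 7, 1, 2
S = [[(s * i + j) % m for j in range(m)] for i in range(m)]
T = [[(t * i + j) % m for j in range(m)] for i in range(m)]

# Orthogonality: combining the squares cell by cell must give
# every ordered pair of symbols exactly once.
pairs = {(S[i][j], T[i][j]) for i in range(m) for j in range(m)}
assert len(pairs) == m * m  # all 49 pairs are distinct

print("orthogonal:", len(pairs) == m * m)
```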
Try making orthogonal Latin squares for yourself using the honour cards (Ace, King, Queen, Jack) from a standard pack of playing cards. First ignore the suits altogether and concentrate on arranging
the 4 by 4 array so that each row and column has an Ace, a King, a Queen and a Jack in it, or equivalently so that no row or column has two Aces, or two Kings etc. Note down the arrangement.
Now concentrate on the suits and ignore the pictures and note down the arrangement of the suits. Because each card is different, that is all the combinations are different, for example Ace of Spades,
Ace of Hearts, Ace of Diamonds and Ace of Clubs, the Latin square of 'suits' is necessarily orthogonal to the Latin square of 'pictures'.
In 1783 Euler made a conjecture about orthogonal Latin squares and it took nearly 200 years for mathematicians to prove that orthogonal Latin squares exist for all orders except 2 and 6.
How many different solutions can you find to the following problem that was originally posed by Euler?
"Arrange 25 officers, each having one of five different ranks and belonging to one of five different regiments, in a square formation 5 by 5, so that each row and each file contains just one officer
of each rank and just one from each regiment."
The corresponding problem with 36 officers of six different ranks and belonging to six different regiments cannot be solved although for 49 officers it is easy to solve as shown above.
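One family of solutions comes straight from a pair of orthogonal order-5 Latin squares, one square assigning ranks and the other regiments. An illustrative Python sketch (ranks and regiments numbered 0 to 4 for convenience, using $s=1$ and $t=2$ as in the construction above):

```python
m = 5
rank = [[(i + j) % m for j in range(m)] for i in range(m)]          # s = 1
regiment = [[(2 * i + j) % m for j in range(m)] for i in range(m)]  # t = 2

# Each cell holds one officer: (rank, regiment)
formation = [[(rank[i][j], regiment[i][j]) for j in range(m)] for i in range(m)]

# All 25 officers are distinct, so every rank/regiment pair appears once,
# and each row and file has one of each rank and one of each regiment.
officers = {o for row in formation for o in row}
assert len(officers) == 25

for row in formation:
    print(row)
```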
[SVM prediction] Grey Wolf algorithm optimizes an SVM support vector machine for prediction: MATLAB source code
The choice of a prediction model's parameters plays an important role in its generalization ability and prediction accuracy. For a Least Squares Support Vector Machine (LSSVM) with a radial basis kernel, the parameters are mainly the penalty factor and the kernel parameter, and they directly affect the learning and generalization ability of the LSSVM. To improve the prediction results of the least squares support vector machine, the Grey Wolf optimization algorithm is used to optimize its parameters, and a software-aging prediction model is established. Experiments show that the model is effective in predicting software aging.
Defects left in software may, over long-term continuous operation, lead to memory leaks, accumulated rounding errors, unreleased file locks and similar phenomena, degrading system performance or even crashing the system. These software-aging phenomena not only reduce system reliability but, in serious cases, endanger life and property. To mitigate the harm of software aging, it is particularly important to predict the aging trend and execute anti-aging strategies before aging occurs [1].
Many scientific research institutions at home and abroad, such as Bell Laboratory, IBM, Nanjing University, Wuhan University [2], Xi'an Jiaotong University [3], have carried out in-depth research on
software aging and achieved some results.Their main research direction is to find the best execution time of software anti-aging strategy by predicting the aging trend of software.
This paper takes the Tomcat server as the research object: it monitors Tomcat's operation, collects system performance parameters, and establishes a software-aging prediction model based on a least squares support vector machine (LSSVM) optimized by the Grey Wolf algorithm, in order to predict the running state of the software and determine when to execute the anti-aging strategy.
1 Least Squares Support Vector Machine
Support Vector Machine (SVM) was proposed by Cortes and Vapnik [4]. Based on VC-dimension theory and the structural risk minimization principle, SVM can handle small-sample, non-linear and high-dimensional problems while avoiding local minima.
The more training samples there are, the more complex the quadratic programming problem SVM must solve, and the longer model training takes. Suykens et al. [5] proposed the Least Squares Support Vector Machine (LSSVM). Building on SVM, LSSVM replaces the inequality constraints with equality constraints, converting the quadratic programming problem into a system of linear equations; this avoids much of SVM's heavy computation and reduces the training burden. In recent years, LSSVM has been widely used in the fields of regression estimation and nonlinear modeling, with good prediction results.
In this paper, the radial basis function is used as the kernel function of the LSSVM model. The LSSVM parameters then consist mainly of the penalty factor C and the kernel parameter, and this paper uses the Grey Wolf optimization algorithm to optimize both.
2 Gray wolf optimization algorithm
In 2014, Mirjalili et al. [6] proposed the Grey Wolf Optimizer (GWO), which searches for optima by simulating the social hierarchy and predation strategy of natural wolf packs. GWO has attracted much attention because of its fast convergence and few tuning parameters, and it shows superiority in solving function optimization problems. This method outperforms particle swarm optimization, differential evolution and gravitational search in global search ability and convergence, and is widely used in feature-subset selection and surface-wave parameter optimization.
2.1 Gray Wolf Optimization Algorithm Principle
Grey wolf individuals cooperate to keep the pack thriving, and during hunting they follow a strict pyramid-shaped social hierarchy. The highest-ranked wolf is α; the remaining wolves are ranked in turn as β, δ and ω, and they collaborate during predation.
Within the pack, the α wolf plays the leading role in the hunt and is responsible for making decisions and managing the whole pack. The β and δ wolves are the next most fit group; they help the α manage the pack and hold decision-making power during the hunt. The remaining wolves are labelled ω and assist α, β and δ in attacking the prey.
2.2 Gray Wolf Optimization Algorithm Description
The GWO algorithm simulates the wolf hunt, dividing it into three stages: encircling, pursuing and attacking the prey; catching the prey corresponds to finding the best solution. Suppose the solution space has V dimensions and the grey wolf population X consists of N individuals, X = [X1, X2, ..., XN]. Each individual Xi (1 <= i <= N) has a position Xi = [Xi1, Xi2, ..., XiV] in the V-dimensional space. The distance between a wolf's position and the prey's location is measured by fitness: the smaller the distance, the greater the fitness. The GWO algorithm is optimized as follows.
2.2.1 Surrounding
First, the prey is surrounded, in which the distance between the prey and the wolf is represented by a mathematical model:
where Xp(m) is the prey position after the m-th iteration, X(m) is the wolf position, D is the distance between the wolf and the prey, and A and C are the convergence and swing factors respectively. In the standard GWO formulation these are D = |C·Xp(m) − X(m)| and X(m+1) = Xp(m) − A·D, with A = 2a·r1 − a and C = 2·r2, where r1 and r2 are random vectors in [0, 1] and a decreases linearly from 2 to 0 over the iterations.
2.2.2 Pursuit
The optimization process of the GWO algorithm locates the prey from the positions of α, β and δ. The ω wolves hunt under the guidance of α, β and δ, updating their positions based on the current three best search agents; α, β and δ are then updated in turn, re-estimating the prey's location. Because the wolves' positions change as the prey flees, each ω wolf computes a candidate position with respect to each of α, β and δ and moves to the average of the three.
2.2.3 Attack
The wolves attack and capture the prey, yielding the best solution. This stage is driven by the decrement of a in Formula (2): when |A| < 1 the wolves close in on the prey and narrow their search range, performing a local search; when |A| > 1 the wolves disperse away from the prey, widening their search range for a global search.
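The three stages can be sketched in code. The following toy example is written in Python rather than the article's MATLAB, and the sphere fitness function and all parameter values are illustrative choices; it implements the standard GWO position update, in which every wolf averages moves guided by α, β and δ:

```python
import random

random.seed(0)  # reproducible run

def sphere(x):
    """Toy fitness function: smaller is better, optimum at the origin."""
    return sum(v * v for v in x)

dim, n_wolves, max_iter = 2, 20, 100
wolves = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_wolves)]

for it in range(max_iter):
    # Rank the pack: alpha, beta, delta are the three fittest wolves
    wolves.sort(key=sphere)
    leaders = [w[:] for w in wolves[:3]]  # copy so updates don't alias them
    a = 2 * (1 - it / max_iter)           # decreases linearly from 2 to 0

    for w in wolves:
        for d in range(dim):
            moves = []
            for leader in leaders:
                r1, r2 = random.random(), random.random()
                A = 2 * a * r1 - a        # convergence factor
                C = 2 * r2                # swing factor
                D = abs(C * leader[d] - w[d])
                moves.append(leader[d] - A * D)
            w[d] = sum(moves) / 3         # average of the leader-guided moves

best = min(wolves, key=sphere)
print(sphere(best))  # near 0
```

In the GWO-SVR setting, the 2-dimensional position would be the (c, g) parameter pair and the fitness would be a cross-validation error rather than the sphere function.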
tic % Time
%% Empty Environment Import Data
close all
format long
load wndspd
%% GWO-SVR
input_train=[560.318,1710.53; 562.267,1595.17; 564.511,1479.78; 566.909,1363.74; 569.256,1247.72; 571.847,1131.3; 574.528,1015.33;
673.834,1827.52; 678.13,1597.84; 680.534,1482.11; 683.001,1366.24; 685.534,1250.1; 688.026,1133.91; 690.841,1017.81;
789.313,1830.18; 791.618,1715.56; 796.509,1484.76; 799.097,1368.85; 801.674,1252.76; 804.215,1136.49; 806.928,1020.41;
904.711,1832.73; 907.196,1718.05; 909.807,1603.01; 915.127,1371.43; 917.75,1255.36; 920.417,1139.16; 923.149,1023.09;
1020.18,1835.16; 1022.94,1720.67; 1025.63,1605.48; 1028.4,1489.91; 1033.81,1258.06; 1036.42,1141.89; 1039.11,1025.92;
1135.36,1837.45; 1138.33,1722.94; 1141.35,1607.96; 1144.25,1492.43; 1147.03,1376.63; 1152.23,1144.56; 1154.83,1028.73;
1250.31,1839.19; 1253.44,1725.01; 1256.74,1610.12; 1259.78,1494.74; 1262.67,1379.1; 1265.43,1263.29; 1270.48,1031.58;
1364.32,1840.51; 1367.94,1726.52; 1371.2,1611.99; 1374.43,1496.85; 1377.53,1381.5; 1380.4,1265.81; 1382.89,1150.18;
1477.65,1841.49; 1481.34,1727.86; 1485.07,1613.64; 1488.44,1498.81; 1491.57,1383.71; 1494.47,1268.49; 1497.11,1153.04;
1590.49,1842.51; 1594.53,1729.18; 1598.15,1615.15; 1601.61,1500.72; 1604.72,1385.93; 1607.78,1271.04; 1610.43,1155.93;
1702.82,1843.56; 1706.88,1730.52; 1710.65,1616.79; 1714.29,1502.66; 1717.69,1388.22; 1720.81,1273.68; 1723.77,1158.8];
input_test=[558.317,1825.04; 675.909,1712.89; 793.979,1600.35; 912.466,1487.32;
1031.17,1374.03; 1149.79,1260.68; 1268.05,1147.33; 1385.36,1034.68;1499.33,1037.87;1613.11,1040.92;1726.27,1044.19;];
output_train=[235,175; 235,190; 235,205; 235,220; 235,235; 235,250; 235,265;
250,160; 250,190; 250,205; 250,220; 250,235; 250,250; 250,265;
265,160; 265,175; 265,205; 265,220; 265,235; 265,250; 265,265;
270,160; 270,175; 270,190; 270,220; 270,235; 270,250; 270,265;
285,160; 285,175; 285,190; 285,205; 285,235; 285,250; 285,265;
290,160; 290,175; 290,190; 290,205; 290,220; 290,250; 290,265;
305,160; 305,175; 305,190; 305,205; 305,220; 305,235; 305,265;
320,160; 320,175; 320,190; 320,205; 320,220; 320,235; 320,250;
335,160; 335,175; 335,190; 335,205; 335,220; 335,235; 335,250;
350,160; 350,175; 350,190; 350,205; 350,220; 350,235; 350,250;
365,160; 365,175; 365,190; 365,205; 365,220; 365,235; 365,250];
output_test=[235,160; 250,175; 265,190;270,205; 285,220; 290,235; 305,250; 320,265; 335,265; 350,265; 365,265;];
% Generate data to be regressed
x = [0.1,0.1;0.2,0.2;0.3,0.3;0.4,0.4;0.5,0.5;0.6,0.6;0.7,0.7;0.8,0.8;0.9,0.9;1,1];
y = [10,10;20,20;30,30;40,40;50,50;60,60;70,70;80,80;90,90;100,100];
X = input_train;
Y = output_train;
Xt = input_test;
Yt = output_test;
%% Choose the best using the gray wolf algorithm SVR parameter
SearchAgents_no=60; % Wolf population
Max_iteration=500; % Maximum number of iterations
dim=2; % This example needs to optimize two parameters c and g
lb=[0.1,0.1]; % Lower Bound for Parameter Value
ub=[100,100]; % Parameter value upper bound
Alpha_pos=zeros(1,dim); % Initialization Alpha Location of wolf
Alpha_score=inf; % Initialization Alpha The target function value of the wolf, change this to -inf for maximization problems
Beta_pos=zeros(1,dim); % Initialization Beta Location of wolf
Beta_score=inf; % Initialization Beta The target function value of the wolf, change this to -inf for maximization problems
Delta_pos=zeros(1,dim); % Initialization Delta Location of wolf
Delta_score=inf; % Initialization Delta The target function value of the wolf, change this to -inf for maximization problems
l=0; % Loop Counter
% %% SVM Network Regression Prediction
% [output_test_pre,acc,~]=svmpredict(output_test',input_test',model_gwo_svr); % SVM Model prediction and its accuracy
% test_pre=mapminmax('reverse',output_test_pre',rule2);
% test_pre = test_pre';
% gam = [bestc bestc]; % Regularization parameter
% sig2 =[bestg bestg];
% model = initlssvm(X,Y,type,gam,sig2,kernel); % Model Initialization
% model = trainlssvm(model); % train
% Yp = simlssvm(model,Xt); % regression
title('+Is the true value, o For Predictive Value')
% err_pre=wndspd(104:end)-test_pre;
% figure('Name','Test Data Residual Map')
% set(gcf,'unit','centimeters','position',[0.5,5,30,5])
% plot(err_pre,'*-');
% figure('Name','Original-Prediction map')
% plot(test_pre,'*r-');hold on;plot(wndspd(104:end),'bo-');
% legend('Forecast','Original')
% set(gcf,'unit','centimeters','position',[0.5,13,30,5])
% result=[wndspd(104:end),test_pre]
% MAE=mymae(wndspd(104:end),test_pre)
% MSE=mymse(wndspd(104:end),test_pre)
% MAPE=mymape(wndspd(104:end),test_pre)
%% Show program run time
Developing high-fidelity models of hydraulic systems · ModelingToolkit Course
Why focus on hydraulics? Essentially, hydraulic modelling is really hard: in numerical computing terms, hydraulic models are often "stiff" ODEs, which require more rigorous solvers than standard ODEs do. Solving the challenges of modeling hydraulics therefore carries over to the numerical modeling challenges of all other domains. Let's first start with the concept of compressibility. We often think of a liquid as incompressible: imagine attempting to "squeeze" water; it can be done, but only with very high forces. Therefore, if the model in question won't be solving a problem with high forces, the fluid can be assumed incompressible. However, most industrial hydraulic models will involve high forces, and this is precisely the regime in which most hydraulic machines operate.
Density is simply mass over volume
$$\rho = m/V$$
Given a volume and mass of liquid, if the volume were to change from $V_0$ to $V$, we know that the pressure would increase, and since the mass in this case was constant, the density will increase as well.
The change in pressure for an isothermal compressible process is typically given as
$$\Delta p = -\beta \frac{\Delta V}{V_0}$$
Substituting $\Delta p$ and $\Delta V$
$$p - p_0 = -\beta \frac{V - V_0}{V_0}$$
substituting $V = m / \rho$
$$p - p_0 = -\beta (1 - \rho/\rho_0)$$
Solving for $\rho$
$$\rho = \rho_0 (1 + (p - p_0)/\beta)$$
Taking a known $\rho_0$ when $p_0$ is 0 (at gage pressure), simplifies to
$$\rho = \rho_0 (1 + p/\beta)$$
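To get a feel for the magnitudes, here is a quick numeric check, written in Python purely for illustration, using the parameter values given for the example below:

```python
rho_0 = 876.0     # kg/m^3, density at zero gauge pressure
beta = 1.2e9      # Pa, bulk modulus
M = 10_000.0      # kg, supported mass
g = 9.807         # m/s^2
A = 0.01          # m^2, piston area

p = M * g / A                 # static pressure supporting the mass
rho = rho_0 * (1 + p / beta)  # density at that pressure

print(p)    # roughly 9.807e6 Pa, about 98 bar
print(rho)  # about 883.2 kg/m^3, under a 1% rise over rho_0
```

Even though the density change is below 1%, the rest of this example shows it cannot be neglected when computing the mass flow required to move the load.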
Conservation of mass gives us
$$m_{in} - m_{out} = m_s$$
The stored mass of oil is simply
$$m_s = \rho V$$
Taking the derivative gives us the rate of mass change
$$\dot{m}_{in} - \dot{m}_{out} = \frac{\delta (\rho V)}{\delta t}$$
Here is where the standard hydraulic modeling often makes a simplification.
Correct Derivation (1):
$$\frac{\delta (\rho V)}{\delta t} = \dot{\rho} V + \rho \dot{V}$$
Standard Practice^[1] (2):
$$\color{red} \frac{\delta (\rho V)}{\delta t} = \dot{\rho} V + \rho_0 \dot{V}$$
Given $\dot{\rho} = \rho_0 (\dot{p} / \beta)$, and $q = \dot{m}/\rho_0$ the above is often written as
$$\color{red} q_{in} - q_{out} = (\dot{p} / \beta) V + \dot{V}$$
Problem Definition - Given:
• $M = 10,000 kg$
• $A = 0.01 m^2$
• $\rho_0 = 876 kg/m^3$
• $\beta = 1.2e9 Pa$
• $g = 9.807 m/s^2$
Find the mass flow rate ($\dot{m}$) that provides a sinusoidal output of $x$:
$$x(t) = amp \cdot \sin(2 \pi t f) + x_0$$
There are 3 fundamental equations needed to solve this problem.
(1) Mass balance:
$$\dot{m} = \dot{\rho} \cdot V + \rho \cdot \dot{V}$$
where $V$ is the cylinder volume $=x \cdot A$
(2) Newton's law:
$$M \cdot \ddot{x} = p \cdot A - M \cdot g$$
(3) Density equation:
$$\rho = \rho_0 (1 + p/\beta)$$
The variables of this system are $x$, $p$, $\rho$, and $\dot{m}$. By including 1 input condition that gives 4 equations and 4 variables to be solved. We will solve the problem 3 different ways
• case 1: guess $\dot{m}$, partial mass balance
• case 2: guess $\dot{m}$, complete mass balance
• case 3: solution, solve $\dot{m}$ directly
We know that mass flow rate thru a pipe is equal to
$$\dot{m} = \rho \bar{u} A$$
where $\bar{u}$ is the average flow velocity thru cross section $A$. We can assume that $\bar{u} \approx \dot{x}$. Therefore we have
$$\dot{m} = \rho \cdot \dot{x} \cdot A$$
To solve this in ModelingToolkit.jl, let's start by defining our parameters and x function
using ModelingToolkit
using DifferentialEquations
using Symbolics
using Plots
using ModelingToolkit: t_nounits as t, D_nounits as D
# parameters -------
pars = @parameters begin
r₀ = 876 #kg/m^3
β = 1.2e9 #Pa
A = 0.01 #m²
x₀ = 1.0 #m
M = 10_000 #kg
g = 9.807 #m/s²
amp = 5e-2 #m
f = 15 #Hz
dt = 1e-4 #s
t_end = 0.2 #s
time = 0:dt:t_end
x_fun(t,amp,f) = amp*sin(2π*t*f) + x₀
Now, to supply $\dot{m}$ we need an $\dot{x}$ function. This can be automatically generated for us with Symbolics.jl
ẋ_fun = build_function(expand_derivatives(D(x_fun(t,amp,f))), t, amp, f; expression=false)
RuntimeGeneratedFunction(#=in Symbolics=#, #=using Symbolics=#, :((t, amp, f)->begin
#= /home/runner/.julia/packages/SymbolicUtils/qyMYa/src/code.jl:373 =#
#= /home/runner/.julia/packages/SymbolicUtils/qyMYa/src/code.jl:374 =#
#= /home/runner/.julia/packages/SymbolicUtils/qyMYa/src/code.jl:375 =#
(*)((*)((*)(6.283185307179586, amp), f), NaNMath.cos((*)((*)(6.283185307179586, f), t)))
As can be seen, we get a cos function, as expected when taking the derivative of sin. Now let's build the variables and equations of our system. The base equations are generated in a function so we can easily compare the correct derivation of the mass balance (density_type = r(t)) with the standard practice (density_type = r₀).
vars = @variables begin
x(t) = x₀
p(t) = M*g/A #Pa
function get_base_equations(density_type)
eqs = [
D(x) ~ ẋ
D(ẋ) ~ ẍ
D(r) ~ ṙ
r ~ r₀*(1 + p/β)
ṁ ~ ṙ*x*A + (density_type)*ẋ*A
M*ẍ ~ p*A - M*g
return eqs
Note: we've only specified the initial values for the known states of x and p. We will find the additional unknown initial conditions before solving. Now we have 7 variables defined and only 6
equations, missing the final driving input equation. Let's build 3 different cases:
case 1:
eqs_ṁ1 = [
ṁ ~ ẋ_fun(t,amp,f)*A*r # (4) Input - mass flow guess
case 2:
eqs_ṁ2 = [
ṁ ~ ẋ_fun(t,amp,f)*A*r # (4) Input - mass flow guess
case 3:
eqs_x = [
x ~ x_fun(t,amp,f) # (4) Input - target x
Now we have 3 sets of equations, let's construct the systems and solve. If we start with case 3 with the target $x$ input, notice that the structural_simplify step outputs a system with 0 equations!
@mtkbuild odesys_x = ODESystem(eqs_x, t, vars, pars)
julia> odesys_x
Model odesys_x with 0 equations
Unknowns (0):
Parameters (8):
r₀ [defaults to 876]
β [defaults to 1.2e9]
A [defaults to 0.01]
x₀ [defaults to 1.0]
Incidence matrix:
0×0 SparseArrays.SparseMatrixCSC{Symbolics.Num, Int64} with 0 stored entries
What this means is ModelingToolkit.jl has found that this model can be solved entirely analytically. The full system of equations has been moved to what is called "observables", which can be obtained
using the observed() function
julia> observed(odesys_x)
15-element Vector{Symbolics.Equation}:
xˍt(t) ~ 6.283185307179586amp*f*cos(6.283185307179586f*t)
xˍtt(t) ~ -39.47841760435743amp*(f^2)*sin(6.283185307179586f*t)
xˍttt(t) ~ -248.05021344239853amp*(f^3)*cos(6.283185307179586f*t)
x(t) ~ x₀ + amp*sin(6.283185307179586f*t)
ẋ(t) ~ xˍt(t)
ẋˍt(t) ~ xˍtt(t)
ẋˍtt(t) ~ xˍttt(t)
ẍ(t) ~ ẋˍt(t)
ẍˍt(t) ~ ẋˍtt(t)
p(t) ~ (-M*g - M*ẍ(t)) / (-A)
pˍt(t) ~ (M*ẍˍt(t)) / A
r(t) ~ r₀*(1 + p(t) / β)
rˍt(t) ~ (r₀*pˍt(t)) / β
ṙ(t) ~ rˍt(t)
ṁ(t) ~ A*r(t)*ẋ(t) + A*x(t)*ṙ(t)
Some of the observables have a ˍt appended to the name. These are called dummy derivatives, which are a consequence of the algorithm to reduce the system DAE index.
This system can still be "solved" using the same steps to generate an ODESolution which allows us to easily obtain any calculated observed state.
prob_x = ODEProblem(odesys_x, [], (0, t_end))
sol_x = solve(prob_x; saveat=time)
plot(sol_x; idxs=ṁ)
Now let's solve the other system and compare the results.
@mtkbuild odesys_ṁ1 = ODESystem(eqs_ṁ1, t, vars, pars)
julia> odesys_ṁ1
Model odesys_ṁ1 with 4 equations
Unknowns (4):
x(t) [defaults to x₀]
Parameters (8):
r₀ [defaults to 876]
β [defaults to 1.2e9]
A [defaults to 0.01]
x₀ [defaults to 1.0]
Incidence matrix:
4×7 SparseArrays.SparseMatrixCSC{Symbolics.Num, Int64} with 10 stored entries:
⋅ × ⋅ × ⋅ ⋅ ⋅
⋅ ⋅ × ⋅ × ⋅ ⋅
⋅ ⋅ ⋅ ⋅ ⋅ × ×
× × × ⋅ ⋅ ⋅ ×
Notice that now, with a simple change of the system input variable, structural_simplify() outputs a system with 4 states to be solved. We can find the initial conditions needed for these states from
sol_x and solve.
u0 = [sol_x[s][1] for s in unknowns(odesys_ṁ1)]
prob_ṁ1 = ODEProblem(odesys_ṁ1, u0, (0, t_end))
@time sol_ṁ1 = solve(prob_ṁ1; initializealg=NoInit());
┌ Warning: Initialization system is overdetermined. 3 equations for 0 unknowns. Initialization will default to using least squares. To suppress this warning pass warn_initialize_determined = false.
└ @ ModelingToolkit ~/.julia/packages/ModelingToolkit/cAhZr/src/systems/diffeqs/abstractodesystem.jl:1626
4.755130 seconds (3.20 M allocations: 233.692 MiB, 1.42% gc time, 99.98% compilation time)
The resulting mass flow rate required to hit the target $x$ position can be seen to be completely wrong. This is the large impact that compressibility can have when high forces are involved.
plot(sol_ṁ1; idxs=ṁ, label="guess", ylabel="ṁ")
plot!(sol_x; idxs=ṁ, label="solution")
If we now solve for case 2, we can study the impact the compressibility derivation
@mtkbuild odesys_ṁ2 = ODESystem(eqs_ṁ2, t, vars, pars)
prob_ṁ2 = ODEProblem(odesys_ṁ2, u0, (0, t_end))
@time sol_ṁ2 = solve(prob_ṁ2; initializealg=NoInit());
┌ Warning: Initialization system is overdetermined. 3 equations for 0 unknowns. Initialization will default to using least squares. To suppress this warning pass warn_initialize_determined = false.
└ @ ModelingToolkit ~/.julia/packages/ModelingToolkit/cAhZr/src/systems/diffeqs/abstractodesystem.jl:1626
4.599240 seconds (2.94 M allocations: 215.633 MiB, 1.35% gc time, 99.98% compilation time)
As can be seen, a significant error forms between the 2 cases. Plotting first the absolute position.
plot(sol_x; idxs=x, label="solution", ylabel="x")
plot!(sol_ṁ1; idxs=x, label="case 1: r₀")
plot!(sol_ṁ2; idxs=x, label="case 2: r")
And now plotting the difference between case 1 and 2.
plot(time, (sol_ṁ1(time)[x] .- sol_ṁ2(time)[x])/1e-3,
ylabel="error (case 1 - case 2) [mm]",
xlabel="t [s]"
Also note the difference in computation.
julia> sol_ṁ1.destats
SciMLBase.DEStats
Number of function 1 evaluations: 452
Number of function 2 evaluations: 0
Number of W matrix evaluations: 30
Number of linear solves: 240
Number of Jacobians created: 30
Number of nonlinear solver iterations: 0
Number of nonlinear solver convergence failures: 0
Number of fixed-point solver iterations: 0
Number of fixed-point solver convergence failures: 0
Number of rootfind condition calls: 0
Number of accepted steps: 30
Number of rejected steps: 0
As can be seen, including the detail of full compressibility resulted in more computation: more function evaluations, Jacobians, solves, and steps.
julia> sol_ṁ2.destats
SciMLBase.DEStats
Number of function 1 evaluations: 528
Number of function 2 evaluations: 0
Number of W matrix evaluations: 36
Number of linear solves: 288
Number of Jacobians created: 34
Number of nonlinear solver iterations: 0
Number of nonlinear solver convergence failures: 0
Number of fixed-point solver iterations: 0
Number of fixed-point solver convergence failures: 0
Number of rootfind condition calls: 0
Number of accepted steps: 34
Number of rejected steps: 2
Now let's re-create this example using components from ModelingToolkitStandardLibrary.jl. It can be shown that connecting Mass and Volume components achieves the exact same result. The important thing is to pay very close attention to the initial conditions.
import ModelingToolkitStandardLibrary.Mechanical.Translational as T
import ModelingToolkitStandardLibrary.Hydraulic.IsothermalCompressible as IC
import ModelingToolkitStandardLibrary.Blocks as B
using DataInterpolations
mass_flow_fun = LinearInterpolation(sol_x[ṁ], sol_x.t)
function MassVolume(; name, dx, drho, dm)
    pars = @parameters begin
        A = 0.01 #m²
        x₀ = 1.0 #m
        M = 10_000 #kg
        g = 9.807 #m/s²
        amp = 5e-2 #m
        f = 15 #Hz
    end
    vars = []
    systems = @named begin
        fluid = IC.HydraulicFluid(; density = 876, bulk_modulus = 1.2e9)
        mass = T.Mass(; v = dx, m = M, g = -g)
        vol = IC.Volume(; area = A, x = x₀, p = p_int, dx, drho, dm)
        mass_flow = IC.MassFlow(; p_int)
        mass_flow_input = B.TimeVaryingFunction(; f = mass_flow_fun)
    end
    eqs = [
        connect(mass.flange, vol.flange)
        connect(vol.port, mass_flow.port)
        connect(mass_flow.dm, mass_flow_input.output)
        connect(mass_flow.port, fluid)
    ]
    return ODESystem(eqs, t, vars, pars; systems, name)
end
dx = sol_x[ẋ][1]
drho = sol_x[ṙ][1]
dm = sol_x[ṁ][1]
@mtkbuild odesys = MassVolume(; dx, drho, dm)
prob = ODEProblem(odesys, [], (0, t_end))
sol = solve(prob)
plot(sol; idxs=odesys.vol.x, linewidth=2)
plot!(sol_x; idxs=x)
The next challenging aspect of hydraulic modeling is modeling flow through a pipe, which for compressible flow requires resolving the momentum balance equation. To derive the momentum balance we can
draw a control volume (cv) in a pipe with area $A$, as shown in the figure below, and apply Newton's second law. Across this control volume from $x_1$ to $x_2$ the pressure will change from $p_1$ to
$p_2$. Assuming this is written for an acausal component, we put nodes at $p_1$ and $p_2$ which will have equal mass flow $\dot{m}$ entering and exiting the cv^[2].
Now taking the sum of forces acting on the cv we have the pressure forces at each end as well as the viscous drag force from the pipe wall and the body force from gravity. The sum of forces is equal
to the product of mass ($\rho V$) and flow acceleration ($\dot{u}$).
$$\rho V \dot{u} = (p_1 - p_2) A - F_{viscous} + \rho V g$$
$$F_{viscous} = A \frac{1}{2} \rho u^2 f \frac{L}{d_h}$$
where $f$ is the fully developed pipe friction factor for a given cross-section shape, $L$ is the pipe length, and $d_h$ is the pipe hydraulic diameter.
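To get a feel for the viscous term, the expression can be evaluated numerically. The sketch below is written in Python for brevity; the parameter values are illustrative assumptions, not values from this lecture. It also checks the quadratic dependence of the drag force on velocity.

```python
def viscous_force(rho, u, f, L, d_h, A):
    """Pipe-wall drag on the cv: A * (1/2) * rho * u^2 * f * (L / d_h)."""
    return A * 0.5 * rho * u**2 * f * (L / d_h)

# Illustrative values (assumptions): hydraulic oil, 1 m of 10 mm pipe,
# a laminar-range friction factor.
rho = 876.0   # density [kg/m^3]
f = 0.17      # friction factor [-]
L = 1.0       # pipe length [m]
d_h = 0.01    # hydraulic diameter [m]
A = 7.85e-5   # pipe cross-section area [m^2]

F1 = viscous_force(rho, 1.0, f, L, d_h, A)  # drag at u = 1 m/s
F2 = viscous_force(rho, 2.0, f, L, d_h, A)  # drag at u = 2 m/s
```

Because the drag scales with $u^2$, doubling the velocity quadruples the force, which is why viscous losses grow so quickly in high-flow conditions.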
Note: the current implementation of this component in the ModelingToolkitStandardLibrary.jl does not include the gravity force, because it makes initialization challenging and will take some work to implement.
The density $\rho$ is an average of $\rho_1$ and $\rho_2$. The velocity is also taken as an average of $u_1$ and $u_2$
$$u_1 = \frac{\dot{m}}{\rho_1 A} \qquad u_2 = \frac{\dot{m}}{\rho_2 A}$$
The term $\rho V \dot{u}$ introduces what is referred to as fluid inertia. This is what resolves pressure wave propagation through a pipe; a classic wave propagation example in pipes is the "water hammer" effect. The full derivation of the flow velocity derivative in 2 dimensions is
$$\frac{D \text{V}}{Dt} = \frac{\partial \text{V}}{\partial t} + \frac{\partial \text{V}}{\partial x} u + \frac{\partial \text{V}}{\partial z} w$$
where $\text{V}$ is the velocity vector and $u$ and $w$ are the flow components in the $x$ and $z$ directions. In the ModelingToolkitStandardLibrary.jl the following assumption is taken
$$\rho V \frac{D \text{V}}{Dt} \approx \frac{\partial \dot{m}}{\partial t}$$
Implement a more detailed conservation of momentum using the standard derivation. One idea is to use MethodOfLines.jl to provide the derivative in $x$.
To model a pipe for compressible flow, we can combine the mass balance and momentum balance components to give both mass storage and flow resistance. Furthermore, to provide a more accurate model that allows for wave propagation, we can discretize the volume, connecting elements by nodes of equal pressure and mass flow. The diagram below shows an example of discretizing with 3 mass balance volumes and 2 momentum balance resistive elements. Note: the Modelica Standard Library does this in a different way, by combining the mass and momentum balance in a single base class.
Both Modelica and SimScape model the actuator component with a simple uniform-pressure volume component. The Modelica library defines its base fluids class around the assumption of constant length (see: Object-Oriented Modeling of Thermo-Fluid Systems), and therefore adapting it to a component that changes length is not possible. But for long actuators with high dynamics the pressure is not at all uniform, so this detail cannot be ignored, and adding the momentum balance to provide flow resistance and fluid inertia is necessary. The diagram below shows the design of a DynamicVolume component which includes both mass and momentum balance in addition to discretization by volume. The discretization is similar to the pipe, except the scheme becomes a bit more complicated with the moving wall ($x$). As the volume shrinks, the control volumes will also shrink, however not in unison, but one at a time. In this way, as the moving wall closes, the flow will come from the first volume ($cv1$) and travel through the remaining full-size elements ($cv2$, $cv3$, etc.). After the first element's length drops to zero, the next element will then start to shrink.
Unfortunately, this design has a flaw; expanding the system for N=3 shows what happens when transitioning from one cv to the next: if the moving wall velocity is significant, an abrupt change occurs due to the $\rho_i \dot{x}$ term. This creates an unstable condition for the solver and results in poor quality/accuracy. To resolve this problem, the mass balance equation is split into 2 parts: mass balance 1 & 2
$$\text{mass balance 1: } \dot{m}/A = \dot{\rho} x \qquad \text{mass balance 2: } \dot{m}/A = \rho \dot{x}$$
The below diagram explains how this component is constructed
Now the flows are simplified and are more numerically stable. The acausal connections then handle the proper summing of flows.
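The split can be verified numerically: for any assumed density and wall trajectory, the two mass-balance parts must sum to the total rate of change of mass in the volume. A quick check, written in Python for brevity; the density and wall trajectories here are illustrative assumptions, not values from this lecture.

```python
A = 0.01                            # piston area [m^2]
rho = lambda t: 876.0 + 10.0 * t    # assumed density trajectory [kg/m^3]
x = lambda t: 1.0 - 0.1 * t         # assumed moving-wall position [m]
rho_dot, x_dot = 10.0, -0.1         # their exact time derivatives

def mdot_total(t, h=1e-6):
    # total mass balance: mdot = A * d(rho * x)/dt, via a central difference
    return A * (rho(t + h) * x(t + h) - rho(t - h) * x(t - h)) / (2 * h)

def mdot_split(t):
    # mass balance 1 (A * rho_dot * x) plus mass balance 2 (A * rho * x_dot)
    return A * rho_dot * x(t) + A * rho(t) * x_dot
```

At any time the two evaluations agree to numerical precision, which is exactly what lets the acausal connections sum the split flows back into the total.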
• 1. See the Simscape hydraulic chamber. Note the deprecation warning, which points to the isothermal liquid library that uses the correct derivation.
• 2. The Modelica Standard Library combines the mass and momentum balance in the same base class; therefore, the mass flow in and out of the cv is not equal, which introduces an additional term to the lhs of the momentum balance: $\frac{\partial \left( \rho u^2 A \right) }{\partial x}$
BW2 = bwulterode(BW) computes the ultimate erosion of the binary image BW. The ultimate erosion of BW consists of the regional maxima of the Euclidean distance transform of the complement of BW.
BW2 = bwulterode(BW,method) specifies the distance transform method.
BW2 = bwulterode(___,conn) specifies the pixel connectivity.
Perform Ultimate Erosion of Binary Image
Read a binary image into the workspace and display it.
originalBW = imread('circles.png');
Perform the ultimate erosion of the image and display it.
ultimateErosion = bwulterode(originalBW);
figure, imshow(ultimateErosion)
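For intuition, the ultimate-erosion idea can be sketched outside MATLAB: compute the distance from every foreground pixel to the nearest background pixel, then keep the pixels that are at least as far as all of their 8-connected neighbours. The brute-force Python sketch below (illustrative only; it uses simple local maxima as a stand-in for true regional maxima, which also merge connected plateaus) demonstrates this on a 5-by-5 square, whose ultimate erosion is its single center pixel.

```python
import math

def ultimate_erosion(bw):
    """Local maxima of the Euclidean distance transform of ~bw (brute force)."""
    h, w = len(bw), len(bw[0])
    bg = [(r, c) for r in range(h) for c in range(w) if not bw[r][c]]
    # Distance from each foreground pixel to the nearest background pixel.
    dist = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if bw[r][c]:
                dist[r][c] = min(math.hypot(r - br, c - bc) for br, bc in bg)
    # Keep pixels whose distance is >= all 8-connected neighbours.
    out = [[False] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            out[r][c] = dist[r][c] > 0 and all(
                dist[r][c] >= dist[rr][cc]
                for rr in range(max(0, r - 1), min(h, r + 2))
                for cc in range(max(0, c - 1), min(w, c + 2)))
    return out

# A 5x5 solid square inside a 7x7 image.
bw = [[1 if 1 <= r <= 5 and 1 <= c <= 5 else 0 for c in range(7)] for r in range(7)]
ue = ultimate_erosion(bw)
```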
Input Arguments
BW — Binary image
numeric array | logical array
Binary image, specified as a numeric or logical array of any dimension.
Example: BW = imread('circles.png');
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
method — Distance transform method
'euclidean' (default) | 'quasi-euclidean' | 'cityblock' | 'chessboard'
Distance transform method, specified as one of the values in this table.
Method Description
'chessboard' In 2-D, the chessboard distance between (x[1],y[1]) and (x[2],y[2]) is max(│x[1] – x[2]│, │y[1] – y[2]│).
'cityblock' In 2-D, the cityblock distance between (x[1],y[1]) and (x[2],y[2]) is │x[1] – x[2]│ + │y[1] – y[2]│.
'euclidean' In 2-D, the Euclidean distance between (x[1],y[1]) and (x[2],y[2]) is √((x[1] – x[2])² + (y[1] – y[2])²).
'quasi-euclidean' In 2-D, the quasi-Euclidean distance between (x[1],y[1]) and (x[2],y[2]) is │x[1] – x[2]│ + (√2 – 1)│y[1] – y[2]│ when │x[1] – x[2]│ > │y[1] – y[2]│, and (√2 – 1)│x[1] – x[2]│ + │y[1] – y[2]│ otherwise.
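For concreteness, the four metrics in the table can be written as small functions. This Python sketch (illustrative, not part of the MATLAB toolbox) computes each distance between two 2-D points:

```python
import math

def chessboard(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def cityblock(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def quasi_euclidean(p, q):
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    if dx > dy:
        return dx + (math.sqrt(2) - 1) * dy
    return (math.sqrt(2) - 1) * dx + dy
```

For the points (0,0) and (3,4) these give 4, 7, 5, and about 5.24 respectively, which shows how each metric weights diagonal travel differently.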
For more information, see Distance Transform of a Binary Image.
conn — Pixel connectivity
4 | 8 | 6 | 18 | 26 | 3-by-3-by- ... -by-3 matrix of 0s and 1s
Pixel connectivity, specified as one of the values in this table. The default connectivity is 8 for 2-D images, and 26 for 3-D images.
Value Meaning
Two-Dimensional Connectivities
4 Pixels are connected if their edges touch. The neighborhood of a pixel is the set of adjacent pixels in the horizontal or vertical direction.
8 Pixels are connected if their edges or corners touch. The neighborhood of a pixel is the set of adjacent pixels in the horizontal, vertical, or diagonal direction.
Three-Dimensional Connectivities
6 Pixels are connected if their faces touch. The neighborhood of a pixel is the set of adjacent pixels in one of these directions: in, out, left, right, up, and down.
18 Pixels are connected if their faces or edges touch. The neighborhood of a pixel is the set of adjacent pixels in one of these directions: in, out, left, right, up, and down; or a combination of two directions, such as right-down or in-up.
26 Pixels are connected if their faces, edges, or corners touch. The neighborhood of a pixel is the set of adjacent pixels in one of these directions: in, out, left, right, up, and down; a combination of two directions, such as right-down or in-up; or a combination of three directions, such as in-right-up or in-left-down.
For higher dimensions, bwulterode uses the default value conndef(ndims(BW),'maximal').
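The connectivity values follow a simple pattern: a neighbour offset is included when the number of coordinates that differ from the current pixel is at most some bound. A short Python sketch (illustrative) generates the offsets and recovers the counts 4, 8, 6, 18, and 26:

```python
from itertools import product

def conn_offsets(ndim, max_nonzero):
    """Neighbour offsets where between 1 and `max_nonzero` coordinates differ."""
    return [off for off in product((-1, 0, 1), repeat=ndim)
            if 0 < sum(o != 0 for o in off) <= max_nonzero]

# 2-D: 4-connectivity allows 1 changing axis, 8-connectivity allows 2.
# 3-D: 6 = faces only (1 axis), 18 = faces + edges (up to 2), 26 = all (up to 3).
```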
Data Types: double | logical
Output Arguments
BW2 — Eroded image
logical array
Eroded image, returned as a logical array of the same size as BW.
Data Types: logical
Version History
Introduced before R2006a
• rename functions for consistency: eig() -> Eigen(), point_on_line() -> pointOnLine(), power_method() -> powerMethod(), row_cofactors() -> rowCofactors(), row_minors() -> rowMinors().
• add Det() to compute determinants by elimination, from eigenvalues, or by minors and cofactors, with possibility of verbose output.
• plotEqn3d() gets an axes argument and lit to control lighting of the planes; lit solves a problem with the planes becoming indistinguishable in some rotations.
• add svdDemo() function to illustrate the SVD of a 3 x 3 matrix [thx: Duncan Murdoch]
• add symMat() to create a square symmetric matrix from a vector.
• add angle() to calculate angle between vectors
• powerMethod() gets a keep argument, for possible use in plotting the convergence of eigenvectors.
• add adjoint(), to round out methods for determinants
• add GramSchmidt() for the Gram-Schmidt algorithm on columns of a matrix. The existing function gsorth() will be deprecated and then removed.
• gsorth() has been deprecated.
• fixed use of MASS::fractions in gaussianElimination
• added printMatEqn() to print matrix expressions side-by-side
Celsius Calculator for Temperature Conversion [Easily Solved]
Looking to convert temperatures quickly? Our Celsius calculator efficiently transforms Celsius into Fahrenheit and back. Below, you’ll find straightforward steps, the underlying formulas, and a
closer look at temperature scales to support all your temperature conversion calculator needs.
Celsius Calculator: Key Takeaways
• The celsius calculator formula to convert Celsius to Fahrenheit is °F = (°C × 9/5) + 32, and tools like online temperature conversion calculators can assist in making these conversions accurately and easily.
• The Celsius, Fahrenheit, and Kelvin scales are distinct units of temperature measurement. Each scale has different reference points and uses across various regions and scientific disciplines.
• Converting between Celsius, Fahrenheit, and Kelvin requires arithmetic calculations using specific formulas. These conversions add precision to scientific work, meteorology, cooking, medicine,
and engineering.
Quick and Accurate Celsius to Fahrenheit Conversion
While temperature conversion might appear complex, it’s relatively straightforward. To convert from degrees Celsius to Fahrenheit, there is a simple equation that provides accurate results. The
conversion from c to f is made by using the formula °F = (°C × 9/5) + 32, which shifts the temperature value from the Celsius scale to the Fahrenheit scale. Just take the temperature in degrees
Celsius, multiply it by 1.8 (equivalent to 9/5), and then add 32 to the result. Voila! You’ve successfully converted Celsius to Fahrenheit with your celsius calculator.
The Formula for Conversion
The conversion from Celsius to Fahrenheit is based on a mathematical formula. The equation for this conversion is degrees Fahrenheit (°F) = (°C × 9/5) + 32. This formula was derived by considering
the varying scales of the two temperature systems. Water boils at 100°C and 212°F, with the scales increasing at different rates (100 vs 180). The ratio of the temperature change in °F to °C is
180/100, which simplifies to 9/5.
The temperature conversion formula used by a Celsius calculator is direct multiplication and addition: Celsius × 9/5 + 32. The formulas for Celsius to Fahrenheit and Fahrenheit to Celsius differ because of the mathematical relationship between the two temperature scales.
Using the Celsius Calculator
The Celsius calculator is a specialized tool designed to convert temperatures from Celsius to Fahrenheit and vice versa, using the conversion formula °F = °C × 9/5 + 32. The calculator functions
by implementing mathematical formulas for temperature conversion.
To convert Fahrenheit to Celsius, subtract 32 from the Fahrenheit temperature and then divide the result by 1.8 (or 9/5). To convert Celsius to Fahrenheit, multiply the Celsius temperature by 1.8 (or 9/5) and then add 32 to the product. This will give you the corresponding Fahrenheit temperature.
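The two formulas can be captured as a pair of small functions. Here is a minimal Python sketch (illustrative):

```python
def celsius_to_fahrenheit(c):
    # °F = (°C × 9/5) + 32
    return c * 9 / 5 + 32

def fahrenheit_to_celsius(f):
    # °C = (°F − 32) × 5/9
    return (f - 32) * 5 / 9
```

As a quick check, the boiling point of water converts as 100°C → 212°F, and converting a value one way and then back returns the original temperature.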
There are reliable online temperature conversion calculator tools available, like the ones offered by Online Calculator and DigiKey, that can be easily accessed for temperature conversions.
Understanding Temperature Scales: Celsius, Fahrenheit, and Kelvin
Temperature scales provide a structured method to quantify heat and cold. The three primary scales worldwide are Celsius, Fahrenheit, and Kelvin, each with unique properties and units of measurement.
Anders Celsius created the Celsius scale in the 18th century, based on the freezing and boiling points of water.
The Celsius scale designates 0°C as the freezing point and 100°C as the boiling point of water. The Fahrenheit scale sets water’s freezing point at 32°F and its boiling point at 212°F. The
Kelvin scale is an absolute temperature scale based on the concept of absolute zero. At 0 K, it represents the absence of thermal energy.
Celsius Calculator Scale
Anders Celsius introduced the Celsius scale, also known as the centigrade scale, in 1742. Widely adopted as the standard in most countries except the USA, the Celsius scale is known for its
simplicity and ease of use. This scale sets the freezing point of water at 0 degrees and the boiling point at 100 degrees, significantly contributing to temperature measurement. It is commonly used
to measure temperature in many regions, excluding the United States.
Additionally, the medical sector widely uses the Celsius scale for assessing body temperature. The scientific community frequently utilizes the Celsius scale because it is part of the metric system
of units, which is extensively embraced in scientific endeavors.
Fahrenheit Scale
Physicist Daniel Gabriel Fahrenheit developed the Fahrenheit scale in 1724. He established the freezing point of water at 32 degrees and its boiling point at 212 degrees. The Fahrenheit scale is
predominantly used in countries like:
• United States
• Bahamas
• Belize
• Liberia
among others.
The Fahrenheit temperature scale is still used in these countries for non-scientific temperature measurement, such as weather forecasting and indoor heating and cooling.
Kelvin Scale
Lord Kelvin introduced the Kelvin temperature scale, an absolute temperature scale. This scale starts at absolute zero, where a thermodynamic system has the lowest energy, and uses increments (kelvins) of the same magnitude as the Celsius degree.
The scientific community widely uses this scale because it provides precision, especially in fields involving very high or very low temperatures.
Converting Between Celsius, Fahrenheit, and Kelvin
Having gained a solid understanding of the different temperature scales, we can now explore their inter-conversion. Temperature conversion is an essential skill, especially in fields where precise
measurements are crucial. The formulas needed for these conversions are simple and easy to remember with a bit of practice.
Whether you’re a scientist working in a lab, a chef monitoring the temperature of your oven, or just a curious learner, understanding how to convert between these temperature scales within a
specific temperature range can be incredibly useful.
Celsius to Fahrenheit and Vice Versa
Converting temperatures between Celsius and Fahrenheit involves basic arithmetic. When converting from Celsius to Fahrenheit, simply multiply the temperature in degrees Celsius by 1.8 (or 9/5) and
then add 32 to get the result in Fahrenheit.
To convert from Fahrenheit to Celsius, subtract 32 from the Fahrenheit temperature value, and then divide the result by 1.8 (or 9/5). So whether you’re planning a trip to a country that uses a
different temperature scale or you’re just trying to convert temperature for a recipe, this simple conversion can come in handy.
Celsius to Kelvin and Vice Versa
The conversion of Celsius to Kelvin simply requires adding 273.15 to the Celsius temperature. This addition accounts for the offset between the zero points of the Celsius and Kelvin scales. To convert Kelvin to Celsius, subtract 273.15 from the Kelvin temperature. This simple conversion is particularly useful in scientific contexts, where the Kelvin scale is commonly used.
Fahrenheit to Kelvin and Vice Versa
The conversion from Fahrenheit to Kelvin requires a somewhat more elaborate formula. The equation is K = (F – 32) x 5/9 + 273.15. This formula accounts for the difference in the freezing points of
the Fahrenheit and Kelvin scales, as well as the difference in the size of the degrees.
To convert Kelvin to Fahrenheit, use the formula °F = K × 1.8 − 459.67 or, alternatively, °F = 1.8 (K − 273.15) + 32. Although this conversion is less commonly used in everyday life, it's essential in scientific and engineering fields.
Historical Context of Temperature Scales
Creating temperature scales was a significant milestone in science. Anders Celsius created the Celsius scale in the 18th century, based on the freezing and boiling points of water, making it easy to
use for everyday temperatures. These scales revolutionized our understanding of heat and cold and have practical applications in various fields, including:
• Meteorology
• Cooking
• Medicine
• Engineering
• Chemistry
The three temperature scales we use today, Celsius, Fahrenheit, and Kelvin, were developed by scientists who made significant contributions to their respective fields.
Anders Celsius and the Celsius Calculator Scale
Anders Celsius developed the Celsius scale, also known as the centigrade scale, in 1742. Most countries, except the USA, widely adopt the Celsius scale as the de facto standard due to its simplicity
and ease of use. Initially, Celsius designated the boiling point of water as 0 degrees and the freezing point as 100 degrees. However, after his death, the scale was inverted to its current form, with the freezing point of water at 0 degrees and the boiling point at 100 degrees.
This scale is widely used worldwide, especially in scientific and medical fields.
Daniel Gabriel Fahrenheit and the Fahrenheit Scale
In the early 18th century, German physicist Daniel Gabriel Fahrenheit:
• Invented the mercury-in-glass thermometer
• Developed the Fahrenheit scale
• Set the freezing point of water at 32 degrees
• Set the boiling point of water at 212 degrees on his scale.
Today, people mainly use the Fahrenheit scale in the United States and a few other countries.
Lord Kelvin and the Kelvin Scale
The Kelvin scale, an absolute temperature scale, is named after William Thomson, also known as Lord Kelvin. Unlike the Celsius and Fahrenheit scales, which are based on the properties of water, the
Kelvin scale starts at absolute zero, where all particle motion theoretically stops.
Scientists primarily use this scale for its precision and consistency.
Practical Applications of Temperature Conversion for a Celsius Calculator
Grasping temperature conversion isn’t solely for scientists. It holds practical significance in diverse fields, ranging from meteorology to the food industry. In meteorology, temperature conversion
is crucial for standardizing temperature expressions across different scales. This ensures consistent reporting and analysis.
Similarly, in the food industry, accurate temperature conversion is essential for calculating heat transfer, a critical part of thermal processing. It also helps maintain accurate temperatures to
eliminate harmful bacteria and microorganisms, ensuring food safety.
Tips for Estimating Temperature Conversions for Celsius Calculator
Although a calculator or an app makes temperature conversion easy, being able to estimate these conversions mentally is a useful skill. For instance, a quick method to estimate a Celsius to Fahrenheit conversion is to double the Celsius temperature, subtract 10 percent of the result, then add 32. Similarly, to convert Fahrenheit to Celsius, you can subtract 30 from the Fahrenheit temperature and then halve the result. Remember, these are only estimates and may not be accurate for precise measurements.
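As a sanity check, mental shortcuts can be compared with the exact formula. The Python sketch below (illustrative) uses the common "double and add 30" rule of thumb, a simpler variant of the estimate described above, together with the "subtract 30 and halve" reverse rule:

```python
def exact_c2f(c):
    return c * 9 / 5 + 32

def quick_c2f(c):
    # rule of thumb: double the Celsius value and add 30
    return 2 * c + 30

def quick_f2c(f):
    # reverse rule of thumb: subtract 30, then halve
    return (f - 30) / 2
```

For everyday temperatures the shortcut lands within a few degrees of the exact value, e.g. 20°C estimates as 70°F versus the exact 68°F.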
Common Temperature Conversion Questions
We’ve delved deeply into temperature conversion and the variety of temperature scales. You might still have some burning questions on the topic. Let’s tackle some of the most commonly asked
questions when it comes to temperature conversion and temperature scales.
Why are there different temperature scales?
The presence of numerous temperature scales has historical roots and reflects the evolution of scientific comprehension over time. Anders Celsius created the Celsius temperature scale, which most
countries, except the USA, widely adopt as the de facto standard. People developed different scales using various reference points and calibration methods.
Today, different countries and scientific disciplines use different scales based on preference, practicality, and established standards in the field.
How do I remember the conversion formulas?
While committing temperature conversion formulas to memory may initially seem challenging, there are various strategies you can employ. One method is to associate the formulas with a mnemonic device,
like ‘30 is hot; 20 is nice; 10 is cool; 0 is ice’ to remember approximate Celsius to Fahrenheit conversions. Another method is to practice the formulas frequently until they become second nature.
What is the significance of absolute zero?
In temperature study, absolute zero is a pivotal concept. It represents the lowest possible temperature, where a thermodynamic system has the lowest energy. Absolute zero is the lower limit on the
Kelvin scale, making it crucial for scientific measurements and calculations.
Although reaching absolute zero in real-world conditions is impossible, various cooling methods can achieve temperatures close to it.
In this journey through temperature conversion, we’ve explored the Celsius, Fahrenheit, and Kelvin scales. We delved into the history behind these scales and learned how to convert between them.
We’ve discussed practical applications of temperature conversion in fields like cooking and meteorology. We also provided tips for estimating these conversions quickly.
Understanding temperature conversion is not just scientific but also practical. It can make your life easier when reading a recipe, planning a trip, or checking the weather. So, the next time you see
a temperature reading, remember it’s not just a number but part of a fascinating story of scientific discovery and innovation.
Certified MTP has numerous options for the thermometers, including Digital Thermometers, Mercury Free Thermometers, and Digital Infrared Thermometers.
Frequently Asked Questions
How do you use a Celsius Calculator?
To calculate Celsius from Fahrenheit, use the formula C = 5/9(F-32). For example, to convert 84 °F to Celsius, it would be approximately 28.89 °C.
How do you calculate F to C?
To convert temperatures from Fahrenheit to Celsius, subtract 32 from the temperature in Fahrenheit and then multiply the result by 5/9. For example, to convert 68°F to °C, you would do (68 − 32) × 5/9 = 20°C.
What is Celsius to Fahrenheit chart?
A Celsius to Fahrenheit chart shows the conversion between temperatures in Celsius and Fahrenheit.
Why does the U.S. use the Fahrenheit scale while most of the world uses Celsius?
The U.S. uses the Fahrenheit scale due to its historical adoption from English-speaking countries and independence. In contrast, most countries use the Celsius scale, the de facto standard, except
the USA. The metric system includes the Celsius scale, which was developed later.
What is absolute zero in celsius calculator?
Absolute zero is the lowest possible temperature. At this point, a thermodynamic system has minimal energy. It serves as the starting point of the Kelvin scale.
Related Blogs for Celsius Calculator
C to F Formula: Converting Celsius to Fahrenheit [Easily Solved]
0 F to C Conversion: Fahrenheit to Celsius Fast [Easily Solved]
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
I was really struggling with the older version... so much I pretty much just gave up on it. This newer version looks better and seems easier to navigate through. I think it will be great! Thank you!
Willy Tucker, NJ.
WOW!!! This is AWESOME!! Love the updated version, it makes things go SMOOTH!!!
Richard Penn, DE.
This product is great. I'm a Math 10 Honors student and my parents bought your algebra software to help me out. I didn't think I'd use it as much as I have, but the step-by-step instructions have come in handy.
George Miller, LA
Search phrases used on 2010-01-22:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
• merrill geometry text book answers
• Beginnin & Intermediate Algebra an Integrated Approach, fifth edition, study guide
• multiply and simplify by factoring that uses radicals and indexes
• algebric tile
• free elementary algebra practice problems
• 6 grade adding
• how to solve multi -step equations by adding
• solve complex number system
• very hard maths for kids
• grade 6 percentage work sheet
• math 208 final exam uop study guide
• aptitude questions pdf
• sample test for solving system of equation involving quadratic
• how to find equation of a curved line
• fifth grade math review worksheets
• 11th grade algebra 2 final exam
• learn algebra
• order of operations and exponent worksheet
• multipying negative fractions
• composiion of functions to solve real-world problems
• difference between evaluation and simplification of an expression
• simplifying exponents worksheets
• prentice hall mathematics algebra 1 answers
• factor special polynomials online calculator
• permutations and combination + basic rules and problems + Basics + lecture notes + ppt
• math trivia puzzle
• What is the square root of 1500
• history of mathematics square root calculation
• teaching algebra equations
• aptitude+math question paper
• solving non-homogeneous second order odes
• free algebra questions
• Factoring Trinomials Calculator online
• 9th grade online math tests
• "algebrator" & "download"
• graph square root of two variables
• radicals application in real life
• how to solve differential equations by matlab
• simplifying calculator x^3
• 8th grade worksheets on decimals
• algebra passing test practice
• solving linear equations with fractions
• algebra cliff notes
• find the slope of a binomial
• 12 times the square root of 3xy^2
• multiplying one to twelve
• teachers helper math problem
• TI-84 Plus LOG
• java solve equation package
• program
• free basic algebra studies
• Printable 8th grade worksheets
• do maths online for kids at a year 7 n 8
• root quadratic factor
• free calculator for evaluating rational expressions
• aptitude question
• grade one homework programs
• hard equations
• algebra test questions and answeres
• clep study guides free download
• how to solve exponents with square roots
• math equations 9th grade pa
• How to find inverse function using Ti 83 plus
• free online general english test papers
• math software high school algebra New York State regents
• how to find binomial on ti 83
• elementary math trivia
• glencoe book online exercises algebra
• trivias in mathematics
• basic mathematical calculation,formula and equations which is used to write codings in programming languages like java and c free download
• cat test prep permutation and combination chapter
• algebra questions for KS3
• solving equations and inequalities calculator
• slope intercept formulas
• Aptitude Question Bank
• solve cubic equation matlab
• Add, subtract, multiply and divide rational expressions and solve rational equations
• how to solve radical fractions
• adding three sets of fractions
• Four Fundamental Math Concepts
• partial fraction
• heath work sheets
• multiple choice past maths paper
• The Great common factor in math
• Free Decimal Worksheets With Solutions
• free download GRE maths questions
• lesson plan in intermediate algebra-linear inequalities
• domain multiple variable
• finding roots of a nonlinear equations using matlab
• 6th grade honors math worksheets
NSEA 2024-25 Syllabus
What is NSEA?
The National Standard Examination in Astronomy is organized by the Indian Association of Physics Teachers and Homi Bhabha Center for Science Education. This is the very first stage of the
International Olympiad on Astronomy and Astrophysics. Students in Class 12 can take this examination. NSEA 2024 will tentatively be held in the third week of November. The NSEA is an amazing opportunity for all students looking for a platform to showcase their talents and assess their knowledge base in Astrophysics and Astronomy, while gaining valuable experience. Students preparing for NSEA will have the added advantage of gaining additional skills that they could use for other similar competitive exams.
FAQs on NSEA-National Standard Examination in Astronomy
1. What are the Books That are to Be Consulted for NSEA Preparation?
Ans: Since the NSEA contains questions from Physics and Mathematics, and since the syllabus prescribed for the exam is closely related to the curriculum for classes 11 and 12, the NCERT textbooks of these classes can be consulted when studying for the exam. The Physics and Maths concepts covered in these textbooks will be entirely useful to those preparing to attempt the NSEA. For the general astronomy included in the exam, students will have to consult outside sources and books published by experts in the field. The books Universe, by William J. Kaufmann, Roger Freedman, and Robert Geller, and Astronomy: Principles and Practice, by Roy and Clarke, will be of much help to students.
2. Is a Calculator Allowed in NSEA?
Ans: Calculators are not entirely prohibited in the NSEA, or National Standard Examination in Astronomy. However, the Indian Association of Physics Teachers and the Homi Bhabha Centre for Science Education have put forth a set of guidelines that must be followed when using calculators in the exam. The calculators that students bring into the exam centre cannot have any special graph mode or integration mode, and must not be equipped with any special equation-solver function or matrix mode. The official website of the IAPT has published a list of calculators that students are allowed to use for the exam, along with a list of calculators that are not permitted in the NSEA.
3. Where Can I Get the NSEA Syllabus Online?
Ans: For the National Standard Examination in Astronomy or the NSEA no precise and concrete syllabus has been outlined. The questions included in the question paper of NSEA mainly revolve around
three subjects which are Physics, Mathematics and Astronomy. Since the exam is for students from the higher secondary section, the exam will mostly have questions that are framed from the topics
which come under the Physics and Mathematics syllabus of classes 11 and 12. Students can also cover the basic concepts under astronomy in order to have a better chance of performing well in the exam.
4. How to download previous years question papers of NSEA?
Ans: Students are advised to go through and work out questions from the previous years’ question papers along with other sample papers in order to get an idea of the pattern of the exam and also what
kind of questions to expect beforehand. Working out the different kinds of questions offered by these question papers can ensure consistent practice which will help students develop speed and other
skills needed to crack the NSEA. Vedantu is one of the most trusted platforms for students who are looking to find proper resources to prepare for exams like the NSEA.
Students can get hold of question papers from several previous years and sample papers of the NSEA exams as well from Vedantu. By solving these papers, students looking to crack the exam will have an additional advantage over their fellow competitors. Students will have a better idea of the structure of the paper and thereby will be able to move forward with the questions in a much easier manner.
5. When will the NSEA be conducted?
Ans: The National Standard Examination in Astronomy or NSEA is conducted every year, and the usual date chosen for the exam comes around the end of November. But since the COVID-19 situation has proven to be quite unpredictable in several areas, there is a high chance for the dates to be changed or postponed. Even though the regulations regarding conducting exams are favourable, students need to be vigilant about any updates or announcements about the dates or mode of examination.
On the relationship between school mathematics and university mathematics : A comparison of three approaches
Journal article
Scheiner, Thorsten and Bosch, Marianna. (2023). On the relationship between school mathematics and university mathematics : A comparison of three approaches.
ZDM Mathematics Education.
55, pp. 767-778.
Authors Scheiner, Thorsten and Bosch, Marianna
Abstract This paper examines how different approaches in mathematics education conceptualise the relationship between school mathematics and university mathematics. The approaches considered here include: (a) Klein’s elementary mathematics from a higher standpoint; (b) Shulman’s transformation of disciplinary subject matter into subject matter for teaching; and (c) Chevallard’s didactic transposition of scholarly knowledge into knowledge to be taught. Similarities and contrasts between these three approaches are discussed in terms of how they frame the relationship between the academic discipline and the school subject, and to what extent they problematise the reliance and bias towards the academic discipline. The institutional position implicit in the three approaches is then examined in order to open up new ways of thinking about the relationship between school mathematics and university mathematics.
Keywords didactic transposition; elementarisation; school mathematics; transformation of disciplinary subject matter; university mathematics
Year 2023
Journal ZDM Mathematics Education
Volume 55, pp. 767-778
Publisher Springer
ISSN 1863-9690
Object https://doi.org/10.1007/s11858-023-01499-y
Scopus EID 2-s2.0-85162262216
Open access Published as ‘gold’ (paid) open access
Page range 767-778
CC BY 4.0
Output Published
Online 20 Jun 2023
Accepted 30 May 2023
Deposited 09 Aug 2023
Formula with multiple IF(AND statements
I am having trouble with a formula that will return a 1, 3, 5, or 10 based on a combination of two columns.
Anything selected as REPORTED will equal 1. Easy enough. But in the same column, if FIXED, other point values will be awarded based on the Category it is paired with. Here is what I have so far:
=IF([Reported or Fixed?]1 = "Reported", 1, IF(AND([Reported or Fixed?]1 = "fixed", Category1 = "Find & Fix"), 5, IF(AND([Reported or Fixed?]1 = "fixed", Category1 = "Procedure Review", 3, IF(AND([Reported or Fixed?]1 = "Fixed", Category1 = "Hands Free / Ergonomics Find & Fix", 10))))))
It works for the first part of the formula, but as I added to it, it now returns INCORRECT ARGUMENT SET.
• Are the only two options for the column "REPORTED" or "FIXED"? If so, you can use some built in logic to simplify. The nested IF statement will work from left to right and stop on the first true
value. So if the first argument is if it equals "REPORTED" then output 1, everything after that (if the only other option is "FIXED") will be assumed to be "FIXED" which means that you don't have
to specify that part.
=IF([Reported or Fixed?]1 = "Reported", 1, IF(Category1 = "Find & Fix", 5, IF(Category1 = "Procedure Review", 3, IF(Category1 = "Hands Free / Ergonomics Find & Fix", 10))))
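For reference, the original nested-AND version also works once its two missing closing parentheses are restored: in the posted formula, the AND(...) for "Procedure Review" and the AND(...) for "Hands Free / Ergonomics Find & Fix" are never closed before their point values. A corrected sketch (untested, assuming the same column names):

=IF([Reported or Fixed?]1 = "Reported", 1, IF(AND([Reported or Fixed?]1 = "Fixed", Category1 = "Find & Fix"), 5, IF(AND([Reported or Fixed?]1 = "Fixed", Category1 = "Procedure Review"), 3, IF(AND([Reported or Fixed?]1 = "Fixed", Category1 = "Hands Free / Ergonomics Find & Fix"), 10))))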
Braided 2-Groups from Lattices
Posted by John Baez
I’d like to tell you about a cute connection between lattices and braided monoidal categories.
We’ve been looking at a lot of lattices lately, like the $\mathrm{E}_8$ lattice and the Leech lattice. A lattice is an abelian group under addition, so we can try to categorify it and construct a
‘2-group’ with points in the lattice as objects, but also some morphisms. Today I’ll show you that for a lattice in a vector space with an inner product, there’s a nice 1-parameter family of ways to
do this, each of which gives a ‘braided 2-group’. Here the commutative law for addition in our lattice:
$a + b = b + a$
is replaced by an isomorphism:
$a + b \cong b + a$
And this has some fun spinoffs. For example: for any compact simple Lie group $G$, the category of representations $Rep(T)$ of any maximal torus $T \subseteq G$, with its usual tensor product, has a
1-parameter family of braidings that are invariant under the action of the Weyl group.
What is this good for? I don’t know! I hope you can help me out. The best clues I have seem to be lurking here:
Braided 2-groups
Here is the kind of braided monoidal category I’m really interested in right now:
Definition. A braided 2-group is braided monoidal category such that every morphism has an inverse and every object has a ‘weak inverse’. A weak inverse of an object $x$ is an object $x^{-1}$ such
$x \otimes x^{-1} \cong x^{-1} \otimes x \cong 1$
where $1$ is the unit object — that is, the unit for the tensor product.
We say two braided 2-groups $X$ and $Y$ are equivalent if there’s a braided monoidal functor $f : X \to Y$ that’s an equivalence of categories.
In this paper, Joyal and Street showed how to classify braided 2-groups:
This paper is mainly famous for defining braided monoidal categories, but there’s a lot more in it. They published a closely related paper called ‘Braided tensor categories’ in 1992 — but it left out
a lot of fun stuff, so I urge you to read this one.
Here’s the idea behind their classification. First, we take our braided 2-group $X$ and form two abelian groups:
• $A$, the group of isomorphism classes of objects of $X$
• $B$, the group of automorphisms of the unit object $1 \in X$
If we have an automorphism $b: 1 \to 1$ we can ‘translate’ it using the tensor product in $X$ to get an automorphism of any other object, so $B$ becomes, in a canonical way, the group of
automorphisms of any object of $X$.
Next, the associator and braiding in $X$ give two functions
• $\alpha: A^3 \to B$
• $\beta: A^2 \to B$
How does this work? We can choose a skeleton of $X$, a full subcategory containing one object from each isomorphism class. This will be a braided 2-group equivalent to $X$, so from now on let’s work
with this skeleton and just call it $X$. In the skeleton, isomorphic objects are equal, so we have
$(a \otimes a') \otimes a'' = a \otimes (a' \otimes a'')$
$a \otimes a' = a' \otimes a$
and every object $a$ has an object $a^{-1}$ with
$a^{-1} \otimes a = a \otimes a^{-1} = 1$
So, the set of objects now forms an abelian group, which is just $A$.
In this setup we can assume without loss of generality that the ‘unitor’ isomorphisms
$a \otimes 1 \cong a \cong 1 \otimes a$
are trivial. We still have an associator
$\alpha(a,a',a'') : (a \otimes a') \otimes a'' \to a \otimes (a' \otimes a'')$
and braiding
$\beta(a,a') : a \otimes a' \to a' \otimes a$
and these can be nontrivial, but now they are automorphisms, so we can think of them as elements of the group $B$. So, we get maps
$\alpha: A^3 \to B$
$\beta: A^2 \to B$
The data $(A,B,\alpha,\beta)$ is enough to reconstruct our braided 2-group $X$ up to equivalence. To reconstruct it, we start by taking the category with one object for each element of $A$ and no
morphisms except automorphisms, with the automorphism group of each object being $B$. Then we give this category an associator using $\alpha$ and a braiding using $\beta$.
But suppose someone just hands you a choice of $(A,B,\alpha,\beta)$. Then it needs to obey some equations to give a braided 2-group! The famous pentagon identity for the associator:
says we need
$\alpha(b, c, d) - \alpha(a b, c, d) + \alpha(a, b c, d) - \alpha(a, b, c d) + \alpha(a, b, c) = 0$
for all $a,b,c,d \in A$. Here I’m writing the operation in the group $A$ as multiplication and the operation in $B$ as addition. There are also some identities that the associator and unitor must
obey, and these say:
$\alpha(a, b, 1) = \alpha(a, 1, c) = \alpha(1, b, c) = 0$
Fans of cohomology will recognize that we’re saying $\alpha$ is a normalized 3-cocycle on $A$ valued in $B$. And if our 2-group weren’t ‘braided’, we’d be done: any such cocycle gives a 2-group!
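Here is a quick brute-force check of a standard example (a sketch of mine, not from the post): on $A = B = \mathbb{Z}/2$ the function $\alpha(a,b,c) = a b c \bmod 2$ is a normalized 3-cocycle, so it gives a 2-group with a nontrivial associator.

```python
from itertools import product

def alpha(a, b, c):
    # The classic nontrivial 3-cocycle on Z/2 with values in Z/2
    return (a * b * c) % 2

def coboundary(a, b, c, d):
    # 3-cocycle condition, with the group operation of A = Z/2 written additively:
    # alpha(b,c,d) - alpha(a+b,c,d) + alpha(a,b+c,d) - alpha(a,b,c+d) + alpha(a,b,c)
    return (alpha(b, c, d) - alpha((a + b) % 2, c, d)
            + alpha(a, (b + c) % 2, d) - alpha(a, b, (c + d) % 2)
            + alpha(a, b, c)) % 2

# Cocycle condition holds for all quadruples...
assert all(coboundary(a, b, c, d) == 0
           for a, b, c, d in product(range(2), repeat=4))
# ...and alpha is normalized: it vanishes when any argument is the identity 0.
assert all(alpha(0, b, c) == alpha(a, 0, c) == alpha(a, b, 0) == 0
           for a, b, c in product(range(2), repeat=3))
print("alpha(a,b,c) = abc mod 2 is a normalized 3-cocycle on Z/2")
```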
But the hexagon identities for the braiding:
give two more equations:
$\alpha(a, b, c) + \beta(a, b c) + \alpha(b, c, a) = \beta(a,b) + \alpha(b, a, c) + \beta(a,c)$
$\alpha(a,b,c) + \beta(b,c) - \alpha(a,c,b) = \beta(a b, c) - \alpha(c, a, b) - \beta (a,c)$
Long before braided monoidal categories were invented, Eilenberg and Mac Lane called a choice of $\alpha$ and $\beta$ obeying all these equations an abelian 3-cocycle on $A$ with values in $B$.
Here’s the easy, obvious part of what Joyal and Street proved:
Theorem (Joyal, Street). Any abelian 3-cocycle on $A$ with values in $B$ gives a braided 2-group with $A$ as its group of objects and $B$ as the group of automorphisms of every object. Moreover, any
braided 2-group is equivalent to one of this form.
They went a lot further. For starters, they said when two 2-groups of this form are equivalent. But let’s look at some examples!
Example 1. Let’s simplify things by taking associator to be trivial: $\alpha = 0$. Then all the required equations hold automatically except the hexagon identities, which become
$\beta(a, b + c) = \beta(a,b) + \beta(a,c)$
$\beta(a + b, c) = \beta(a,c) + \beta(b,c)$
Now I’m writing the group operation in $A$ as addition, to help you see something nice: these equations just say that $\beta$ is bilinear! So: any bilinear map
$\beta : A \times A \to B$

gives a braided 2-group.

Example 2. In particular, if $L$ is any lattice in a real inner product space $V$, for any constant $\theta \in \mathbb{R}$ we get a bilinear map

$\beta: L \times L \to \mathbb{R}$

given by

$\beta(x,y) = \theta \langle x,y\rangle$

So, we get a 1-parameter family of braided 2-groups with $L$ as the group of objects and $\mathbb{R}$ as the group of automorphisms of any object.

Example 3. We call $L$ an integral lattice if $\langle x, y \rangle$ is an integer whenever $x, y \in L$. In this case we have

$\beta: L \times L \to \mathbb{Z}$

whenever $\theta \in \mathbb{Z}$. So, we also get a 1-parameter family of braided 2-groups with $L$ as the group of objects and $\mathbb{Z}$ as the group of automorphisms of any object — but now the parameter needs to be an integer.

Example 4. Alternatively, if $L$ is a lattice in a real inner product space $V$, for any constant $\theta \in \mathbb{R}$ we get a bilinear map

$\beta: L \times L \to \mathrm{U}(1)$

given by

$\beta(x,y) = e^{i \theta \langle x,y\rangle}$

So, we get a 1-parameter family of braided 2-groups with $L$ as the group of objects and $\mathrm{U}(1)$ as the group of automorphisms of any object.

Example 5. Take the previous example and suppose $L$ is integral. In this case $\beta$ doesn't change when we replace $\theta$ by $\theta + 2 \pi$, so there is really a circle's worth of braidings. Certain nice things happen for special values of $\theta$. When $\theta = 0$,

$\beta(x,y) = 1$

for all $x,y \in L$, so the braiding is 'trivial'. This also happens for a category of line bundles or 1-dimensional representations of a group, and we'll see this is no coincidence. When $\theta$ is $0$ or $\pi$,

$\beta(x,y) = \beta(y,x) = \pm 1$

so

$\beta(x,y) \beta(y,x) = 1$

and we get a symmetric 2-group — that is, a 2-group that's a symmetric monoidal category.

So far these examples look a bit general and abstract, so let me give you some examples of these examples. (Category theory can be defined as the branch of math where the examples require examples.)

An example from Lie theory

Suppose $G$ is a compact simple Lie group, and let $T \subseteq G$ be a maximal torus. The Lie algebra of $T$ is abelian: its Lie bracket vanishes, so we might as well think of it as a mere vector space. For this reason I'll call it $V$, instead of the more scary Gothic $\mathfrak{t}$ that Lie theorists would prefer. But this vector space $V$ is equipped with some other interesting structure: an inner product and a lattice! You see, sitting inside $V$ is a lattice $L$ consisting of those elements $v$ with

$\exp(2 \pi v) = 1 \in T$

Furthermore the Lie algebra of $G$ comes with a god-given nondegenerate bilinear form called the Killing form. If we multiply this by a suitable constant and restrict it to $V$, we get an inner product on $V$ for which $L$ is an integral lattice. Indeed, $L$ is usually called the integral lattice of $G$.

So, using the trick in Example 4, we get a 1-parameter family of braided 2-groups with $L$ as the group of objects and $\mathrm{U}(1)$ as the group of automorphisms of any object.

Another example from Lie theory

If you're not familiar with the integral lattice of $G$, you may know about its 'weight lattice'. This is dual to the integral lattice, in a certain sense. Namely, we can start with the integral lattice $L$ in $V$ and define a new lattice $L^\ast$ consisting of all points in the dual vector space $V^\ast$ whose pairing with every point in $L$ is an integer:

$L^\ast = \{ \ell \in V^\ast : \; \ell(v) \in \mathbb{Z} \textrm{ for all } v \in L \}$

Let's call $L^\ast$ the weight lattice of $G$, though people usually reserve this term for the case when $G$ is simply connected.

Why is the weight lattice interesting? For starters, any 1-dimensional unitary representation of the maximal torus $T$ is equivalent to one coming from a point in this lattice! If we take $v \in V$ then $\exp(2 \pi v) \in T$, and any element of $T$ arises this way. Given $\ell \in L^\ast$ we can thus define a 1-dimensional unitary representation of $T$ by

$\exp(2 \pi v) \mapsto \exp(2 \pi i \ell(v)) \in \mathrm{U}(1)$

In fact, we get all the 1d unitary reps of $T$ this way. Since every finite-dimensional unitary representation of $T$ is a direct sum of 1-dimensional ones, the category $\mathrm{Rep}(T)$ of finite-dimensional unitary representations of $T$ is pretty well described by the weight lattice $L^\ast$.

How can we make this precise? Here's one way. There's a category $\mathrm{FinVect}[L^\ast]$ whose objects are complex vector bundles on $L^\ast$ that have finite support: that is, have 0-dimensional fibers outside a finite set. The morphisms are just vector bundle morphisms. This category is monoidal, where the tensor product comes from Day convolution:

$(V \otimes W)_x = \bigoplus_{x' + x'' = x} V_{x'} \otimes W_{x''}$

This should remind you of the group algebra of the lattice $L^\ast$. Indeed, it's just a categorified version of the group algebra where we use finite-dimensional vector spaces instead of numbers as coefficients! Moreover,

$\mathrm{FinVect}[L^\ast] \cong \mathrm{Rep}(T)$

as monoidal categories. (This is pretty obvious if you know your stuff, but if you want a proof, see the section in my paper on 2-Hilbert spaces where I discuss the categorified Fourier transform. In case it helps, I should point out that the lattice I'm calling $L^\ast$ is actually the Pontryagin dual of $T$. You see, the dual of a lattice in some vector space is really the Pontryagin dual of the vector space mod that lattice.)

Now let's build some braided 2-groups as in Example 4. We've got a lattice $L^\ast$ in a real inner product space $V^\ast$. This gives a braided 2-group whose objects are points of $L^\ast$, where the automorphisms of any object form the group $\mathrm{U}(1)$, and where the braiding

$\beta: L^\ast \times L^\ast \to \mathrm{U}(1)$

is given by

$\beta(x,y) = e^{i \theta \langle x,y \rangle}$

for any chosen constant $\theta \in \mathbb{R}$.

We can think of this as a braiding on the category of 1-dimensional unitary representations of $T$. Since every finite-dimensional representation is a direct sum of these, we can extend it uniquely to a braiding on

$\mathrm{FinVect}[L^\ast] \cong \mathrm{Rep}(T)$

that is compatible with direct sums. When $\theta = 0$ this is the usual braiding on $\mathrm{Rep}(T)$. So, as we move $\theta$ away from $0$, we're deforming that braiding.

Sitting inside the weight lattice are certain vectors called roots. For $\mathrm{E}_8$ they look like this, if we project them from 8 dimensions down to 2 in a certain nice way:

[Figure: the $\mathrm{E}_8$ root system, projected from 8 dimensions down to 2.]

For any root $r \in L^\ast$ we have a reflection

$R_r : V^\ast \to V^\ast$

that maps $r$ to $-r$ while leaving orthogonal vectors alone. And the wonderful thing is that these reflections preserve the weight lattice! These reflections generate a group called the Weyl group of $G$. Since the Weyl group preserves the inner product on $V^\ast$ and also the weight lattice, it acts on the monoidal category

$\mathrm{FinVect}[L^\ast] \cong \mathrm{Rep}(T)$

in a way that preserves the braiding for any value of the parameter $\theta$.

Puzzles

Puzzle 1. How are the braided 2-groups constructed in Example 3 related to Nora Ganter's categorical tori? A 3-cocycle on $\mathbb{Z}^n$ valued in $\mathbb{Z}$ is just what you need to define a gerbe on the $n$-torus. Is an abelian 3-cocycle what you need to define a multiplicative gerbe on the $n$-torus? I believe this is essentially the same as what Ganter would call a '2-group extension of the $n$-torus by the circle'.

Puzzle 2. What can we do with this 1-parameter family of braidings on $\mathrm{Rep}(T)$? Perhaps it's worth noting that any representation of the torus $T$ gives rise to a $G$-equivariant vector bundle on the flag variety $G/T$; the space of holomorphic sections of this bundle forms a representation of $G$, and we can get all the finite-dimensional representations of $G$ this way. This is part of a well-known story. But I haven't figured out how to do anything exciting with it yet.

Puzzle 3. I'm especially interested in the $\mathrm{E}_8$ weight lattice. The braided 2-groups with this lattice as their group of objects are categorifications of the integral octonions. But what can we do with them? Naturally it makes sense to start with simpler examples like $\mathrm{A}_1$ (ordinary integers in $\mathbb{R}$), $\mathrm{A}_2$ (Eisenstein integers in $\mathbb{C}$), and $\mathrm{F}_4$ (Hurwitz integers in $\mathbb{H}$).

Puzzle 4. Among the most interesting integral lattices are the even ones, where $\langle x, x \rangle$ is even whenever $x$ is in the lattice. For example, the $A$, $D$, and $E$ type lattices are even, and so is the Leech lattice. If we construct a braided 2-group from an even lattice as in Example 3, the self-braiding $\beta(x,x)$ is the square of some other natural transformation. What can we do using this fact? Do you know other interesting braided monoidal categories where this happens?

Puzzle 5. How about deforming both the braiding and the associator? Suppose we have abelian groups $A$ and $B$. It's easy to check that any trilinear map

$\alpha : A \times A \times A \to B$

obeys the pentagon identity

$\alpha(b, c, d) - \alpha(a + b, c, d) + \alpha(a, b + c, d) - \alpha(a, b, c + d) + \alpha(a, b, c) = 0$

(where I'm writing the group operation in $A$ as addition), and also the other identities we need to get a monoidal category with $A$ as the group of objects and $B$ as the group of automorphisms of any object. That's nice. I also think any 3-cocycle on $A$ valued in $B$ is cohomologous to a trilinear one; if so we're not losing anything by assuming $\alpha$ is trilinear. But now suppose we want to give the resulting monoidal category a braiding. Now we want

$\beta : A \times A \to B$

obeying

$\alpha(a, b, c) + \beta(a, b + c) + \alpha(b, c, a) = \beta(a,b) + \alpha(b, a, c) + \beta(a,c)$

and

$\alpha(a,b,c) + \beta(b,c) - \alpha(a,c,b) = \beta(a + b, c) - \alpha(c, a, b) - \beta(a,c)$

How can we find such $\beta$? Suppose we take $\beta$ to be bilinear. Then we need

$\alpha(a, b, c) + \alpha(b, c, a) = \alpha(b, a, c)$

and

$\alpha(a,b,c) - \alpha(a,c,b) = - \alpha(c, a, b)$
But what do these equations really say? Are there any nontrivial solutions? Can we classify them?
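One way to probe the last two questions, at least rationally: treat a trilinear $\alpha$ on $A = \mathbb{Z}^n$ as a tensor $T_{ijk}$ of coefficients, write the two conditions as linear constraints, and compute the dimension of the solution space. A minimal sketch of mine, for $n = 2$ with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

n = 2  # alpha is a trilinear form on A = Z^n, recorded as a tensor T[i][j][k]

def idx(i, j, k):
    return (i * n + j) * n + k

# Linear constraints on the n^3 unknowns T_ijk:
#   (1)  T_ijk + T_jki - T_jik = 0   (first condition above, on basis vectors)
#   (2)  T_ijk - T_ikj + T_kij = 0   (second condition above, on basis vectors)
rows = []
for i, j, k in product(range(n), repeat=3):
    r1 = [Fraction(0)] * n**3
    r1[idx(i, j, k)] += 1
    r1[idx(j, k, i)] += 1
    r1[idx(j, i, k)] -= 1
    rows.append(r1)
    r2 = [Fraction(0)] * n**3
    r2[idx(i, j, k)] += 1
    r2[idx(i, k, j)] -= 1
    r2[idx(k, i, j)] += 1
    rows.append(r2)

def rank(mat):
    # Plain Gauss-Jordan elimination over the rationals
    mat = [row[:] for row in mat]
    rk = 0
    for col in range(len(mat[0])):
        piv = next((r for r in range(rk, len(mat)) if mat[r][col] != 0), None)
        if piv is None:
            continue
        mat[rk], mat[piv] = mat[piv], mat[rk]
        for r in range(len(mat)):
            if r != rk and mat[r][col] != 0:
                f = mat[r][col] / mat[rk][col]
                mat[r] = [a - f * b for a, b in zip(mat[r], mat[rk])]
        rk += 1
    return rk

nullity = n**3 - rank(rows)
print("dimension of compatible trilinear alphas for n =", n, ":", nullity)
```

For $n = 2$ this reports a 2-dimensional solution space: for instance $\alpha(a,b,c) = a_1 b_1 c_2 - a_2 b_1 c_1$ satisfies both identities. So nontrivial trilinear $\alpha$'s compatible with a bilinear braiding do exist over $\mathbb{Z}$; whether they give monoidal structures not equivalent to ones with trivial associator is a further, cohomological, question.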
Posted at January 1, 2015 1:17 AM UTC
Re: Integral Octonions (Part 12)
Re: Puzzle 2, I have been told that there are nice constructions of quantum groups (nice meaning, in particular, not just by writing down generators and relations) starting from more natural objects
living in $\text{Rep}(T)$ with one of its nontrivial braidings, but I don’t know where there are details written up.
Re: Puzzle 4, the significance of evenness has to do with the distinction between bilinear forms and quadratic forms: evenness means precisely that the bilinear form $B(x, y)$ is induced from an
integral quadratic form $q(x)$ in the sense that $B(x, y) = q(x + y) - q(x) - q(y)$. This whole story has more to do with quadratic forms than bilinear forms.
To see this we’ll pass through the homotopy hypothesis: in the same way that 2-groups are precisely pointed connected homotopy 2-types, braided 2-groups are precisely pointed connected simply
connected homotopy 3-types; that is, they describe spaces whose only nontrivial homotopy groups are $\pi_2$ and $\pi_3$. The machinery of Postnikov towers tells us that the extra data needed to
describe such a space is a Postnikov invariant living in $H^4(B^2 \pi_2, \pi_3)$, and this is known to be precisely the group of quadratic forms $\pi_2 \to \pi_3$.
The quadratic form $\pi_2 \to \pi_3$ is in fact a homotopy operation corresponding to the Hopf map $S^3 \to S^2$. I believe it refines the bilinear form you wrote down, which as a homotopy operation
should correspond to the Whitehead bracket $\pi_2 \times \pi_2 \to \pi_3$, and in particular it contains strictly more information than the Whitehead bracket if $\pi_3$ has any 2-torsion. There is a
nice way to visualize all of this by passing it through the cobordism hypothesis, but maybe that’s a digression.
A lattice equipped with a quadratic form gives you slightly more than a braided monoidal category: I think taking vector bundles should give you a braided monoidal category with duals and a ribbon
structure, and while the braiding can only see the bilinear form the ribbon structure can see the quadratic form. Or at least that’s what Teleman says in these TFT notes.
Posted by: Qiaochu Yuan on January 2, 2015 9:47 AM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
Thanks for all that!
While writing this post I started out focused on quadratic forms, since Joyal and Street discuss Eilenberg and Mac Lane’s old work on $H^4(K(\pi_2,2),\pi_3)$, which is what they called ‘the abelian
cohomology of $\pi_2$ with coefficients in $\pi_3$’, and they talk about how this winds up being the group of quadratic forms on $\pi_2$ valued in $\pi_3$. But I had difficulty seeing in an explicit
way how to get an associator and braiding from a quadratic form. The problem is presumably that the quadratic form encodes the associator and braiding up to equivalence, and one needs to pick a
representative. Maybe there’s a nice formula, but I didn’t see it.
On the other hand, there’s an utterly obvious formula for a braiding given a bilinear map — it just is the bilinear map. Since I wanted everything to be very explicit and calculable, I went that way.
On top of this, I’m always a bit confused about bilinear maps versus quadratic forms, since I’ve spent most of my life in characteristic zero where there’s no difference. There’s the recipe you
mentioned for getting a bilinear map from a quadratic form
$\beta(x,y) = q(x+y) - q(x) - q(y)$
but also a recipe for getting a quadratic form from a bilinear map
$q(x) = \beta(x,x)$
and these recipes are not inverse to each other, because a factor of 2 pops up. So it seems one should be a bit careful about saying that one contains more information than the other; it seems they
each contain information the other does not unless you can divide by 2.
And this business — ways to go back and forth that aren’t inverse to each other — makes me think of adjunctions.
Is there an adjunction between the category of pairs of abelian groups $A, B$ with bilinear map $\beta: A \times A \to B$ and the category of pairs of abelian groups $A, B$ with quadratic form $q: A
\to B$?
By what I said in my blog article, a bilinear map $\beta : A \times A \to B$ gives a skeletal braided 2-group with $A$ as objects, $B$ as automorphisms of the identity, and trivial associator. These will correspond to a certain special class of pointed connected simply-connected homotopy 3-types.
Let’s say these are the ones with trivial associator.
On the other hand, by what you said (and Eilenberg and Mac Lane said), a quadratic form $q : A \to B$ gives a pointed connected simply-connected homotopy 3-type with $\pi_2 = A$, $\pi_3 = B$.
These should correspond to all pointed connected simply-connected homotopy 3-types.
Is there an adjunction between the homotopy category of all pointed connected simply-connected homotopy 3-types, and the subcategory of those with trivial associator?
That would ease my confusion.
Posted by: John Baez on January 2, 2015 8:00 PM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
$q(x) = \beta(x, x)$ is the “wrong” way to pass from a bilinear form to a quadratic form, in the sense that if $\beta(x, y)$ is the Whitehead bracket and $q(x)$ is the Hopf quadratic form then $q(x) \neq \beta(x, x)$, but $\beta(x, y) = q(x + y) - q(x) - q(y)$, which gives $\beta(x, x) = 2 q(x)$. So 1) if you know $q(x)$ you can recover $\beta(x, y)$, but 2) if you know $\beta(x, y)$ you can only
recover $q(x)$ up to some ambiguities involving $2$-torsion. So as I said before, $q(x)$ contains strictly more information than $\beta(x, y)$ in general. In this situation $q(x)$ is said to be a
quadratic refinement of $\beta(x, y)$.
The factor of $2$ in the relation $\beta(x, x) = 2 q(x)$ really belongs there. Another place where it pops up is in the definition of the Kervaire invariant: this involves a quadratic refinement of
the intersection pairing on a framed manifold over $\mathbb{F}_2$, and all of the data in the Kervaire invariant is destroyed if you multiply the quadratic refinement by $2$!
A third place where it pops up is in the “correct” definition of Clifford algebras over a base ring where you can’t divide by $2$. In this context it’s important to decide once and for all whether
Clifford algebras take as input bilinear forms or quadratic forms. I think the correct definition involves quadratic forms, and the defining relation should be $v^2 = q(v)$, which in particular gives
$v w + w v = \beta(v, w)$ where $\beta(v, w) = q(v + w) - q(v) - q(w)$. Here one should think of $v, w \in V$ as having odd degree and hence of $v w + w v$ as a supercommutator.
The reason is that the Clifford algebra construction can be thought of as a slight variant of the construction of the universal enveloping algebra, but of a graded Lie algebra rather than an ordinary
Lie algebra. Whatever a graded Lie algebra is over, say, $\mathbb{Z}$, at the very least, graded derivations of a graded algebra should be an example. But if $D : A \to A$ is such a graded derivation
of odd degree, then the supercommutator $[D, D]$ naturally admits a quadratic refinement, namely $D^2$, which is also a graded derivation. In other words, graded Lie algebras ought to have as part of
their data a quadratic refinement of the supercommutator on odd elements. For example, the Whitehead bracket defines a graded Lie algebra structure on the homotopy groups of a space in this refined
sense (at least I know this is true for the Whitehead bracket $\pi_2 \times \pi_2 \to \pi_3$ and I believe it’s true in general).
Posted by: Qiaochu Yuan on January 2, 2015 10:17 PM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
You know, I think the way I’m connecting the braiding to a bilinear form is a bit different than the Whitehead bracket.
Usually when we have two objects $x, y$ in a braided 2-group and we want to get an automorphism we take the ‘double braiding’
$B_{x,y} B_{y, x} : x \otimes y \to x \otimes y$
If we call the group of isomorphism classes of objects $\pi_2$ and call the group of automorphisms of any object $\pi_3$, this gives a bilinear map
$[\cdot, \cdot]: \pi_2 \times \pi_2 \to \pi_3$
which is the Whitehead product. If we turn this into a quadratic form in the obvious way:
$\begin{array}{ccc} \pi_2 &\to& \pi_3 \\ x &\mapsto & [x,x] \end{array}$
the resulting quadratic form encodes the process of sending any object $x$ to the automorphism
$B_{x,x}^2 : x \otimes x \to x \otimes x$
This is not as informative as the automorphism
$B_{x,x} :x \otimes x \to x \otimes x$
However, in my post, I was getting a bilinear map in a different way! Every braided 2-group is equivalent to a skeletal one, in which $x \otimes y$ and $y \otimes x$ are equal, because they’re
isomorphic. Then, any pair of objects gives an automorphism
$B_{x,y} : x \otimes y \to y \otimes x = x \otimes y$
and this gives me a bilinear map
$\beta: \pi_2 \times \pi_2 \to \pi_3$
This is not the Whitehead product. Indeed, it’s a bit like ‘half’ the Whitehead product, since
$[x, x] = 2 \; \beta(x,x)$
And if we turn my bilinear map into a quadratic form in the obvious way:
$\begin{array}{rcc} q: \pi_2 &\to& \pi_3 \\ x &\mapsto & \beta(x,x) \end{array}$
we see that this quadratic form encodes the process of sending any object $x$ to the automorphism
$B_{x,x} : x \otimes x \to x \otimes x$
This quadratic form, $q$, is more informative than the last one: it’s what you’re calling the Hopf quadratic form, the one that completely describes our braided 2-group after we know $\pi_2$ and $\pi_3$. Note that
$[x,y] = q(x+y) - q(x) - q(y)$
so this quadratic form contains all the information in the Whitehead product… but not vice versa.
We’re left with the small puzzle of how I managed to cook up a bilinear form that acts a bit like ‘half’ the Whitehead product, in the limited sense that
$[x, x] = 2 \; \beta(x,x)$
We certainly do not have
$[x,y] = 2 \; \beta(x,y)$
for all $x, y$. That would be too good to be true: the Whitehead product is not always divisible by 2.
The answer to this puzzle is that…
… well, I’ll let people tackle it if they want, but this is one I know the answer to, so I’ll give it away if nobody else does.
Posted by: John Baez on January 2, 2015 11:44 PM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
Oh, I see. That makes sense. Based on the formulas you’ve written down it looks a bit like what you’re doing is at least analogous to taking two odd derivations $D, E$ and directly writing down their
product $D E$ rather than their (super)commutator $[D, E] = D E + E D$. Unlike the supercommutator, the product isn’t guaranteed to be another derivation in general, but it might be in some special
cases; even if it is, though, it’s not part of the graded Lie algebra structure on derivations, but is extra structure coming from a particular choice of representation of the graded Lie algebra as
derivations on a particular algebra. This is all by analogy though.
Posted by: Qiaochu Yuan on January 3, 2015 5:43 AM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
I will answer my latest puzzle before I forget to do so.
The question was: given a braided 2-group (or equivalently, a pointed connected simply-connected homotopy 3-type), how am I managing to find a bilinear map
$\beta: \pi_2 \times \pi_2 \to \pi_3$
such that $x \mapsto \beta(x,x)$ is the Hopf map from $\pi_2 \to \pi_3$? This seems odd given that the well-known Whitehead product
$[-, -] : \pi_2 \times \pi_2 \to \pi_3$
only gives twice the Hopf map:
$[x,x] = 2 \; \beta(x,x)$
The answer is that my bilinear map, unlike the Whitehead product, is not a symmetric bilinear map. Moreover, defining it required an arbitrary choice!
I started with a braided 2-group with:
• $\pi_2$ as its group of isomorphism classes of objects
• $\pi_3$ as the group of automorphisms of any object
and chose an equivalence between this 2-group and a skeleton of this 2-group, thus making the skeleton into a braided 2-group. Why the emphasis on ‘chose’? When you’re a category theorist, it’s tremendously
traumatic to do something that depends on an arbitrary choice. And that turns out to be the key to the puzzle.
In the skeleton, isomorphic objects are equal, so we have
$B_{x,y} : x \otimes y \to y \otimes x = x \otimes y$
Thus, the braiding $B_{x,y}$ becomes an automorphism, and thus an element of $\pi_3$. It thus defines a map
$\beta: \pi_2 \times \pi_2 \to \pi_3$
For convenience I assumed the associator in the skeleton was trivial; given this, $\beta$ is bilinear. But I don’t think we need this in what follows.
What’s really important is this: $\beta$ depends on the choice I made! A different choice would give a different bilinear map, say $\beta'$. By Section 7 of Joyal–Street you can see that
$\beta'(x,y) - \beta(x,y) = k(x,y) - k(y,x)$
for some map $k : \pi_2 \times \pi_2 \to \pi_3$.
So, only the ‘antisymmetric part’ of $\beta$ depends on the choice made.
That’s a bit vague, so let me be precise. First, $\beta(x,x)$ does not depend on the choice made, and this is the Hopf map. Second, $\beta(x,y) + \beta(y,x)$ does not depend on the choice made, and
this is the Whitehead product:
$\beta(x,y) + \beta(y,x) = [x,y]$
I thank Qiaochu for his probing comments, which helped me realize what’s going on here.
Posted by: John Baez on January 3, 2015 11:42 PM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
It would be great to extend this to Freudenthal triple systems and their quartic forms.
Freudenthal triple systems by root system methods, Fred W. Helenius
Posted by: Metatron on January 3, 2015 3:37 PM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
Posted by: John Baez on January 3, 2015 10:59 PM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
Concerning Puzzle 1: The following things are the same:
1. a Lie 2-group with $A$ the group of iso classes of objects and $B$ the group of automorphisms of 1
2. a multiplicative $B$-gerbe over $A$
3. a central 2-group extension of $A$ by $B$ in the sense of Schommer-Pries.
Up to isomorphism, all these things are classified by Segal’s and Brylinski’s smooth Lie group cohomology $H^3(B A,B)$, which is $H^4(B A,\mathbb{Z})$ if $B=U(1)$.
When the 2-group of (1) is braided, it should be equivalent under (2) to a multiplicative $B$-2-gerbe over $B A$ (the classifying space of $A$), and under (3) to a “central” 3-group extension of $B
A$ by $B$, whatever that means.
If $A=\mathbb{Z}^n$ so that $BA=T^n$, then we have a multiplicative $B$-2-gerbe over $T^n$. If further $B=\mathbb{Z}$, then a $B$-2-gerbe is the same as a $U(1)$-gerbe, so that we finally have a
multiplicative gerbe over $T^n$, just as you wrote. Cool!
Posted by: Konrad Waldorf on January 4, 2015 9:26 PM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
Hi, Konrad! Thanks for tackling Puzzle 1!
Actually I now see Nora Ganter had similar things to say in her paper Categorical tori, which by the way acknowledges you. She gets a multiplicative $\mathrm{U}(1)$ gerbe over a torus $T$ from a
lattice $L$ with a bilinear form $\beta : L \times L \to \mathbb{Z}$. She also thinks of this as a (central) 2-group extension of $T$. And changing her notation to match mine here, she shows:
Proposition 2.7 For any bilinear form $\beta: L \times L \to \mathbb{Z}$, the bilinear form $\beta^t$ given by $\beta^t(x,y) = \beta(y,x)$ gives an equivalent 2-group extension of $T = L \otimes_
{\mathbb{Z}} \mathrm{U}(1)$.
Corollary 2.8 (i) If $\beta: L \times L \to \mathbb{Z}$ is an even symmetric bilinear form on a lattice $L$, then the multiplicative bundle gerbe associated to $\beta$ possesses a square root.
(ii) If $\beta$ is a skew symmetric bilinear form on a lattice $L$, then $\beta$ yields the trivial 2-group extension of the torus $T = L \otimes_{\mathbb{Z}} \mathrm{U}(1)$.
Corollary 2.9 Let $\beta: L \times L \to \mathbb{Z}$ be a bilinear form on a lattice $L$. Then up to equivalence over $T$, the 2-group we obtain from $\beta$ only depends on the even symmetric
bilinear form $\beta + \beta^t$.
All this leads to further questions:
What are some fun things you would do with a multiplicative gerbe over a torus?
I imagine that if I were doing string theory on a spacetime of the form $M \times T^n$ it would be interesting to have a nontrivial gerbe with connection over $T^n$. People love to talk about torus
compactifications of string theory coming from integral lattices like $\mathrm{E}_8 \times \mathrm{E}_8$, $\mathrm{D}_{16}^+$ and the Leech lattice, as well as many others. Do the gerbes over tori
coming from integral lattices in the way we’re discussing come equipped with a connection in some canonical way? Have people studied this already? Have they used the multiplicativity of the gerbes?
Posted by: John Baez on January 4, 2015 11:17 PM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
Good questions. First of all, the multiplicative gerbes of Nora’s construction all have trivial underlying gerbes. The bilinear form is used to construct the multiplication, see Construction 2.3 in Nora’s paper. That multiplication is given by the Poincaré line bundle over $T \times T$ associated to the bilinear form, and the Poincaré line bundle does indeed have a canonical connection. As such, it induces a multiplicative connection on the multiplicative gerbe; this is explained in my paper arxiv:0804.4835, Example 1.4 (b). So, yes, all the multiplicative gerbes of Nora’s construction come equipped with multiplicative connections.
If these connections have been used, I don’t know. Thomas Nikolaus and I have a project that tries to relate them to T-duality, but we have not yet succeeded.
Connections on multiplicative gerbes, in general, have the following meaning. As a gerbe with connection (forgetting the multiplicative structure), it is a B-field for string theory on oriented
surfaces with target space the torus. The multiplicative structure means that this string theory lives on the boundary of a Chern-Simons theory with the torus as its gauge group, and the
characteristic class of the multiplicative gerbe as its level.
This is explained here:
• Bundle gerbes for Chern-Simons and Wess-Zumino-Witten theories, arxiv:0410013
• Polyakov-Wiegmann Formula and Multiplicative Gerbes, arxiv:0908.1130
Posted by: Konrad Waldorf on January 5, 2015 11:14 AM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
By the way, my favorite section in Nora Ganter’s paper is the part on ‘Mathieu, Conway and Weyl 2-groups’. Even though I imagine a lot is left for future papers, this hints at what I believe will
become a very fun subject: exceptional higher algebraic structures. The first ones will be based on known exceptional structures — e.g., 2-groups or $n$-groups extending the sporadic finite simple
groups. But someday we may start getting structures whose ‘exceptionalness’ is only visible in the higher parts. And with luck, these will shed new light on the exceptional structures we already know.
A simple but very nice example is the Mathieu groupoid, $M_{13}$.
Posted by: John Baez on January 4, 2015 11:37 PM | Permalink | Reply to this
Re: Integral Octonions (Part 12)
Just wondering about octonions and particles - maybe the exceptional groups have to cope with both normal particles of the standard model and also with ‘dark’ particles - one might break the 480 tables into 40 particles by factoring out spin, color, and signature reversal - giving 10 particles in each of 4 generations instead of 4 particles per generation. The extra 6 need to be ‘dark’ and not really fermions or bosons.
Posted by: Joel Rice on January 15, 2015 4:03 PM | Permalink | Reply to this
Re: Braided 2-Groups from Lattices
I think the most fun thing to do with an integral lattice $L$ in a real inner product space is to pass to the discriminant group. Namely, consider the dual lattice $L^\ast$. From the inner product we
have an embedding $L \hookrightarrow L^\ast$, and the discriminant group is the quotient $D = L^\ast / L$.
The quadratic form $Q(v) = \frac{1}{2} \langle v, v\rangle$ on $L^\ast$ passes to a quadratic form
$q : D \rightarrow \mathbb{R}/\mathbb{Z}$
on the discriminant group. So, from an even lattice we have produced a $U(1)$-valued quadratic form $q$ on a finite abelian group $D$! Moreover, all such pairs $(D, q)$ arise from lattices (this is a
result of Nikulin).
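As a concrete sanity check, here is a small Python sketch of this construction for the simplest even lattice, the rank-one lattice $L = \mathbb{Z}v$ with $\langle v, v \rangle = 2$ (Gram matrix $(2)$). Its dual is $L^\ast = \tfrac{1}{2}\mathbb{Z}v$, so $D = L^\ast/L \cong \mathbb{Z}/2$, generated by the class of $w = v/2$. The names below are my own:

```python
from fractions import Fraction

def inner(a, b):
    # <a*v, b*v> = 2ab, with a, b rational coefficients of v
    return Fraction(2) * a * b

def Q(a):
    # Q(a*v) = (1/2) <a*v, a*v>
    return Fraction(1, 2) * inner(a, a)

w = Fraction(1, 2)   # coset representative of the generator of D = L*/L
q_w = Q(w) % 1       # value of the discriminant form in R/Z
print(q_w)           # 1/4: the discriminant form of the A1 lattice

# On elements of L itself, Q is an integer, so q vanishes in R/Z:
assert Q(1) % 1 == 0
```

So the pair $(D, q)$ here is $(\mathbb{Z}/2,\ q(1) = \tfrac14)$, the discriminant form of $\mathrm{A}_1$.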
This all ties in with $\theta$-functions in $U(1)$ Chern-Simons theory. See the thesis of Spencer Stirling, Abelian Chern-Simons theory with toral gauge group, modular tensor categories, and group categories.
Perhaps we could recover the braided monoidal category of $D$-graded vector spaces (i.e. living “at the level of the quotient”) by some kind of category of “$L$-equivariant vector bundles on $L^\ast$”. See also the paper of Davydov and Futorny, “Commutative algebras in Drinfeld categories of abelian Lie algebras”.
Posted by: Bruce Bartlett on March 21, 2018 11:21 PM | Permalink | Reply to this
Probability distribution
A probability distribution is a mathematical approach to quantifying uncertainty.
There are two main classes of probability distributions: discrete and continuous. Discrete distributions describe variables that take on discrete values only (typically the non-negative integers), while continuous distributions describe variables that can take on arbitrary values in a continuum (typically the real numbers).
In more advanced studies, one also comes across hybrid distributions.
A gentle introduction to the concept
Faced with a set of mutually exclusive propositions or possible outcomes, people intuitively put "degrees of belief" on the different alternatives.
A simple example
When you wake up in the morning, one of three things may happen that day:
• You will get hit by a meteor falling in from space.
• You will not get hit by a meteor falling in from space, but you'll be struck by lightning.
• Neither will happen.
Most people will usually intuit a small to zero belief in the first alternative (although it is possible, and is known to actually have occurred), a slightly larger belief in the second, and a rather
strong belief in the third.
In mathematics, such intuitive ideas are captured, formalized and made precise by the concept of a discrete probability distribution.
A more complicated example
Rather than a simple list of propositions or outcomes like the one above, one may have to deal with a continuum.
For example, consider the next new person you'll get to know. Given a way to measure height exactly, with infinite precision, how tall will he or she be?
This can be formulated as an uncountably infinite set of propositions, or as a ditto set of possible outcomes of a random experiment.
Let's look at three of these propositions in detail:
• The person is exactly 1.7222... m tall.
• The person is exactly 2.333... m tall.
• The person is exactly 25.010101... m tall.
Clearly, we don't believe the person will be over 25 meters tall. But neither do we believe any of the other propositions. Why should any particular proposition turn out to be the exact correct one
among an infinity of others?
But we still somehow feel that the first proposition listed is more "likely" than the second, which again is more "likely" than the third.
Also, we feel that some "ranges" are more likely than others; for instance, a height between 1.6 and 1.8 meters feels "likely", a height between 2.2 and 2.4 m seems possible but unlikely, and a height larger than that usually seems safe to exclude.
In mathematics, such intuitive ideas are captured, formalized and made precise by the concept of a continuous probability distribution.
A formal introduction
Discrete probability distributions
Let ${\displaystyle S=\{...,s_{0},s_{1},...\}}$ be a countable set. Let f be a function from S to ${\displaystyle R}$ such that
• f(s) ∈ [0,1] for all s ∈ S
• The sum ${\displaystyle \sum _{i=-\infty }^{\infty }f(s_{i})}$ exists and evaluates to exactly 1.
Then f is a probability mass function over the set S. The function F on S defined by
${\displaystyle F(s_{i})=\sum _{k=-\infty }^{i}f(s_{k})}$
is said to be a discrete probability distribution on S.
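A minimal Python sketch of these two conditions and the resulting distribution function, using an illustrative three-point set $S = \{0, 1, 2\}$ (the probabilities are made up):

```python
# Illustrative probability mass function f on S = {0, 1, 2}.
f = {0: 0.5, 1: 0.3, 2: 0.2}

# The two defining properties: values in [0, 1] and total mass exactly 1
# (up to floating point).
assert all(0 <= p <= 1 for p in f.values())
assert abs(sum(f.values()) - 1.0) < 1e-12

def F(s):
    """Discrete probability distribution: sum of f(k) over k in S with k <= s."""
    return sum(p for k, p in f.items() if k <= s)

print(F(1))  # f(0) + f(1), i.e. about 0.8
```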
Continuous probability distributions
Let f be a function from ${\displaystyle \mathbb {R} }$ to ${\displaystyle \mathbb {R} }$ such that
• f is measurable and f(s) ≥ 0 for all s in ${\displaystyle \mathbb {R} }$
• The Lebesgue integral ${\displaystyle \int _{-\infty }^{\infty }f(s)ds}$ exists and evaluates to exactly 1.
Then f is said to be a probability density on the real line. The function ${\displaystyle F:(-\infty ,\infty )\rightarrow [0,1]}$ defined as the integral:
${\displaystyle F(x)=\int _{-\infty }^{x}f(s)ds,}$
is said to be a continuous probability distribution on the real line. This type of distribution is absolutely continuous with respect to the Lebesgue measure.
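As an illustration of the continuous case, here is a Python sketch (my own example) using the exponential density $f(s) = e^{-s}$ for $s \ge 0$, whose distribution is $F(x) = 1 - e^{-x}$; a crude Riemann sum stands in for the Lebesgue integral:

```python
import math

# Illustrative density: f(s) = e^{-s} for s >= 0 and 0 otherwise.
def f(s):
    return math.exp(-s) if s >= 0 else 0.0

# Its continuous probability distribution: F(x) = 1 - e^{-x} for x >= 0.
def F(x):
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

# Crude left Riemann sum approximating the integral of f over [0, x].
def riemann(x, ds=1e-4):
    return sum(f(k * ds) * ds for k in range(int(x / ds)))

assert abs(riemann(3.0) - F(3.0)) < 1e-3   # F really is the integral of f
assert abs(riemann(40.0) - 1.0) < 1e-3     # total probability is (about) 1
```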
It should be emphasized that the above is a basic definition of probability distributions on the real line. In probability theory, probability distributions are actually defined much more generally in terms of sigma algebras and measures. Using these general definitions, one can even formulate probability distributions for general classes of abstract sets beyond the real numbers or Euclidean spaces.
Probability distributions in practice
Statistical methods used to choose between distributions and estimate parameters
In the first example in this article, one may look through medical records to find approximately how many people are known to suffer such mishaps per century, and from that information create a statistic to estimate the probabilities. A strict frequentist will stop there; most statisticians will allow non-statistical information to be used to arrive at what would be considered the best available distribution to model the problem. Such information would include knowledge and intuition about people's tendency to consult doctors after accidents, the comprehensiveness of the records, and so on.
• [1]Person actually hit by a meteorite.
See also
Related topics
External links
Introductory Chemistry, 1st Canadian Edition [Clone]
1. Learn Dalton’s law of partial pressures.
One of the properties of gases is that they mix with each other. When they do so, they become a solution — a homogeneous mixture. Some of the properties of gas mixtures are easy to determine if we
know the composition of the gases in the mix.
In gas mixtures, each component in the gas phase can be treated separately. Each component of the mixture shares the same temperature and volume. (Remember that gases expand to fill the volume of
their container; gases in a mixture do that as well.) However, each gas has its own pressure. The partial pressure of a gas, P[i], is the pressure that an individual gas in a mixture has. Partial
pressures are expressed in torr, millimetres of mercury, or atmospheres like any other gas pressure; however, we use the term pressure when talking about pure gases and the term partial pressure when
we are talking about the individual gas components in a mixture.
Dalton’s law of partial pressures states that the total pressure of a gas mixture, P[tot], is equal to the sum of the partial pressures of the components, P[i]:
P[tot] = P[1] + P[2] + P[3] + ...
Although this may seem to be a trivial law, it reinforces the idea that gases behave independently of each other.
A mixture of H[2] at 2.33 atm and N[2] at 0.77 atm is in a container. What is the total pressure in the container?
Dalton’s law of partial pressures states that the total pressure is equal to the sum of the partial pressures. We simply add the two pressures together:
P[tot] = 2.33 atm + 0.77 atm = 3.10 atm
Test Yourself
Air can be thought of as a mixture of N[2] and O[2]. In 760 torr of air, the partial pressure of N[2] is 608 torr. What is the partial pressure of O[2]?
152 torr
A 2.00 L container with 2.50 atm of H[2] is connected to a 5.00 L container with 1.90 atm of O[2] inside. The containers are opened, and the gases mix. What is the final pressure inside the containers?
Because gases act independently of each other, we can determine the resulting final pressures using Boyle’s law and then add the two resulting pressures together to get the final pressure. The total
final volume is 2.00 L + 5.00 L = 7.00 L. First, we use Boyle’s law to determine the final pressure of H[2]:
(2.50 atm)(2.00 L) = P[2](7.00 L)
Solving for P[2], we get:
P[2] = (2.50 atm)(2.00 L)/(7.00 L) = 0.714 atm
Now we do that same thing for the O[2]:
P[2] = (1.90 atm)(5.00 L)/(7.00 L) = 1.36 atm
The total pressure is the sum of the two resulting partial pressures:
P[tot] = 0.714 atm + 1.36 atm = 2.07 atm
Test Yourself
If 0.75 atm of He in a 2.00 L container is connected to a 3.00 L container with 0.35 atm of Ne and the connection between the containers is opened, what is the resulting total pressure?
0.51 atm
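The two mixing problems above can be checked with a short Python sketch (the helper name `mixed_pressure` is mine): each gas is expanded into the combined volume with Boyle's law, and the resulting partial pressures are summed per Dalton's law.

```python
# Each gas: Boyle's law gives its pressure in the combined volume,
# P_final = P_initial * V_initial / V_total; Dalton's law sums them.
# Units: atm and litres.
def mixed_pressure(gases, digits=2):
    """gases: list of (initial pressure in atm, initial volume in L)."""
    v_total = sum(v for _, v in gases)
    partials = [p * v / v_total for p, v in gases]
    return round(sum(partials), digits)

print(mixed_pressure([(2.50, 2.00), (1.90, 5.00)]))  # 2.07 atm
print(mixed_pressure([(0.75, 2.00), (0.35, 3.00)]))  # 0.51 atm (Test Yourself)
```

This reproduces the 2.07 atm and 0.51 atm answers above.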
One of the reasons we have to deal with Dalton’s law of partial pressures is that gases are frequently collected by bubbling through water. As we will see in Chapter 10 “Solids and Liquids,” liquids
are constantly evaporating into a vapour until the vapour achieves a partial pressure characteristic of the substance and the temperature. This partial pressure is called a vapour pressure. Table 6.2
“Vapour Pressure of Water versus Temperature” lists the vapour pressures of H[2]O versus temperature. Note that if a substance is normally a gas under a given set of conditions, the term partial
pressure is used; the term vapour pressure is reserved for the partial pressure of a vapour when the liquid is the normal phase under a given set of conditions.
Table 6.2 Vapour Pressure of Water versus Temperature
Temperature (°C) Vapour Pressure (torr)
5 6.54
10 9.21
15 12.79
20 17.54
21 18.66
22 19.84
23 21.08
24 22.39
25 23.77
30 31.84
35 42.20
40 55.36
50 92.59
60 149.5
70 233.8
80 355.3
90 525.9
100 760.0
Any time a gas is collected over water, the total pressure is equal to the partial pressure of the gas plus the vapour pressure of water. This means that the amount of gas collected will be less than
the total pressure suggests.
Hydrogen gas is generated by the reaction of nitric acid and elemental iron. The gas is collected in an inverted 2.00 L container immersed in a pool of water at 22°C. At the end of the collection,
the partial pressure inside the container is 733 torr. How many moles of H[2] gas were generated?
We need to take into account that the total pressure includes the vapour pressure of water. According to Table 6.2 “Vapour Pressure of Water versus Temperature,” the vapour pressure of water at 22°C
is 19.84 torr. According to Dalton’s law of partial pressures, the total pressure equals the sum of the pressures of the individual gases, so:
733 torr = PH[2] + PH[2]O = PH[2] + 19.84 torr
We solve by subtracting:
PH[2] = 713 torr
Now we can use the ideal gas law to determine the number of moles (remembering to convert the temperature to kelvins, making it 295 K):
n = PV/RT = (713/760 atm)(2.00 L)/[(0.08205 L·atm/(mol·K))(295 K)]
All the units cancel except for mol, which is what we are looking for. So:
n = 0.0775 mol H[2] collected
Test Yourself
CO[2], generated by the decomposition of CaCO[3], is collected in a 3.50 L container over water. If the temperature is 50°C and the total pressure inside the container is 833 torr, how many moles of
CO[2] were generated?
0.129 mol
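A Python sketch of the collection-over-water recipe (the function and the table excerpt are my own; I assume R = 0.08205 L·atm/(mol·K), the value used in the worked example): subtract the vapour pressure of water from the total, then apply the ideal gas law.

```python
# Excerpt of Table 6.2: vapour pressure of water in torr, keyed by deg C.
VAPOUR_TORR = {20: 17.54, 22: 19.84, 25: 23.77, 50: 92.59}

def moles_collected(p_total_torr, volume_l, temp_c):
    # Dalton's law: gas pressure = total pressure - water vapour pressure.
    p_gas_atm = (p_total_torr - VAPOUR_TORR[temp_c]) / 760
    # Ideal gas law: n = PV/RT, with T in kelvins.
    return p_gas_atm * volume_l / (0.08205 * (temp_c + 273))

print(round(moles_collected(733, 2.00, 22), 4))  # 0.0775 mol H2 (example above)
print(round(moles_collected(833, 3.50, 50), 3))  # 0.129 mol CO2 (Test Yourself)
```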
Finally, we introduce a new unit that can be useful, especially for gases. The mole fraction, χ[i], is the ratio of the number of moles of component i in a mixture divided by the total number of
moles in the sample:
χ[i] = n[i]/n[tot]
(χ is the lowercase Greek letter chi.) Note that mole fraction is not a percentage; its values range from 0 to 1. For example, consider the combination of 4.00 g of He and 5.0 g of Ne. Converting
both to moles, we get:
4.00 g He × (1 mol He/4.00 g He) = 1.00 mol He
5.0 g Ne × (1 mol Ne/20.18 g Ne) = 0.25 mol Ne
The total number of moles is the sum of the two mole amounts:
total moles = 1.00 mol + 0.25 mol = 1.25 mol
The mole fractions are simply the ratio of each mole amount to the total number of moles, 1.25 mol:
χ[He] = 1.00 mol/1.25 mol = 0.800
χ[Ne] = 0.25 mol/1.25 mol = 0.200
The sum of the mole fractions equals exactly 1.
For gases, there is another way to determine the mole fraction. When gases have the same volume and temperature (as they would in a mixture of gases), the number of moles is proportional to partial pressure, so the mole fractions for a gas mixture can be determined by taking the ratio of partial pressure to total pressure:
χ[i] = P[i]/P[tot]
This expression allows us to determine mole fractions without calculating the moles of each component directly.
A container has a mixture of He at 0.80 atm and Ne at 0.60 atm. What is the mole fraction of each component?
According to Dalton’s law, the total pressure is the sum of the partial pressures:
P[tot] = 0.80 atm + 0.60 atm = 1.40 atm
The mole fractions are the ratios of the partial pressure of each component to the total pressure:
χ[He] = 0.80 atm/1.40 atm = 0.57
χ[Ne] = 0.60 atm/1.40 atm = 0.43
Again, the sum of the mole fractions is exactly 1.
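The mole-fraction-from-pressure rule is easy to encode; here is a small Python sketch (function name mine) applied to the He/Ne mixture above:

```python
# Mole fractions from partial pressures: chi_i = P_i / P_tot,
# valid when every component shares the same volume and temperature.
def mole_fractions(partials):
    total = sum(partials.values())
    return {gas: p / total for gas, p in partials.items()}

chi = mole_fractions({"He": 0.80, "Ne": 0.60})
print({g: round(x, 3) for g, x in chi.items()})  # He ~0.571, Ne ~0.429

# The mole fractions always sum to 1 (up to floating point).
assert abs(sum(chi.values()) - 1.0) < 1e-12
```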
Test Yourself
What are the mole fractions when 0.65 atm of O[2] and 1.30 atm of N[2] are mixed in a container?
χ[O2] = 0.65 atm/1.95 atm = 0.33; χ[N2] = 1.30 atm/1.95 atm = 0.67
Food and Drink App: Carbonated Beverages
Carbonated beverages — sodas, beer, sparkling wines — have one thing in common: they have CO[2] gas dissolved in them in such sufficient quantities that it affects the drinking experience. Most people find the drinking experience pleasant — indeed, in the United States alone, over 1.5 × 10^10 gal of soda are consumed each year, which is almost 50 gal per person! This figure does not include other types of carbonated beverages, so the total consumption is probably significantly higher.
All carbonated beverages are made in one of two ways. First, the flat beverage is subjected to a high pressure of CO[2] gas, which forces the gas into solution. The carbonated beverage is then
packaged in a tightly sealed package (usually a bottle or a can) and sold. When the container is opened, the CO[2] pressure is released, resulting in the well-known hiss, and CO[2] bubbles come out
of solution (Figure 6.5 “Opening a Carbonated Beverage”). This must be done with care: if the CO[2] comes out too violently, a mess can occur!
Figure 6.5 “Opening a Carbonated Beverage.” If you are not careful opening a container of a carbonated beverage, you can make a mess as the CO2 comes out of solution suddenly.
The second way a beverage can become carbonated is by the ingestion of sugar by yeast, which then generates CO[2] as a digestion product. This process is called fermentation. The overall reaction is:
C[6]H[12]O[6](aq) → 2C[2]H[5]OH(aq) + 2CO[2](aq)
When this process occurs in a closed container, the CO[2] produced dissolves in the liquid, only to be released from solution when the container is opened. Most fine sparkling wines and champagnes
are turned into carbonated beverages this way. Less-expensive sparkling wines are made like sodas and beer, with exposure to high pressures of CO[2] gas.
• The pressure of a gas in a gas mixture is termed the partial pressure.
• Dalton’s law of partial pressure states that the total pressure in a gas mixture is the sum of the individual partial pressures.
• Collecting gases over water requires that we take the vapour pressure of water into account.
• Mole fraction is another way to express the amounts of components in a mixture.
1. What is the total pressure of a gas mixture containing these partial pressures: PN[2] = 0.78 atm, PH[2] = 0.33 atm, and PO[2] = 1.59 atm?
2. What is the total pressure of a gas mixture containing these partial pressures: P[Ne] = 312 torr, P[He] = 799 torr, and P[Ar] = 831 torr?
3. In a gas mixture of He and Ne, the total pressure is 335 torr and the partial pressure of He is 0.228 atm. What is the partial pressure of Ne?
4. In a gas mixture of O[2] and N[2], the total pressure is 2.66 atm and the partial pressure of O[2] is 888 torr. What is the partial pressure of N[2]?
5. A 3.55 L container has a mixture of 56.7 g of Ar and 33.9 g of He at 33°C. What are the partial pressures of the gases and the total pressure inside the container?
6. A 772 mL container has a mixture of 2.99 g of H[2] and 44.2 g of Xe at 388 K. What are the partial pressures of the gases and the total pressure inside the container?
7. A sample of O[2] is collected over water in a 5.00 L container at 20°C. If the total pressure is 688 torr, how many moles of O[2] are collected?
8. A sample of H[2] is collected over water in a 3.55 L container at 50°C. If the total pressure is 445 torr, how many moles of H[2] are collected?
9. A sample of CO is collected over water in a 25.00 L container at 5°C. If the total pressure is 0.112 atm, how many moles of CO are collected?
10. A sample of NO[2] is collected over water in a 775 mL container at 25°C. If the total pressure is 0.990 atm, how many moles of NO[2] are collected?
11. A sample of NO is collected over water in a 75.0 mL container at 25°C. If the total pressure is 0.495 atm, how many grams of NO are collected?
12. A sample of ClO[2] is collected over water in a 0.800 L container at 15°C. If the total pressure is 1.002 atm, how many grams of ClO[2] are collected?
13. Determine the mole fractions of each component when 44.5 g of He is mixed with 8.83 g of H[2].
14. Determine the mole fractions of each component when 9.33 g of SO[2] is mixed with 13.29 g of SO[3].
15. In a container, 4.56 atm of F[2] is combined with 2.66 atm of Cl[2]. What is the mole fraction of each component?
16. In a container, 77.3 atm of SiF[4] are mixed with 33.9 atm of O[2]. What is the mole fraction of each component?
1. 2.70 atm
3. 162 torr, or 0.213 atm
5. P[Ar] = 10.0 atm; P[He] = 59.9 atm; P[tot] = 69.9 atm
7. 0.183 mol
9. 0.113 mol
11. 0.0440 g
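The arithmetic behind the odd-numbered answers can be checked with a short script. This is an illustrative sketch, not part of the chapter; it assumes R = 0.08206 L·atm/(mol·K), kelvins computed as °C + 273, and molar masses Ar = 39.95 g/mol and He = 4.003 g/mol.

```python
# Exercise 1: Dalton's law -- the total pressure is the sum of the partials.
p_total = 0.78 + 0.33 + 1.59                  # atm
print(f"{p_total:.2f} atm")                   # 2.70 atm

# Exercise 5: partial pressures from the ideal gas law, P = nRT/V.
R = 0.08206                                   # L*atm/(mol*K)
V = 3.55                                      # L
T = 33 + 273                                  # K
n_ar = 56.7 / 39.95                           # mol of Ar
n_he = 33.9 / 4.003                           # mol of He
p_ar = n_ar * R * T / V
p_he = n_he * R * T / V
print(f"{p_ar:.1f} {p_he:.1f} {p_ar + p_he:.1f}")   # 10.0 59.9 69.9
```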
The pressure that an individual gas in a mixture has.
The total pressure of a gas mixture, P_tot, is equal to the sum of the partial pressures of the components, P_i.
The partial pressure exerted by evaporation of a liquid.
The ratio of the number of moles of a component to the total number of moles in a system.
C1: EDCS in Distributed Settings
Reminder: This post contains 5100 words · 15 min read · by Xianbin
Edge degree constrained subgraph (EDCS) has received much attention and has proved to be very useful in matching-related problems. In this post, we introduce some interesting properties of EDCS in distributed settings. This post is based on [1,2,3].
\(\textbf{Definition (EDCS)}\) Given a graph \(G = (V, E)\), let \(\beta_1 \geq \beta_2 \geq 0\) be two parameters. An edge degree constrained subgraph (EDCS) \(H=({\color{blue}V}, {\color{red}E_H})
\) of \(G\) satisfies the following properties.
\(\textbf{P1)}:\) For any edge \((u,v) \in E_H\): \(d_H(u) + d_H(v) \leq \beta_1\).
\(\textbf{P2)}:\) For any edge \((u,v) \in E\setminus E_H\): \(d_H(u) + d_H(v) \geq \beta_2\).
EDCS in Sampled Subgraphs
\(\textbf{Lemma 1}\) (Degree Distribution Lemma). Fix a graph \(G=(V, E)\) and two parameters \(\beta_1, \beta_2\). Let \(\beta_2 = (1-\lambda)\beta_1\). For any two subgraphs \(G_1,G_2\) that are \
(\text{EDCS}(G, \beta_1,\beta_2)\), and any vertex \(v\in V\), we have
\[\lvert d_{G_1}(v) - d_{G_2}(v) \rvert = O(\log n) \sqrt{\lambda}\cdot \beta\]
\(\textbf{Theorem 1}\). Let \(G_1, G_2 \subseteq G\) be two edge sample subgraphs (selecting with probability \(p\)) for a given graph \(G= (V, E)\). Let \(\color{blue} H_1, H_2\) denote any EDCSs of
\(G_1, G_2\) respectively, with parameters \((\beta, (1-\lambda)\beta)\).
Suppose \(\beta \geq \frac{750}{\lambda^2} \ln n\). Then, with high probability, for each \(v \in V\), we have
\[\lvert d_{H_1}(v) - d_{H_2}(v)\rvert \leq O(\log n) \sqrt{\lambda}\cdot \beta\]
\(\textbf{Theorem 2}\). Let \(G_1, G_2 \subseteq G\) be two \(\color{red}\text{vertex}\) sample subgraphs (selecting with probability \(p\)) for a given graph \(G= (V, E)\). Let \(\color{blue} H_1,
H_2\) denote any EDCSs of \(G_1, G_2\) respectively, with parameters \((\beta, (1-\lambda)\beta)\).
Suppose \(\beta \geq \frac{750}{\lambda^2} \ln n\). Then, with high probability, for each \(\color{blue} v \in G_1\cap G_2\), we have
\[\lvert d_{H_1}(v) - d_{H_2}(v)\rvert \leq O(\log n) \sqrt{\lambda}\cdot \beta\]
\(\textbf{Lemma 2}\). For any \(\epsilon > 0\) and \(\beta \geq 1/\epsilon\), any graph \(G = (V, E)\) admits an \(\text{EDCS}(G, \beta, (1-\epsilon)\beta)\).
It is easy to state this argument, but how to prove such a thing? Let me try to do this in thirty minutes.
If I can design an algorithm to create the required EDCS for any graph, it is proved. So my job is to find this algorithm. For simplicity, let us assume that \(\beta,\epsilon\) are some constants.
The first step is to satisfy the first property, i.e., for any edge \((u,v)\) in the EDCS, the sum of the degrees of \(u\) and \(v\) is at most \(\beta\). That seems easy: just randomly sample a constant number of edges for each node in \(V\). It may look like both properties are then satisfied, but there is a problem: if a node has a large degree, its degree in the sampled subgraph can also be large, which would violate the first property.
I also noticed that if we start with an empty graph, the first property is satisfied vacuously (the second is violated by every edge of \(G\), but those violations can be repaired by adding edges).
When I looked at the definition of EDCS again, I realized one thing I did not notice before: the goal of an EDCS is matching. An edge whose endpoints have large degrees should not be included in the matching, because it reduces the chance of finding a larger matching. If we need a data structure that contains a good matching, we should not include such an edge. That is to say, for edges in the EDCS we want the degrees to be small, while edges outside the EDCS may have large degrees.
It is better to start with an empty graph. We first add some edges, trying to meet property (ii). If the added edges violate property (i), we may need to fix the previous results, i.e., remove some edges. The final thing is to show that after a certain number of steps, this fixing procedure will stop. I know a potential function might help. The question is how to build such a potential function.
Time is up! I cannot figure out the potential function.
Let us check the answer.
\(\textbf{Proof}\). Start with an empty graph \(H\). ( Ha. We have it! ). If there exists an edge in \(H\) that violates property (i), we remove it. If there exists an edge in \(G\setminus H\) that violates property (ii), i.e., its endpoint degrees in \(H\) sum to less than \((1-\epsilon)\beta\), we add it to \(H\).
Set the potential function to be
\[\Phi = \Phi(H) :=\underbrace{(1-\frac{\epsilon}{2})\beta \sum_{u\in V} deg_H(u)}_{\text{ first term} } +\underbrace{\sum_{(u,v)\in H}-(deg_H(u) + deg_H(v))}_\text{second term}\]
( This function is what we missed. ) The maximum value of this function is at most \(O(n\beta^2)\) and the initial value is 0.
After removing an edge,
• the first term decreases by \(2\beta - \epsilon \beta\) (each node reduces one degree).
• the second term increases by at least \((\beta + 1) + (\beta - 1) = 2\beta\): removing the edge deletes its own contribution of at least \(\beta + 1\), and each of the at least \(\beta - 1\) other edges incident to \(u\) or \(v\) loses one unit of edge degree.
Therefore, \(\Phi\) increases by at least \(\epsilon \beta\) for this case.
Similarly, after adding an edge, \(\Phi\) increases by at least \(\epsilon \beta\).
So, after \(O(n\beta/\epsilon)\) steps, the algorithm stops. It is proved.
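The proof is constructive, so the fixing procedure can be run directly on small graphs. The sketch below is my own illustration (the function name and graph representation are assumptions, not from the papers); termination is guaranteed by exactly the potential-function argument above.

```python
def edcs(adj, beta, eps):
    """Build an EDCS(G, beta, (1 - eps) * beta) by repeatedly fixing violations.

    adj: dict mapping each vertex to the set of its neighbours in G.
    Returns the edge set of H as a set of frozensets.
    """
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    H = set()
    deg = {v: 0 for v in adj}            # degrees in H

    def violation():
        for e in H:                      # property (i): edge degree <= beta
            u, v = tuple(e)
            if deg[u] + deg[v] > beta:
                return ("remove", e)
        for e in edges - H:              # property (ii): edge degree >= (1 - eps) * beta
            u, v = tuple(e)
            if deg[u] + deg[v] < (1 - eps) * beta:
                return ("add", e)
        return None

    step = violation()
    while step is not None:              # stops after O(n * beta / eps) fixes
        op, e = step
        u, v = tuple(e)
        d = -1 if op == "remove" else 1
        (H.remove if op == "remove" else H.add)(e)
        deg[u] += d
        deg[v] += d
        step = violation()
    return H

# Example: the complete graph K4 with beta = 4, eps = 0.5.
adj = {i: {j for j in range(4) if j != i} for i in range(4)}
H = edcs(adj, beta=4, eps=0.5)
print(len(H) > 0)  # True: an empty H would violate property (ii)
```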
It is a beautiful proof.
Unfortunately, I did not figure out the potential function.
Some feelings about this potential function.
I feel that the role of this potential function is to show that the potential increases no matter whether we delete or insert an edge. To achieve this goal, we need experience.
[1]. Assadi, Sepehr, et al. “Coresets meet EDCS: algorithms for matching and vertex cover on massive graphs.” Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms. Society
for Industrial and Applied Mathematics, 2019.
[2]. A. Bernstein and C. Stein. Faster fully dynamic matchings with small approximation ratios. In Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2016,
Arlington, VA, USA, January 10-12, 2016, pages 692–711, 2016
[3]. Bernstein, Aaron, and Cliff Stein. “Fully dynamic matching in bipartite graphs.” Automata, Languages, and Programming: 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015,
Proceedings, Part I 42. Springer Berlin Heidelberg, 2015.
Coordinate Systems Archives - David The Maths Tutor
A recurring property of coordinate systems is that in order to locate a point in an n-dimensional space, you need n numbers (or n independent pieces of information about that point). There is a way
to “cheat” this using just one number (called a parameter) to locate an n-dimensional point.
This isn’t really cheating as you still have to initially provide the required information, but once done, one number will suffice to place a point.
An example of this is the equation of a circle of radius r:
x^2 + y^2 = r^2
This is the standard Cartesian equation, but a parametric way of defining a circle, using a parameter t, is
x = r cos(t), y = r sin(t)
Defined in this way, any value of t will generate a point on the same circle. We can generate the Cartesian equation from these two parametric equations, but I will leave that as a topic for a future post.
Parametric equations can be a much more useful way to represent a curve, especially curves that model a physical process.
If a projectile is launched with an initial speed of 31.6267 m/s at an angle of 50.78° from the ground, it will follow a parabolic trajectory which can be represented by the equation
y = 1.225x - 0.01225x^2
where y is the height above the ground and x is the distance along the ground from the launch point. The trajectory of the projectile (the graph of the above equation) looks like
This graph is useful in that it tells us how high the projectile goes and how far. But it doesn’t tell us where the ball is at any time or how long it take to hit the ground.
The initial velocity can be broken up into a horizontal and a vertical component. These components can be treated separately:
If resistance due to the air is neglected, the horizontal distance at a given time t is x = 20t. The vertical distance cannot be treated as simply as the vertical velocity is constantly changing due
to gravity. From physics and calculus, the vertical distance is y = -4.9t^2 + 24.5t. These two equations are the parametric forms of the Cartesian trajectory equation. For any time t, a point on this
trajectory, (20t, -4.9t^2 + 24.5t), is located and represents where the projectile is at that time:
Notice how we get more information about what is going on with this way of representing a graph. We can now tell where the projectile is at any time t, that it takes 2.5 seconds to reach the top of
the trajectory, and that it takes 5 seconds to return to the ground.
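The parametric claims above are easy to verify numerically. A small sketch using x = 20t and y = -4.9t^2 + 24.5t:

```python
def position(t):
    """Projectile position (metres) at time t (seconds)."""
    x = 20 * t                       # constant horizontal speed of 20 m/s
    y = -4.9 * t**2 + 24.5 * t       # gravity acting on an initial 24.5 m/s
    return x, y

# Apex: vertical speed 24.5 - 9.8t hits zero at t = 2.5 s.
x_apex, y_apex = position(2.5)
print(round(y_apex, 3))              # 30.625 -> maximum height in metres

# Landing: y returns to zero at t = 5 s, 100 m down-range.
x_land, y_land = position(5.0)
print(round(x_land, 1), abs(round(y_land, 6)))  # 100.0 0.0
```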
This is not a unique way to represent this trajectory parametrically, but this one conforms to the physics of the problem. In general, parametric equations make it possible to plot graphs that are
difficult or even impossible to plot with a single Cartesian equation.
This can be extended to higher dimensions as well. There is a lot of mathematics around parametric equations and it adds to the wonder (and complexity) of maths.
Coordinate Systems – 3D, part 2
Once again, if you want to locate a point in 3-dimensional space, you need 3 numbers. In my last post, the 2-D Cartesian coordinate system (sometimes called the rectangular coordinate system) was
extended to 3-D by adding another axis that is perpendicular to the other two axes. A point is then located using the coordinates (x, y, z), (1, −2, 3) for example. Here are two more ways to locate a
3-D point that uses the rectangular system as a backdrop.
Spherical Coordinate System
If you remember, in the 2-D scenario polar coordinates used an angle (θ) from the positive x-axis and a distance (r) from the origin to determine the location of a point. And equations representing a plot of points that satisfied the relationship between these coordinates had r's and θ's in them. In 3-D, the spherical coordinate system extends this method.
There are different conventions here but they all use two angles and a distance. The mathematical convention is shown below:
Source: https://en.wikipedia.org/wiki/Spherical_coordinate_system#/media/File:3D_Spherical_2.svg
Here, a point is located by an angle from the positive x-axis, θ (like in polar coordinates), an angle from the positive z-axis, φ, and a distance, r, from the origin. A point in this system has coordinates (r, θ, φ). As with polar coordinates, there are curves that are more easily expressed in spherical coordinates. For example, a sphere of radius 4 centred at the origin can be easily expressed in spherical coordinates as r = 4:
There are other conventions for spherical coordinates, one of which you are very familiar with. Locating a point on the earth is typically done with two numbers, longitude and latitude. Longitude is
the angle a location is from the agreed reference meridian that runs through Greenwich England, and latitude is its angle from the equator. If the origin is at the earth’s centre with the x-axis
going through the reference meridian (called the prime meridian) and the z-axis going through the north pole, longitude is our θ, latitude is 90° − φ, and r is always the radius of the Earth.
Another convention is used to locate earth satellites using angles right ascension (similar to longitude) and declination (similar to latitude) from an agreed earth centred coordinate system where
the axes are fixed and do not rotate with the earth.
There are other variations of this coordinate system; these are just a few.
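Under the mathematical convention above (θ measured from the positive x-axis, φ from the positive z-axis), converting to Cartesian coordinates is a short computation. An illustrative sketch:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    """(r, theta, phi) -> (x, y, z); theta from the +x axis, phi from the +z axis."""
    x = r * math.sin(phi) * math.cos(theta)
    y = r * math.sin(phi) * math.sin(theta)
    z = r * math.cos(phi)
    return x, y, z

# A point with r = 4 always lands on the sphere of radius 4:
x, y, z = spherical_to_cartesian(4, math.radians(30), math.radians(70))
print(round(math.sqrt(x * x + y * y + z * z), 6))  # 4.0

# phi = 0 points straight up the z-axis:
print(spherical_to_cartesian(1, 0.0, 0.0))  # (0.0, 0.0, 1.0)
```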
Cylindrical Coordinate System
You can think of the cylindrical coordinate system as the 2D polar system with an added z coordinate:
Source: https://tutorial.math.lamar.edu/classes/calciii/CylindricalCoords.aspx
Different letters/Greek symbols can be used, but they all represent the same system. If you look at the x–y plane above, you see that this is just the polar coordinate system that was explained in a
previous post. To add the third dimension, just move up z units to the desired point (r, π , z).
Cylindrical coordinates are useful for describing objects that are symmetrical with respect to the z-axis. For example, a cylinder of radius 4 can be easily described with the equation r = 4:
r = 4
Another example is a cone: z = r:
z = r
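Converting cylindrical (r, θ, z) back to Cartesian only touches the first two coordinates, since z is shared between the systems. A small sketch, checking that points with r = 4 land on the cylinder x^2 + y^2 = 16:

```python
import math

def cylindrical_to_cartesian(r, theta, z):
    """(r, theta, z) -> (x, y, z): polar in the x-y plane, z unchanged."""
    return r * math.cos(theta), r * math.sin(theta), z

# Every theta and z with r = 4 lands on the cylinder x^2 + y^2 = 16:
for theta in (0.0, 1.0, 2.5):
    for z in (-3.0, 0.0, 7.0):
        x, y, zz = cylindrical_to_cartesian(4, theta, z)
        assert abs(x * x + y * y - 16) < 1e-9 and zz == z
print("all sampled points lie on the cylinder r = 4")
```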
Switching between rectangular, spherical, and cylindrical coordinates is a useful tool in calculus. An equation expressed in one of these systems may be unsolvable but solvable in a different system.
In my next post, I’ll describe a sneaky way to locate a point in 2 or 3-D with one number: parametric equations.
Coordinate Systems – 3D, part 1
Since we live in a 3 dimensional world, many problems we encounter in fields such as science and engineering, as well as others, are modelled mathematically using 3 variables, hence, 3D.
The first coordinate system introduced to students to handle 3 variables is an extension of the 2D Cartesian coordinate system. If another number line is added to the 2D system that is 90° to the previous 2 axes, with the origin coinciding with the other two origins, you have the 3D system. The third axis is called the z-axis. So a point now needs 3 numbers to place it in 3D space: (x, y, z).
Frequently, to draw a 3D grid on a 2D surface, the y and z axes are drawn in the plane of the surface and the x axis is drawn in perspective to show that it is perpendicular to the surface. So placing
a point in a 3D Cartesian frame is an artistic challenge for me but drawing dashed lines parallel to the axes helps:
There are other orientations of the 3 axes when showing them in 2D, but this is a very common one.
As with the 2D Cartesian coordinate system, equations relating the variables x, y, and z can be plotted, showing all the values of x, y, and z that make the equation true.
In 2D, a general equation of a line is ax + by = c, where the a, b, and c are specific numbers. For example, the set of points that satisfy the equation 2x -3y = 7, plot as a straight line. By
extension, in 3D, the general linear equation is ax + by + cz = d. Though this is called a linear equation, it plots as a plane in 3D:
The 3D version of a circle in 2D is a sphere. The generic equation of a sphere of radius r centred at the origin is x^2 + y^2 + z^2 = r^2:
Very interesting shapes can be made using 3D graphs. Here are a few:
\[z = \pm \sqrt{0.4^2-\left(2-\sqrt{x^2+y^2}\right)^2}\]
\[z=4 e^{-\frac{1}{4} y^2} \sin (2 x)\]
As with 2D, there are other ways of locating points in 3D. I will present some of these in my next post.
Coordinate Systems – 2D, part 2
In my last post, I talked about the Cartesian coordinate system where a point or a set of points can be located using the two numbers (x, y). There is another popular coordinate system that also
locates a point in 2D space.
In the graph below, I have plotted the point (5, 3) in the Cartesian coordinate system we now know very well. I have added a line from the origin to that point and noted that the line makes an angle θ with the x-axis and that the length of the line is r. I’ve also added perpendicular lines from the point to the x and y axes to show that similar right triangles are formed:
From this graph, you can see that the right triangles have sides of lengths 5 and 3 units. From the Pythagorean Theorem,
r = √(5^2 + 3^2) = √34 ≈ 5.83
And from trigonometry:
tan(θ) = 3/5, so θ ≈ 30.96°
Why did I do this? Another way to locate that same point is to 1) define a line (also called a ray) from the origin that is 30.96° from the x-axis then, 2) go along that line 5.83 units and stop.
That is your point. Welcome to polar coordinates.
This system of locating a point in 2D is called “polar” because the origin is a “pole” from which all the rays that you can define radiate. In the polar coordinate system, you also need two numbers to locate a point: r and θ. Conventionally, a point in polar coordinates is given in the order (r, θ).
The variable r is a point’s distance from the origin. θ is the angle measured from the positive x-axis: anti-clockwise is + and clockwise is −. Because angles repeat every 360° or 2π radians, a particular (r, θ) for a point is not unique. For example, (2, 25°) locates the same point as (2, 385°).
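The non-uniqueness of (r, θ) is easy to confirm numerically: converting to Cartesian coordinates with x = r cos θ and y = r sin θ sends (2, 25°) and (2, 385°) to the same point. An illustrative sketch:

```python
import math

def polar_to_cartesian(r, theta_deg):
    t = math.radians(theta_deg)
    return r * math.cos(t), r * math.sin(t)

p1 = polar_to_cartesian(2, 25)
p2 = polar_to_cartesian(2, 385)          # 25 + 360: same ray, same distance
print(all(abs(a - b) < 1e-9 for a, b in zip(p1, p2)))   # True

# Converting back recovers r and theta (up to floating-point error):
x, y = polar_to_cartesian(5.83, 30.96)
print(round(math.hypot(x, y), 2), round(math.degrees(math.atan2(y, x)), 2))  # 5.83 30.96
```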
Graphing relations is usually done by plotting r as a function of θ. Just as in Cartesian coordinates, the polar graph of an equation between r and θ is a picture of all the points whose (r, θ) coordinates satisfy the equation. For example, the graph below shows all points that satisfy r = 2cos(2θ):
Notice how a grid of concentric circles (possible r values) and rays (possible θ values) is super-imposed on the x and y axes. This is a polar graph grid.
There are Cartesian graphs that are more easily expressed and plotted in polar coordinates (and vice-versa). One glaring example is a circle. In the Cartesian frame, the equation of a circle, centred at the origin, is
x^2 + y^2 = r^2
where r is the radius. For a circle of radius 2, the above equation would have 4 on the right side and the graph would be a circle of radius 2 centred at the origin. In polar coordinates, the same
graph would be r = 2. This is a picture of all points that are 2 units away from the origin:
In orbital dynamics, polar plots are most useful for plotting a 2-body orbit. What is meant by “2-body” will be the subject of another post. The paths of most orbits of satellites around the earth are approximated by ellipses. In Cartesian coordinates, the equation of an ellipse is:
x^2/a^2 + y^2/b^2 = 1
The parameters a and b determine the size and orientation (long side vertical or horizontal) of the ellipse.
The problem with this plot is that the geometric centre of the ellipse is at the origin. The path of an earth satellite is not the path followed in this plot if the earth is at the origin. The earth
is at one of two special points associated with an ellipse called foci (singular focus). It is more useful in orbital dynamics if the ellipse were plotted in polar coordinates. The polar equation of
an ellipse (actually any conic shape, which includes circles, parabolas, and hyperbolas) is
r = p / (1 + e cos(θ))
where p and e are parameters that determine the size and the shape (circle, ellipse, parabola, or hyperbola) of the orbit. The parameter p is the y-intercept on a superimposed Cartesian frame and we
will limit e to be strictly between 0 and 1 which makes the equation plot as an ellipse. This equation, by the way, is called the orbit equation because it accurately describes the shape of any orbit
between two point masses without being perturbed by other masses. An example of an elliptical orbit around the earth with a satellite at a particular position is:
This polar plot is more useful to describe orbits because the earth is at the origin and it shows three of the parameters commonly used to describe a satellite’s position and orbit: p (called the
semi-latus rectum), e (called the eccentricity), and θ (called the true anomaly).
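The orbit equation can be evaluated directly. A sketch with illustrative numbers (p = 10000 km and e = 0.3 are my choices, not from the post), computing the closest and farthest points of the orbit:

```python
import math

def orbit_radius(p, e, theta):
    """Distance from the focus for a conic with semi-latus rectum p, eccentricity e."""
    return p / (1 + e * math.cos(theta))

p, e = 10000.0, 0.3                         # an illustrative elliptical orbit (km)
perigee = orbit_radius(p, e, 0.0)           # closest point: p / (1 + e)
apogee = orbit_radius(p, e, math.pi)        # farthest point: p / (1 - e)
print(round(perigee, 1), round(apogee, 1))  # 7692.3 14285.7

# At theta = 90 degrees the radius equals the semi-latus rectum p:
print(round(orbit_radius(p, e, math.pi / 2), 6))  # 10000.0
```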
Polar plots can generate shapes that would be unwieldy to generate in the Cartesian frame:
There are other less popular 2D coordinate systems like the parabolic coordinate system. Here is what parabolic graph paper looks like:
I personally do not want to go there.
Coordinate Systems – 2D, part 1
How do you locate a point on a two-dimensional (2D) surface? Since we are now in two dimensions, it will take a minimum of 2 numbers to locate a point. As in the case for 1D, the 2D surface used can
be flat (which this post talks about) or curved: for example the surface of the Earth where the most common system to locate a point is the Geographic Coordinate System using latitude and longitude
(again, two numbers to locate a point).
Cartesian Coordinate System
The coordinate system most used by students of mathematics is the Cartesian Coordinate System. This was invented by (and named after) René Descartes in the 17th century. This system is used in 3D as
well as higher dimensions, but this post is limited to 2D. As most people best learn and retain mathematical concepts visually, this system of plotting was, and still is, indispensable in algebra,
calculus, geometry, trigonometry, and many more subjects. So what is the Cartesian Coordinate System?
If you take two 1D number lines, one horizontal and the other vertical so that they are at 90° to one another and their origins intersect, voilà, you have a Cartesian Coordinate System:
The system above also has a superimposed grid so that we can more easily located a point.
Conventionally, the horizontal line is called the x-axis, and the vertical one the y-axis. Note the negative numbers are to the left and down. A point on a plane which has this system of location, is
said to have coordinates (x, y). Note that x is always first. So a general point (x, y) will have a position such that it is x units left or right of the y-axis and y units above or below the x-axis.
Here are some examples:
Analysing points and shapes plotted on a Cartesian coordinate system is called Coordinate Geometry. The lengths and midpoints of plotted lines with defined endpoints can be calculated. But the much
more interesting use of a 2D coordinate system is plotting all the points that satisfy a relation between x and y values. This is called plotting an equation.
Suppose you have a relationship (equation) x^2 + y^2 = 4. What are the values of x and y that satisfy this equation? There are an infinite number of (x, y) pairs that will solve this equation. For
example, (0, 2) solves this equation because 0^2 + 2^2 = 4. Even though there are infinite solutions, we can draw a picture of all the points that do solve the equation:
As you can see, the set of all points that solve this equation plots as a circle of radius 2. Plots of other equation can look quite strange:
But it is important to remember that the (x, y) coordinates of any point on the graph of a relation, makes the equation true when you substitute those values into it.
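That claim is easy to check numerically: sampling points on the circle of radius 2 and substituting them back into x^2 + y^2 = 4 always satisfies the relation. A small sketch:

```python
import math

# Sample points on the circle of radius 2 and substitute into the relation.
for k in range(12):
    angle = 2 * math.pi * k / 12
    x, y = 2 * math.cos(angle), 2 * math.sin(angle)
    assert abs(x**2 + y**2 - 4) < 1e-9    # every sampled point solves it

# The point (0, 2) from the text is one such solution:
print(0**2 + 2**2 == 4)  # True
```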
The Cartesian coordinate system is not the only way to locate a point in 2D. I will talk about another popular 2D coordinate sytstem in my next post.
Coordinate Systems – 1D
Many of the posts I have written had plots of functions or relations between two variables, usually x and y. The teaching of algebra and calculus relies heavily on graphs to illustrate concepts. These
graphs are plots of all the points that satisfy an algebraic relation between the two (or more) variables. Behind these plots is the coordinate system used. This series of posts explores the
different coordinate systems commonly used in maths. Let’s first look at a one dimension (1D) coordinate system.
1D means that one number is needed to locate a point. The most used 1D coordinate system is the number line:
Number lines can be vertical or even curvy, for example, to show distance along a path. Usually though, the number line is a straight horizontal line. But they all have some things in common. First,
they have to have a reference point: a point from which all other points obtain their position. This point here and in all coordinate systems is called the origin. And second, there is a scale: the
distance between the tick marks that allow us to place a point. In the example above, the scale is 1 unit between tick marks. For example, if we want to plot the variable x = 5, the plot would be
There are an infinite number of points on this line: an infinite number of tick marks and an infinite number of points between each tick mark. What are the kinds of numbers that can be plotted?
Any number on the number line is called a real number. This is an actual mathematical term to distinguish these from other types of numbers used in maths such as imaginary numbers (despite the name, imaginary numbers have a real meaning in science and engineering). The set of real numbers is represented by the symbol ℝ. There are several subsets of real numbers.
The first set of numbers you learned as a child were the natural numbers. These are the counting numbers 1, 2, 3, … but do not include 0. This set of numbers is given the symbol ℕ.
Then you learned about 0 and negative integers. Integers are whole numbers (no decimal or fraction parts) and include the natural numbers, 0, and the negative integers. This set of numbers is given the symbol ℤ. Why not 𝕀? Because 𝕀 is the symbol for imaginary numbers, which are not real numbers, and 𝕀 is also sometimes used to refer to irrational numbers, which I will talk about soon. Notice that ℕ is a subset of ℤ which is a subset of ℝ.
The next type of real numbers is the set of rational numbers. These are numbers that can be put into the form p/q where p and q are integers. Any integer is a rational number, like 2, since 2 can be written as 2/1. Any decimal number with a repeating pattern of decimals (even if that is a repeating 0) is a rational number. As ℝ is already used for real numbers, this set of numbers is given the symbol ℚ. This stands for quotient, as p/q is a quotient (a maths term for division). All of the previous sets of numbers are subsets of ℚ.
That leaves the set of irrational numbers: the numbers that cannot be put into the form p/q. Numbers like π or √2 are irrational, and symbols like these are the only way to represent the exact values. They cannot be exactly represented as a decimal number as their decimal parts never repeat. There is no common symbol for these, but ℙ or 𝕀 are sometimes used. There are few occasions where only irrational numbers are required, but a more common notation would be ℝ∖ℚ, which means “all real numbers except rational numbers”. Here is a nice picture of how all these types of real numbers are related:
It’s the irrational and some of the rational numbers that lie between the tick marks. So π, approximately 3.14159, would be plotted between the tick marks at 3 and 4.
Plotting single points on the number line is rather boring. But it can also be used to indicate intervals of numbers like all the numbers between −6 and 2. This is shown as −6 < x < 2 where the endpoints are not included, or −6 ≤ x ≤ 2 if both endpoints are included, or a combination. When plotting these, an open circle means that the endpoint is not included and a filled-in circle means that it is included. So −6 < x ≤ 2 would be plotted
There’s not much else we can do when using the 1D number line, but we have a lot more options when expanding to 2D: to be continued.
Group synchronization on grids
Group synchronization requires estimating unknown elements (θ_v)_{v∈V} of a compact group G associated to the vertices of a graph G = (V, E), using noisy observations of the group differences associated to the edges. This model is relevant to a variety of applications ranging from structure from motion in computer vision to graph localization and positioning, to certain families of community detection problems. We focus on the case in which the graph G is the d-dimensional grid. Since the unknowns θ_v are only determined up to a global action of the group, we consider the following weak recovery question. Can we determine the group difference θ_u θ_v^{-1} between far apart vertices u, v better than by random guessing? We prove that weak recovery is possible (provided the noise is small enough) for d ≥ 3 and, for certain finite groups, for d ≥ 2. Vice versa, for some continuous groups, we prove that weak recovery is impossible for d = 2. Finally, for strong enough noise, weak recovery is always impossible.
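As an illustration of the model (mine, not from the paper), here is a minimal sketch for the simplest group, Z/2, on a 2-dimensional grid: each vertex holds a sign θ_v in {-1, +1}, each edge reports the product θ_u·θ_v flipped with some probability, and the signs are recovered up to a global flip by propagating along a spanning tree. With zero noise the recovery is exact.

```python
import random
from collections import deque

def synchronize_grid(n, flip_prob, seed=0):
    """Z/2 synchronization on an n x n grid; returns the recovery accuracy."""
    rng = random.Random(seed)
    theta = {(i, j): rng.choice([-1, 1]) for i in range(n) for j in range(n)}

    # Noisy observation for each grid edge: theta_u * theta_v, flipped w.p. flip_prob.
    obs = {}
    for u in theta:
        i, j = u
        for v in ((i + 1, j), (i, j + 1)):
            if v in theta:
                y = theta[u] * theta[v]
                obs[(u, v)] = -y if rng.random() < flip_prob else y

    # Recover by fixing the corner vertex and propagating along a BFS tree.
    est = {(0, 0): 1}
    queue = deque([(0, 0)])
    while queue:
        u = queue.popleft()
        for (a, b), y in obs.items():
            for src, dst in ((a, b), (b, a)):
                if src == u and dst not in est:
                    est[dst] = est[u] * y    # in Z/2 every element is its own inverse
                    queue.append(dst)

    # Score up to the unavoidable global-flip ambiguity.
    agree = sum(est[v] == theta[v] for v in theta)
    return max(agree, n * n - agree) / (n * n)

print(synchronize_grid(5, flip_prob=0.0))  # 1.0 -> exact weak recovery without noise
```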
All Science Journal Classification (ASJC) codes
• Computational Theory and Mathematics
• Signal Processing
• Statistics and Probability
• Theoretical Computer Science
• Graphs
• community detection
• group synchronization
• weak recovery
The Small_side_angle_bisector_decomposition_2 class implements a simple yet efficient heuristic for decomposing an input polygon into convex sub-polygons.
It is based on the algorithm suggested by Flato and Halperin [5], but without introducing Steiner points. The algorithm operates in two major steps. In the first step, it tries to subdivide the polygon by connecting two reflex vertices with an edge. When this is no longer possible, it eliminates the reflex vertices one by one by connecting them to other convex vertices, such that the new edge best approximates the angle bisector of the reflex vertex. The algorithm operates in \( O(n^2)\) time and takes \( O(n)\) space in the worst case, where \( n\) is the size of the input polygon.
Error Correction for Fire Growth Modeling
Kathryn Leonard and Derek DeSantis
CSU Channel Islands, Camarillo, CA, USA
Abstract. We construct predictions of fire boundary growth using level set methods. We generate a correction for predictions at the subsequent time step based on current error. The current error is captured by a thin-plate spline deformation from the initial predicted boundary to the observed boundary, which is then applied to the initial prediction at the subsequent time step. We apply these methods to data from the 1996 Bee Fire and 2002 Troy Fire. We also compare our results to earlier predictions for the Bee Fire using the FARSITE method. Error is measured using the Hausdorff distance. We determine conditions under which error correction improves prediction performance.
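The Hausdorff distance between two boundaries, sampled as finite point sets, can be computed straight from its definition. A sketch (my own, not the authors' code):

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets."""
    def directed(P, Q):
        # Worst case over P of the distance to the nearest point of Q.
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

# Two sampled "boundaries": a unit square and the same square shifted right by 0.5.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(x + 0.5, y) for x, y in square]
print(hausdorff(square, shifted))  # 0.5
```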
1 Introduction
Developing accurate models for the growth of forest fires is a vexing problem with life-and-death implications. The physical interactions between variables in a fire are too complex to be captured by any solvable mathematical formulation. For example, humidity plays an important role in rate of fire spread (ROS), but the fire itself alters the humidity of the surrounding air. Additionally, collecting reliable measurements of these variables is often impossible during a fire event.
The model currently used by US fire departments, FARSITE [3], propagates fires
locally along ellipses normal to the fire boundary via functions based on topography,
atmosphere, vegetation, and elevation above ground level of the fire [6]. Implemen-
tation often relies on coarse approximations to real-time input parameters, or none at
all. Recently, level set methods have been applied to model ROS [5]. Level set meth-
ods develop a global model of fire growth that depends on the geometry of the fire
front as well as an external vector field that can encompass the aforementioned exter-
nal variables. Again, implementations of the the level set model rarely account for the
complications of the physical reality of the fire.
Not surprisingly, none of these models produces very accurate predictions. As a result, attention has turned to error correction [4], [7], whereby correcting error in a current prediction relies on errors at earlier times. The hope is that we can account for the barriers to sophistication in our implementations by learning from their failures.

In one such approach, Fujioka defines a notion of bias in [4] based on a polar representation of the fire boundary, and generates a correction based on removing that bias from the prediction estimates. His work concludes that the uncorrected estimates are more accurate. We compare our results with his in Section 3.2. In another approach, Rochoux et al. develop a probabilistic framework using simulations and controlled burns to generate a best linear unbiased estimator (BLUE) of the correction using techniques based on the Kalman filter [7]. To date, the methodology has not been applied to real-world fire prediction.
Our approach uses fire intensity data from the California Troy and Bee fires to explore the idea that past fire behavior can realistically inform future fire prediction on the time scales for which data is available during an actual fire event. We also explore the relative accuracy of level set and FARSITE methods for modeling fire spread for the Bee fire, whose data captures the first 105 minutes of fire growth.

We apply the level set method as implemented by Sumengen [9] to an initial fire boundary. We determine a mismatch between the predicted fire boundary and the observed fire boundary using thin-plate spline (TPS) point matching as implemented by Chui and Rangarajan [2]. The level set prediction at the following time step is then adjusted according to the TPS deformation. Accuracy is measured using the Hausdorff distance between the observed and predicted boundaries. We present accuracy results for the Troy and Bee fires, and compare our results for the Bee fire with those found in Fujioka [4].
2 Methods
Troy fire data consists of 13 heat intensity aerial images at approximately 10-minute intervals beginning in the afternoon of June 19, 2002. The 1996 Bee fire data consists
of three images at 15, 45, and 105 minutes after ignition. All data was obtained from the
USDA Forest Service. In addition, our data includes predicted boundaries generated by
Fujioka’s method described in [4] for the Bee Fire. Fujioka’s method uses FARSITE, a
Rothermel-based method, with unusually detailed wind data to generate predictions.
2.1 Preprocessing
Bee fire data consists of UTM coordinates of the fire boundary points, which we scaled down. To extract boundaries from the heat intensity images comprising the Troy fire data, we segment the fire area using k-means clustering with k = 2, then extract the coordinates of the boundary contour. Given the boundary coordinates, we compute the signed distance function of the boundary for input to the level set method.
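The signed distance computation can be sketched as follows. This is an illustrative Python sketch, not the authors' code: it assumes the segmented fire area is available as a binary mask, uses SciPy's Euclidean distance transform, and adopts the common convention of negative values inside the region (the paper does not state its sign convention).

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance to the boundary of a binary region.

    Negative inside the fire, positive outside, so the zero level set
    traces the fire boundary (sign convention assumed, not from the paper).
    """
    mask = mask.astype(bool)
    # Distance from each outside pixel to the nearest burned pixel,
    # and from each inside pixel to the nearest unburned pixel.
    dist_out = distance_transform_edt(~mask)
    dist_in = distance_transform_edt(mask)
    return dist_out - dist_in

# Example: a 5x5 grid with a burned 3x3 block in the middle.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
phi = signed_distance(mask)
```

The resulting array can be fed directly to a level set solver as the initial φ.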
2.2 Level Set Method
The level set method models contours evolving in time as the zero level sets of a function φ_t(x, y). The level set function satisfies the level set equation:

∂φ/∂t + V · ∇φ = 0,

where V(x, y) is a continuous vector field encoding both external forces and intrinsic geometry of the curve [8]. For the Troy fire, V contains coarse wind information and a constant rate of spread normal to the curve. For the Bee fire, V is just the constant normal rate of spread (wind data was not available). We compute a first-order Godunov numerical solution as implemented in [9] with square mesh width dx = dy = 0.5, 1.5 iterations per minute of prediction, and α = 0.5 in determining dt.
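For the special case of constant normal speed (the Bee fire setting above), the equation reduces to φ_t + a|∇φ| = 0, and a first-order Godunov update looks roughly like the following. This is a simplified stand-in for the toolbox of [9], written for illustration: `godunov_step` is a hypothetical name, there is no wind term or reinitialization, and the periodic `np.roll` boundaries are only adequate away from the grid edges.

```python
import numpy as np

def godunov_step(phi, speed, dx=0.5, dt=0.1):
    """One first-order Godunov update for phi_t + speed * |grad phi| = 0.

    Moves the zero level set outward (speed > 0) or inward (speed < 0)
    in the normal direction. Illustrative sketch only.
    """
    # One-sided differences (periodic at grid edges via np.roll).
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx   # backward in x
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx  # forward in x
    dmy = (phi - np.roll(phi, 1, axis=1)) / dx
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx
    if speed > 0:  # Godunov upwinding for outward motion
        grad2 = (np.maximum(dmx, 0)**2 + np.minimum(dpx, 0)**2 +
                 np.maximum(dmy, 0)**2 + np.minimum(dpy, 0)**2)
    else:
        grad2 = (np.minimum(dmx, 0)**2 + np.maximum(dpx, 0)**2 +
                 np.minimum(dmy, 0)**2 + np.maximum(dpy, 0)**2)
    return phi - dt * speed * np.sqrt(grad2)

# Example: expand a circular front of radius 3 by one step on a
# grid with the paper's mesh width dx = 0.5.
n = 33
xs = (np.arange(n) - n // 2) * 0.5
X, Y = np.meshgrid(xs, xs, indexing="ij")
phi0 = np.sqrt(X**2 + Y**2) - 3.0
phi1 = godunov_step(phi0, speed=1.0, dx=0.5, dt=0.1)
```

Since |∇φ| = 1 for a signed distance function, each step moves the front by roughly speed · dt.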
2.3 Thin-plate Spline Matching
Thin-plate splines (TPS) approximate smooth planar deformations mapping one contour onto another by defining a function f(v) = Σ_{i=1}^{n} a_i φ(v − x_i) based on pairings of control points {(x_i, y_i)}_{i=1}^{n} that minimizes the energy functional [1]:

Σ_i ||f(x_i) − y_i||² + λ ∫∫ [ (∂²f/∂x²)² + 2 (∂²f/∂x∂y)² + (∂²f/∂y²)² ] dx dy.

Given two sets of boundary points {x_i} and {y_j}, the energy minimization is difficult to compute. As implemented in Chui and Rangarajan [2], an approximate minimization is found using deterministic annealing. At high temperatures, point sets are matched based on geometric features. As the temperature decreases, points are matched based on proximity. The output of the implementation is a point matching between the two sets of boundary points, and the weights and coefficients for computing the resulting transformation for any new input points. In our implementation, initial temperatures range from 400 to 7500 and final temperatures range from 12 to 1000, based on the magnitude of the coordinates.
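Once correspondences are fixed, the TPS interpolation itself reduces to one linear solve with the kernel U(r) = r² log r plus an affine part. The sketch below shows only this interpolation step, under the assumption that the annealing-based matching of Chui and Rangarajan has already produced the point pairs; `tps_fit` is a hypothetical name, not an API from [2].

```python
import numpy as np

def tps_fit(src, dst, lam=0.0):
    """Fit a 2-D thin-plate spline mapping src -> dst.

    src, dst: (n, 2) arrays of matched control points.
    Returns a function that warps arbitrary (m, 2) point arrays.
    """
    n = src.shape[0]

    def kernel(a, b):
        # U(r) = r^2 log r, written as (1/2) d2 log d2 with d2 = r^2.
        d2 = np.sum((a[:, None, :] - b[None, :, :])**2, axis=-1)
        with np.errstate(divide="ignore", invalid="ignore"):
            u = 0.5 * d2 * np.log(d2)
        return np.nan_to_num(u)          # U(0) = 0

    K = kernel(src, src) + lam * np.eye(n)   # lam > 0 relaxes interpolation
    P = np.hstack([np.ones((n, 1)), src])    # affine part [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.vstack([dst, np.zeros((3, 2))])
    coeffs = np.linalg.solve(A, b)
    w, a = coeffs[:n], coeffs[n:]

    def warp(pts):
        pts = np.asarray(pts, dtype=float)
        return kernel(pts, src) @ w + np.hstack(
            [np.ones((len(pts), 1)), pts]) @ a
    return warp

# Sanity check: a pure translation should be reproduced exactly.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
warp = tps_fit(src, src + [2.0, -1.0])
moved = warp([[0.5, 0.5]])
```

The returned `warp` is the f applied to new points, which is exactly what the error correction step below needs.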
2.4 Error Correction
At time t = t_0, we input points on the initial fire boundary B_0 to the level set method, producing an estimate P_1 for the observed boundary B_1 at t = t_1. In this first stage, there is no history available to construct a corrected prediction, so the corrected prediction Q_1 = P_1. We then compute the TPS matching between the level set prediction P_1 and the observed boundary B_1, producing a function f_1 : R² → R². We begin the iteration at t = t_i, i > 0, with the observed boundary B_i and the TPS mapping f_i capturing the error between the level set prediction P_i and B_i. We then generate the level set prediction P_{i+1} of the observed boundary B_{i+1} at t = t_{i+1} and correct it according to f_i, generating a revised prediction Q_{i+1} of B_{i+1}, where Q_{i+1} = f_i(P_{i+1}).
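The iteration can be written schematically as below. `level_set_predict` and `tps_match` are placeholders standing in for the solvers of Sections 2.2 and 2.3 (hypothetical names, not the authors' code); the toy check at the end uses numbers in place of boundaries purely to exercise the loop.

```python
def run_correction(boundaries, level_set_predict, tps_match):
    """Corrected-prediction loop of Sec. 2.4 (schematic).

    boundaries            : observed boundaries B_0, ..., B_T
    level_set_predict(B)  : placeholder returning the prediction
                            one step ahead of boundary B
    tps_match(P, B)       : placeholder returning a warp f with
                            f(P) approximating B
    Returns raw predictions P_i and corrected predictions Q_i.
    """
    P, Q = {}, {}
    f = None                                   # no error history yet
    for i, B in enumerate(boundaries[:-1]):
        P[i + 1] = level_set_predict(B)        # raw prediction of B_{i+1}
        Q[i + 1] = P[i + 1] if f is None else f(P[i + 1])
        f = tps_match(P[i + 1], boundaries[i + 1])  # error at this step
    return P, Q

# Toy 1-D check: the "boundary" grows by 2 each step, but the
# "solver" only predicts growth by 1; after one step the correction
# learns the constant shortfall and Q matches the observations.
obs = [0, 2, 4, 6]
predict = lambda b: b + 1
match = lambda p, b: (lambda q, d=b - p: q + d)
P, Q = run_correction(obs, predict, match)
```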
2.5 Error Measurement
We use the Hausdorff distance to measure the error between predicted and observed fire boundaries. The Hausdorff distance captures the maximum Euclidean distance from any point on one boundary to the nearest point on the other. Given two boundaries B_1, B_2,

d_H(B_1, B_2) = max_{i ≠ j} max_{p ∈ B_i} min_{q ∈ B_j} d(p, q),

where d(p, q) is the standard Euclidean distance between points p, q in the plane.
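For sampled boundaries this is a direct computation on the pairwise distance matrix; a minimal sketch (assuming boundaries given as (n, 2) point arrays, which is how the paper's data is represented):

```python
import numpy as np

def hausdorff(B1, B2):
    """Symmetric Hausdorff distance between two planar point sets.

    max over both directions of the largest nearest-neighbour distance,
    matching the definition above.
    """
    B1 = np.asarray(B1, dtype=float)
    B2 = np.asarray(B2, dtype=float)
    D = np.linalg.norm(B1[:, None, :] - B2[None, :, :], axis=-1)
    return max(D.min(axis=1).max(),   # sup over B1 of dist to B2
               D.min(axis=0).max())   # sup over B2 of dist to B1

# A unit square vs. the same square shifted by 3 along x: distance 3.
sq = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
d = hausdorff(sq, sq + [3.0, 0.0])
```

The O(n²) distance matrix is fine at the boundary sizes here; `scipy.spatial.distance.directed_hausdorff` offers the same quantity for larger sets.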
3 Results
We find that neither the level set predictions {Pi}nor the corrected predictions {Qi}
satisfactorily capture the fire behavior. For the Troy fire data, both methods provide
adequate predictions, with better accuracy sometimes with correction and sometimes
without. For the Bee fire, neither corrected nor uncorrected method is satisfactory. We
judge Fujioka’s predicted boundaries to be superior. In other words, correction after the
fact does not adequately compensate for the inability of the original implementation to
adequately model the physical complexities of the fire.
3.1 Troy Fire Results
The Hausdorff distances between the observed boundary and the predicted boundaries are given in Table 1. We also show a sampling of the level set predictions without correction (cyan), with correction (green), and the observed boundaries (yellow) in Figures 1-9. Note that the corrected and uncorrected predictions for the boundary at time t = 2 are the same because there is not yet a history of error. Note also in Figures 7 and 9 how significant new growth areas emerge but are not captured at all by either corrected or uncorrected predictions.
Time-step Level Set Corrected
2 51.99 51.99
3 13.92 51.30
4 19.10 11.64
5 14.14 14.79
6 22.36 16.21
7 15.13 19.50
8 28.07 22.63
9 24.18 15.10
10 16.12 15.83
11 26.00 19.45
12 13.03 15.72
Table 1. Hausdorff distance between level set predictions of fire boundaries and the observed
boundaries, with and without TPS correction based on error at previous time step.
In certain cases, the correction contributes to a substantially more accurate prediction (Figures 3, 8), but often the two predictions are roughly the same distance from the boundary. As the images in Figures 1-10 show, the level set method errors tend to underestimate growth while the TPS-corrected predictions tend to overestimate growth.
3.2 Bee Fire Results
For the Bee fire, only three time steps of data are usable (Figure 11), giving a very small sample. We include results nonetheless because we are able to directly compare our predictions with FARSITE predictions. The Hausdorff distances between the observed boundary and the predicted boundaries are given in Table 2 for level set predictions,
Fig. 1. Troy fire t = 2: level set prediction (cyan), TPS-corrected level set prediction (green), and observed boundary (yellow).
Fig. 2. Troy fire t = 3: level set prediction (cyan), TPS-corrected level set prediction (green), and observed boundary (yellow).
Fig. 3. Troy fire t = 4: level set prediction (cyan), TPS-corrected level set prediction (green), and observed boundary (yellow).
Fig. 4. Troy fire t = 5: level set prediction (cyan), TPS-corrected level set prediction (green), and observed boundary (yellow).
Fig. 5. Troy fire t = 6: level set prediction (cyan), TPS-corrected level set prediction (green), and observed boundary (yellow).
Fig. 6. Troy fire t = 6: level set prediction (cyan), TPS-corrected level set prediction (green), and observed boundary (yellow).
Fig. 7. Troy fire t = 7: level set prediction (cyan), TPS-corrected level set prediction (green), and observed boundary (yellow).
Fig. 8. Troy fire t = 10: level set prediction (cyan), TPS-corrected level set prediction (green), and observed boundary (yellow).
Fig. 9. Troy fire t = 11: level set prediction (cyan), TPS-corrected level set prediction (green), and observed boundary (yellow).
Fig. 10. Troy fire t = 12: level set prediction (cyan), TPS-corrected level set prediction (green), and observed boundary (yellow).
corrected level set predictions, and FARSITE predictions. We applied TPS correction to the FARSITE predictions but found no improvement. We do not include those results here. Again, recall that at t = 60 minutes, no history of error exists and so the corrected prediction is equal to the original prediction. Note also that the scale of these errors is different than the results for the Troy fire, as the coordinate systems are different.
Fig. 11. Bee fire after 30, 60 and 105 minutes.
Time Level Set Corrected FARSITE
60 591.7 591.7 790.5
105 1907.7 1876.5 954.2
Table 2. Hausdorff distance between FARSITE or level set predictions of fire boundaries and the
observed boundaries, with and without TPS correction based on error at previous time step.
At the early stage, the two methods are comparable. At the later stage, however,
FARSITE error is half the error for either of the level set predictions.
We show the FARSITE predictions (white), level set predictions without correction
(green), and the observed boundaries (yellow) in Figures 12 and 13.
4 Discussion
We have shown that the level set method with and without TPS error correction models the Troy fire reasonably well. For time steps where the fire growth remains stable, error correction does improve estimates meaningfully. An adaptive scheme where the
Fig. 12. Bee fire t= 60 minutes: FARSITE prediction (white), level set prediction (cyan), and
observed boundary (yellow).
Fig. 13. Bee fire t= 105 minutes: FARSITE prediction (white), level set prediction (cyan), TPS-
corrected level set prediction (green), and observed boundary (yellow).
decision to correct or not is based on the degree of change in atmospheric, terrain, or temperature factors may be desirable.
We also show that the more accurate approximation for the Bee fire is the Rothermel-based model. We believe this is largely due to the superiority of the FARSITE method with detailed wind data over our implementation of the level set method with no wind data.
Some of the most unpredictable behavior arises from newly formed bulges in the
fire, so-called “fire fingers,” which are among the most dangerous of fire behaviors.
Current work is underway to better model these localized events. Predicting when and
where fire fingers are likely to arise will be helpful, even if precise modeling of their
boundary evolution eludes us.
Our work demonstrates the challenges of applying historical error to correct current prediction of fire boundaries. Particularly during the early stages of a fire, or for a fire with unusual physical constraints, the orientation and magnitude of boundary evolution are rapidly changing. Sophistication equal to that desired in the original model of boundary evolution is likely necessary to produce useful correction. Past error alone, at least as globally measured, is not reliably informative. Future work will consider local error.
Acknowledgments

The authors gratefully acknowledge Francis Fujioka for introducing us to the fire modeling problem, sharing data and predictions from the Bee fire, and predictions for the Troy fire; Robert Tissell for sharing data from the Troy fire; and the National Science Foundation (IIS-0954256) for funding our work.
References

1. Bookstein, F. Principal Warps: Thin-Plate Splines and the Decomposition of Deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992.
2. Chui, H., Rangarajan, A. A New Algorithm for Non-rigid Point Matching. IEEE Conference on Computer Vision and Pattern Recognition, 2000.
3. Finney, M. FARSITE: Fire Area Simulator-Model, Development and Evaluation. Research Paper RMRS-RP-4, US Department of Agriculture Forest Service, Rocky Mountain Research Station, 1998.
4. Fujioka, F. A New Method for the Analysis of Fire Spread Modeling Errors. International Journal of Wildland Fire, 11:193-203, 2002.
5. Mallet, V., Keyes, D., Fendell, F. Modeling Wildland Fire Propagation with Level Set Methods. Computers & Mathematics with Applications, 57(7):1089-1101, 2009.
6. Rothermel, R. A Mathematical Model for Predicting Fire Spread in Wildland Fuels. Research Paper INT-115, US Department of Agriculture Forest Service, 1972.
7. Rochoux, M., Delmotte, B., Cuenot, B., Ricci, S., Trouvé, A. Regional-Scale Simulations of Wildland Fire Spread Informed by Real-Time Flame Front Observations. Proceedings of the Combustion Institute, 34(2):2641-2647, 2013.
8. Sethian, J. Level Set Methods and Fast Marching Methods: Evolving Interfaces in Computational Geometry, Fluid Mechanics, Computer Vision, and Materials Science. Cambridge University Press, 1999.
String Math 2021
and Physics), [G] Suman Kundu (Tata Inst. http://silentmatt.com/javascript-expression-evaluator/, http://www.codeproject.com/KB/scripting/jsexpressioneval.aspx, I spent a couple of hours implementing all the arithmetical rules without using eval(), and finally I published a package on npm, string-math. Mhh, you could use the Function() constructor. There's nothing wrong with eval, especially for
cases like this. In this talk, I explain an ongoing work to combine them, and to give a higher relations between QFTs and differential cohomology. Joerg Teschner (DESY) String Math 2021 IMPA, Rio de
Janeiro, June 14 - 18, 2021- Turned into an online event The series of String-Math conferences has developed into a central event on the interface between mathematics and physics related to string
theory, quantum field theory and neighboring subjects. Title: Magnetic quivers for Superconformal Field Theories We present a novel construction of such flux vacua in permutation-symmetric
multiparameter compactifications, and develop methods for efficiently computing the zeta functions of the corresponding geometries. What does "use strict" do in JavaScript, and what is the reasoning
behind it? Estrada Dona Castorina 110, Jardim Botnico All, Blasting is well underway in South Dakota for the $2 billion + LBNF/DUNE facility. David Morrison (University of California) In response to
the obvious question about relation to experiment and testability, people have clearly completely given up on this, with nothing to say (other than a bogus claim that string theory makes some
prediction about B-mode polarization), going on about how it took thousands of years for the atomic hypothesis to be vindicated. This story also relates to the higher-form global symmetry. We study
an open quantum many body spin system to check whether its integrable and chaotic regimes can be distinguished by looking at the non-unitary dynamics of a simple operator in Krylov space. We
introduce a. new idea, boundary averaging, to address the question: Does averaging also work well in AdS/BCFT? Everything is in the description. problemu upakowania sfer w wymiarze 8, a take we
wsppracy z innymi matematykami rozwizanie tego problemu w wymiarze 24. Ironically, the very fact that he had to refer to it at all should alarm them. Likely Witten was thinking of things like the 6d
(2,0) superconformal theory, which has no classical limit and whose construction is somewhat mysterious. There are slight differences regarding scoping, but they mostly behave the same, including
exposure to much of the same security risks. You can't; at most you could do something like parsing the numbers and then separating the operations with a switch, and handling them. In addition,
distinct tensor branch data obtained from 6D theories with a single instanton inspire us to classify heterotic strings probing ADE singularities. In other physical contexts you have different
symmetries than Poincare, often approximate symmetries. [Slides]. Abstract: I will be discussing my recent work [2110.08179] with Chris Blair and Dan Thompson on topology change in Poisson-Lie
T-duality with spectators (i.e. In my case, I will give a run, certainly much more healthy, no? I will also explain how the Hecke operators from geometric and analytic Langlands correspondence are
realized in the four dimensional super-Yang-Mills theory. The operators satisfy an operator product expansion. Talks are available for watching every day via Youtube, links are on the main page.
Abstract: I will discuss methods to numerically approximate Ricci-flat metrics for Calabi-Yau n-folds defined as complete intersections in toric ambient spaces. Title: Liouville conformal field
theory: from the probabilistic construction to the bootstrap construction Browse other questions tagged, Where developers & technologists share private knowledge with coworkers, Reach developers &
technologists worldwide. We therefore relate differential and enumerative geometry, topology and geometric representation theory in mathematics, via a maximally-supersymmetric topological quantum
field theory with electric-magnetic duality in physics. LHC will discover SUSY In particular, I cannot find a derivation of any black hole entropy from first principles in quantum gravity outside of
string theory. The LQG example hardly qualifies if you do not have any way to fix the Immirzi parameter and just tune it to get the desired result. We also discuss generalizations to b_2^+(X)>1.
Title: Vafa-Witten Theory: Invariants, Floer Homologies, Higgs Bundles, a Geometric Langlands Correspondence, and Categorification [online talk] Possible applications include computing massive string
excitations, volumes of non-calibrated cycles, the SYZ conjecture, and approximating solutions to the Hermitian Yang-Mills equations. I notice that Roger Penrose, a well-known string theory skeptic, will be giving a talk on the last day of the conference next week. If you're so concerned about the public or physics community getting misled, surely you've contacted the organizers of and participants in the conference's Outreach activities. This is damaging. I've personally never understood why showing that (unphysical limits of) your quantum theory gives the expected semi-classical
result is anything other than a rather weak consistency check on your theory. In the presence of chiral multiplets that are added to plumbing graphs, we show that ST-transformations lead to many
equivalent plumbing theories, and hence generalize the scope of Kirby moves. Mark Gross Anyway, truly bold attempt! What is the !! Dear Peter, I believe you are cherry-picking parts of comments.
String theorists are for the most part no longer actively pursuing connecting to particle physics of the real world. Abstract: The spectrum of BPS states in D=4 supersymmetric field theories and
string vacua famously jumps across codimension-one walls in vector multiplet moduli space. In this talk, we will discuss recent progress in constructing explicit examples of higher-categories
describing non-invertible symmetries in quantum field theories of various spacetime dimensions ranging from d=3 to d=6. CEIs are associated to an arbitrary Calabi-Yau category, together with a
splitting of the Hodge filtration. A place where magic is studied and practiced? I'm posting this since I thought it could be useful for those very busy hep-ph physicists who may wish to go directly to (what I & surely many others) consider the key moment from the perspective lectures. We will speak about the history of Kepler's conjecture. As Peter keeps on repeating, the refusal of the string community to acknowledge the failure of string theory as a unification theory (which is nothing to be ashamed of: physics is full of beautiful theories that are not correct) is a shame for the physics
community. for Fundamental Research), [D] Juan Maldacena (Institute for Advanced Study), [T] Shiraz Minwalla (Tata Inst. If you are interested in what field theories Wittens probably interested in,
then I certainly recommend following the ongoing ICTS program Quantum Field Theory, Geometry and Representation Theory, with Witten's mini-course starting tomorrow: https://www.icts.res.in/program/qftgrt2021/talks. In order to get a feel for what is at stake here, start with: [Slides] [Video], Maxim Kontsevich (IHÉS) [Slides], Sachin Chauhan (Indian Institute of Technology, Bombay) Miłosz Panfil
Divergence of perturbation theory in quantum electrodynamics. for Fundamental Research), [T] Sebastian Mizera (Institute for Advanced Study), [T] Sameer Murthy (King's College London), [S] Hirosi Ooguri (Caltech and Kavli IPMU), [D] João Penedones (École Polytech. (i) No one convincingly proved the mathematical equivalence between the RNS superstrings and the GS formalism. We engineer these models
using toric geometry techniques to construct non-compact threefolds that manifestly have multiple fibrations and hence M/F-theory lifts. Moving away from such orbifold points requires generalizing
the flow tree formula from the Abelian category of quiver representations to the derived category of the same. The Ryu-Takayanagi formula then encodes the fractal dimension of the boundary, while the
extremal surfaces encode an important part of the geometry of the buildings, showing a new interplay between ideas from holography and geometric group theory. Yuya Kusuki (California Institute of
Technology) Title: Calculating Quantum Knot Homology from Homological Mirror Symmetry Another series of talks that I took a look at and that I can recommend is Nima Arkani-Hameds lectures on Physics
at Future Colliders at the ICTP summer school on particle physics. Motived by (twisted) holography (following Costello, Gaiotto, Li, and Paquette) we find a holomorphic model describing the minimal
twist of the six-dimensional theory. Jerome Gauntlett, Rajesh Gopakumar, Mariana Grana, Michael Green, David Gross, Daniel Grumiller, Jeff Harvey, Marc Henneaux, Veronika Hubeny, Marina Huerta, Janet
Hung, Antal Jevicki, Clifford Johnson, Shamit Kachru, Zohar Komargodski, Finn Larsen, Yolanda Lozano, Kimyeong Lee, Dieter Luest, Juan Maldacena, Shiraz Minwalla, Jeff Murugan, Rob Myers, Hirosi
Ooguri, Leo Pando-Zayas, Silvia Penati, Fernando Quevedo, Eliezer Rabinovici, John Schwarz, Nathan Seiberg, Ashoke Sen, Steve Shenker, Eva Silverstein, Wei Song, Andy Strominger, Tadashi Takayanagi,
Sandeep Trivedi, Cumrun Vafa, Pedro Vieira, Anastasia Volovich, Spenta Wadia, Edward Witten, Konstantin Zarembo. To address this issue, we use the fusion-matrix bootstrap, which is recently developed
in the context of the light-cone bootstrap. Dec 18, 2021 at 16:43. In this talk I will introduce the heterotic G2 system and show how to construct families of solutions using homogeneous 3-Sasakian
manifolds with squashed metrics. The simplest example is the qubit. How would you please implement. The linking numbers of plumbing graphs are interpreted as the effective mixed Chern-Simons levels
of 3d N=2 theories with chiral multiplets. Abstract: Holomorphic Floer theory is the analog of Floer theory for holomorphic symplectic manifolds. The two-step procedure involves first finding points on
the Calabi-Yau manifold and then approximating the Ricci-flat metric by approximating solutions to the underlying Monge-Ampere equation. Well, that would be the case if people follow the hype for
last 20-30 years or more. Mathematically, they can be identified with the K-theoretic versions of the Donaldson invariants on X. Note also that I was making no comment about the work people were
presenting at the conference, was sticking to discussing particular important points about the current state of the subject being made by the two most talented and hard-w0rking leaders of the field.
Are there any actual attempts in the literature to build such an intrinsically quantum theory, or is this just pie in the sky? Besides reviews of major developments in the field and specialized talks
on specific topics, an important novelty will be several informal discussions involving two researchers and the conference participants. Title: Holomorphic Floer theory and Donaldson-Thomas
invariants Update: In the final discussion section, Witten emphasizes that What is string theory? still has no answer, that we have little idea what it really is. Abstract: During my presentation, I
will discuss certain equivalences between conformal field theories with a continuous spectrum: Liouville theory, its supersymmetric extension, and models based on the affine su(2) algebra with
irrational level. UBC, Vancouver, Canada. Title: The sphere packing problem and beyond. Abstract: Recently, higher categorical understandings of quantum field theories have been rapidly developing.
That's interesting, can you give pointers? My talk will be based on recent joint work with Błażej Ruba. [Slides], Sam Gunningham (Montana State University) Title: Smoothing, scattering and a conjecture
of Fukaya [online talk] I did watch fully the discussions I wrote about here, also took a look at many of the others. Abstract: In this talk, I will revisit the construction of heterotic ALE
instantons and the corresponding T-dual systems in 6D little string theories (LSTs). How useful this will be will depend on the symmetries you have available. [Slides] [Video], Sunghyuk Park
(California Institute of Technology) String Math 2021 - Si Li (Tsinghua University), Instituto de Matemática Pura e Aplicada. Title: Langlands
duality for 3-manifolds Talks are available for watching every day via Youtube, links are on the main page. An exchange that was otherwise quite lively and interesting, you paint in the worst
possible way. Title: Counting BPS states with discrete charges in M-theory Penroses talk is interesting (https://www.youtube.com/watch?v=hk_6EtWUatM), as is the discussion between him and string
folks afterwards. Title: Perturbative calculations in twisted 4d gauge theories [online talk] Additionally, Evaluator.js intelligently reports invalid syntax, such as a misused operator, missing
operand, or mismatched parentheses. The only game in town in fundamental physics now is to give up on a theory of the real world. There will also be a public lecture and outreach activity Ask a
String Theorist during the weekend. Nov 22-26, 2021: Higher structures in geometry and physics, Chern Institute of Mathematics, Nankai University 9. ncdu: What's going on with this second size
column? [Slides] [Video], Antoine Bourget (CEA Saclay) We show that many 3d theories can be represented as plumbing graphs and hence be able to be constructed by compactifying M5-branes on
corresponding plumbing manifolds using the construction of 3d-3d correspondence. In a different direction, I will in this talk present a conjectural picture relating holomorphic Floer theory of
complex integrable systems to Donaldson-Thomas invariants. There's a long list of things they are talking about under the name string theory, with the generic problem that the ones that are reasonably well-defined have nothing to do with the real world (e.g., wrong dimension). (UoS & CNRS) Quantum geometry of Looijenga pairs Berkeley String-Math 2021 Overview. The two main messages: (1) five (different, but equivalent) string-theory motivated enumerative theories built from (X; D); (2) they are all closed-form solvable. Joint with P. Bousseau (ETH Zürich/Saclay) and M. van Garrel (Warwick/Birmingham). Leonardo
Rastelli Abstract: The heterotic G2 system is the 7-dimensional analogue of the Hull-Strominger system. What is a word for the arcane equivalent of a monastery? Hirosi Ooguri (Caltech & IPMU) A
better version was given September 22, 2021 at Yan Soibelman's M-seminar. We prove a reformulation of the main conjecture in Fukaya's second correspondence, where multi-valued Morse theory on the base B is replaced by tropical geometry on the Legendre dual B. But, as has also been the case for many years, the conference features many talks that have nothing to do with string theory and may be quite interesting. Paweł Caputa There's no reason to get into the technical argument over whether the perturbative expansion of various versions of the superstring in 10d is mathematically
well-defined. This feature could come in handy in projects where we want to evaluate math expressions provided in string format. [Slides] [Video], Netta Engelhardt (MIT) Its explicit description as a
QFT has remained elusive. Nigel Hitchin (University of Oxford) Finally, I will mention how some of these features are likely to persist in a proposal for the worldsheet dual of free N=4 Super
Yang-Mills theory. Strings 2021 Summary Talk, So Paulo (June 21 - July 2, 2021) [on-line] . I saw the discussions you refer to. Why are trials on "Law & Order" in the New York Supreme Court? Using
Hands-On Tools to Monitor Progress and Assess Students in Math. Francis (Skip) Fennell | Nov 3, 2021 . Title: Category of QFTs and differential cohomology More information can be found here Thematic
trimester on vertex algebras: This construction gives rise to new quantum groups. Due to its interaction, the system density matrix and operators within the system Hilbert space evolve non-unitarily.
[Slides] [Video], Boris Pioline (Sorbonne Université and CNRS) Abstract: I will explain how to explicitly calculate knot homology using the Fukaya category associated to a 2d Landau-Ginsburg theory, which is the equivariant mirror of the Coulomb branch of a 3d $\mathcal{N}=4$ theory. I just had a look at that session, and more than anything, felt sorry seeing such eminent physicists giving such
vacuous reasons just for the sake of defending a theory they have worked on for a long time. IMPA, Rio de Janeiro, June 14 – 18, 2021. Turned into an online event. Speaker: Mina Aganagic (UC Berkeley). Lecture title: Knot homologies from mirror symmetry. Event page: http://bit.ly/stringmath2021. Videos playlist: https://bit.ly/3v2lHls. The series of String-Math conferences has developed into a central event on the interface between mathematics and physics related to string theory, quantum field theory and neighboring subjects. Weekly on Mondays 2:10 PM (Pacific Time). Meetings are in-person, at 402
Physics South. [Slides], Muyang Liu (Uppsala University) This public lecture is also a lecture in the Zapytaj fizyka series here is the lecture website. Strings 2021 started today, program is
available here. Of course, a few minutes earlier Witten had explained that no one knows what the theory actually is, which means there is no way to show that it is mathematically inconsistent.
Abstract: Homological block, commonly denoted by , is a 3-manifold invariant whose existence was predicted by S. Gukov, D. Pei, P. Putrov, and C. Vafa in 2017 using 3d/3d correspondence. Making
statements based on opinion; back them up with references or personal experience. The string community has been dishonest for at least 20 years, and has absolutely no right to give any lessons about
ethics in science. https://www.youtube.com/watch?v=WwmxaXTnGMI Thank you. This is a joint work with K. Ohmori (U. Tokyo). Abstract: I will describe a close analogy between the spectral geometry of
hyperbolic manifolds and conformal field theory. The key steps in this equivalence are a probabilistic derivation of the DOZZ formula for the structure constants, the spectral analysis of the Hamiltonian of the theory and the proof that the probabilistic construction satisfies certain natural geometrical gluing rules called Segal's axioms. NN-QFT Correspondence. Off topic, but the European
congress of Mathematics has finished up. Clay Mathematics Institute, ContactEnhancement and Partnership ProgramMillennium Prize ProblemsPublicationsHome. All, [Slides] [Video], Brian Williams
(University of Edinburgh) I will discuss a phenomenon similar to Frobenius of modular representation theory. In the case of rank one instantons this factors through a 'Heisenberg type' super Lie algebra and the story is analogous to Nakajima's action on the Hilbert scheme. [Slides] [Video], Mayuko Yamashita (Kyoto University) Since it's online only, talks are much more accessible than usual
(and since it's free, over 2000 people have registered to, in principle, participate via Zoom). Abstract: The positive Grassmannian Gr_{k,n}^{≥0} is the subset of the real Grassmannian where all Plücker coordinates are nonnegative. Albrecht Klemm But this is just an opinion. We provide significant evidence that averaging plays an important role in reproducing semiclassical gravity in AdS/BCFT. It's
December of 2021, and the Java world has been rattled by a Log4j vulnerability. Really amazing work! Asking for help, clarification, or responding to other answers. Dear Peter, I am not a fan of
string theory. Strings 2021 will be held online during the two-week period June 21 – July 2, 2021, from 9:30 – 15:00 in São Paulo (8:30 – 14:00 in NY, 14:30 – 20:00 in Paris). Non-QG QFT derivations that are
not attempting to identify the microstates. Math Talks and Number Strings provide the foundation to help students improve their understanding and comprehension of numbers. The quotes I gave are
accurate, and links are provided so that anyone who wants to can see the full context. Abstract: We revisit Vafa-Witten theory in the more general setting whereby the underlying moduli space is not
that of instantons, but of the full Vafa-Witten equations. The problem is not that its too new to be evaluated properly, but that its been a failure. Strings provided the authors with a way to make
relatively small changes to guided notes in algebra 1, algebra 2, precalculus, and AP calculus classes. [Slides] [Video], Andrei Caldararu (University of Wisconsin–Madison) from 9:30 – 15:00 in São Paulo (8:30 – 14:00 in NY, 14:30 – 20:00 in Paris). I will then explain how to describe their representations using subcrystals and how they can be translated into the framings of the quivers. In this
talk, I will present a match between the large-N saddles of the correlation functions of determinants with semiclassical D-brane configurations in the dual theory, using a spectral curve
construction. The Attractor Flow Tree conjecture postulates that the BPS index $\Omega(\gamma,z)$ for given charge $\gamma$ and moduli $z$ can be reconstructed from the 'attractor indices' $\Omega_*(\gamma_i)$ counting BPS states of charge $\gamma_i$ in their respective attractor chamber, by summing over all possible decompositions $\gamma=\sum_i \gamma_i$ and over decorated rooted flow trees.
Elements of this Hilbert space can be thought of as local operators living in a (d-1)-dimensional spacetime. The latter approach allows one to realize holomorphically and topologically twisted field
theories directly as worldvolume theories in deformed supergravity backgrounds, and we make extensive use of this. What I wanted with my post was to point out that (in my view, again), you were
selecting particular phrases that sounded negative, in what otherwise were quite interesting exchanges among physicists. Abstract: Knizhnik-Zamolodchikov equation, originally found in the context of
a two dimensional conformal field theory, has recently been found in the context of instanton counting, albeit with a significantly extended domain of allowed parameters (level, spins etc). Abstract:
We will discuss how dualities of quantum field theories can be understood as analogous to Morita equivalences for algebras of various types. Title: Quantum duality and Morita theory for chiral
algebras [online talk] To begin with, we'll discuss a few third-party libraries and their usage. He mentions that one basic problem with this is that there is no understanding of what happens in
time-dependent backgrounds, so, in particular, this is useless for addressing the big bang, which is the one place people now point to as a possible connection to real world data. a proponent of
string theory but also a well-known populariser of science. Physically, flow trees provide a mesoscopic representation of BPS states as nested multi-centered bound states of elementary constituents.
@David Roberts: There are many references that discuss this from different perspectives. Title: Homological blocks and R-matrices This topic has been recently explored, from several different perspectives, by Kontsevich-Soibelman and Doan-Rezchikov. The conference String-Math 2021 which was supposed to be held at IMPA, will be online due to the current pandemic. Symmetries,
Horizons, and Black Hole Entropy [https://arxiv.org/abs/0705.3024] What video game is Charlie playing in Poker Face S01E07? Their definition involves subtle boundary conditions. you see Ed Witten,
and then Shiraz Minwalla, basically agreeing with you. Others clearly feel differently and think this is of huge significance. But this procedure you adopted in this post, is not the best way of
understanding science, or communicating it. The video answered some questions that I still had (e.g. Abstract: About 40 years ago J. Zinn-Justin formulated several conjectures about the perturbative
expansion of the spectrum of a quantum Hamiltonian (for the polynomial potential in 1 variable). It has a beautiful combinatorial structure as well as connections to statistical physics, integrable systems, and scattering amplitudes. I am currently reading Weinberg's book to fill some holes in my knowledge. Pyry Kuusela (University of Oxford) Nicolai Reshetikhin From this description, we propose
a symmetry for the space of instantons on C^2 by an exceptional super Lie algebra called E(3|6). In my talk, I will discuss the following two examples. Thanks for contributing an answer to Stack
Overflow! I could go on the whole night, but I believe, as most other string theorists do, there are more urgent or at least doable problems to solve now. ), one would at least expect that a
well-behaved asymptotic perturbation series follows as one takes into account contributions from all the moduli parameters of Teichmüller spaces of Riemann surfaces with higher genus and marked points. The plumbing graphs with matter can be viewed as certain kinds of quiver diagrams for 3d N=2 theories. Exactly because of this. In 2019, S. Gukov and C. Manolescu initiated a program to
mathematically construct via Dehn surgery, and as part of that they conjectured that the Melvin-Morton-Rozansky expansion of the colored Jones polynomials can be re-summed into a two-variable series
for knot complements. But with time, the community (and the arXiv) takes care of these mistakes. Meanwhile, the community just spreads out to other problems. Evaluator.js is a small, zero-dependency module
for evaluating mathematical expressions. Maryna Viazovska was awarded with the Fields medal, https://www.fuw.edu.pl/faculty-of-physics-home.html, Hera guesthouse of the University of Warsaw (budget
option), https://warsawgenomics.pl/en/sars-cov-2/koronawirus-rtpcr. And how I regret it, D-branes also Connect and share knowledge within a single location that is structured and easy to search. This
is exactly the place where you should be using eval(), or you will have to loop through the string and generate the numbers. What sort of strategies would a medieval military use against a fantasy
giant? All major operations, constants, and methods are supported. As has been the case for many years, it doesnt look like there will be anything significantly new on the age-old problems of getting
fundamental physics out of a string theory. AP Computer Science A 2021-22. From the README: [Slides], Peter Spacek (TU Chemnitz) Finally, I will discuss some applications. 2. [Slides] [Video], David
Jordan (University of Edinburgh) 4. Title: Quivers for 3-manifolds String-Math 2015, Hainan Island, China (December 31, 2015 - January 4, 2016; How to Quantize Gravity. Abstract: Since 2017, knots
and symmetric quivers are known to be intimately related via BPS spectra. Matilde Marcolli To learn more, see our tips on writing great answers. [Slides] [Video], Dan Freed (University of Texas at
Austin) [Slides], Pawe Ciosmak (IDEAS NCBiR) String Math Summer School. commutative) spacetime case is well-known to yield the Yang-Mills(-Higgs) theory, namely almost-commutative manifolds, has to
be replaced by its fuzzy counterpart already at the classical level. Reimundo Heluani Organizing Committee. Abstract: Recently there has been lots of activity surrounding generalized notions of symmetry
in quantum field theory, including categorical symmetries, higher symmetries, noninvertible symmetries, etc. Of course, these are people. Organizing Committee: Henrique Bursztyn (IMPA), Reimundo Heluani (IMPA), Marcos Jardim (IMECC-UNICAMP), Gonçalo Oliveira (UFF). Scientific Committee: Anton Alexeev (Université de Genève), David Ben-Zvi (University of Texas), Alexander Braverman (Brown University), Ron Donagi (University of Pennsylvania), Giovanni Felder (ETH Zürich), Dan Freed (University of Texas), Edward Frenkel (University of California), Marco Gualtieri (University of Toronto), Nigel Hitchin (University of Oxford), Sheldon Katz (University of Illinois), Maxim Kontsevich (IHÉS), David Morrison (University of California), Andrei Okounkov (Columbia University), Vasily Pestun (IHÉS), Boris Pioline (Sorbonne Université & CNRS), Nicolai Reshetikhin (University of California), Pedro Vieira (Perimeter Institute for Theoretical Physics), Katrin Wendland (Albert-Ludwigs-Universität Freiburg), Edward Witten (Institute for Advanced Study), Shing-Tung Yau (Harvard University & The Chinese University of Hong Kong), Maxim Zabzine (Uppsala University). Steering Committee: Ron Donagi (University of Pennsylvania), Dan Freed (University of Texas at Austin), Nigel Hitchin (University of Oxford), Sheldon Katz (University of Illinois), Maxim Kontsevich (IHÉS), David Morrison (University of California), Hiroshi Ooguri (Caltech & IPMU), Boris Pioline (Université Pierre et Marie Curie), Joerg Teschner (DESY), Edward Witten (Institute for Advanced Study), Shing-Tung Yau (Harvard University). IMPA social media: https://linktr.ee/impabr. IMPA - Instituto de Matemática Pura e Aplicada: https://impa.br | http://impa.br/videos#stringmath2021. The rights over all the material in this channel belong to the Instituto de Matemática Pura e Aplicada, and it is forbidden to use all or part of it without prior written authorization from the above mentioned holder, except in the cases prescribed in the current legislation. Properties of the construction are explained both purely in the context of holomorphic field theory and also by engineering the holomorphic theory on the
worldvolume of a D-brane. People interested in string theory/LQG religious warfare argue about the calculations in 3. and 4. Piotr Kucharski (University of Amsterdam) I agree with Felix. [Slides],
Aranya Bhattacharya (IISc, Bangalore) It seems that no one in the string theory community dares to publicly breathe a word of skepticism. In particular, he discovered a hidden structure governed by
Bernoulli numbers. [Slides] [Video], Davide Gaiotto (Perimeter Institute) IMPA, Rio de Janeiro, June 14 – 18, 2021. Turned into an online event. Speaker: Si Li (Tsinghua University). Lecture title: Elliptic chiral homology and quantum master equation. Event page: http://bit.ly/stringmath2021. Videos playlist: https://bit.ly/3v2lHls. The series of String-Math conferences has developed into a central event on the interface between mathematics and physics related to string theory, quantum field theory and neighboring subjects.
CSE 235 Fall 2003
CSE235 Course Information
Lecture: MWF 3:30-4:20pm, Ferguson 217
Recitation #1: T 4:30-5:20pm, Ferguson 113
Recitation #2: F 9:30-10:20am, OldH 303
Instructor (lecture): Chuck Cusack, e-mail cusack@cse.unl.edu, office Ferguson 108, phone 472-2615, office hours MWF 10:30-11:30 am and by appointment
Instructor (recitations): Haitham Hamza, e-mail hhamza@cse.unl.edu, office 501 Bldg Room 6.12, phone 472-3485, office hours T 2:00-4:00pm and F 10:30-11:30am
Schedule: The Schedule link gives the details for each class period, including what you should read before each class period, what assignments are due, when tests will be, etc. Since the schedule will change as the course progresses, please refer to it on a regular basis.
Note: Events listed for Tuesday apply to both the Tuesday and Friday Recitation for that week.
Course Coverage: In this course you will learn many of the mathematical definitions, techniques, and ways of thinking that will be useful in computer science. The course will focus both on the theory and its application to various computer science topics. Not all topics will relate obviously to computer science, but they will provide you with new ways of thinking that will indirectly help you in the future. Specifically, you will learn about
• Graphs and trees
• Sets, relations, and functions
• Propositional and predicate logic
• Methods of Proof, including mathematical induction
• Recurrence relations
• Counting (permutations, combinations, inclusion-exclusion, etc.)
• Asymptotic notation
The homework assignments will consist of working mathematics problems, writing proofs, and applying the theory by writing programs that do one or more of the following:
• Implement a discrete mathematics topic
• Serve as a tutorial for a discrete mathematics topic
• Use a discrete mathematics topic in an application
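For concreteness, here is a sketch in Python (purely hypothetical; not an actual assignment from this course) of the first kind of program, one that implements two of the listed topics at once by counting derangements via inclusion-exclusion:

```python
from math import comb, factorial

def derangements(n):
    """Count permutations of n items that leave no item in place,
    using inclusion-exclusion over the chosen fixed points:
    D(n) = sum_{k=0}^{n} (-1)^k * C(n, k) * (n - k)!"""
    return sum((-1) ** k * comb(n, k) * factorial(n - k)
               for k in range(n + 1))

# The first few values: D(0)=1, D(1)=0, D(2)=1, D(3)=2, D(4)=9.
assert [derangements(n) for n in range(5)] == [1, 0, 1, 2, 9]
```

A program like this exercises counting (permutations, combinations) and inclusion-exclusion together, and can be checked against small cases enumerated by brute force.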
See the Schedule for a more detailed description of what we will do when.
Reading the Textbook: Before class each day you should read the sections of the textbook listed on the schedule for that day. Be sure to read the introduction to each chapter and the entire section(s) indicated. Each class will start with questions you may have about what you read. After clearing up any confusions, we will spend class time doing examples and solving related problems. If you are not doing the assigned readings, you will not get nearly as much out of the course as possible, and it is likely your grade will reflect that.
Suggested Exercises: After you read each section, attempt as many of the suggested exercises as you are able. Some suggested exercises will ask you to solve similar problems for different sets of data. If you are certain of how to do the problems after doing a few, you should feel free to skip the similar problems. However, sometimes the other problems will have subtle differences that make the solutions slightly (and sometimes totally) different, so make sure you really understand what you are doing if you skip problems.
Cahiers pour l’Analyse
Some of the Cahiers authors found in the mathematisation of the infinite (undertaken by Cantor and other contributors to modern set theory) a paradigmatic instance of the struggle between science and
The concept of infinity at issue in the Cahiers pour l’Analyse is not the metaphysical notion emphasised by Spinoza or Hegel so much as the mathematical notion at the heart of fundamental debates in
the development of modern mathematics. Before Cantor’s invention of transfinite numbers in the 1870s, the great majority of philosophers had agreed that application of the concept of actual or
self-embracing infinity should be reserved for an entity more or less explicitly identified with God: the most that mathematics could do, it seemed, was describe something potentially infinite, the
sort of thing illustrated by unending numerical succession: 1, 2, 3...n. Real or actual infinity was apparently destined to belong to a realm beyond number and thus beyond measurement - doomed to
remain, in short, an essentially indefinite if not explicitly religious concept.^1
Confronted with Zeno’s famous paradoxes concerning motion and division, Aristotle set a trend that would hold good for the next two thousand years: if physical bodies might in principle (i.e.
potentially) be infinitely divided, he argued, they never are so divided. Nothing existent is actually made up of infinitely small parts. As a result, if the infinite can be said to exist at all, it
must have an exclusively ‘potential existence’.^2 The story of the actually infinite in modern mathematics, then, is the story of the slow subversion of this eminently ‘sensible’ Aristotelian
approach. Most histories of mathematics distinguish three or four central episodes in this story: the discovery of irrational numbers (by the Greeks); the algebraicisation of geometry (with
Descartes); the discovery of the calculus and the controversial status of ‘infinitesimals’ (Leibniz and Newton); the discovery of non-Euclidean geometries (Lobachevsky, Riemann), and the consequent
search for a new arithmetic foundation for mathematics (Cantor and subsequent contributors to the development of modern set theory). Cantor and his followers provided precisely what previous
mathematicians and philosophers had almost unanimously declared impossible: a mathematically precise description of more-than-finite magnitudes qua numbers. Cantor established that the concept of
numerical order or succession is every bit as coherent in the realm of the actually infinite as it is in the realm of the finite. He showed that it made perfect sense to speak of the size (or
‘cardinality’) of different infinite quantities, conceived as completed wholes or sets.
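To make the comparison of infinite 'sizes' concrete: two sets have the same cardinality exactly when there is a bijection between them. As an illustrative sketch (in Python; the code and names are ours, not Cantor's), the Cantor pairing function gives an explicit bijection between ℕ×ℕ and ℕ, so the set of pairs of natural numbers is no 'larger' than the naturals themselves:

```python
def cantor_pair(x, y):
    """Cantor pairing function: a bijection from N x N to N.
    Pairs are enumerated along successive anti-diagonals, so every
    pair (x, y) of naturals receives a unique natural number."""
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z):
    """Inverse of cantor_pair, recovering (x, y) from z."""
    # w is the index of the anti-diagonal on which z lies.
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    y = z - w * (w + 1) // 2
    return (w - y, y)

# Distinct pairs get distinct codes, and the inverse recovers them:
assert len({cantor_pair(x, y) for x in range(50) for y in range(50)}) == 2500
assert cantor_unpair(cantor_pair(7, 11)) == (7, 11)
```

The same diagonal enumeration underlies the standard argument that the rational numbers are countable.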
In the Cahiers pour l’Analyse
In his article ‘Suture’ (CpA 1.3), Jacques-Alain Miller links his Lacanian conception of the subject to the indefinite repetition of numerical succession, where the movement of 1+1+1...+n (which
generates the infinite set of whole numbers) symbolises the infinite movement whereby one signifier (one ‘one’) represents a subject (a ‘zero’) for another signifier (or ‘one’), such that ‘the
definition of the subject comes down to the possibility of one signifier more [un signifiant de plus]’ (CpA 1.3:48). On this basis, Miller presents the ‘structure of the subject as a “flickering in
eclipses”, like the movement which opens and closes the number, and delivers up the lack in the form of the 1 in order to abolish it in the successor’ (49). He links the indefinite or unending
‘excess’ of the succession of signifiers to Richard Dedekind’s contribution to the early development of set theory:^3
Is it not ultimately to this function of excess that can be referred the power of thematisation, which Dedekind assigns to the subject in order to give to set theory its theorem of existence? The
possibility of existence of an enumerable infinity can be explained by this, that ‘from the moment that one proposition is true, I can always produce a second, that is, that the first is true and
so on to infinity.’^4
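Dedekind's remark, that from one true proposition a further one can always be produced, is the pattern of an enumerable infinity generated by succession. A purely illustrative sketch in Python (the names are ours, not Dedekind's or Miller's):

```python
import itertools

def successors(start=0):
    """Unending succession 0, 1, 2, ...: at every stage one more
    term can always be produced, which is what makes the set of
    natural numbers enumerably (countably) infinite."""
    n = start
    while True:
        yield n
        n += 1  # 'one more': every n is followed by its successor n + 1

# However many terms we take, a further successor is always available:
assert list(itertools.islice(successors(), 10)) == list(range(10))
```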
In her ‘Communications linguistique et spéculaire’ (CpA 3.3), Luce Irigaray also evokes ‘that “flickering in eclipses” [“battement en éclipses”] of the subject who, at all times, wants to vanish in
order to reappear as “one” [un], in a repetition irreducible to all temporal continuity, or to an infinity other than a denumerable, iterative succession’ (CpA 3.3:46).
Alain Grosrichard, in ‘Gravité de Rousseau’ (CpA 8.2), shows how the ‘idea of God’ emerges in Rousseau’s Emile at precisely the same moment as sexual desire. In Grosrichard’s formulation, ‘desire is
finitude aware of itself, the opening to an other, to infinity’ (CpA 8.2:53): desire serves here as a sort of conduit to an infinity beyond itself. Bernard Pautrat’s article on Hume, ‘Du Sujet
politique et de ses intérêts’ (CpA 6.5), considers the subject’s ‘infinity of desire’ from a very different point of view. On the assumption that ‘like all infinity’ this infinity of desire is
‘fictive’, Hume deprives the subject of its apparently ‘illusory autonomy’ (CpA 6.5:73), so as to allow for an analysis of the processes whereby subjects are led to obey the forms of authority that
confront and control them.
In his reading of Aragon’s La Mise à mort (CpA 7.2), Jean-Claude Milner considers the various ways a set of functions (‘love, depersonalisation, the novel’) are attributed to a series of figures or
characters, in which ‘each term can be multiplied to infinity’ (CpA 7.2:46). Characters, insofar as they come to bear a given function, ‘challenge the work as the delimited space in which they move,
and make it operative in an infinite world’ (47).
The most important meditations on infinity in the Cahiers appear in Volumes 9 and 10, most notably Cantor’s own ‘Fondements d’une théorie générale des ensembles’ (CpA 10.3), and Alain Badiou’s
article on the infinitesimal (CpA 9.8). Cantor’s article provides a sort of philosophical overview of the logic whereby Cantor came to accept the ‘point of view which considers the infinitely great
not merely in the form of something growing without limit’ but as something that can be ‘fixed mathematically by [distinct] numbers in the determinate form of the completed infinite’ (CpA 10.3:42).
Cantor distinguishes what he terms the ‘infinite improperly understood’ from the ‘infinite properly understood’. The ‘improper’ infinite is conceived as an infinite indetermination. Understood in
this way, the infinite remains derivative of the finite; the infinite is only ever understood as an extension of the finite, as an unending sequence of addition. This concept of the infinite is
intrinsically indeterminate, and it inheres in the concept itself that it remain forever open-ended. By contrast, for Cantor the infinite proper is defined by the fact that it is always presented
under a determinate form (36). By inventing new ways of determining the ‘class’ or ‘power’ of numbers, Cantor demonstrates that ‘following the finite there is a transfinite (transfinitum), which might
also be called supra-finite (suprafinitum); that is, there is an unlimited ascending ladder of modes, which in its nature is not finite but infinite, but which can be determined as can the finite by
determinate, well-defined and distinguishable numbers’ (43).
By contrast, Jacques Bouveresse’s article on Wittgenstein’s philosophy of mathematics, published in the same tenth issue of the Cahiers, includes a discussion of his anti-Cantorian finitism and
constructivism. According to Wittgenstein, the statement that ‘there is no biggest cardinal number’ simply means (in keeping with his ‘behaviourist’ endorsement of the linguistic turn) ‘that the
authorization to “play” with cardinal numbers does not have an end’ (CpA 10.9:191). The mistake that Cantor makes, Wittgenstein claims, is to suppose that an ability to apply such a ‘technique
without end’ might be understood in ontological terms, and correlated with an actually infinite set: Cantor confuses the technique through which real numbers are constructed for use in calculations
with the actual being or extent of the set of real numbers itself.
In ‘La Subversion infinitésimale’ (CpA 9.8) Badiou defends (in keeping with what will be a long-term anti-constructivist polemic) a post-Cantorian position, and like Cantor, begins with a discussion of
the conventional distinction between the finite and the infinite. Whereas for Hegel, the (metaphysically) infinite can be thought in the figure of a circle, as a self-sufficient whole, the
(mathematically) finite domain is determined as a merely linear progression which ceaselessly transgresses its own limit. Take any finite number n, however large: it is obviously possible, through
the addition of a ‘supplementary inscription’ (n+1), to generate a still larger number. Rather than consider a variable n simply as the mark of a potentially unending succession, Badiou argues that
such a succession already ‘presupposes a (unique) space of exercise’, a space that we tacitly assume as actually and already endless. ‘This is why the “potential” infinite, the indefiniteness of
progression, testifies retroactively to the “actual” infinity of its support’ (CpA 9.8:118).
Badiou turns, in sections three and four of his article, to consider a recent and especially significant instance of this general approach to the infinite – not in the domain of the infinitely large
but of the infinitely small. From Zeno and Aristotle through to Berkeley and Hegel (and indeed right through to Skolem’s path-breaking work of the 1930s), both philosophers and mathematicians tended
to accept that the notion of an infinitely small number was a self-evident ‘absurdity’ (123). Abraham Robinson, however, in work first published in 1961, was able to validate the affirmation of
infinitesimal numbers.^5 Robinson’s ‘non-standard’ approach serves to ‘reconstruct all the fundamental concepts of analysis’ in terms that are, for the first time, fully ‘systematic’. Robinson’s
approach exemplifies, for Badiou, the age-old ideological investment in the association of infinity with quality and continuity (and ultimately with a divine or meta-physical substance) - an investment
which dominated the early development of mathematical analysis and ‘structural’ thought (135).
Select bibliography
• Badiou, Alain. L’Etre et l’événement. Paris: Seuil, 1988. Being and Event, trans. Oliver Feltham. London, Continuum Press, 2005.
• Badiou, Alain. Le Nombre et les nombres. Paris: Seuil, 1990, ch. 16. Number and Numbers, trans. Robin Mackay. London: Polity, 2008, ch. 16.
• Boyer, Carl Benjamin. A History of Mathematics. New York: Wiley, 1968.
• Dauben, Joseph Warren. Georg Cantor: His Mathematics and Philosophy of the Infinite. Cambridge: Harvard University Press, 1979.
• Hegel, G.W.F. The Science of Logic, trans. A.V. Miller. NY: Humanity Books, 1999.
• Koyré, Alexandre. From the Closed World to the Infinite Universe. Baltimore: Johns Hopkins University Press, 1957.
• Lavine, Shaughan. Understanding the Infinite. Cambridge, Mass.: Harvard University Press, 1994.
• Moore, A.W. The Infinite. London: Routledge, 1990.
• Robinson, Abraham. Non-Standard Analysis. Amsterdam: North Holland Publishing Company, 1966.
1. Cf. Boyer, A History of Mathematics, 611. ↵
2. Aristotle, Physics, III, 4 and 5. ↵
3. See Erich Reck, ‘Dedekind’s Contributions to the Foundations of Mathematics’, Stanford Encyclopaedia of Philosophy, http://plato.stanford.edu/entries/dedekind-foundations/. ↵
4. Miller here cites Jean Cavaillès, Remarques sur la formation de la théorie abstraite des ensembles, in Philosophie Mathématique, chapter III: ‘Dedekind et la chaîne. Les Axiomatisations’, 124. ↵
5. See ‘Abraham Robinson’, Wikipedia, http://en.wikipedia.org/wiki/Abraham_Robinson. ↵
Assertions are statements that introduce new compile-time facts. They are not comments, as removing them can prevent the code from compiling, but unlike other programming languages, Wuffs' assertions
have no run-time effect at all, not even in a “debug, not release” configuration. Compiling an assert will fail unless it can be proven.
The basic form is assert some_boolean_expression, which creates some_boolean_expression as a fact. That expression must be free of side-effects: any function calls within must be pure.
Arithmetic inside assertions is performed in ideal integer math, working in the integer ring ℤ. An expression like x + y in an assertion never overflows, even if x and y have a realized (non-ideal)
integer type like u32.
Some assertions can be proven by the compiler with no further guidance. For example, if x == 1 is already a fact, then x < 5 can be automatically proved. Adding a seemingly redundant fact can be
useful when reconciling multiple arms of an if-else chain, as reconciliation requires the facts in each arm's final situation to match exactly:
if x < 5 {
    // No further action is required. "x < 5" is a fact in the 'if' arm.
} else {
    x = 1
    // At this point, "x == 1" is a fact, but "x < 5" is not.
    // This assertion creates the "x < 5" fact.
    assert x < 5
    // Here, "x < 5" is still a fact, since the exact boolean expression "x < 5"
    // was a fact at the end of every arm of the if-else chain.
}
TODO: specify what can be proved automatically, without naming an axiom.
Wuffs' assertion system is a proof checker, not an SMT solver or automated theorem prover. It verifies explicit proof targets instead of the more open-ended task of searching for implicit ones. This
involves more explicit work by the programmer, but compile times matter, so the Wuffs compiler is fast (and dumb) instead of smart (and slow).
The Wuffs syntax is regular (and unlike C++, does not require a symbol table to parse), so it should be straightforward to transform Wuffs code to and from file formats used by more sophisticated
proof engines. Nonetheless, that is out of scope of this respository and the Wuffs compiler per se.
Again for compilation speed, not every inference rule is applied after every line of code. Some assertions require explicit guidance, naming the rule that proves the assertion. These names are simply
strings that resemble mathematical statements. They are axiomatic, in that these rules are assumed, not proved, by the Wuffs toolchain. They are typically at a higher level than e.g. Peano axioms, as
Wuffs emphasizes practicality over theoretical minimalism. As they are axiomatic, they endeavour to only encode ‘obvious’ mathematical rules.
For example, the axiom named "a < b: a < c; c <= b" is about transitivity: the assertion a < b is proved if both a < c and c <= b are true, for some (pure) expression c. Terms like a, b and c here
are all integers in ℤ, they do not encompass floating point concepts like negative zero, NaNs or rounding. The axiom is invoked by extending an assert with the via keyword:
assert n_bits < 12 via "a < b: a < c; c <= b"(c: width)
This proves n_bits < 12 by applying that transitivity axiom, where a is n_bits, b is 12 and c is width. Compiling this assertion requires proving both n_bits < width and width <= 12, from existing
facts or from the type system, e.g. width is a base.u32[..= 12].
The trailing (c: width) syntax is deliberately similar to a function call (recall that when calling a Wuffs function, each argument must be named), but the "a < b: a < c; c <= b" named axiom is not a
function-typed expression.
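As a rough illustration of what "proof checking, not proof search" means here, a via-style axiom application can be modelled as a rule that fires only when the programmer names the witness and both premises are already known facts. This toy Python sketch is not the Wuffs toolchain; the string-based fact representation and the function name are invented for illustration:

```python
# Toy model (not the real Wuffs compiler): proving "a < b" via the axiom
# "a < b: a < c; c <= b" succeeds only if both premises are already facts.
# There is no search: the caller must name the intermediate expression c.
def prove_lt_via_transitivity(a, b, c, facts):
    return (f"{a} < {c}" in facts) and (f"{c} <= {b}" in facts)

facts = {"n_bits < width", "width <= 12"}
print(prove_lt_via_transitivity("n_bits", "12", "width", facts))  # True
print(prove_lt_via_transitivity("n_bits", "12", "x", facts))      # False: no facts about x
```

The checker's job is only to verify the two named premises, which is why it can stay fast (and dumb).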
The compiler's built-in axioms are listed separately. | {"url":"https://skia.googlesource.com/external/github.com/google/wuffs/+/e12f10eef0e0f2652b4e3b0fb7c8f7afdb652916/doc/note/assertions.md","timestamp":"2024-11-15T04:01:03Z","content_type":"text/html","content_length":"7060","record_id":"<urn:uuid:bfaedf1e-1e38-41b6-b0d4-15c7c7aa9edd>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00466.warc.gz"} |
1 .
What is the unit of current?
Amperes or Amps for short is the unit of current. It is usually denoted with an A after a value of current
2 .
What is potential difference?
Work done per coulomb of charge that passes between two points
Kinetic energy per coulomb of charge that passes between two points
Work done per coulomb of charge that passes between three points
Work done per ampere of current that passes between two points
Strictly speaking, it should really be called electrical potential difference but at GCSE, potential difference is acceptable, when used in the correct context
3 .
What is the unit of charge?
The coulomb is the unit of charge. It is denoted by a C after a value of charge and is named for the French scientist Charles-Augustin de Coulomb who carried out a lot of pioneering work on
electrical charge and magnetism
4 .
Which formula calculates the size of the current?
I = Q x t
I = Q / t
I = t / Q
Q = I / t
If you didn't remember the equation, you can work it out. Whenever you need to calculate the rate of something in physics, it is expressed as something per second. To get the 'per second' bit, you
know that you need to divide by time. That eliminates two of the alternative answers so the one that works out the current must be the correct one
5 .
What is the formula for potential difference?
V = W / Q
V = W / 2Q
V = 3W / 2Q
V = 5W / 3Q
V is in volts when W is in joules and Q is in coulombs
6 .
What is electrical current?
Flow of electric charge
Flow of protons
Flow of neutrons
Flow of water
Current is the flow of electric charge around a circuit. The charge is carried by electrons which are emitted by the negative electrode of the cell and travel to the positive electrode of the cell
through the circuit
7 .
What can be calculated from current-potential difference graphs?
None of the above
The gradient gives the resistance
8 .
What is the work done in a circuit if the voltage is 10 V, the current is 4 A and the circuit is on for 25 seconds?
You need to rearrange the equation V = W / Q and work out the charge. Remember that one coulomb is one amp flowing for one second so if you have four amps flowing for 25 seconds, how many coulombs
is that?
9 .
What is the size of the current in a circuit if the charge, Q, is 100 C and lasts for 25 seconds?
Remember the definition - current is a measure of how much charge flows past a given point in one second. This should help you to get the right answer even if you can't recall the equation
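The arithmetic behind questions 8 and 9 follows directly from the two equations above (Q = I x t, V = W / Q, I = Q / t). A quick sketch (the variable names are mine, not part of the quiz):

```python
# Question 8: find the charge first, then the work done.
V, I, t = 10.0, 4.0, 25.0   # volts, amps, seconds
Q = I * t                   # Q = I x t -> 100 coulombs
W = V * Q                   # rearranging V = W / Q -> W = V x Q = 1000 joules

# Question 9: current is the charge flowing past a point per second.
Q9, t9 = 100.0, 25.0
I9 = Q9 / t9                # 4 amps
print(W, I9)  # 1000.0 4.0
```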
10 .
What does the size of the electric current depend on?
The rate of flow of electric charge
The rate of flow of protons
The rate of flow of neutrons
All of the above
High currents = bigger flow rates
| {"url":"https://www.educationquizzes.com/gcse/physics/electricity-electrical-circuits-01/","timestamp":"2024-11-13T21:54:57Z","content_type":"text/html","content_length":"53842","record_id":"<urn:uuid:84b877ef-511f-4be9-be4d-c5cc45e557fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00177.warc.gz"}
Evony March Calculator - Temz Calculators
Evony March Calculator
Use the Evony March Calculator to optimize your strategies for both attacking and defending in Evony. By entering specific troop details and other parameters, you can determine the most effective
approach to achieve your objectives.
Evony March Calculation Formula
The Evony March Calculator uses the following formulas to determine the optimal attack or defense power:
Attack Power = Number of Troops * (Expected Victory Rate / 100)
Defense Power = Number of Troops * 1.2
Variables:
• Attack Power is the calculated strength needed for a successful attack.
• Number of Troops is the total number of troops you are sending.
• Expected Victory Rate is the percentage chance of a successful attack.
• Defense Power is the calculated strength needed for an effective defense.
To calculate attack power, multiply the number of troops by the expected victory rate divided by 100. For defense, multiply the number of troops by a fixed factor of 1.2.
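The two formulas above translate directly into code. A minimal sketch (the function names are mine, not part of the game or of this site's calculator):

```python
def attack_power(num_troops, expected_victory_rate):
    # Attack Power = Number of Troops * (Expected Victory Rate / 100)
    return num_troops * (expected_victory_rate / 100)

def defense_power(num_troops):
    # Defense Power = Number of Troops * 1.2
    return num_troops * 1.2

print(attack_power(1000, 75))  # 750.0 -- matches the worked example below
print(defense_power(1000))     # 1200.0
```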
What is Evony March Calculation?
Evony March Calculation involves determining the optimal number of troops and the necessary power to effectively execute an attack or defend against an enemy in the game Evony. This calculation helps
in strategizing marches to maximize success rates and minimize losses.
How to Use the Evony March Calculator?
Follow these steps to use the Evony March Calculator:
1. First, select the type of troops you are using and enter the number of troops.
2. Determine whether your march is for attacking or defending and input the relevant details.
3. For attack calculations, input the expected victory rate.
4. For defense, the calculator uses a simplified formula to estimate the defense power.
5. Use the calculator to get the results and adjust your strategy accordingly.
Example Problem:
For an attacking march, if you have 1000 infantry troops with an expected victory rate of 75%, the calculator will determine the required attack power.
1. What is an Evony march?
An Evony march refers to the movement of troops to attack or defend against enemies in the game Evony.
2. How does the calculator help with attacks?
The calculator helps by determining the required attack power based on the number of troops and the expected victory rate.
3. Can I use the calculator for different troop types?
Yes, the calculator supports various troop types including infantry, archers, cavalry, and siege units.
4. How accurate is the calculator?
The calculator provides estimates based on the input values. For precise strategies, consider consulting game-specific guides or forums.
5. Can the calculator be used for defensive strategies?
Yes, the calculator can also estimate the necessary defense power for a defensive march. | {"url":"https://temz.net/evony-march-calculator/","timestamp":"2024-11-07T10:46:03Z","content_type":"text/html","content_length":"74320","record_id":"<urn:uuid:673b0b2b-6d24-4cb7-bbd1-7c9db46a7800>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00385.warc.gz"} |
Posts tagged with r programming language
Sample assignment on R statistics help
Answer all questions. Marks are indicated beside each question. You should submit your solutions before the
You should submit both
• a .pdf file containing written answers (word processed, or hand-written and scanned), and
• an .R file containing R code.
For all answers include
• the code you have written to determine the answer, the relevant output from this code, and a justification of how you got your answer.
• Total marks: 60

1. Consider the one-parameter family of probability density functions f_b(x), for −b ≤ x ≤ b,
where b > 0.
(a) Write R code to plot this pdf for various values of b > 0. [2 marks]
(b) Determine the method of moments estimator for the parameter b. (No R code necessary) [4 marks]
(c) Determine the Likelihood function for the parameter b. By writing R code to plot a suitable graph, determine that the derivative of this likelihood function is never zero. [4 marks] (d) Hence
find the Maximum Likelihood Estimator for the parameter b. (No R code necessary) [4 marks] (e) The data in the file Question 1 data.csv contains 100 independent draws from the probability
distribution with pdf fb(x), where the parameter b is unknown. Load the data into R using the command
D <- read.csv(path_to_file)$x
where path_to_file indicates the path where you have saved the .csv file
e.g. path_to_file = “c:/My R Downloads/Question 1 data.csv”
Note that forward slashes are used to indicate folders (this is not consistent with the usual syntax for Microsoft operating systems).
Write R code to calculate an appropriate Method of Moments Estimate and a Maximum Likelihood Estimate for the parameter b, given this data. [4 marks]
2. The data in the file Question 2 data.csv is thought to be a realisation of Geometric Brownian Motion
S_t = S_0 e^(σ W_t + µ t)
where Wt is a Wiener process and σ,µ and S0 are unknown parameters. Load the data into R using the command
S <- read.csv(path_to_file)
where path_to_file indicates the path where you have saved the .csv file.
(a) Write R code to determine the parameter S0. [2 marks]
(b) Write R code to determine if Geometric Brownian Motion is suitable to model this data.
You may do this by
• plotting an appropriate scatter plot/histogram, and/or • using an appropriate statistical test.
[6 marks]
(c) Write R code to determine an estimate for µ and σ2 using Maximum Likelihood Estimators.
(You do not have to derive these estimators). [5 marks]
3. The data in the file Question 3 data.csv is a matrix of transition probabilities of a Markov Chain. Load the data into R using the command
P <- as.matrix(read.csv(path_to_file))
with an appropriate value for path_to_file.
(a) Verify that this Markov Chain is ergodic. (No R code necessary) [4 marks]
(b) Suppose that an initial state vector is given by
x=(0.1,0.2,0.4,0.1,0.2) (1)
Write R code to determine the state vector after 10 time steps. Do this without diagonalising the matrix
P. [3 marks]
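The idea in part (b) — applying the transition matrix repeatedly instead of diagonalising it — can be sketched in a language-agnostic way. The 2-state chain below is an invented example (not the assignment's data), and the sketch is in Python rather than R:

```python
def step(x, P):
    """One time step of the chain: x_{t+1} = x_t P (row vector times matrix)."""
    n = len(P)
    return [sum(x[i] * P[i][j] for i in range(n)) for j in range(n)]

def state_after(x, P, steps):
    for _ in range(steps):
        x = step(x, P)
    return x

P = [[0.9, 0.1],
     [0.5, 0.5]]                      # made-up transition matrix
x10 = state_after([0.1, 0.9], P, 10)  # state vector after 10 time steps
# The second eigenvalue of P is 0.4, so after 10 steps the chain is
# essentially at its stationary distribution (5/6, 1/6).
```

In R the same computation is a loop over `x <- x %*% P`, or a single matrix power.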
(c) Write R code to verify this answer by diagonalising the matrix P. Note that the eigen(A) function produces the right-eigenvectors of a matrix A (solutions of Av = λv) However we want the
left-eigenvectors (solutions of vA= λv).
These are related by
v is a left-eigenvector of A if and only if vT is a right-eigenvector of AT.
[8 marks]
(d) Hence, or otherwise, determine the limiting distribution with the initial state vector given in (1).
[4 marks]
4. The data if the file Question 4 data.csv is a generator matrix for a Markov Process. Load the data into R using the command
A <- as.matrix(read.csv(path_to_file))
with an appropriate value for path_to_file.
(a) Suppose that X0 =0. Write R code to simulate one realisation of the Markov Process Xt. The output should be two vectors (or one data frame with two variables).
• The first vector indicates transition times.
• The second vector indicates which state the Markov Process takes at this time (i.e. one of 0,1,2,3,4).
How to proceed:
• The first line of your code must read
set.seed(4311)
to ensure that this realisation is repeatable.
• For each Xt you must determine
– what the transition time s to the next state is,
– what the probabilities to transfer to each state are, and hence randomly select a suitable value for Xt+s.
[9 marks] (b) Write R code to plot an appropriate graph that describes this realisation. [1 mark] | {"url":"https://www.mymathlabhomeworkhelp.com/mymathlabanswers/tag/r-programming-language/","timestamp":"2024-11-12T20:11:00Z","content_type":"text/html","content_length":"24947","record_id":"<urn:uuid:777e1b75-50a8-41ea-9762-4572c1ef3d7b>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00673.warc.gz"} |
A methodology is developed for data analysis based on empirically constructed geodesic metric spaces. For a probability distribution, the length along a path between two points can be defined as the
amount of probability mass accumulated along the path. The geodesic, then, is the shortest such path and defines a geodesic metric. Such metrics are transformed in a number of ways to produce
parametrised families of geodesic metric spaces, empirical versions of which allow computation of intrinsic means and associated measures of dispersion. These reveal properties of the data, based on
geometry, such as those that are difficult to see from the raw Euclidean distances. Examples of application include clustering and classification. For certain parameter ranges, the spaces become CAT
(0) spaces and the intrinsic means are unique. In one case, a minimal spanning tree of a graph based on the data becomes CAT(0). In another, a so-called "metric cone" construction allows extension to
CAT($k$) spaces. It is shown how to empirically tune the parameters of the metrics, making it possible to apply them to a number of real cases. Comment: Statistics and Computing, 201
The probability density function (PDF) of a random variable associated with the solution of a partial differential equation (PDE) with random parameters is approximated using a truncated series
expansion. The random PDE is solved using two stochastic finite element methods, Monte Carlo sampling and the stochastic Galerkin method with global polynomials. The random variable is a functional
of the solution of the random PDE, such as the average over the physical domain. The truncated series are obtained considering a finite number of terms in the Gram-Charlier or Edgeworth series
expansions. These expansions approximate the PDF of a random variable in terms of another PDF, and involve coefficients that are functions of the known cumulants of the random variable. To the best
of our knowledge, their use in the framework of PDEs with random parameters has not yet been explored
Confidence nets, that is, collections of confidence intervals that fill out the parameter space and whose exact parameter coverage can be computed, are familiar in nonparametric statistics. Here, the
distributional assumptions are based on invariance under the action of a finite reflection group. Exact confidence nets are exhibited for a single parameter, based on the root system of the group.
The main result is a formula for the generating function of the coverage interval probabilities. The proof makes use of the theory of "buildings" and the Chevalley factorization theorem for the
length distribution on Cayley graphs of finite reflection groups. Comment: 20 pages. To appear in Bernoulli
A general method of quadrature for uncertainty quantification (UQ) is introduced based on the algebraic method in experimental design. This is a method based on the theory of zero-dimensional
algebraic varieties. It allows quadrature of polynomials or polynomial approximands for quite general sets of quadrature points, here called “designs.” The method goes some way to explaining when
quadrature weights are nonnegative and gives exact quadrature for monomials in the quotient ring defined by the algebraic method. The relationship to the classical methods based on zeros of
orthogonal polynomials is discussed, and numerical comparisons are made with methods such as Gaussian quadrature and Smolyak grids. Application to UQ is examined in the context of polynomial chaos
expansion and the probabilistic collocation method, where solution statistics are estimated
There is a duality theory connecting certain stochastic orderings between cumulative distribution functions F_1,F_2 and stochastic orderings between their inverses F_1^(-1),F_2^(-1). This underlies
some theories of utility in the case of the cdf and deprivation indices in the case of the inverse. Under certain conditions there is an equivalence between the two theories. An example is the
equivalence between second order stochastic dominance and the Lorenz ordering. This duality is generalised to include the case where there is "distortion" of the cdf of the form v(F) and also of the
inverse. A comprehensive duality theorem is presented in a form which includes the distortions and links the duality to the parallel theories of risk and deprivation indices. It is shown that some
well-known examples are special cases of the results, including some from the Yaari social welfare theory and the theory of majorization. Comment: 23 pages, no figures, 2 Appendices
In previous work the authors defined the k-th order simplicial distance between probability distributions which arises naturally from a measure of dispersion based on the squared volume of random
simplices of dimension k. This theory is embedded in the wider theory of divergences and distances between distributions which includes Kullback–Leibler, Jensen–Shannon, Jeffreys’ and Bregman
divergence and Bhattacharyya distance. A general construction is given based on defining a directional derivative of a function φ from one distribution to the other whose concavity or strict
concavity influences the properties of the resulting divergence. For the normal distribution these divergences can be expressed as matrix formula for the (multivariate) means and covariances. Optimal
experimental design criteria contribute a range of functionals applied to non-negative, or positive definite, information matrices. Not all can distinguish normal distributions but sufficient
conditions are given. The k-th order simplicial distance is revisited from this aspect and the results are used to test empirically the identity of means and covariances
We consider a measure ψ_k of dispersion which extends the notion of Wilks’ generalised variance for a d-dimensional distribution, and is based on the mean squared volume of simplices of dimension
k ≤ d formed by k+1 independent copies. We show how ψ_k can be expressed in terms of the eigenvalues of the covariance matrix of the distribution, also when an n-point sample is used for its
estimation, and prove its concavity when raised at a suitable power. Some properties of dispersion-maximising distributions are derived, including a necessary and sufficient condition for optimality.
Finally, we show how this measure of dispersion can be used for the design of optimal experiments, with equivalence to A and D-optimal design for k=1 and k=d, respectively. Simple illustrative
examples are presented
We apply the methods of algebraic reliability to the study of percolation on trees. To a complete $k$-ary tree $T_{k,n}$ of depth $n$ we assign a monomial ideal $I_{k,n}$ on $\sum_{i=1}^n k^i$
variables and $k^n$ minimal monomial generators. We give explicit recursive formulae for the Betti numbers of $I_{k,n}$ and their Hilbert series, which allow us to study explicitly percolation on $T_
{k,n}$. We study bounds on this percolation and study its asymptotical behavior with the mentioned commutative algebra techniques | {"url":"https://core.ac.uk/search/?q=author%3A(Wynn%2C%20Henry%20P.)","timestamp":"2024-11-04T16:47:04Z","content_type":"text/html","content_length":"137978","record_id":"<urn:uuid:848bd43c-a45b-4ea6-af82-e8768d15c572>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00753.warc.gz"} |
Detecting a cycle in a directed graph using Depth First Traversal - FcukTheCode
Detecting a cycle in a directed graph using Depth First Traversal
Graph Theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects.
A cycle in a directed graph exists if there’s a back edge discovered during a DFS. A back edge is an edge from a node to itself or one of the ancestors in a DFS tree. For a disconnected graph, we get
a DFS forest, so you have to iterate through all vertices in the graph to find disjoint DFS trees.
C++ implementation:
#include <iostream>
#include <list>
using namespace std;
#define NUM_V 4

// Returns true if the DFS rooted at u finds a back edge (i.e. a cycle).
bool helper(list<int> *graph, int u, bool* visited, bool* recStack)
{
    visited[u] = true;
    recStack[u] = true; // u is now on the recursion stack of this DFS traversal
    list<int>::iterator i;
    for (i = graph[u].begin(); i != graph[u].end(); ++i) {
        if (recStack[*i]) // vertex *i is already on the recursion stack: back edge
            return true;
        else if (*i == u) // an edge from the vertex to itself
            return true;
        else if (!visited[*i]) {
            if (helper(graph, *i, visited, recStack))
                return true;
        }
    }
    recStack[u] = false; // take u off the recursion stack when backtracking
    return false;
}

// The wrapper function calls the helper function on each vertex that has not
// been visited. The helper returns true if it detects a back edge in the
// subgraph (tree), and false otherwise.
bool isCyclic(list<int> *graph, int V)
{
    bool visited[V];  // tracks vertices already visited
    bool recStack[V]; // tracks vertices on the recursion stack of the traversal
    for (int i = 0; i < V; i++)
        visited[i] = false, recStack[i] = false; // initialize all vertices as not visited and not recursed
    for (int u = 0; u < V; u++) // iterate so that every vertex (and every disjoint DFS tree) is checked
        if (!visited[u] && helper(graph, u, visited, recStack)) // does the DFS tree from this vertex contain a cycle?
            return true;
    return false;
}

Driver function

int main()
{
    list<int>* graph = new list<int>[NUM_V];
    // Edges reconstructed to match the cycles described below:
    graph[0].push_back(1); // 0 -> 1
    graph[0].push_back(2); // 0 -> 2
    graph[1].push_back(2); // 1 -> 2
    graph[2].push_back(0); // 2 -> 0
    graph[2].push_back(3); // 2 -> 3
    graph[3].push_back(3); // 3 -> 3 (self-loop)
    bool res = isCyclic(graph, NUM_V);
    cout << res << endl;
    delete[] graph;
    return 0;
}
Output of the above program is : 1
Illustration of Output of above program in terminal
Result: there are three back edges in the graph: one between vertices 0 and 2; one through vertices 0, 1, and 2; and a self-loop at vertex 3. The time complexity of the search is O(V+E), where V is
the number of vertices and E is the number of edges.
To store a graph, two methods are common:
• Adjacency Matrix
• Adjacency List
An adjacency matrix is a square matrix used to represent a finite graph.
An adjacency list is a collection of unordered lists used to represent a finite graph.
| {"url":"https://www.fcukthecode.com/ftc_detecting-a-cycle-in-a-directed-graph-using-depth-first-traversal-ftc/","timestamp":"2024-11-09T10:00:09Z","content_type":"text/html","content_length":"156484","record_id":"<urn:uuid:d000f437-bf97-482a-9419-88a4754a000c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00324.warc.gz"}
3rd Grade Math
3rd Grade Math Test
Do you want to take a 3rd grade math test to assess your math knowledge for this grade level? The following online quizzes and tests are based on the third grade math standards. Check them out!
These online tests are designed to work on computers, laptops, iPads, and other tablets. There is no need to download any app for these activities.
3rd Grade Multiplication Test
Use models and arrays to multiply numbers and solve multiplication problems as repeated addition.
3rd Grade Division Test
Check your division skills and your ability to solve division word problems.
3rd Grade Place Value Test
When taking this test, students will identify place value of whole numbers through thousands and round numbers to the nearest ten or hundred.
3rd Grade Fractions
This is an interesting online test about fractions that 3rd grade students could take at the end of the chapter. | {"url":"http://www.math-tests.com/3rd-grade-math-test.html","timestamp":"2024-11-12T17:06:07Z","content_type":"text/html","content_length":"7148","record_id":"<urn:uuid:ae2c86bb-ce3c-4cb2-ba9c-c7fe3a65e80e>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00332.warc.gz"} |
Gauss's flux theorem | Electricity - Magnetism
30-second summary
Gauss’s flux theorem
“Gauss’s law states that the net electric flux through any hypothetical closed surface is equal to 1/ε[0] times the net electric charge within that closed surface.”
Gauss’s flux theorem is a special case of the general divergence theorem (known also as Gauss’s–Ostrogradsky’s theorem). It can be considered one of the most powerful and most useful theorems in
the field of electrical science.
In words:
Gauss’s theorem states that the net electric flux through any hypothetical closed surface is equal to 1/ε[0] times the net electric charge within that closed surface.
Φ[E] = Q/ε[0]
About Gauss’s flux theorem
In electromagnetism, Gauss’s law, also known as Gauss’s flux theorem, relates the distribution of electric charge to the resulting electric field. In its integral form, Gauss’s law relates the charge
enclosed by a closed surface (often called as Gaussian surface) to the total flux through that surface. When the electric field, because of its symmetry, is constant everywhere on that surface and
perpendicular to it, the exact electric field can be found. Gauss’ law and Coulomb’s law are different ways of describing the relation between charge and electric field in static situations. In such
special cases, Gauss’s law is easier to apply than Coulomb’s law. Gauss’s law involves the concept of electric flux, a measure of how much the electric field vectors penetrate through a given
Gauss’s law is useful method for determining electric fields when the charge distribution is highly symmetric. It was developed by Mr. Carl Friedrich Gauss, a German mathematician and physicist.
Like Ampere’s law, which is analogous to magnetism, Gauss’ law is one of four Maxwell’s equations (the first) and thus fundamental to classical electrodynamics.
What is Electric Flux
Gauss’s law involves the concept of electric flux, which refers to the electric field passing through a given area. In words:
Gauss’s law states that the net electric flux through any hypothetical closed surface is equal to 1/ε[0] times the net electric charge within that closed surface.
Φ[E] = Q/ε[0]
In pictorial form, this electric field is shown as a dot, the charge, radiating “lines of flux”. These are called Gauss lines. Note that field lines are a graphic illustration of field strength and
direction and have no physical meaning. The density of these lines corresponds to the electric field strength, which could also be called the electric flux density: the number of “lines” per unit
area. Electric flux is proportional to the total number of electric field lines going through a surface.
Electric flux depends on the strength of electric field, E, on the surface area, and on the relative orientation of the field and surface. For a uniform electric field E passing through an area A,
the electric flux E is defined as:
Φ = E x A
This is for the area perpendicular to vector E. We generalize our definition of electric flux for a uniform electric field to:
Φ = E x A x cosφ (electric flux for uniform E, flat surface)
What happens if the electric field isn’t uniform but varies from point to point over the area ? Or what if is part of a curved surface? For a non-uniform electric field, the electric flux dΦ[E]
through a small surface area dA is given by:
dΦ[E] = E x dA
We calculate the electric flux through each element and integrate the results to obtain the total flux. The electric flux Φ[E] is then defined as a surface integral of the electric field:

Φ[E] = ∫ E x dA (taken over the whole surface)
Gauss’s law formula – Integral
In its integral form, Gauss’s law relates the charge enclosed by a closed surface to the total flux through that surface. The precise relation between the electric flux through a closed surface and
the net charge Q[encl] enclosed within that surface is given by Gauss’s law:

∮ E x dA = Q[encl]/ε[0]
where ε[0] is the same constant (permittivity of free space) that appears in Coulomb’s law. The integral on the left is over the value of E on any closed surface, and we choose that surface for our
convenience in any given situation. The charge Qencl is the net charge enclosed by that surface.
It doesn’t matter where or how the charge is distributed within the surface. Any charge outside this surface must not be included. A charge outside the chosen surface may affect the position of the
electric field lines, but will not affect the net number of lines entering or leaving the surface.
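As a quick numerical sanity check of the integral form (an illustration added here, not part of the original article): for a point charge at the centre of a spherical Gaussian surface, the field E = q/(4πε[0]r²) is constant over the sphere and perpendicular to it, so the flux is simply E times the area 4πr², and it equals q/ε[0] whatever radius is chosen:

```python
import math

EPS0 = 8.8541878128e-12   # permittivity of free space, F/m
q = 1e-9                  # a 1 nC point charge

for r in (0.05, 0.5, 5.0):                # any Gaussian sphere radius works
    E = q / (4 * math.pi * EPS0 * r**2)   # Coulomb field on the sphere
    flux = E * 4 * math.pi * r**2         # E constant and normal: flux = E x A
    assert math.isclose(flux, q / EPS0)   # Gauss's law: flux = Q[encl]/eps0
```

The radius cancels, which is exactly why Gauss's law is so convenient for symmetric charge distributions.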
Gauss’s law formula – Differential
Gauss’s law can be used in its differential form, which states that the divergence of the electric field is proportional to the local density of charge:

∇ · E = ρ/ε[0]

The underlying divergence theorem is also known as Gauss’s–Ostrogradsky’s theorem.
Frequently asked questions
What is the main application of Gauss’s law?
Gauss’s law is useful for determining electric fields when the charge distribution is highly symmetric. In choosing the surface, always take advantage of the symmetry of the charge distribution so
that E can be removed from the integral.
Which law is analogous to Gauss’s law.
Like Ampere’s law, which is analogous to magnetism, Gauss’ law is one of four Maxwell’s equations (the first) and thus fundamental to classical electrodynamics.
What is the unit of electric charge?
The coulomb (symbol: C) is the International System of Units (SI) unit of electric charge. The coulomb was defined as the quantity of electricity transported in one second by a current of one ampere:
1 C = 1 A × 1 s | {"url":"https://www.electricity-magnetism.org/electrostatics/gausss-law/gausss-flux-theorem/","timestamp":"2024-11-09T17:39:02Z","content_type":"text/html","content_length":"96419","record_id":"<urn:uuid:7ae52e88-7bef-411f-997e-a0406c52bc4c>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00557.warc.gz"} |
Adiabatic Process
Understanding Adiabatic Processes in Thermodynamics
The adiabatic process is a fundamental concept in thermodynamics, defined by the absence of heat exchange between a system and its surroundings. In such processes, any work performed on or by the
system results solely in a change in the system’s internal energy. Adiabatic processes are pivotal in various engineering applications, including heat engines, compressors, turbines, and aerospace
propulsion systems. Mastering the principles of adiabatic processes enables engineers to design systems that efficiently manage energy transfer without relying on external heat sources or sinks.
Did you know? In an adiabatic expansion, the temperature of an ideal gas decreases as it does work on its surroundings, even though no heat is lost.
This comprehensive guide explores the theoretical underpinnings of adiabatic processes, key equations and calculations, practical engineering applications, real-world examples, and the challenges
associated with optimizing these processes. Whether you’re a student delving into thermodynamics or an engineer seeking to enhance system efficiency, understanding adiabatic processes is essential
for appreciating how energy is managed and utilized in various systems.
How Do Adiabatic Processes Work in Thermodynamics?
An adiabatic process occurs when a system undergoes a change in pressure and volume without any heat transfer (\( Q = 0 \)) with its environment. According to the First Law of Thermodynamics, the
change in internal energy (\( \Delta U \)) of the system is then determined entirely by the work done by the system (\( W \)):
\[ \Delta U = -W \]
For an ideal gas, this means that any work done during expansion or compression directly affects the internal energy and, consequently, the temperature of the gas. In an adiabatic expansion, the gas
does work on its surroundings, leading to a decrease in internal energy and temperature. Conversely, in an adiabatic compression, work is done on the gas, increasing its internal energy and temperature.
Important: Maintaining adiabatic conditions requires excellent insulation and rapid processes to minimize heat exchange with the environment.
The relationship between pressure and volume in an adiabatic process for an ideal gas is described by Poisson’s Equation:
\[ P V^\gamma = \text{constant} \]
• P = Pressure
• V = Volume
• \(\gamma\) = Heat capacity ratio (\( C_p/C_v \))
This equation shows that pressure falls steeply as volume rises along an adiabatic (\( P \propto V^{-\gamma} \)), with the heat capacity ratio (\( \gamma \)), a property of the gas, setting the
steepness of the curve on a PV diagram.
Key Equations for Adiabatic Processes
Analyzing adiabatic processes involves several essential equations derived from the principles of thermodynamics. These equations enable engineers to calculate work done, changes in internal energy,
and the relationship between pressure and volume during adiabatic expansion or compression.
Poisson’s Equation:
\[ P V^\gamma = \text{constant} \]
P = Pressure (Pa) V = Volume (m³) \(\gamma\) = Heat capacity ratio (\( C_p/C_v \))
Poisson’s Equation defines the relationship between pressure and volume in an adiabatic process for an ideal gas. It indicates that as the volume increases, the pressure decreases, and vice versa,
with the rate of change determined by \( \gamma \).
Work Done in Adiabatic Expansion/Compression:
\[ W = \frac{P_1 V_1 - P_2 V_2}{\gamma - 1} \]
W = Work done (J) P₁ = Initial pressure (Pa) V₁ = Initial volume (m³) P₂ = Final pressure (Pa) V₂ = Final volume (m³) \(\gamma\) = Heat capacity ratio (\( C_p/C_v \))
This equation calculates the work done during an adiabatic process. Positive work indicates work done by the system (expansion), while negative work indicates work done on the system (compression).
Change in Internal Energy:
\[ \Delta U = -W \]
\(\Delta U\) = Change in internal energy (J) W = Work done by the gas (J)
In an adiabatic process involving an ideal gas, the change in internal energy is the negative of the work done by the gas, since there is no heat transfer (\( Q = 0 \)): expansion (\( W > 0 \)) lowers the internal energy, while compression (\( W < 0 \)) raises it.
Adiabatic Relation Between Temperature and Volume:
\[ T V^{\gamma - 1} = \text{constant} \]
T = Temperature (K) V = Volume (m³) \(\gamma\) = Heat capacity ratio (\( C_p/C_v \))
This equation relates temperature and volume in an adiabatic process for an ideal gas. It shows how temperature changes as the gas expands or compresses adiabatically.
These equations are essential tools for engineers when designing and analyzing systems that operate under adiabatic conditions. By applying these principles, engineers can optimize energy conversion
processes, enhance system efficiency, and ensure reliable performance in various applications.
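As an illustrative sketch (not from the original article), the relations above can be combined in a few lines of Python; the function name and the sample compression values are assumptions made here. The sign convention matches the work equation above: `w` is positive when the gas does work on its surroundings.

```python
GAMMA_AIR = 1.4  # heat capacity ratio C_p/C_v for a diatomic ideal gas

def adiabatic_state(p1, v1, v2, gamma=GAMMA_AIR):
    """For a reversible adiabatic process of an ideal gas, given the initial
    state (p1, v1) and the final volume v2, return (p2, w, t2_over_t1):
    p2         -- final pressure, from P V^gamma = constant
    w          -- work done BY the gas, (P1 V1 - P2 V2) / (gamma - 1)
    t2_over_t1 -- temperature ratio, from T V^(gamma - 1) = constant
    """
    p2 = p1 * (v1 / v2) ** gamma
    w = (p1 * v1 - p2 * v2) / (gamma - 1.0)
    t2_over_t1 = (v1 / v2) ** (gamma - 1.0)
    return p2, w, t2_over_t1

# Illustrative example: compress 1 m^3 of air at 100 kPa down to 0.5 m^3.
p2, w, t_ratio = adiabatic_state(100e3, 1.0, 0.5)
delta_u = -w  # first law with Q = 0: internal energy rises when w < 0
print(round(p2))          # ~263902 Pa: pressure rises faster than 1/V
print(round(w))           # ~-79877 J: negative, so work is done ON the gas
print(round(t_ratio, 2))  # ~1.32: the gas heats up even though Q = 0
```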
Applications of Adiabatic Processes in Engineering and Technology
Adiabatic processes are integral to numerous engineering applications, enabling efficient energy transfer and system optimization. By leveraging the principles of adiabatic processes, engineers can
design systems that maximize work output, enhance energy efficiency, and maintain desired operational conditions.
Heat Engines
In heat engines, such as internal combustion engines and steam turbines, adiabatic processes are fundamental components of thermodynamic cycles. During the adiabatic compression and expansion
strokes, the working fluid undergoes rapid changes in pressure and volume without heat exchange, maximizing work output and engine efficiency.
Additionally, adiabatic processes are crucial in aerospace engineering, particularly in the design of jet engines and rocket propulsion systems. Adiabatic expansion of high-pressure gases through
nozzles generates thrust, while adiabatic compression in compressors enhances fuel-air mixing and combustion efficiency.
Aerospace Propulsion
Jet engines rely on adiabatic expansion of exhaust gases to produce thrust, propelling aircraft forward. Similarly, rocket engines utilize adiabatic processes to achieve the high velocities necessary
for space exploration, demonstrating the critical role of adiabatic principles in propulsion technologies.
In chemical engineering, adiabatic reactors are designed to operate without external heat exchange, relying on the heat generated or absorbed by chemical reactions to maintain temperature stability.
This approach simplifies reactor design and reduces energy consumption, enhancing process efficiency and sustainability.
Chemical Reactor Design
Adiabatic reactors operate without external heating or cooling, so the temperature profile along the reactor is set by the reaction enthalpy itself; choosing feed conditions accordingly keeps
reaction rates and product yields consistent. By eliminating external cooling or heating equipment, these reactors offer energy-efficient solutions for chemical synthesis and processing.
Furthermore, adiabatic processes are essential in refrigeration and air conditioning systems. During adiabatic compression of refrigerants, the gas temperature increases, enabling effective heat
rejection in condensers and efficient cooling of indoor environments.
Refrigeration and Air Conditioning
Refrigeration cycles utilize adiabatic compression and expansion of refrigerants to transfer heat from indoor spaces to the external environment. By optimizing adiabatic processes, engineers enhance
system efficiency, reduce energy consumption, and improve overall cooling performance.
For more insights into the applications of adiabatic processes in engineering, visit the Engineering Toolbox’s Adiabatic Processes Page.
Real-World Example: Adiabatic Expansion in a Gas Turbine
To illustrate the practical application of adiabatic processes, let’s examine the adiabatic expansion of air in a gas turbine. This example demonstrates how key thermodynamic principles and equations
are applied to optimize the performance and efficiency of turbine systems.
Analyzing Adiabatic Expansion in a Gas Turbine
Consider a gas turbine engine where air enters the turbine at an initial pressure (\(P_1\)) of 500 kPa and an initial volume (\(V_1\)) of 0.01 m³. The air undergoes adiabatic expansion to a final
volume (\(V_2\)) of 0.02 m³ from an initial temperature of \(T_1 = 300 \, \text{K}\); the temperature falls as the gas expands.
Using Poisson’s Equation, we can determine the final pressure (\(P_2\)):
\[ P_1 V_1^\gamma = P_2 V_2^\gamma \]
Calculating Final Pressure:
\[ P_2 = P_1 \left( \frac{V_1}{V_2} \right)^\gamma \]
Assuming air as an ideal diatomic gas (\( \gamma = 1.4 \)):
\[ P_2 = 500 \, \text{kPa} \left( \frac{0.01}{0.02} \right)^{1.4} \approx 500 \times 0.379 = 189.5 \, \text{kPa} \]
Next, we calculate the work done (\(W\)) during the adiabatic expansion using the adiabatic work equation:
\[ W = \frac{P_1 V_1 - P_2 V_2}{\gamma - 1} \]
Calculating Work Done:
\[ W = \frac{500 \times 0.01 - 189.5 \times 0.02}{1.4 - 1} = \frac{5 - 3.79}{0.4} = \frac{1.21}{0.4} \approx 3.03 \, \text{kJ} \]
The work done by the air during adiabatic expansion is approximately 3.03 kJ. This work is utilized to drive the turbine blades, converting thermal energy into mechanical work.
Additionally, since the process is adiabatic, the change in internal energy equals the negative of the work done by the gas:
\[ \Delta U = -W \approx -3.03 \, \text{kJ} \]
This example highlights how adiabatic processes are essential in the operation of gas turbines, enabling efficient energy conversion and optimal engine performance.
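As a quick numerical check (a sketch added here, not part of the original article), the example's figures can be recomputed without intermediate rounding; carrying all digits gives \(P_2 \approx 189.5\) kPa and \(W \approx 3.03\) kJ:

```python
# Gas-turbine example from the text, recomputed at full precision.
p1, v1, v2, gamma = 500e3, 0.01, 0.02, 1.4  # Pa, m^3, m^3, C_p/C_v for air

p2 = p1 * (v1 / v2) ** gamma             # Poisson: P V^gamma = constant
w = (p1 * v1 - p2 * v2) / (gamma - 1.0)  # work done BY the expanding gas (J)

print(round(p2 / 1e3, 1))  # 189.5 (kPa)
print(round(w / 1e3, 2))   # 3.03 (kJ)
```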
For more detailed examples and simulations of adiabatic processes in turbine systems, engineers often use thermodynamic modeling software. These tools provide precise calculations and visualizations
essential for system optimization. Explore Thermopedia’s Adiabatic Process Page for further insights.
This real-world example underscores the critical role of adiabatic processes in mechanical systems. By accurately calculating work and pressure changes, engineers can design turbines that maximize
energy conversion efficiency and ensure reliable performance under varying operational conditions.
Challenges in Applying Adiabatic Processes in Engineering
While adiabatic processes are invaluable in various engineering applications, optimizing their performance and efficiency presents several challenges. These challenges stem from idealized
assumptions, practical constraints, and the complexities of maintaining adiabatic conditions in real-world systems.
Challenge: Minimizing heat transfer with the environment to achieve true adiabatic conditions is difficult due to inevitable heat losses and material limitations.
One of the primary challenges is ensuring that the system remains insulated enough to prevent heat exchange with the surroundings. In practical applications, perfect insulation is unattainable,
leading to deviations from ideal adiabatic behavior. Engineers must design systems with effective insulation materials and configurations to minimize unwanted heat transfer.
Additionally, real gases often exhibit non-ideal behavior under high pressures or low temperatures, complicating the application of adiabatic process equations. These deviations require the use of
real gas models or empirical data to achieve accurate predictions and designs.
Consideration: Utilizing advanced materials with high thermal resistance and implementing precise control mechanisms can help mitigate heat losses and maintain adiabatic conditions more effectively.
Material limitations also pose significant challenges in designing systems that can withstand the operational stresses of adiabatic processes. Components such as compressors, turbines, and heat
exchangers must be constructed from materials that can endure rapid pressure and temperature changes without degrading, ensuring system longevity and reliability.
Furthermore, achieving large temperature and pressure changes during adiabatic processes can introduce material stress and thermal fatigue, impacting the structural integrity of engineering systems.
Balancing the desired energy transfer with material capabilities requires careful design and material selection.
Another significant challenge is managing irreversibilities and entropy production in adiabatic processes. In real systems, processes are not perfectly reversible, leading to energy losses and
reduced efficiency. Engineers must design systems that minimize irreversibilities through optimized component design and efficient energy conversion techniques.
Lastly, integrating adiabatic processes with other thermodynamic cycles introduces additional complexities. For instance, coupling adiabatic expansion with isothermal compression requires careful
synchronization to maintain overall system efficiency and stability.
For strategies on overcoming these challenges and improving the application of adiabatic processes in engineering, visit Engineering Toolbox’s Adiabatic Processes Page.
Adiabatic processes are a cornerstone of thermodynamics, providing essential insights into energy transfer and system efficiency. By eliminating heat exchange with the surroundings, adiabatic
processes convert work and internal energy directly into one another, playing a crucial role in various engineering applications such as heat engines, aerospace propulsion, and chemical reactors.
Mastery of adiabatic process principles empowers engineers to design systems that manage thermal energy effectively, ensuring optimal performance and sustainability. Understanding the interplay
between pressure, volume, and temperature under adiabatic conditions allows for the development of innovative solutions that enhance energy efficiency and minimize operational costs.
Despite the challenges in maintaining adiabatic conditions and accounting for real-world deviations from ideal behavior, advancements in materials science, heat exchange technology, and computational
modeling continue to improve the application of adiabatic processes in engineering. These innovations pave the way for more efficient and reliable energy conversion systems, contributing to the
broader goals of energy sustainability and technological advancement.
Embracing the principles and challenges of adiabatic processes not only enhances engineering designs but also supports the development of sustainable and energy-efficient technologies. As the demand
for efficient energy solutions grows, adiabatic processes remain a fundamental tool in the quest for excellence in mechanical engineering and beyond.
To further explore thermodynamic principles and their applications, visit Khan Academy’s Thermodynamics Section. | {"url":"https://turn2engineering.com/mechanical-engineering/thermodynamics/adiabatic-process","timestamp":"2024-11-07T00:13:02Z","content_type":"text/html","content_length":"226826","record_id":"<urn:uuid:7f0dc0fa-21ea-4c4c-bc62-18b140996e23>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00306.warc.gz"} |
Game Solver
What is a solution to a zero-sum game?
In a two-person zero-sum game, the payoff to one player is the negative of that going to the other. Although zero-sum games are not terribly interesting to economists, who typically study situations
where there are gains to trade, most common parlor games such as poker and chess are zero sum: one player wins, one loses. According to von Neumann's theory, every zero sum game has a value. Each
player can guarantee himself this value against any play of his opponent, and can prevent the other player from doing any better than this. We typically write a zero-sum game by forming a matrix and
allowing one player to choose the rows and the other the columns. The entries in the matrix are the payoffs to the row player. For example, in the game of matching pennies we can write the payoff
matrix so that the row player is trying to match the column player and the column player is trying to guess the opposite of the row player. The value of the game may be calculated as either the minimum of
what the row player can achieve knowing the strategy of the column player (the minmax for the row player) or the maximum of what the column player can hold the row player to, knowing the strategy of
the row player (the maxmin for the row player). Von Neumann's famous minmax theorem shows that these two quantities are the same.
It is possible to solve a zero-sum game using the simplex algorithm or any other algorithm that can solve a linear programming problem. This is implemented below. To solve a zero sum game, fill in
the payoffs to the row player in the blank area below separated by commas. Do not enter blank lines. The program will then find the strategy for the column player that holds the row player's payoff
to a minimum. For example in the game of matching pennies:
Enter the payoffs to the row player:
The program finds the column player strategy that holds the row player's payoff to a minimum, and reports the value of the game to the row player.
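The linear program described above can also be sketched outside the web form; this is an illustrative reimplementation (not the site's own code) using SciPy's `linprog`. The column player picks probabilities `y` that minimize an upper bound `v` on every row payoff:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Column-player mixed strategy holding the row player's payoff to a
    minimum, plus the value of the game to the row player.

    Variables are [y_1, ..., y_n, v]: minimize v subject to
    (A y)_i <= v for every row i, sum(y) = 1, y >= 0, v free.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                # objective: minimize v
    A_ub = np.hstack([A, -np.ones((m, 1))])    # A @ y - v <= 0, row by row
    b_ub = np.zeros(m)
    A_eq = np.zeros((1, n + 1))
    A_eq[0, :n] = 1.0                          # probabilities sum to one
    b_eq = np.ones(1)
    bounds = [(0, None)] * n + [(None, None)]  # y >= 0, v unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[-1]

# Matching pennies: row wins +1 on a match, loses 1 on a mismatch.
strategy, value = solve_zero_sum([[1, -1], [-1, 1]])
print(strategy, value)  # both probabilities ~0.5, value ~0
```

By the minmax theorem, running the same program on the negated transpose of the matrix yields the row player's maxmin strategy and the same value.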
If you have questions about the program or about zero-sum games, you should check out discussion on the forum. | {"url":"http://dklevine.com/Games/zerosum.htm","timestamp":"2024-11-14T04:25:42Z","content_type":"text/html","content_length":"6066","record_id":"<urn:uuid:1f8abb0c-ca6f-401f-b955-615e6ceff740>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00151.warc.gz"} |
Monad - (Greek Philosophy) - Vocab, Definition, Explanations | Fiveable
A monad is a fundamental unit or entity that serves as an indivisible and self-sufficient building block in various philosophical frameworks, particularly in Pythagorean philosophy where it
represents unity and the origin of all things. In this context, the monad is seen as the source from which all numbers and mathematical principles emerge, emphasizing the significance of numbers in
understanding the cosmos and existence.
5 Must Know Facts For Your Next Test
1. In Pythagorean philosophy, the monad is viewed as the first number, symbolizing unity and the beginning of all numerical concepts.
2. The concept of the monad highlights the idea that all things in existence can be traced back to a singular source or principle, reinforcing a belief in a structured universe.
3. For Pythagoreans, numbers were not just quantities but held profound spiritual significance, with the monad representing both a mathematical entity and a philosophical idea.
4. The monad's role is foundational; it represents not only individual entities but also serves as a precursor to more complex numerical relationships like pairs and sets.
5. Pythagoreans believed that understanding the monad and its implications could lead to greater insights into the nature of reality and existence itself.
Review Questions
• How does the concept of monad relate to other numbers in Pythagorean philosophy?
□ The concept of the monad relates to other numbers by serving as their foundational unit; it represents singularity and the beginning of numerical relationships. Following the monad, the dyad
introduces duality, demonstrating how everything emerges from unity into complexity. Thus, understanding the monad helps frame how numbers evolve into more intricate structures within
Pythagorean thought.
• Discuss the implications of viewing the monad as both a mathematical entity and a philosophical principle in Pythagorean thought.
□ Viewing the monad as both a mathematical entity and a philosophical principle underscores its dual significance in Pythagorean thought. Mathematically, it serves as the origin of all numbers,
while philosophically, it symbolizes unity and wholeness in a diverse universe. This perspective highlights how numbers are intertwined with deeper existential questions about reality,
knowledge, and existence itself.
• Evaluate how the principles surrounding the monad influence our understanding of reality according to Pythagorean philosophy.
□ The principles surrounding the monad influence our understanding of reality by suggesting that all existence arises from a singular source, emphasizing unity amidst diversity. This idea
encourages individuals to explore how complex phenomena can be rooted in simple beginnings. By analyzing how numbers reflect cosmic order through relationships derived from the monad, we gain
insights into the interconnectedness of all things and the underlying mathematical principles governing reality.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/greek-philosophy/monad","timestamp":"2024-11-05T09:05:22Z","content_type":"text/html","content_length":"170367","record_id":"<urn:uuid:805e3d8a-79a6-4bfc-a997-a9829331f77b>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00584.warc.gz"} |
Online University Classes
I’ve now taken several online classes, with mostly positive experiences.
On Coursera, I took very useful classes in control of multiple robots and in image processing. I started a course in mathematical thinking, but found the prof quite annoying and dropped it (I had
already studied this material, but took it because the prof is famous, to see what it would be like).
I’ve used Khan Academy several times, for freshening up some subjects I needed. In particular, the Linear Algebra I took in 1960 has changed a bit of its terminology and emphasis, and I needed
updating for the robot control class.
There is one big practical difference between Coursera and Khan Academy. Coursera tries to force classes to be held on a particular schedule, while Khan Academy lets one work on any topic at any
time, whenever you need it.
There are advantages to keeping the classes together–all the students are working on the same concepts and problems at the same time, so the Forum discussions when sorted by time also tend to be
sorted by relevant topic. That means it’s easy to ask a question and find others who have just figured out the problem, who can help you understand where you’re going wrong.
But for people who can use Search to find what they need in the historical discussion archives, this synchronization isn’t that important.
Synchronization also helps with keeping exams and homework fresh, so that you work on them without knowing the answers until they are suddenly revealed and graded.
But when you just want to learn stuff, when you happen to need it, the Khan Academy approach is much more useful. If I had had to wait for the next class on Linear Algebra to start, I could not have
gotten the info I needed in time to be useful for my robotics class.
Another great resource is WikiPedia, which amazes me how often it has an article that explains exactly what I need to know. What a gigantic, enormous, immense improvement this is over the printed
encyclopedias of yesteryear!
Attitudes toward teaching vary
Some (few) people teach with the idea of making the subject as easy as possible to understand. This is very difficult, as it requires great effort to eliminate ambiguities and errors from the course
text and problem sets. Such things seem trivial to the teacher or to anyone who already knows the material, but present enormous barriers to the student, who has to go to great effort to finally
discover just what was in error. Humans normally work in environments with very high error rates (for example, people very often say literally the opposite of what they mean), and it works only
because of the large shared understanding that forms the background against which everything is evaluated. Much of that background does not yet exist for the student.
Most teachers take only normal efforts to remove errors and ambiguities, and when such an error is pointed out their reaction is often to point out that the student has thus been forced to learn much
more. The real world is full of errors and ambiguities, so the student has to learn to deal with them. An important goal for some teachers is to sort out the students, so that only the best go on to
the best schools or the best advanced classes.
I think it’s true that a student who successfully understands material that is laced with errors does indeed have a deeper understanding at the end of the process. However, it is an enormous amount
of work, and takes an enormous amount of time. The result is that many who would have been capable of learning the material simply can’t complete the job. They may finish the course, and even with a
decent grade perhaps, but they haven’t had time to get everything sorted out, so they progress to the next class with a shaky foundation.
Here’s where I think the Khan Academy really has it right. Their passing grade is 100%. You don’t move on until you really understand what you’re currently learning. That greatly speeds progress over
the long haul, because you have a reliable foundation on which to build.
What soured me on the mathematical thinking class was the prof’s tolerance of ambiguity. When ambiguities were pointed out, he had no interest in fixing them but blamed the students who chose the
wrong interpretation. Seems a bizarre attitude for a mathematician!
Anyway, I’ve only had a few experiences with courses where the prof really tried hard enough to eliminate mistakes and confusions, and I found these courses far more satisfying and stimulating than
the usual ones. There’s far more total learning, in my opinion, and far more students get equipped to go on to more advanced topics. I think it’s win-win. But it’s really, really hard work for the
prof. Most egos won’t admit to enough possibility of errors to make this approach possible for them.
Khan tells of a class where one student stalled on a basic concept for a week or so. In a normal class, she would have been moved to a slower track, for less capable students, which would have
lowered her whole future achievement level. But suddenly she got it, and then zoomed ahead, eventually finishing second in the class! It’s quite unlikely for good things like this to happen in
traditional classes, where the most significant thing she would have learned is that she’s not good at the subject, something that isn’t even true. | {"url":"http://dave.gustavson.info/2013/05/27/online-university-classes/","timestamp":"2024-11-06T10:19:14Z","content_type":"application/xhtml+xml","content_length":"24773","record_id":"<urn:uuid:35ced364-5db6-4901-bac0-6fa44bcb19c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00469.warc.gz"} |
Layer Space Transforms
Use layer space transform methods to transform values from one space to another, such as from layer space to world space. The from methods transform values from the named space (composition or world)
to the layer space.
The to methods transform values from the layer space to the named space (composition or world). Each transform method takes an optional argument to determine the time at which the transform is
computed; however, you can almost always use the current (default) time.
Use Vec transform methods when transforming a direction vector, such as the difference between two position values.
Use the plain (non-Vec) transform methods when transforming a point, such as position.
Composition (comp) and world space are the same for 2D layers. For 3D layers, however, composition space is relative to the active camera, and world space is independent of the camera.
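The point-versus-vector distinction above can be illustrated outside After Effects with a small Python analogy (this is not AE's API; the function names are invented here). A layer-to-comp transform for a 2D layer is a rotation plus an offset: points receive both, while direction vectors receive only the rotation:

```python
import math

def make_layer_to_comp(angle_deg, offset):
    """Build (to_comp, to_comp_vec) for a 2D layer rotated by angle_deg and
    positioned at `offset` in the comp. Purely illustrative."""
    c = math.cos(math.radians(angle_deg))
    s = math.sin(math.radians(angle_deg))
    def to_comp(p):      # a point: rotate, then translate by the layer offset
        return (c * p[0] - s * p[1] + offset[0],
                s * p[0] + c * p[1] + offset[1])
    def to_comp_vec(v):  # a direction vector: rotate only, no translation
        return (c * v[0] - s * v[1], s * v[0] + c * v[1])
    return to_comp, to_comp_vec

to_comp, to_comp_vec = make_layer_to_comp(90, (10, 0))
print(tuple(round(x, 6) for x in to_comp((1, 0))))      # (10.0, 1.0)
print(tuple(round(x, 6) for x in to_comp_vec((1, 0))))  # (0.0, 1.0)
```

This is why the difference of two positions should go through the Vec methods: it has no anchor in space, only a direction and a length.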
toComp(point, t=time)
Transforms a point from layer space to composition space.
Parameters: point: Array (2- or 3-dimensional); t: Number
Returns: Array (2- or 3-dimensional)
fromComp(point, t=time)
Transforms a point from composition space to layer space. The resulting point in a 3D layer may have a nonzero value even though it is in layer space.
Parameters: point: Array (2- or 3-dimensional); t: Number
Returns: Array (2- or 3-dimensional)
toWorld(point, t=time)
Transforms a point from layer space to view-independent world space.
Example:
toWorld(effect("Bulge")("Bulge Center"))
Dan Ebberts provides an expression on his MotionScript website that uses the toWorld method to auto-orient a layer along only one axis. This is useful, for example, for having characters turn
from side to side to follow the camera while remaining upright.
Rich Young provides a set of expressions on his AE Portal website that use the toWorld method to link a camera and light to a layer with the CC Sphere effect.
Parameters: point: Array (2- or 3-dimensional); t: Number
Returns: Array (2- or 3-dimensional)
fromWorld(point, t=time)
Transforms a point from world space to layer space.
See Expression example: Create a bulge between two layers for an example of how this method can be used.
Parameters: point: Array (2- or 3-dimensional); t: Number
Returns: Array (2- or 3-dimensional)
toCompVec(vec, t=time)
Transforms a vector from layer space to composition space.
Parameters: vec: Array (2- or 3-dimensional); t: Number
Returns: Array (2- or 3-dimensional)
fromCompVec(vec, t=time)
Transforms a vector from composition space to layer space.
Example (2D layer):
dir = sub(position, thisComp.layer(2).position);
fromCompVec(dir)
Parameters: vec: Array (2- or 3-dimensional); t: Number
Returns: Array (2- or 3-dimensional)
toWorldVec(vec, t=time)
Transforms a vector from layer space to world space.
Example:
p1 = effect("Eye Bulge 1")("Bulge Center");
p2 = effect("Eye Bulge 2")("Bulge Center");
toWorldVec(sub(p1, p2))
Parameters: vec: Array (2- or 3-dimensional); t: Number
Returns: Array (2- or 3-dimensional)
fromWorldVec(vec, t=time)
Transforms a vector from world space to layer space.
Parameters: vec: Array (2- or 3-dimensional); t: Number
Returns: Array (2- or 3-dimensional)
fromCompToSurface(point, t=time)
Projects a point located in composition space to a point on the surface of the layer (zero z-value) at the location where it appears when viewed from the active camera. This method is useful for
setting effect control points. Use with 3D layers only.
Parameters: point: Array (2- or 3-dimensional); t: Number
Returns: Array (2-dimensional)
Geometry of the Canada 150 logo maple leaf
When we first started trying to use Maple to create a maple leaf like the one in the Canada 150 logo, we couldn’t find any references online to the exact geometry, so we went back to basics. With our
trusty ruler and protractor, we mapped out the geometry of the maple leaf logo by hand.
Our first observation was that the maple leaf could be viewed as being comprised of 9 kites. You can read more about the meaning of these shapes on the Canada 150 site (where they refer to the shapes
as diamonds).
We also observed that the individual kites had slightly different scales from one another. The largest kites were numbers 3, 5 and 7; we represented their length as 1 unit of length. Also, each of
the kites seemed centred at the origin, but was rotated about the y-axis at a certain angle.
As such, we found the kites to have the following scales and rotations from the vertical axis:
1, 9: 0.81 at +/- π/2
2, 8: 0.77 at +/- 2π/5
3, 5, 7: 1 at -π/4, 0, and +π/4
4, 6: 0.93 at +/- π/8
This can be visualized as follows:
To draw this in Maple we put together a simple procedure to draw each of the kites:
# Make a kite shape centred at the origin.
opts := thickness=4, color="#DC2828":
MakeKite := proc({scale := 1, rotation := 0})
local t, p, pts, x;
t := 0.267*scale;
pts := [[0, 0], [t, t], [0, scale], [-t, t], [0, 0]]:
p := plot(pts, opts);
if rotation<>0.0 then
p := plottools:-rotate(p, rotation);
end if;
return p;
end proc:
The main idea of this procedure is that we draw a kite using a standard list of points, which are scaled and rotated. Then to generate the sequence of plots:
shapes := MakeKite(rotation=-Pi/4),
MakeKite(),              # central kite 5: scale 1, no rotation
MakeKite(rotation=Pi/4), # kite 7 mirrors kite 3
MakeKite(scale=0.77, rotation=-2*Pi/5),
MakeKite(scale=0.81, rotation=-Pi/2),
MakeKite(scale=0.93, rotation=-Pi/8),
MakeKite(scale=0.93, rotation=Pi/8),
MakeKite(scale=0.81, rotation=Pi/2),
MakeKite(scale=0.77, rotation=2*Pi/5),
plot([[0,-0.5], [0,0]], opts): #Add in a section for the maple leaf stem
plots:-display(shapes, scaling=constrained, view=[-1..1, -0.75..1.25], axes=box, size=[800,800]);
This looked pretty similar to the original logo, however the kites 2, 4, 6, and 8 all needed to be moved behind the other kites. This proved somewhat tricky, so we just simply turned on the point
probe in Maple and drew in the connected lines to form these points.
shapes := MakeKite(rotation=-Pi/4),
MakeKite(scale=0.81, rotation=-Pi/2),
MakeKite(scale=0.81, rotation=Pi/2),
plot([[0,-0.5], [0,0]], opts):
plots:-display(shapes, scaling=constrained, view=[-1..1, -0.75..1.25], axes=box, size=[800,800]);
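For readers without Maple, the vertex math of MakeKite is easy to port; the following Python sketch (a hypothetical translation, with plotting omitted) reproduces the kite outlines from the scales and rotations described in the post:

```python
import math

def kite_points(scale=1.0, rotation=0.0):
    """Closed polyline through (0,0), (t,t), (0,scale), (-t,t) with
    t = 0.267*scale, rotated counter-clockwise by `rotation` radians
    about the origin -- the same construction as the Maple MakeKite."""
    t = 0.267 * scale
    base = [(0.0, 0.0), (t, t), (0.0, scale), (-t, t), (0.0, 0.0)]
    c, s = math.cos(rotation), math.sin(rotation)
    return [(c * x - s * y, s * x + c * y) for x, y in base]

# The nine kites as (scale, rotation) pairs across the leaf.
leaf = [(0.81, -math.pi / 2), (0.77, -2 * math.pi / 5), (1.0, -math.pi / 4),
        (0.93, -math.pi / 8), (1.0, 0.0), (0.93, math.pi / 8),
        (1.0, math.pi / 4), (0.77, 2 * math.pi / 5), (0.81, math.pi / 2)]
polylines = [kite_points(sc, rot) for sc, rot in leaf]

apex = kite_points(0.81, math.pi / 2)[2]  # tip of a +90-degree kite
print(tuple(round(v, 3) for v in apex))   # (-0.81, 0.0)
```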
Happy Canada Day!
Tags are words are used to describe and categorize your content. Combine multiple words with dashes(-), and seperate tags with spaces. | {"url":"https://beta.mapleprimes.com/posts/208361-Geometry-Of-The-Canada-150-Logo-Maple-Leaf","timestamp":"2024-11-14T03:41:30Z","content_type":"text/html","content_length":"122159","record_id":"<urn:uuid:4c2f8ba5-068b-4975-8239-d1cbca5a9da6>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00103.warc.gz"} |
Average Bow Draw Length
The right draw length allows for a steady aim and consequently improves accuracy, and finding it is vital for achieving proper shooting form. Draw length is the distance you have to pull a
bow’s string back (as measured from the riser) to reach your anchor point (where your string hand meets your face) at full draw. The standard measurement of draw length is described by the
ATA (Archery Trade Association) as: ‘’draw length is the distance at the archer’s full draw, from the nocking point on the string to the pivot point of the bow grip plus 1 3/4 inches.’’
There are two common ways to estimate your draw length:
Wingspan method: stand upright with your back against a wall, stretch your arms out to the sides, measure your wingspan and divide by 2.5. For example, if you have a wingspan of 70 inches,
your draw length would be 28 inches. For almost all shooters, this method is very accurate and the simplest route.
Height method: your height in inches minus 15, then divided by 2, will be your draw length, or at least a very good starting point.
This measurement is simple but crucial: it determines the size of your bow and the size of your arrows, and allows you to shoot your very best. Having a bow with a draw length that suits you
perfectly can truly make or break your archery experience. Note that bows are IBO speed rated at a 30-inch draw length, and suggested recurve bow sizes are based on AMO (Archery Manufacturers
Organization) height figures.
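The two rules of thumb above reduce to one-line calculations. A sketch (function names are ours):

```python
def draw_length_from_wingspan(wingspan_in):
    """Wingspan method: wingspan in inches divided by 2.5."""
    return wingspan_in / 2.5

def draw_length_from_height(height_in):
    """Height method: (height in inches - 15) / 2, a good starting point."""
    return (height_in - 15) / 2

# A 70-inch wingspan gives the 28-inch draw length quoted above.
assert draw_length_from_wingspan(70) == 28.0
# A 70-inch-tall (5'10") archer gets 27.5 inches as a starting point.
assert draw_length_from_height(70) == 27.5
```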
Causal inference | methods@manchester | The University of Manchester
Causal inference
Richard Emsley, Centre for Biostatistics.
Causal inference is concerned with quantifying the relationship between a particular exposure (the ‘cause’) and an outcome (the ‘effect’). Implicitly or explicitly, causal inference is the
primary aim of most empirical investigations, especially in medicine and behavioural science.
It can be summarised as explicitly defining the estimand of interest, formalising the assumptions required for traditional statistical models to estimate causal parameters, and developing new
statistical models and theory to estimate causal parameters. There has been a huge growth in publications relating to causal inference over the past three decades.
In this talk we will:
• Give a brief recent history of causal inference
• Introduce the main concepts underpinning the dominant causal inference approach, known as potential outcomes or counterfactuals
• Give a general overview of the assumptions required for statistical models including structural equation modelling to produce causal effects
• Discuss a new set of causal estimands relating to mediation analysis
• Discuss a new class of models specifically for causal inference
• Highlight expertise and opportunities available at Manchester, including the first UK causal inference meeting
The talk is not overly technical, and focuses on the concepts rather than statistical theory so as to be accessible to everyone.
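As a concrete illustration of the potential-outcomes idea (our own sketch, not part of the talk): each unit has two potential outcomes, only one of which is ever observed, and randomised treatment assignment lets the difference in observed group means estimate the average causal effect.

```python
import random

random.seed(0)
n = 10_000
# Potential outcomes: Y0 without treatment, Y1 with treatment.
# The true average causal effect is 2.0 by construction.
y0 = [random.gauss(0, 1) for _ in range(n)]
y1 = [y + 2.0 for y in y0]

# Randomised assignment reveals exactly one potential outcome per unit.
treated = [random.random() < 0.5 for _ in range(n)]
observed = [(y1[i] if treated[i] else y0[i]) for i in range(n)]

mean_t = sum(o for o, t in zip(observed, treated) if t) / sum(treated)
mean_c = sum(o for o, t in zip(observed, treated) if not t) / (n - sum(treated))
ate_hat = mean_t - mean_c
assert abs(ate_hat - 2.0) < 0.1  # close to the true effect
```

Without randomisation (e.g. if sicker units selected into treatment), the same difference in means would be confounded, which is what the assumptions discussed in the talk are designed to address.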
PDF slides
Download PDF slides of the presentation 'What is causal inference?' | {"url":"https://www.methods.manchester.ac.uk/themes/survey-and-statistical-methods/causal-inference/","timestamp":"2024-11-05T22:12:21Z","content_type":"text/html","content_length":"34698","record_id":"<urn:uuid:f86fd13d-0183-411b-94cf-bc6e44b819ff>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00673.warc.gz"} |
ABCD is a Rhombus. DPR and CBR are straight lines. Prove that: DP.CR=DC.PR
ABCD is a rhombus. DPR and CBR are straight lines.
Prove that:
Given: ABCD is a rhombus; DPR and CBR are straight lines, with P the point where line DR meets the diagonal AC.
Since AD ∥ BC (opposite sides of a rhombus) and CBR is a straight line,
∴ AD ∥ CR
We need to prove: DP.CR = DC.PR
In ∆ ADP and ∆ CRP
We have:
∠ APD = ∠ CPR (vertically opposite angles)
∠ ADP = ∠ PRC (alternate angles, since AD ∥ CR)
∠ DAP = ∠ PCR (alternate angles, since AD ∥ CR)
∴ ∆ ADP and ∆ CRP are similar triangles.
∴ we can write AD/CR = DP/PR
Cross-multiplying: AD.PR = DP.CR
Since DC = AD (all sides of a rhombus are equal),
∴ DC.PR = DP.CR Proved
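The result can also be checked numerically. The sketch below (our construction, not part of the original answer) builds a unit rhombus, takes R on line CB produced and P where DR meets diagonal AC, and verifies DP·CR = DC·PR:

```python
import math

# Unit rhombus ABCD with vertices in order, side length 1.
A = (0.0, 0.0)
B = (1.0, 0.0)
D = (math.cos(math.pi / 3), math.sin(math.pi / 3))
C = (B[0] + D[0], B[1] + D[1])

# R lies on line CB produced beyond B, so CBR is a straight line.
t = 1.0
R = (B[0] + t * (B[0] - C[0]), B[1] + t * (B[1] - C[1]))

def intersect(p1, p2, q1, q2):
    """Intersection of line p1p2 with line q1q2 (assumed non-parallel)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, q1, q2
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    s = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + s * (x2 - x1), y1 + s * (y2 - y1))

P = intersect(D, R, A, C)  # P = line DR meets diagonal AC

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

DP, CR, DC, PR = dist(D, P), dist(C, R), dist(D, C), dist(P, R)
assert abs(DP * CR - DC * PR) < 1e-9
```

Changing t (the position of R) or the rhombus angle leaves the identity intact, as the similar-triangles argument predicts.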
2 responses to “ABCD is a Rhombus. DPR and CBR are straight lines. Prove that: DP.CR=DC.PR”
How is AD/DP?
For two similar triangles [ADP and CRP], the corresponding angles are equal, and the ratio of a pair of corresponding sides in one triangle equals the ratio of the matching pair in the
other. Please read about similar triangles; you will find this property there. Hope I am able to clarify your query.
American Mathematical Society
Extensions of isomorphisms between affine algebraic subvarieties of $k^n$ to automorphisms of $k^n$
Proc. Amer. Math. Soc. 113 (1991), 325-334
DOI: https://doi.org/10.1090/S0002-9939-1991-1076575-3
We derive a criterion for when an isomorphism between two closed affine algebraic subvarieties in an affine space can be extended to an automorphism of the space.
References
• Shreeram S. Abhyankar and Tzuong Tsieng Moh, Embeddings of the line in the plane, J. Reine Angew. Math. 276 (1975), 148–166. MR 379502
• Hyman Bass, Edwin H. Connell, and David Wright, The Jacobian conjecture: reduction of degree and formal expansion of the inverse, Bull. Amer. Math. Soc. (N.S.) 7 (1982), no. 2, 287–330. MR 663785, DOI 10.1090/S0273-0979-1982-15032-7
• Phillip Griffiths and Joseph Harris, Principles of algebraic geometry, Pure and Applied Mathematics, Wiley-Interscience [John Wiley & Sons], New York, 1978. MR 507725
• Robert C. Gunning and Hugo Rossi, Analytic functions of several complex variables, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1965. MR 0180696
• Zbigniew Jelonek, The extension of regular and rational embeddings, Math. Ann. 277 (1987), no. 1, 113–120. MR 884649, DOI 10.1007/BF01457281
• Sh. Kaliman, On extensions of isomorphisms of affine subvarieties of ${C^n}$ to automorphisms of ${C^n}$, Trans. of the 13-th All-Union School on the Theory of Operators on Functional Spaces, Kuibyshev, 1988, p. 84. (Russian)
• V. Y. Lin and M. G. Zaidenberg, An irreducible simply connected curve in ${C^2}$ is equivalent to quasihomogeneous curves, Soviet Math. Dokl. 28 (1983), 200-204.
• I. R. Shafarevich, Basic algebraic geometry, Die Grundlehren der mathematischen Wissenschaften, Band 213, Springer-Verlag, New York-Heidelberg, 1974. Translated from the Russian by K. A. Hirsch.
MR 0366917, DOI 10.1007/978-3-642-96200-4
• Masakazu Suzuki, Propriétés topologiques des polynômes de deux variables complexes, et automorphismes algébriques de l’espace $\textbf{C}^{2}$, J. Math. Soc. Japan 26 (1974), 241–257 (French). MR 338423, DOI 10.2969/jmsj/02620241
• Oscar Zariski and Pierre Samuel, Commutative algebra, Volume I, The University Series in Higher Mathematics, D. Van Nostrand Co., Inc., Princeton, New Jersey, 1958. With the cooperation of I. S.
Cohen. MR 0090581
Bibliographic Information
• © Copyright 1991 American Mathematical Society
• Journal: Proc. Amer. Math. Soc. 113 (1991), 325-334
• MSC: Primary 14E09
• DOI: https://doi.org/10.1090/S0002-9939-1991-1076575-3
• MathSciNet review: 1076575
Lesson: The constant of proportionality | KS3 Maths | Oak National Academy
Lesson details
Key learning points
1. In this lesson, we will learn about the term 'constant of proportionality' and how it relates to how proportion works.
This content is made available by Oak National Academy Limited and its partners and licensed under Oak’s terms & conditions (Collection 1), except where otherwise stated.
5 Questions
Which graph does not show direct proportion?
Which graph shows direct proportion?
Which table does not show direct proportion?
Which table does not show direct proportion?
x and y are directly proportional as shown in the graph. What is the value of y when x = 10?
8 Questions
Complete the sentence: When two quantities are in ........................ , when one variable doubles, the other doubles too.
constant of proportionality
Complete the sentence: When two quantities are in direct proportion, when one variable is zero, the other is .............
x and y are in direct proportion. When x is 4, y is 11. What is the value of y when x = 12?
T-shirts cost £12 each online. The postage charge is £3.50 per order. How much will it cost to order 2 T-shirts?
Think about the previous question: Are the number of T-shirts and the cost in direct proportion?
Which table shows direct proportion?
What is the constant of proportionality in the table below?
What is the constant of proportionality between m and n?
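For direct proportion, the constant of proportionality k satisfies y = kx, so k can be read off from any matching pair of values. A worked sketch (ours, not part of the quiz) for the question where x = 4 gives y = 11:

```python
def constant_of_proportionality(x, y):
    """For directly proportional quantities, y = k * x, so k = y / x."""
    return y / x

k = constant_of_proportionality(4, 11)
assert k == 2.75
# Once k is known, any other value follows from y = k * x.
assert k * 12 == 33.0
```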
Layman’s Introduction to Backpropagation
Training a neural network is no easy feat, but it can be simple to understand.
By Rishi Sidhu, Jun 13
Backpropagation is the process of tuning a neural network’s weights to improve the prediction accuracy.
There are two directions in which information flows in a neural network.
Forward propagation—also called inference—is when data goes into the neural network and out pops a prediction.
Backpropagation—the process of adjusting the weights by looking at the difference between prediction and the actual result.
[Photo by Pankaj Patel on Unsplash]
Backpropagation is done before a neural network is ready to be deployed in the field.
One uses the training data, which already has known results, to perform backpropagation.
Once we are confident that the network is sufficiently trained we start the inference process.
These days backpropagation is a matter of using a single command from a myriad of tools.
Since these tools readily train a neural net, most people tend to skip understanding the intuition behind backpropagation.
Understandably so, when the mathematics looks like this.
[Equation image from Andrew Ng’s Coursera course: https://www.coursera.org/learn/machine-learning]
But it makes a lot of sense to get an intuition behind the process that is at the core of so much machine intelligence.
The role weights play in a neural network
Before trying to understand backpropagation, let’s see how weights actually impact the output, i.e. the prediction.
The signal input at the first layer propagates ahead into the network through weights that control the connection strength between neurons of adjacent layers.
[Image source: http://tuxar.uk/brief-introduction-artificial-neural-networks/]
Training a network means fine-tuning its weights to increase the prediction accuracy.
Tuning the weights
At the outset the weights of a neural network are random, and hence the predictions are all wrong.
So how do we change the weights such that, when shown a cat, the neural network predicts it as a cat with high confidence?
One Weight At a Time: One very rudimentary way to train a network could be to change one weight at a time, keeping the others fixed.
Weight Combinations: Another approach could be to try every weight value within a range (let’s say from 1 to 1000). One could start with all 1’s, then all 1’s and one 2, and so on.
The combinations would look like this: (1,1,1), (1,1,2), (1,2,1), (1,2,2), (2,2,2), (2,2,3), ...
Why are both of these approaches bad? Because they scale terribly. If we are to try all possible combinations of N weights, each ranging from 1 to 1000 only, it would take a humongous amount
of time to sift through the solution space. For a processor running at 1 GHz, a 2-neuron network would take 10⁶/10⁹ s = 1 millisecond. For a 4-neuron network the corresponding processing time
would be about 16 minutes, and it keeps increasing exponentially for bigger networks.
The Curse of Dimensionality
For a 5-neuron network that would be 11.5 days. That’s the curse of dimensionality. A real neural network will have thousands of weights and would take centuries to train this way.
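The running times quoted above (assuming 1000 candidate values per weight and 10⁹ evaluations per second) can be reproduced directly:

```python
def brute_force_seconds(n_weights, values_per_weight=1000, evals_per_sec=1e9):
    """Seconds needed to try every combination of weight values."""
    return values_per_weight ** n_weights / evals_per_sec

assert brute_force_seconds(2) == 1e-3                    # 1 millisecond
assert abs(brute_force_seconds(4) / 60 - 16.7) < 0.1     # ~16 minutes
assert abs(brute_force_seconds(5) / 86400 - 11.5) < 0.1  # ~11.5 days
```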
This is where backpropagation comes to the rescue
The mathematics is easy to understand once you have an intuitive idea of what goes on in backpropagation.
Let’s say a company has 2 salespeople, 3 managers and 1 CEO.
Backpropagation is the CEO giving feedback to middle management, who in turn do the same to the salespeople.
Employees in each layer report to all the managers in the succeeding layer with varying ‘strengths’, indicated by the width of the arrow.
So a few employees report more regularly to one manager than others.
Also the CEO considers the inputs of Design Manager (bottom neuron in the 2nd layer) more seriously than the inputs of the Sales Manager (bike rider).
Everytime a sale is made the CEO computes the difference between the predicted result and the expected result.
The CEO readjusts slightly how much ‘weight’ does he want to place on which manager’s words.
The manager reporting structure in turn changes in accordance to the CEO’s guidelines.
Each manager tries to readjusts the weights in a way that keep him/her better informed.
[Top Down: Backpropagation happening after the CEO receives wrong inputs from Middle Management]
The CEO readjusts his trust in the middle management.
This propagation of feedback back into the previous layers is the reason it is called backpropagation.
CEO keeps changing his levels of trust until the Customer Feedback starts matching the expected results.
Slight bit of Maths
A pinch of mathematics: the success of a neural network is measured by the cost function. The lower the cost, the better the network has been trained. The goal is to modify the weights in
such a way as to minimize the cost function.
So, two important terms:
Error = difference between expectation and reality
Gradient of the cost function = the change in cost upon changing the weights
The aim is to compute how much each weight in the preceding layers needs to be changed to bring expectation closer to reality, i.e. to bring the error close to 0.
Backpropagation contains the word ‘back’ in it. What that means is that we go from the output back towards the input. We look at how the error propagates backward.
Look how δˡ depends on δˡ⁺¹ (equation from neuralnetworksanddeeplearning.com):
δˡ = ((wˡ⁺¹)ᵀ δˡ⁺¹) ⊙ σ′(zˡ)
If you look closely there is a z in the second term of the equation. We are measuring there the speed with which the activation changes with z. That tells us how fast the error would change,
because δˡ also depends on z, the weighted input of a neuron.
At the output layer the error is (equation from the same source):
δⱼᴸ = (∂C/∂aⱼᴸ) σ′(zⱼᴸ)
The first term on the right measures how fast the cost is changing as a function of the jth output activation. The second term measures how fast the activation function σ is changing at the
output of the jth neuron.
Results flow forward, error flows backward. Since we backpropagate the error, we need a numerical representation of how error flows between two adjacent layers.
The rate of change of the cost with respect to the weights is given by:
∂C/∂wˡⱼₖ = aˡ⁻¹ₖ δˡⱼ
With these basics in place, the backpropagation algorithm can be summarized (see neuralnetworksanddeeplearning.com): compute the output error, backpropagate it layer by layer, and read off
the weight gradients.
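To make the summary concrete, here is a minimal pure-Python sketch (ours, not from the article; the weights, inputs and target are made up) of one forward and backward pass through a tiny 2-2-1 sigmoid network, with the analytic gradient checked against a finite difference:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny 2-2-1 network: 2 inputs -> 2 hidden neurons -> 1 output.
w1 = [[0.15, 0.20], [0.25, 0.30]]   # input -> hidden weights
w2 = [0.40, 0.45]                   # hidden -> output weights
x, target = [0.05, 0.10], 0.01

def forward(w1, w2):
    z1 = [sum(w1[j][k] * x[k] for k in range(2)) for j in range(2)]
    a1 = [sigmoid(z) for z in z1]
    z2 = sum(w2[j] * a1[j] for j in range(2))
    return z1, a1, z2, sigmoid(z2)

def cost(w1, w2):
    a2 = forward(w1, w2)[3]
    return 0.5 * (a2 - target) ** 2

# Backward pass: the 'delta' of a neuron is dC/dz for that neuron.
z1, a1, z2, a2 = forward(w1, w2)
delta_out = (a2 - target) * a2 * (1 - a2)                 # output error
delta_hidden = [w2[j] * delta_out * a1[j] * (1 - a1[j])   # error flows back
                for j in range(2)]
grad_w2 = [delta_out * a1[j] for j in range(2)]
grad_w1 = [[delta_hidden[j] * x[k] for k in range(2)] for j in range(2)]

# Sanity check: the analytic gradient matches a finite difference.
eps = 1e-6
w1_pert = [row[:] for row in w1]
w1_pert[0][0] += eps
numeric = (cost(w1_pert, w2) - cost(w1, w2)) / eps
assert abs(numeric - grad_w1[0][0]) < 1e-6
```

In a training loop one would subtract a small multiple of these gradients from the weights and repeat.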
Practical considerations from a machine learning perspective
Even backpropagation takes time. It’s better to divide the input data into batches (Stochastic Gradient Descent).
The more the training data, the better the weight tuning. Generally, neural nets require thousands of pre-labelled training examples.
Backpropagation, though popular now, faced its share of opposition when it was introduced.
A lot of prominent scientists and cognitive psychologists (including the famous British cognitive psychologist Geoff Hinton) have been known to distrust backpropagation.
Backpropagation faces opposition for a number of reasons, including that it is not representative of how the brain learns.
[Quora answer]
Nevertheless, it is still a very widely used and very efficient method for training neural networks.
Specialized hardware is being developed these days to perform backpropagation even more efficiently.
National Curriculum Primary Keystage 2 Year 3 Mathematics
Important note: National Curriculum content shared on this website is under the terms of the Open Government Licence. To view this licence, visit http://www.nationalarchives.gov.uk/doc/
open-government-licence/. You can download the full document at http://www.gov.uk/dfe/nationalcurriculum
Number – number and place value
Statutory requirements
Pupils should be taught to:
• count from 0 in multiples of 4, 8, 50 and 100; find 10 or 100 more or less than a given number
• recognise the place value of each digit in a three-digit number (hundreds, tens, ones)
• compare and order numbers up to 1000
• identify, represent and estimate numbers using different representations
• read and write numbers up to 1000 in numerals and in words
• solve number problems and practical problems involving these ideas.
Notes and guidance (non-statutory)
Pupils now use multiples of 2, 3, 4, 5, 8, 10, 50 and 100.
They use larger numbers to at least 1000, applying partitioning related to place value using varied and increasingly complex problems, building on work in year 2 (for example, 146 = 100 + 40 and 6,
146 = 130 + 16).
Using a variety of representations, including those related to measure, pupils continue to count in ones, tens and hundreds, so that they become fluent in the order and place value of numbers to
Number – addition and subtraction
Statutory requirements
Pupils should be taught to:
• add and subtract numbers mentally, including:
• a three-digit number and ones
• a three-digit number and tens
• a three-digit number and hundreds
• add and subtract numbers with up to three digits, using formal written methods of columnar addition and subtraction
• estimate the answer to a calculation and use inverse operations to check answers
• solve problems, including missing number problems, using number facts, place value, and more complex addition and subtraction.
Notes and guidance (non-statutory)
Pupils practise solving varied addition and subtraction questions. For mental calculations with two-digit numbers, the answers could exceed 100.
Pupils use their understanding of place value and partitioning, and practise using columnar addition and subtraction with increasingly large numbers up to three digits to become fluent (see
Mathematics Appendix 1).
Number – multiplication and division
Statutory requirements
Pupils should be taught to:
• recall and use multiplication and division facts for the 3, 4 and 8 multiplication tables
• write and calculate mathematical statements for multiplication and division using the multiplication tables that they know, including for two-digit numbers times one-digit numbers, using mental
and progressing to formal written methods
• solve problems, including missing number problems, involving multiplication and division, including positive integer scaling problems and correspondence problems in which n objects are connected
to m objects.
Notes and guidance (non-statutory)
Pupils continue to practise their mental recall of multiplication tables when they are calculating mathematical statements in order to improve fluency. Through doubling, they connect the 2, 4 and 8
multiplication tables.
Pupils develop efficient mental methods, for example, using commutativity and associativity (for example, 4 × 12 × 5 = 4 × 5 × 12 = 20 × 12 = 240) and multiplication and division facts (for example,
using 3 × 2 = 6, 6 ÷ 3 = 2 and 2 = 6 ÷ 3) to derive related facts (for example, 30 × 2 = 60, 60 ÷ 3 = 20 and 20 = 60 ÷ 3).
Pupils develop reliable written methods for multiplication and division, starting with calculations of two-digit numbers by one-digit numbers and progressing to the formal written methods of short
multiplication and division.
Pupils solve simple problems in contexts, deciding which of the four operations to use and why. These include measuring and scaling contexts, (for example, four times as high, eight times as long
etc.) and correspondence problems in which m objects are connected to n objects (for example, 3 hats and 4 coats, how many different outfits?; 12 sweets shared equally between 4 children; 4 cakes
shared equally between 8 children).
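The derived-fact chains in the guidance above can be checked mechanically; for instance (an illustrative check, not part of the statutory text):

```python
# Commutativity and associativity: rearranging factors preserves the product.
assert 4 * 12 * 5 == 4 * 5 * 12 == 20 * 12 == 240

# Scaling known facts by ten gives related facts.
assert 3 * 2 == 6 and 6 // 3 == 2 and 2 == 6 // 3
assert 30 * 2 == 60 and 60 // 3 == 20 and 20 == 60 // 3
```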
Statutory requirements
Pupils should be taught to:
• count up and down in tenths; recognise that tenths arise from dividing an object into 10 equal parts and in dividing one-digit numbers or quantities by 10
• recognise, find and write fractions of a discrete set of objects: unit fractions and non-unit fractions with small denominators
• recognise and use fractions as numbers: unit fractions and non-unit fractions with small denominators
• recognise and show, using diagrams, equivalent fractions with small denominators
• add and subtract fractions with the same denominator within one whole [for example, 5/7 + 1/7 = 6/7]
• compare and order unit fractions, and fractions with the same denominators
• solve problems that involve all of the above.
Notes and guidance (non-statutory)
Pupils connect tenths to place value, decimal measures and to division by 10.
They begin to understand unit and non-unit fractions as numbers on the number line, and deduce relations between them, such as size and equivalence. They should go beyond the [0, 1] interval,
including relating this to measure.
Pupils understand the relation between unit fractions as operators (fractions of), and division by integers.
They continue to recognise fractions in the context of parts of a whole, numbers, measurements, a shape, and unit fractions as a division of a quantity.
Pupils practise adding and subtracting fractions with the same denominator through a variety of increasingly complex problems to improve fluency.
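The fraction facts above can be verified with exact rational arithmetic (an illustrative check, not part of the statutory text):

```python
from fractions import Fraction

# Tenths arise from dividing one whole into 10 equal parts.
assert sum([Fraction(1, 10)] * 10) == 1

# Adding fractions with the same denominator within one whole.
assert Fraction(5, 7) + Fraction(1, 7) == Fraction(6, 7)

# Equivalent fractions with small denominators.
assert Fraction(2, 4) == Fraction(1, 2)
```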
Statutory requirements
Pupils should be taught to:
• measure, compare, add and subtract: lengths (m/cm/mm); mass (kg/g); volume/capacity (l/ml)
• measure the perimeter of simple 2-D shapes
• add and subtract amounts of money to give change, using both £ and p in practical contexts
• tell and write the time from an analogue clock, including using Roman numerals from I to XII, and 12-hour and 24-hour clocks
• estimate and read time with increasing accuracy to the nearest minute; record and compare time in terms of seconds, minutes and hours; use vocabulary such as o’clock, a.m./p.m., morning,
afternoon, noon and midnight
• know the number of seconds in a minute and the number of days in each month, year and leap year
• compare durations of events [for example to calculate the time taken by particular events or tasks].
Notes and guidance (non-statutory)
Pupils continue to measure using the appropriate tools and units, progressing to using a wider range of measures, including comparing and using mixed units (for example, 1 kg and 200g) and simple
equivalents of mixed units (for example, 5m = 500cm).
The comparison of measures includes simple scaling by integers (for example, a given quantity or measure is twice as long or five times as high) and this connects to multiplication.
Pupils continue to become fluent in recognising the value of coins, by adding and subtracting amounts, including mixed units, and giving change using manageable amounts. They record £ and p
separately. The decimal recording of money is introduced formally in year 4.
Pupils use both analogue and digital 12-hour clocks and record their times. In this way they become fluent in and prepared for using digital 24-hour clocks in year 4.
Geometry – properties of shapes
Statutory requirements
Pupils should be taught to:
• draw 2-D shapes and make 3-D shapes using modelling materials; recognise 3-D shapes in different orientations and describe them
• recognise angles as a property of shape or a description of a turn
• identify right angles, recognise that two right angles make a half-turn, three make three quarters of a turn and four a complete turn; identify whether angles are greater than or less than a
right angle
• identify horizontal and vertical lines and pairs of perpendicular and parallel lines.
Notes and guidance (non-statutory)
Pupils’ knowledge of the properties of shapes is extended at this stage to symmetrical and non-symmetrical polygons and polyhedra. Pupils extend their use of the properties of shapes. They should be
able to describe the properties of 2-D and 3-D shapes using accurate language, including lengths of lines and acute and obtuse for angles greater or lesser than a right angle.
Pupils connect decimals and rounding to drawing and measuring straight lines in centimetres, in a variety of contexts.
Statutory requirements
Pupils should be taught to:
• interpret and present data using bar charts, pictograms and tables
• solve one-step and two-step questions [for example, ‘How many more?’ and ‘How many fewer?’] using information presented in scaled bar charts and pictograms and tables.
Notes and guidance (non-statutory)
Pupils understand and use simple scales (for example, 2, 5, 10 units per cm) in pictograms and bar charts with increasing accuracy.
They continue to interpret data presented in many contexts.
Proposal: Attributable Consensus Solution for DV Clusters – Part 3
Albert Garreta, Ignacio Manzur, and Aikaterini Panagiota-Stouka on behalf of Nethermind Research.
Introduction and overview
This is the third installment of our series of posts on the topic of “Attributable consensus”. If you haven’t done so already, please check the First and Second Posts for all the necessary background.
In the first two posts we 1) introduced the problem; 2) described the main components of our solution; 3) explained how operators create so-called Optimistic Participation Proofs; 4) described how
the clusters claim participation points and how these proofs are used when doing so; and 5) we explained how whistleblowers can trigger an on-chain verification in case some proof (or the amount of
points claimed) is incorrect.
In this post we introduce a new type of participation proof, which we call VDF-Participation Proof. Here VDF stands for Verifiable Delay Function. These proofs can be created by an operator O
individually without the need of interacting with any other cluster operator. This creates an “escape hatch” for the operator O, in that it allows O to prove its participation (or readiness to
participate), regardless of whether other operators are offline or maliciously ignoring O. Hence, VDF PPs provide resilience against some of the challenges we set up to tackle in Post 1. However, VDF
PPs are more costly computationally than optimistic PPs, and so, as we will see, we envision a scenario where, as long as things run smoothly, operators rely solely on optimistic PPs, and rarely
submit VDF PPs. The main idea is that, whenever O detects a large number of inactive cluster operators, O starts computing VDF PPs. Then, at the end of a c-epoch, if it is not possible for O to get its
points via the submission of Optimistic Participation Proofs, then O submits its VDF PPs instead. As with Optimistic PPs, we rely on whistleblowers to make sure the content of the VDF PPs is correct.
After discussing VDF PPs, we proceed to describe how cluster rewards are assigned to the cluster operators as a function of the points each operator has earned. Then, we explain how to reward
whistleblowers that correctly raised an alarm, and how to penalize the parties at fault in that case.
VDF-based Participation Proofs
In this section we describe how and when the VDF PPs are created, and how they are used afterwards. We further discuss how whistleblowers check the validity of these proofs, and how on-chain
verification is performed in case it is needed. We also discuss how operators can gain participation points with their VDF PPs.
Preliminaries on VDFs
A Verifiable Delay Function (VDF) is a primitive allowing a party to produce a succinct proof that it performed a certain computation that took a certain amount of time. This computation must possess
some additional properties that ensure that an adversary cannot, say, produce a proof faster than it can perform the computation. Oftentimes, VDFs are constructed by sequentially chaining computations that each take a minimal, incompressible amount of time to perform: this could be hashing, or squaring in a group of unknown order. The VDF is then a way to produce a succinct proof that one performed a certain number of iterations of that computation. We refer to this paper for an overview of the topic.
The idea we present is that, by making the state of a blockchain (in our case, the L2 block headers) part of the input to these chained sequential computations, and by having the prover submit the VDF proof at a specific point in time, we can ensure that the prover had access to the chain state. This is particularly useful when the cluster is inactive and DV nodes have no way of reaching consensus.
A recent discussion with members of the EF has brought to our attention the fact that the security properties of VDFs are not that well studied and understood. In particular, the candidate VDF for Ethereum, MinRoot, has been shown to have weaker security guarantees than anticipated. We can instead define hash-based VDFs, but we should keep in mind that the security (in particular, the sequentiality) of these cryptosystems has yet to be proven formally.
That being said, a well-studied family of VDFs is the RSA-based one. These VDFs compute successive squarings in an RSA group, and we believe that the trusted setup of the RSA group could be performed by Obol, for all clusters to use.
RSA-based VDF
As we mentioned, the setup is an RSA group \mathbb{G}=\mathbb{Z}/N\mathbb{Z}, where N=pq is the product of two large primes (with additional properties; these could be generated by Obol), together with an efficiently computable hash function H:\mathcal{X}\to \mathbb{G}, where \mathcal{X} is the set of 256-bit integers. We propose to use Wesolowski’s scheme, regardless of the security loss incurred by applying Fiat-Shamir, because the proof is a single group element independently of the length of the computation. The prover has to perform more computation (compared to Pietrzak), but the information published on L2 is significantly reduced.
The difficulty parameter T is fixed. In the original Wesolowski scheme, the claim by the prover is as follows: “I know h, g\in\mathbb{G} such that h=g^{2^T}, where g=H(x) for some x\in\mathcal{X}”. The interactive scheme works as follows, and can be made non-interactive by using Fiat-Shamir:
1. The verifier checks that g,h \in \mathbb{G} and outputs reject if not,
2. The verifier sends to the prover a random prime \ell sampled from Primes(\lambda) (the set formed by the first 2^\lambda primes).
3. The prover computes q,r\in \mathbb{Z} such that 2^T=q\ell+r with 0\leq r < \ell, and sends \pi\gets g^q to the verifier.
4. The verifier computes r\gets 2^T \text{ mod } \ell and outputs accept if \pi \in \mathbb{G} and h=\pi^\ell g^r in \mathbb{G}.
Important. In the non-interactive version, obtained by applying the Fiat-Shamir heuristic, the random prime MUST BE SAMPLED FROM \mathrm{Primes}(2\lambda) (see the previously mentioned survey for an explanation).
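As an illustration, here is a minimal Python sketch of the non-interactive scheme. The toy modulus N (with known factorization, which is insecure), the hash-to-group map, and the prime-derivation routine are stand-ins chosen for readability, not the parameters a real deployment would use; in particular, the challenge prime here is just "the next prime after a hash-derived candidate" rather than a proper sample from \mathrm{Primes}(2\lambda).

```python
import hashlib

# Toy parameters: real deployments use a ~2048-bit N of unknown factorization.
N = 187          # 11 * 17, for illustration only
T = 64           # time parameter: number of sequential squarings

def H(x: bytes) -> int:
    """Hash arbitrary input into the group Z/NZ."""
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % N

def fiat_shamir_prime(g: int, h: int) -> int:
    """Derive the challenge prime l from (G, g, h, T) -- a simplified
    stand-in for sampling from Primes(2*lambda)."""
    seed = int.from_bytes(
        hashlib.sha256(f"{N}|{g}|{h}|{T}".encode()).digest(), "big")
    cand = seed % (1 << 16) | 1          # odd candidate

    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        return all(n % p for p in range(2, int(n ** 0.5) + 1))

    while not is_prime(cand):
        cand += 2
    return cand

def prove(x: bytes):
    g = H(x)
    h = g
    for _ in range(T):                   # the sequential part: T squarings
        h = h * h % N                    # h = g^(2^T) mod N at the end
    ell = fiat_shamir_prime(g, h)
    q, r = divmod(1 << T, ell)           # 2^T = q*ell + r
    pi = pow(g, q, N)
    return g, h, pi

def verify(g: int, h: int, pi: int) -> bool:
    ell = fiat_shamir_prime(g, h)
    r = pow(2, T, ell)                   # r = 2^T mod ell
    return h == pow(pi, ell, N) * pow(g, r, N) % N

g, h, pi = prove(b"some 256-bit input")
assert verify(g, h, pi)
```

The check works because \pi^\ell g^r = g^{q\ell + r} = g^{2^T} = h, while computing h without the T sequential squarings is (conjecturally) infeasible.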
Creating VDF-based PPs
In this section we discuss how an operator can create a VDF-based proof during a c-epoch.
We fix a certain number of L2 blocks M such that every M blocks, the prover computing a VDF PP needs to include information about the latest L2 block. The computation starts at a block s, and the prover needs to include the L2 information at blocks s, s+M, s+2M, \dots until it stops the computation. We reward VDFs only with respect to the time interval they cover, independently of which duties were supposed to be performed (as explained later in this post). To ensure sequentiality, we need the computation to be chained (the output of the previous VDF should contribute to the input of the next VDF computation, and so on) and submitted right after the computation ends (to prevent attacks). If the operators include L2 information too often, or if the submission period happens at large time intervals, we need to post too much information. Operators will upload the relevant VDF proof information at the end of the c-slot (see below for the details). The number M should be chosen such that even in the worst-case scenario (which happens when an operator notices it is alone just after the beginning of a c-slot), the operator does not need to include the L2 state more than k times, for some small integer k (e.g. k\sim 20).
Once the RSA group \mathbb{G} and M are chosen, we should also choose a time difficulty T which approximately corresponds to the time that passes in M blocks. Because the computation of the Wesolowski VDF with difficulty T takes time (1+1/s)T with s processors and space s (see this survey for more details), we could choose the minimum integer T such that 2T (this is the worst case, but it could be tweaked) is over the time that passes between M blocks. Of course, some operators will compute these faster, but not faster than in time T (so they can cheat and come back online after M/2 blocks have passed, but that is it), and they will need to wait for the remaining blocks until we hit the next block of the form s+kM to continue the computation.
With M and T chosen, the prover that wants to show that it was active between L2 blocks s and s+kM needs to provide the L2 block number s and 2k group elements h_1,\pi_1, \dots, h_k, \pi_k\in\mathbb{G} such that:
For all 1\leq i\leq k, h_i=(g_i)^{2^T}, where we set g_1=H(b_1), and for all 2\leq i\leq k we have g_i=h_{i-1}+H(b_i), where b_i is the L2 block header at slot s+(i-1)\cdot M.
For all i, \pi_i is a proof of the correctness of the equation, computed according to the non-interactive version of Wesolowski’s scheme. To be precise, \pi_i=g_i^{q_i}, where 2^T=q_i\ell_i+r_i (0\leq r_i<\ell_i) and \ell_i has been computed using the Fiat-Shamir heuristic as \mathcal{H}(\mathbb{G}, g_i, h_i, T) for some hash function \mathcal{H}: \{0,1\}^*\to \mathrm{Primes}(2\lambda).
When should the prover stop its computation and submit this information? The prover should continue the computation until the last block of the form s+i\cdot M before the end of the c-slot. In general, if the end of the c-slot happens between blocks s+kM and s+(k+1)M, the prover stops computing at the VDF proof that included the L2 block header at block s+kM, and submits a commitment Hash_{L2}(s, h_1,\pi_1, \dots, h_k,\pi_k) to the L2 SC in the appropriate time window. At the end of the c-epoch, the operators will submit the vector in full in a blob on L1. For typical RSA moduli (we can give ourselves N\sim 1000 bits), assuming k\leq 20, this vector is at most 40,000 bits, or 5 kB.
If a VDF computation started at block s, then from the length of the vector of proofs one can deduce the integer k, and therefore the last block s+kM where the operator stopped the computation. This must be the last block of the form s+i\cdot M before the block where the submission period started; otherwise the computation is not considered valid. This prevents some attacks where operators start computing VDFs late.
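This length-based consistency check can be sketched as follows. The exact boundary conventions (strict vs. non-strict inequalities) are our assumption, since the text does not pin them down.

```python
def deduce_k(proof_vector_len: int) -> int:
    """A proof vector (s, h_1, pi_1, ..., h_k, pi_k) has 1 + 2k entries."""
    assert proof_vector_len >= 3 and proof_vector_len % 2 == 1
    return (proof_vector_len - 1) // 2

def stop_block_is_valid(s: int, k: int, M: int, submission_start: int) -> bool:
    """s + k*M must be the last block of the form s + i*M strictly before
    the block where the submission period started."""
    return s + k * M < submission_start <= s + (k + 1) * M

# Example: computation started at block 100 with M = 50, and the submission
# period started at block 260. A valid proof stops at block 250, i.e. k = 3.
assert deduce_k(1 + 2 * 3) == 3
assert stop_block_is_valid(100, 3, 50, 260)
assert not stop_block_is_valid(100, 2, 50, 260)  # stopped one segment early
```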
VDF-based PPs at the end of a c-slot, and of a c-epoch
The commitment to the VDF proofs is sent to the L2 Smart Contract at the end of the c-slot; it is the hash of the VDF proof data for the c-slot (if any). Then, at the end of the c-epoch, each operator is responsible for submitting its full VDF proof(s) as blob data (outside the race for submitting the blob data corresponding to optimistic participation proofs), with the same time constraints \Delta_{data}. The versioned hash of the KZG commitment to that blob data also needs to be stored in the L1 contract (see our Post 2).
Because the VDF reward (see Points for VDF duties) gives strictly less than attestations (and we do not check the reality of attestations), a possible strategy for an operator that notices it is alone is to optimistically believe that the cluster will come back online before the end of the c-slot, and not submit the VDF, so that the cluster can claim points for the whole cluster through participation points at the end of the c-slot. This is a risky strategy, because the time of the end of the c-slot is a uniform random variable with expected value equal to a day, and there is no guarantee that the cluster will come back before the end of the c-slot. In general, operators should compute and submit VDFs if they are alone (especially close to the end of a c-slot). These VDFs are then valid if there is no optimistic participation proof at the end of the c-slot covering the same time interval.
VDF-based PPs at the end of a c-epoch
Next, we describe how the VDF-based PPs are used at the end of a c-epoch. Recall that, at the end of a c-epoch, a point settlement phase begins. During this phase, in normal circumstances, clusters
create Optimistic Participation Proofs and make them publicly available as part of blob data. Additionally, the clusters submit the amount of points each operator has earned. A set of whistleblowers
monitors the proofs and the claimed point amounts and triggers on-chain verifications if a problem is detected.
Additionally, each operator may also opt to submit a VDF-based PP as blob data. More precisely, if the operator O_j has computed m VDF proofs VDF_j=(s^1,h_1^1, \pi_1^1, \dots, h_{k_1}^1, \pi_{k_1}^1), \dots, (s^m,h_1^m, \pi_1^m, \dots, h_{k_m}^m, \pi_{k_m}^m) during the c-epoch (meaning it has submitted the corresponding hashes of the VDF proofs to the L2 SC at the end of some c-slot), then it does the following:
• It creates an Ethereum blob transaction that includes VDF_j and a BLS signature \sigma_j=\sigma(VDF_j) as blob data (created with its own VDF BLS key; it could even be one of its partial validator keys).
• The transaction also calls the L1 SC and has it store the versioned hash of VDF_j and the KZG commitment to VDF_j.
• The L1 SC stores an identifier for the operator submitting the transaction (for compensation/penalization purposes), and reverts if the submitter does not belong to the cluster C.
• The L1 SC checks that the tx is processed before block number B_{epoch-end} + Δ_{data}. If it is not, it reverts. The number B_{epoch-end} is the number of the block at which the L1 SC receives the END_OF_EPOCH message from the L2 SC.
• The L1 SC stores the gas cost of the tx (since this cost will be shared evenly among cluster members). (See here for an explanation of how this can be done.)
The gas cost of the transaction is covered by the operator.
Remark: In some scenarios, submitting the VDF proof data as a blob may be too expensive. This is because each block has a small number of blobs attached, and the cost of attaching a blob is the same regardless of whether the blob contains a lot of data or not. Hence, depending on the cost of blob gas and the amount of VDF data, it could be more profitable for the operator to submit the data as calldata instead.
Alternatively, we think it is reasonable to assume there will be a third-party service that aggregates small pieces of data from different users into a big blob. The existence of such a service is only natural, as it allows blob space to be used much more efficiently, and the provider of such a service can potentially earn good money by charging a fee for the aggregation of data into full blobs. If such a service exists, then the operator submitting VDF proofs should use it.
Converting VDF proofs into blob data format
Here we detail how VDF PPs are organized into blob data in a way that is later accessible in a verifiable manner. The approach is very similar to the one used for Optimistic Participation Proofs, see
Post 2.
A VDF proof for one operator O_j is a collection, over c-slots c, of vectors VDF_j(c)=(s^1,h_1^1, \pi_1^1, \dots, h_{k_1}^1, \pi_{k_1}^1), \dots, (s^m,h_1^m, \pi_1^m, \dots, h_{k_m}^m, \pi_{k_m}^m), together with a BLS signature \sigma_j(VDF_j) on VDF_j, where VDF_j is the concatenation of the VDF_j(c) over all c-slots c in the c-epoch. Usually m=1 (i.e. the operator was online for the whole duration of the computation), but it can happen that the operator goes offline, comes back online, and restarts a VDF computation. In any case, we expect m to be a very small integer.
Then we proceed similarly as in Post 2, that is:
• For each c-slot c, let k_1=\lceil |VDF_j(c)|/253\rceil, where |VDF_j(c)| denotes the bit-length of VDF_j(c).
• Then we store VDF_j(c) in the blob entries w_1,\ldots, w_{k_1}. In each w_i we use at most 253 bits, keeping the first bit as 0.
• For the word w_{k_1+1}, we store the string 1\overset{254}{\ldots}1 (254 copies of 1), to indicate that VDF_j(c) ends right before this entry.
• We proceed similarly with the rest of the VDF proofs.
• Once this is done, we store 1\overset{254}{\ldots}1 in the next word, and we then store the operator’s signature \sigma_j(VDF_j).
Overall, the blob of data looks as follows:
( \widetilde{VDF_j(s_1)},1\overset{254}{\ldots}1, \ldots, \widetilde{VDF_j(s_m)}, 1\overset{254}{\ldots}1,\widetilde{\sigma_j(VDF_j)} )
where \widetilde{VDF_j(s)} denotes the data VDF_j(s) split into k_s consecutive 254-bit strings, each starting with the prefix 0. Similarly, \widetilde{\sigma_j(VDF_j)} denotes \sigma_j(VDF_j) split into 254-bit chunks (in this case we do not reserve any special prefix).
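Here is a Python sketch of this packing scheme for the 0-prefixed proof part. The byte-level conventions (in particular, stripping leading zero bytes on decode, which assumes proofs do not start with a zero byte) are our simplification:

```python
WORD_BITS, PAYLOAD_BITS = 254, 253
SENTINEL = (1 << WORD_BITS) - 1          # the word 11...1 (254 ones)

def encode(proofs: list[bytes]) -> list[int]:
    """Pack each proof into 254-bit words with a leading 0 bit, and
    terminate each proof with the all-ones sentinel word."""
    words = []
    for proof in proofs:
        bits = int.from_bytes(proof, "big")
        nwords = -(-(len(proof) * 8) // PAYLOAD_BITS)   # ceil division
        for i in reversed(range(nwords)):               # MSB chunk first
            words.append((bits >> (i * PAYLOAD_BITS)) & ((1 << PAYLOAD_BITS) - 1))
        words.append(SENTINEL)
    return words

def decode(words: list[int]) -> list[bytes]:
    """Recover the proofs: a sentinel word marks the end of each one."""
    proofs, acc = [], []
    for w in words:
        if w == SENTINEL:
            bits = 0
            for chunk in acc:
                bits = (bits << PAYLOAD_BITS) | chunk
            nbytes = -(-(len(acc) * PAYLOAD_BITS) // 8)
            proofs.append(bits.to_bytes(nbytes, "big").lstrip(b"\x00"))
            acc = []
        else:
            acc.append(w)
    return proofs

data = [b"first VDF proof" * 20, b"second proof" * 5]
assert decode(encode(data)) == data
```

Because payload words carry at most 253 bits, they are always strictly smaller than the 254-bit sentinel, so the sentinel can never be confused with data.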
Whistleblowers and VDF-based PPs
Here the whistleblower claims that some VDF proof (s, h_1, \pi_1, \dots, h_k,\pi_k) submitted by operator O_j is incorrect. The whistleblower has all the relevant information: it knows M and T, it can deduce from the vector (s, h_1, \pi_1, \dots, h_k, \pi_k) the initial block/slot s as well as k, it can recompute any of the g_i's by looking at the relevant chain state, and it can check the correctness of the computation against \pi_i.
More precisely, the whistleblower can complain about the following properties:
1. For some i, h_i\neq \pi_i^{\ell_i}\cdot (H(b_i)+h_{i-1})^{r_i} (we set h_0=0), where b_i is the L2 block header (resp. the beacon chain state) of block number (resp. slot) s+(i-1)\cdot M, \ell_i=\mathcal{H}(\mathbb{G}, H(b_i)+h_{i-1}, h_i, T), and q_i, r_i with 0\leq r_i< \ell_i are such that 2^T=q_i\ell_i+r_i.
2. The slot s+kM is not the last slot of the form s+i\cdot M before the end of the c-slot.
3. The commitment submitted to L2 is inconsistent with the data submitted.
For all three claims, the relevant blob data is made available to the L2 SC through the auxiliary procedure. The whistleblower performs the auxiliary procedure. The L2 SC verifies the signature on
the VDF proof. Then:
For 1., the verification procedure should be as follows. The contract should compute (we assume that the L2 SC has access to the execution environment) \ell_i = \mathcal{H}(\mathbb{G},H(b_i)+h_{i-1},h_i,T), then it should compute the Euclidean division 2^T = q_i\ell_i + r_i, and finally it should check whether h_i \overset{?}{=} \pi_i^{\ell_i} \cdot (H(b_i) + h_{i-1})^{r_i}.
For 2., the contract has the information of the slot s' where the c-slot ended; it can compute s + (k + 1)M and check whether it is bigger than s'.
For 3., the contract computes Hash_{L2} of the VDF proofs and compares with the commitment.
Additionally, the whistleblower can also complain that the blob data commitments stored on L1 are inconsistent with the information provided. This is handled in Issue 7.
Points-to-Rewards
Recall that, at the end of a c-epoch, a number of points are assigned to each cluster operator. Later on, the cluster's accumulated rewards are distributed among the operators unevenly, depending on the amount of points earned by each operator. In this section we explain how points are assigned and how the reward distribution is made. Our design choices are made to fulfill each of the following goals:
• It should be irrational for operators to exclude other operators from the participation proofs.
• An operator that has a very small amount of faults should receive a negligible penalty.
• An operator with a significant amount of faults should receive almost no rewards.
These goals are in accordance with the general objective we set out to tackle in Post 1.
We proceed to describe our solution. First we treat the case where there were no VDF proofs (we will see that VDF proofs do not modify the system in a significant way). In this situation, operators O_1, \dots, O_n claim, through optimistic participation proofs, that over some c-epoch there were:
• d_1 attestation duties, worth k_1 points each.
• d_2 block proposal duties, worth k_2 points each.
• d_3 beacon aggregation duties, worth k_3 points each.
• d_4 sync aggregation duties, worth k_4 points each.
• d_5 sync committee duties, worth k_5 points each.
The total amount of points that the cluster should receive with perfect participation is P_{total}=n\cdot\sum_{i=1}^5d_i\cdot k_i. The cluster claims that each operator O_i obtained p_i points. The
claimed points p_i have to be coherent with the participation lists signed by the operators. We provide the details of how p_i is to be computed later in this section, when we discuss how to
incorporate VDF proofs in our point-reward system.
A key idea is to reduce the total reward pool by a factor which depends on the participation of the operators in the cluster. This is to prevent rational operators from excluding other operators from
the Optimistic Participation Proofs. In detail, we compute:
\begin{align*} \rho&=\frac{1}{P_{total}}\sum_{i=1}^n{p_i} \end{align*}
and reduce the amount of rewards to be distributed by the factor \rho, i.e. we distribute R rewards where R=\rho\cdot R_{total}, and R_{total} is the total amount of CL rewards obtained by the
cluster in the prior c-epoch.
This does not include leftover rewards from any missed duties in previous c-epochs.
Next, we compute, for each operator O_i, its score S(p_i) as the following logistic function:
\forall i,\,\,S(p_i)=(1+\exp(-\alpha(p_i-m)))^{-1}
where \alpha>0 and m are globally fixed parameters. In the figures below we depict the function S(p_i) and how different choices of \alpha and m affect it. Intuitively, \alpha is a shape parameter that controls how sharp the drop is around m, and m represents the amount of points necessary to obtain a score of 1/2, which we should express as a percentage of an estimate of the number of points that each cluster operator should get during a c-epoch.
This graph shows the effect of changing the shape parameter \alpha. Here the total number of points is 225 (the number of validator attestation duties in a day) and m=112.5.
This graph shows the effect of modifying m instead.
We finally define, for all i, the weights w_i=S(p_i)/n, and then assign rewards R_i to operator O_i, where
R_i=w_i\cdot R.
It is clear that \rho\cdot w_i< 1/n for all i (equality is never reached because of the exponential, but we can get very close). Furthermore, it is clear that for all i, the function (\rho\cdot w_i)(p_1, \dots,p_n) is strictly increasing in p_j for j\neq i, and this is also easy to see for the p_i variable. This means that independently of the number of duties, their respective value, and whether these duties correspond to real on-chain duties, operators have an incentive to include as many operators as possible in every duty. Hence, our first design goal is met. Further, the other two design goals are met because of the logistic shape (i.e. “S” shape) of the score function.
Overall, the points-to-rewards system invites operators to uphold a certain standard of individual (controlled by the m parameter) and collective (controlled by the \rho parameter) participation. It
leads to more interesting and fair dynamics regarding reward distribution. In the two scenarios in the figure below, if we had a strictly proportional reward distribution each node would get the same
amount of rewards (1/4 of the total) in each case. However, it is clear that in the second scenario one operator should get less rewards, as is the case with our reward scheme.
Two scenarios that showcase the kind of dynamic that can arise with our reward distribution scheme.
Taking a power of \rho
One consequence of this choice of reward scheme is that it is difficult to obtain a 1/n fraction of the rewards even if one individually participates perfectly. If one participates in all possible duties, one should have a score very close to 1. However, consider the case where 2n/3 operators participate perfectly and n/3 participate just enough, meaning m times, where m is chosen as a 70\% fraction of the total number of duties (for simplicity, assume all duties are worth 1 point). Then \rho=0.9, and the operators that participate perfectly get only a 0.9/n fraction of the rewards.
To mitigate this effect, we can take a root \rho^{r} for 0<r<1, and compute R=\rho^r R_{total}. We leave this discussion for later.
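The full points-to-rewards pipeline, including the worked example above, can be sketched as follows. The parameter values (\alpha=0.1, an R_{total} of 100) are illustrative, not values proposed by the text:

```python
import math

def score(p: float, alpha: float, m: float) -> float:
    """Logistic score S(p) = 1 / (1 + exp(-alpha * (p - m)))."""
    return 1.0 / (1.0 + math.exp(-alpha * (p - m)))

def distribute(points, p_total, r_total, alpha, m, r_exp=1.0):
    """Per-operator rewards R_i = w_i * R, with R = rho^r_exp * R_total."""
    rho = sum(points) / p_total
    R = (rho ** r_exp) * r_total
    n = len(points)
    return [score(p, alpha, m) / n * R for p in points], rho

# Worked example: n = 9 operators, 225 one-point duties each;
# 6 participate perfectly, 3 participate exactly m = 70% of the time.
n, d = 9, 225
m = 0.7 * d
points = [d] * 6 + [m] * 3
rewards, rho = distribute(points, p_total=n * d, r_total=100.0,
                          alpha=0.1, m=m)
assert abs(rho - 0.9) < 1e-9
# Perfect participants get just under a 0.9/n share of R_total;
# the minimal participants score S(m) = 1/2 and get about half of that.
```

Raising the exponent r_exp toward 0 implements the \rho^r mitigation: it lifts R back toward R_{total} when \rho is below 1.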
How to compute the points earned by each operator
Every c-epoch, the operators claim the amount of points for each operator in the cluster. This is also done optimistically, in a race, where a designated operator claims that each operator O_j (including itself) should get p_j points for their participation in the last c-epoch.
The operators submit VDF proofs at the end of the c-slot. These proofs are then superseded, without any further condition, by optimistic participation proofs submitted at the end of the same c-slot. The point computation formula is difficult to write down, but conceptually it is very simple. For each operator and each c-slot: if there was no optimistic participation proof for that c-slot, then all points come from VDF proofs; if there was an optimistic participation proof, it supersedes all VDF proofs, and the operator receives points depending on the number of duties of each type that were completed, provided it signed the optimistic participation proof. Formally, the point computation looks like this:
\tilde p_j=\sum_{c\in\{\text{c-slots}\}}\left(\chi_{VDF}(c)\cdot pts_{VDF}\left(\sum_k\mathsf{len}(VDF_j^k)\right)+(1- \chi_{VDF}(c))\cdot\chi_{sgn}(j, L_c)\cdot \left(p_1(N_v\cdot N- L_c.attestations) + \sum_{i=2}^4 \#L_c.slots.i\cdot p_{i}\right)\right)
where:
• \chi_{VDF}(c) is 1 if there was at least one VDF submission during c-slot c and no participation proof for c-slot c; and 0 if there was a participation proof submission for c-slot c.
• pts_{VDF}(\sum_k\mathsf{len}(VDF_j^k)) is computed as specified in the next section, if operator O_j submitted correct VDF participation proofs VDF_j^1, \dots during c-slot c, covering \sum_k\mathsf{len}(VDF_j^k) L2 blocks in total.
• \chi_{sgn}(j, L_c) is 1 if O_j signed the optimistic participation proof L_c, and it is 0 otherwise.
• \#L_c.slots.i is the length of the vector L_c.slots.i, and each task of type i is worth p_{i} points.
Points for VDF duties
Points for VDF computations do not modify the reward reduction coefficient \rho. Instead, VDF proofs simply increase the amount of points that each operator that computed a VDF gets. Again, these extra points are not involved in the computation of \rho; they only influence the score S_j of each operator O_j that computed valid VDF proofs for a c-slot in which there was no optimistic participation proof.
To prevent some attacks, and to clearly establish that VDF computations are less rewarding than following the consensus protocol (fast and slow), we set the points for a VDF computation that spans b_{L2} L2 blocks to be:
0.95\times \#\text{epochs}\times\#\text{validators}\times \min_i(k_i)
where, recall, the k_i's are the task weights (defined at the beginning of this “Points-to-Rewards” section); \#\text{validators} is the number of validators owned by the operator; and the number \#\text{epochs} of epochs is computed as:
\#\text{epochs}=t_{L2}/t_e
where t_{L2} is the time in seconds that passed during the b_{L2} L2 blocks (if the L2 block time is variable, we should use an upper bound; typically, Starknet is moving towards a constant block time, and Arbitrum’s block time is consistently around 0.26±0.03 seconds), and t_e is the length of an Ethereum epoch (384 seconds).
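Putting the pieces together, the VDF point award can be sketched as below. We assume \#\text{epochs}=t_{L2}/t_e (the text leaves the exact rounding unspecified), and the task-weight values in the example are made up:

```python
def vdf_points(t_l2_seconds: float, n_validators: int,
               task_weights: list[float], t_epoch: float = 384.0) -> float:
    """Points for a VDF computation covering t_l2_seconds of L2 time.
    Assumes #epochs = t_L2 / t_e; a deployment might floor this instead."""
    epochs = t_l2_seconds / t_epoch
    return 0.95 * epochs * n_validators * min(task_weights)

# One day of coverage (225 epochs), 1 validator, attestations worth 1 point
# (the min of the illustrative weight vector below):
pts = vdf_points(225 * 384.0, 1, [1.0, 8.0, 4.0, 4.0, 2.0])
assert abs(pts - 0.95 * 225) < 1e-9
```

The 0.95 factor and the use of \min_i(k_i) guarantee the award stays strictly below what a day of perfect attestation duty would earn, as the text requires.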
Rewards and penalties after successful whistleblower complaints
As already mentioned, the Participation Proofs (PP) and the points claimed with them are verified only when a whistleblower raises an alarm (by calling the L2 Smart Contract and depositing a bond b).
If the whistleblower’s alarm about a cluster’s PP or claimed points is proved to be correct then:
1. The cluster rewards and the points are adapted taking into account the data submitted by the whistleblower. Thus, when a whistleblower spots a malicious action and raises an alarm, then the
rewards R that will be allocated to the cluster operators and principal providers will not be higher than the rewards that would be shared if the cluster had submitted the correct PPs and points.
In more detail:
□ Issue 1: the PP that corresponds to the incorrect blob data reported by the whistleblower is not taken into account in the calculation of the points.
□ Issues 2.a, 2.b, 2.d: the PP that is incorrect is not taken into account in the calculation of the points.
□ Issue 2.e: the fake duties reported by the whistleblower will not be taken into account in the calculation of the points.
□ Issue 3: the incorrect VDF is not taken into account in the calculation of the points.
□ Issue 4: the points are updated based on the whistleblower’s claim.
□ Issues 6 and 7: the PP that corresponds to the incorrect threshold signature or the incorrect KZG commitment is not taken into account in the calculation of the points.
2. An f_w fraction of the cluster rewards R is removed from the cluster and given to the whistleblower. Recall that R denotes the cluster rewards after taking into account the data submitted by the whistleblower.
3. An f_{obol} fraction of R is removed from the cluster and kept by the network (in our case, Obol). Note that we do not give all the rewards to the whistleblower, because we cannot exclude a scenario where the whistleblower and the cluster operators are the same entity (a similar approach is followed in optimistic rollups, where the challenger does not receive the whole bond that is removed from the asserter; some amount is burned in order to prevent collusion between the asserter and the challenger).
4. The bond b is returned to the whistleblower.
5. The whistleblower is reimbursed for the gas that it spent to raise the alarm.
If the whistleblower’s claim is proved to be false, then it loses its bond and it does not get reimbursed for the gas it needed to raise the alarm.
In every case, we assume that the whistleblower incurs an extra cost c to report the claim that is not reimbursed (e.g. the cost to monitor the PPs and spot errors).
In the following paragraphs we describe the conditions on f_{obol} and f_w that are sufficient to disincentivize the cluster operators from deviating from the honest behaviour (described in the previous sections) when they construct and submit their PPs and their points. By disincentivize we mean, among other things, that at an equilibrium, the cluster operators construct and submit their PPs and their points honestly, and the whistleblower raises an alarm only if the cluster operators deviate from the honest behaviour.
Sufficient Conditions
We need to select f_{obol}, f_w such that:
1. f_{obol}>0,
2. f_w\cdot R-c>0, where c is the cost of the whistleblower that is not reimbursed.
3. b>0, where b is the bond.
In order to perform a game theoretic analysis we will consider a game of two rounds.
In the first round, all the cluster operators play simultaneously and choose either to play honestly or to perform at least one deviation described in the issues in the previous section.
In the second round, the whistleblower learns the strategies of the cluster operators in the first round and it can select either to give a bond b, incur cost c and raise an alarm or to do nothing.
Following the model described in Section 7 here, we can consider its strategy as an algorithm that takes as input the outcome of the first round and outputs either \mathsf{report} or \mathsf{not\_report}.
We assume that every coalition (i.e. every set of cluster operators and/or whistleblower that collude) tries to maximize its joint utility which is defined as the joint profit of all the members in
the coalition.
We argue that if the above sufficient conditions hold, then we have:
1. The following strategy profile \vec S (vector that in position i has the strategy of player i) is a Subgame Perfect Equilibrium (even if we examine coalitions):
the cluster operators construct and submit their PPs and their points as described in the previous sections (honestly), and the whistleblower raises an alarm only if the cluster operators
deviate from the honest behaviour (in terms of correct construction and submission of PPs and points).
2. A whistleblower who spots a malicious behaviour in the first round will have a positive profit if it raises an alarm.
3. A malicious whistleblower (who does not care about maximizing its profit) cannot spam the system by submitting false alarms without a cost.
Next we explain why these hold:
Property 1. holds because:
• A whistleblower cannot increase its utility by raising an alarm when the PPs and the points are constructed and submitted honestly (i.e. when the cluster operators chose a strategy as described in \vec S). This is because if the alarm is proved to be false, then it does not earn any reward, it loses its bond, and it does not get reimbursed for the gas it incurred to call the L2 Smart Contract.
• Any coalition that consists of the cluster operators and potentially the whistleblower cannot increase its utility by deviating because: (i) when a whistleblower spots a malicious action and
raises an alarm then the rewards R that will be allocated to the cluster operators and principal providers will not be higher than the rewards that would be shared if the cluster had submitted
correct PPs and points. (ii) f_{obol}>0. Note that even if the whistleblower colludes with the cluster operators and gives them back a fraction of f_w\cdot R that it receives, there is still an
amount equal to f_{obol} \cdot R that the cluster will lose.
Property 2. holds because f_w\cdot R-c>0.
Finally, Property 3. holds because b>0, and thus when a whistleblower raises a false alarm it loses a nonzero bond (apart from the gas that it has spent).
IF AND in Excel| Excelchat
Hey Guys, I have a problem. In my database, I have several injuries listed (e.g.: head, half-head, nose, finger, hand, torso...), and all injuries are written down in one cell. Most of the time, the medical personnel write down all parts of the body affected. I want to find out how many injuries in a specific group of injuries have been attended to within this year. Group 1: head, nose, eyes... Group 2: hand, finger, arm... With COUNTIF and SUMPRODUCT I only get the number per injury. How can I avoid double-counting? E.g. there are 25 injuries on half-head and 31 on head (6 whole head + 25 half-head)... Thanks in advance!
Solved by C. B. in 30 mins
How does a tuner... tune?
Most ham operators know that by combining inductors and capacitors in various configurations, they can tune their antenna impedance to match their transceiver. Many people, due to their experience,
can also estimate the µH and pF ranges they need to tune on the various bands.
But, what is the basic principle that actually allows a tuner to tune? And is it true that the power reflected by the mismatched load is dissipated by the tuner?
A real-world experiment
To have something real to work with, I prepared a test load by connecting in series three 8Ω-5W badly inductive resistors, thus presenting both some resistance and some reactance, as is usual when tuning antennae. I selected an operating frequency of 10.100MHz and connected the test load to my network analyzer to see what it looked like:
The test load connected to the VNA
The response of the VNA shows that at 10.100MHz, this load shows R=29.65Ω, X=213.33Ω, the inductance of the resistors is 3.36µH and the VSWR of this load is quite high, 32.87:
To tune this load I selected a simple Hi-pass, step-up L tuner layout:
Using a roller inductor and a big variable capacitor I created my L-tuner and I tuned the test load at 10.100MHz using my network analyzer to measure the resulting VSWR:
By carefully setting the knobs, I have been able to tune a perfect match with R=49.8Ω, X=0.05Ω and VSWR=1.02:
So far, so good.
Measuring the components
Next step is to measure the inductance of the inductor and the capacitance of the capacitor:
We have that C=66.24pF and L=0.939µH. Why are these exact values able to tune our load?
The magic of circuits
The principle that allows a tuner to “tune” is the same as the trick that every electronic technician, even beginners, uses to “create” resistor values by combining other resistors in series or parallel.
Let’s take two resistors, R1 and R2:
We have R1=100.048Ω and R2=327.47Ω.
If we combine them in series, we have that [1] Rs=R1+R2=427.518Ω, confirmed by the measurement of 427.51Ω.
If we combine them in parallel, we have that [2] Rp=1/((1/R1) + (1/R2))=76.635Ω, which is also confirmed by the measurement (76.639Ω). The operation 1/x is called the reciprocal of x, so [2] can be expressed as the reciprocal of the sum of the reciprocals of the individual resistances.
From resistance to impedance
The resistance case seen above is just a special DC case of the more general AC concept called impedance. The impedance describes a component in terms of the resistance and reactance that it shows at a given frequency; if the frequency is 0Hz, we are back to the “DC” case.
When we measure an aerial with our antenna analyzer, we can read on the display the resistance (R) and the reactance (X) which form the impedance of that object at the measurement frequency.
It is important to underline that the impedance is always formed by two numbers: R and X. When people use a single number for the impedance, there is always something implied: for example, when
people talk of a “50Ω antenna”, they mean “a R=50Ω X=0Ω antenna” where “X=0Ω” is implied.
The impedance pair of values R,X describes an equivalent circuit made of an ideal resistor of resistance R in series with a capacitor (if X is negative) or an inductor (if X is positive). If X is zero, the impedance is equivalent to a simple resistor and is defined as resonant.
How do we calculate the “X” reactance?
While the “R” value of the impedance is simply the series resistance in Ω of the object being measured, the “X” value is a bit more complicated. Its value depends on the capacitance (if negative) or the inductance (if positive) and the frequency, according to the formulae below:
In case of inductive reactance, we have: X=2π·f·L
In case of capacitive reactance, we have: X = -1 / (2π·f·C)
• f is the frequency in MHz
• C is the capacitance in µF
• L is the inductance in µH
• π is 3.14159
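These two formulas can be evaluated directly; in the MHz/µH/µF convention above, the 10^6 and 10^-6 factors cancel, so the result comes out in ohms:

```python
import math

def inductive_reactance(f_mhz, l_uh):
    # X = 2*pi*f*L; with f in MHz and L in uH the 1e6 and 1e-6 cancel
    return 2 * math.pi * f_mhz * l_uh

def capacitive_reactance(f_mhz, c_uf):
    # X = -1 / (2*pi*f*C); with f in MHz and C in uF, likewise
    return -1 / (2 * math.pi * f_mhz * c_uf)

print(inductive_reactance(10.1, 0.939))        # ~ +59.59 ohms
print(capacitive_reactance(10.1, 0.00006624))  # ~ -237.89 ohms
```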
Now, we shall calculate the impedance of the inductor and the capacitor of our tuner.
The inductor was L=0.939µH; at 10.1MHz its reactance would be X=2π·10.1MHz·0.939µH=59.589Ω. Our inductor is not ideal but also shows a series resistance of about 1Ω. Therefore, the impedance of our tuner inductor is:
Z[L]: 0.939µH@10.1MHz → R[L]=1Ω, X[L]=59.589Ω
Now we can calculate the same value for our capacitor that measured C=66.24pF=0.00006624µF (remember that we are using MHz and µF: do not use pF!). We have X=-1/(2π·10.1MHz·0.00006624µF)=-237.891Ω.
Capacitors like mine have negligible series resistance (ESR) so we can safely take 0Ω:
Z[C]: 66.24pF@10.1MHz → R[C]=0Ω, X[C]=-237.891Ω
The tuned circuit
Our tuned circuit now has three components: the load (i.e. the antenna or whatever we connected at the tuner output) plus the capacitor and the inductor of the tuner. We also know exactly the R,X
impedance of each of them.
They are configured in this circuit, where Z[A] is the antenna or our load, Z[C] is the capacitor and Z[L] is the inductor:
The first thing we notice is that Z[C] (the tuner capacitor) and Z[A] (the load on the “antenna” port) are in series:
If they were simple resistors, we would know how to replace them with a single equivalent component (another resistor): we would simply add them.
The interesting fact is that also impedances can be replaced by a third impedance which is the sum of the other two. The new impedance Z[B]=Z[C]+Z[A] will be made of two components, R[B] and X[B],
calculated as follows:
R[B] = R[C] + R[A] = 0Ω + 29.65Ω = 29.65Ω
X[B] = X[C] + X[A] = -237.891Ω + 213.33Ω = -24.561Ω
Thus we can replace Z[C] and Z[A] in series with a single impedance Z[B] having R[B] = 29.65Ω and X[B] = -24.561Ω and obtain the following equivalent circuit:
Now we have left two components in parallel: what is their combined impedance?
The formula for parallel resistances [2] suggests that we should calculate the reciprocal of Z[L] and Z[B], sum them together and then calculate the reciprocal of the resulting value. But how do we
calculate the reciprocal of an impedance, which is formed by two values R and X?
The reciprocal values R’ and X’ of a given impedance R, X are calculated as follows:
R’ = R/(R^2+X^2)
X’ = -X/(R^2+X^2)
So let’s start calculating the reciprocal of Z[L]:
R[L]‘ = R[L]/(R[L]^2+X[L]^2) = 1/(1^2+59.589^2) = 0.00028154
X[L]‘ = -X[L]/(R[L]^2+X[L]^2) = -59.589/(1^2+59.589^2) = -0.0167769
Now we can calculate the reciprocal of Z[B]:
R[B]‘ = R[B]/(R[B]^2+X[B]^2) = 29.65/(29.65^2+(-24.561)^2) = 0.020002
X[B]‘ = -X[B]/(R[B]^2+X[B]^2) = -(-24.561)/(29.65^2+(-24.561)^2) = 0.016569
Now, according to formula [2], we have to add those values together and calculate once more the reciprocal to obtain Z[D]:
We first calculate Z[D]‘:
R[D]‘ = R[L]‘ + R[B]‘ = 0.00028154 + 0.020002 = 0.02028354
X[D]‘ = X[L]‘ + X[B]‘ = -0.0167769 + 0.016569 = -0.0002079
Then we have to calculate the reciprocal of Z[D]‘ to obtain Z[D] and the “magic” is completed:
R[D] = R[D]‘/(R[D]‘^2+X[D]‘^2) = 0.02028354/(0.02028354^2+(-0.0002079)^2) = 49.2959
X[D] = -X[D]‘/(R[D]‘^2+X[D]‘^2) = 0.0002079/(0.02028354^2+(-0.0002079)^2) = 0.5053
The final impedance of the entire circuit, R=49.2959Ω X=0.5053Ω, is almost a perfect match for the target impedance of R=50Ω X=0Ω and very close to the measured value of R=49.8Ω, X=0.05Ω.
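The whole chain of hand calculations above can be checked with complex numbers, where the real part is R, the imaginary part is X, and 1/z performs the reciprocal step automatically:

```python
# Impedances from the measurements above (real part = R, imaginary part = X)
Z_A = 29.65 + 213.33j   # the load on the "antenna" port at 10.1 MHz
Z_C = 0 - 237.891j      # tuner capacitor, 66.24 pF
Z_L = 1 + 59.589j       # tuner inductor, 0.939 uH with ~1 ohm series R

Z_B = Z_C + Z_A                # series: just add the impedances
Z_D = 1 / (1 / Z_L + 1 / Z_B)  # parallel: reciprocal of the sum of reciprocals

print(Z_D.real, Z_D.imag)  # close to 49.3 and 0.51, as in the hand calculation
```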
Power lost in the tuner
How much power is actually “lost” in the tuner? Power can only be dissipated by the resistive part of the impedance. If we could make a tuner with ideal capacitors and inductors (i.e. purely reactive), the power dissipated by the tuner would be zero.
The power actually dissipated by each component can be calculated with the same basic circuit rules that allows us to calculate the power dissipated by each resistor in a resistor network. However,
usually the resistance of the tuner components is not high enough to cause significant losses in the tuner itself in most tuning cases.
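For the example circuit above, a rough estimate of that split, assuming the inductor's ~1 Ω series resistance is the tuner's only loss (capacitor ESR taken as zero, as before):

```python
# Same impedances as in the worked example above
Z_A = 29.65 + 213.33j
Z_C = 0 - 237.891j
Z_L = 1 + 59.589j
Z_B = Z_C + Z_A              # series branch: capacitor + load

V = 1.0                      # arbitrary voltage across the two parallel branches
P_tuner = abs(V / Z_L) ** 2 * Z_L.real  # heat in the inductor's 1 ohm series R
P_load = abs(V / Z_B) ** 2 * Z_A.real   # power delivered to the load resistance

efficiency = P_load / (P_load + P_tuner)
print(round(100 * efficiency, 1))  # 98.6 -- nearly all power reaches the load
```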
This fact is clearly shown by my tuner in action under a thermal camera: as we can see, only the load resistors are hot while the tuner components are not dissipating any relevant heat.
A tuner is able to transform the impedance of the load into the impedance required by the generator by creating a circuit that, combined in series/parallel with the load, adds up to the required impedance. Since the components added by the tuner are reactive, no power is dissipated by an ideal tuner and all power is transferred to the load. Real tuners are designed to have very low loss components, so usually almost 100% of the power actually reaches the load.
This is an article about how a tuner does the tuning process and its efficiency, not about what a load does with the energy it receives.
If you connect a transmission line between your tuner and a mismatched antenna, the transmission line will dissipate more power than usual, sometimes a lot of power: in this case, blame the transmission line, not the tuner. To know more about transmission lines on mismatched loads, please refer to The myth of reflected power.
6 Comments
1. I finally understood how a tuner works. Before, it seemed like something magical that blocked the reflected current toward the rig on transmit while helping it through on receive. Which is obviously impossible. So what previously looked to me like a sort of transmit-only diode is no longer a mystery. And I had gone through plenty of documentation trying to understand it. Once again, thanks Davide.
2. “Since the components added by the tuner are reactive, no power is dissipated by an ideal tuner and all power is trasferred to the load.” — That’s true if you consider the load to be the coax
attached to the output of the tuner or if the tuner is located at the antenna input.
But if the antenna is not a perfect match to the coax, power is dissipated by the coax.
□ Yes, that’s true. However, this is an article on tuners, not transmission lines.
Each component has its own role in transferring the energy and we can not blame the tuner for the energy dissipated by the transmission line.
I already treated the argument of energy dissipated by mismatched transmission lines here: The myth of reflected power
3. Good article, thanks. Antenna tuners do work and will result in getting more power into the antenna/radiator. What the antenna does with this power is another issue. But the article addresses how to get the match between the radio and load/antenna system.
4. Does an antenna tuner, if tuned properly, actually result in more RF power being radiated by the antenna? I still find this confusing. I like your explanation of how it works, but does it indeed mean you can transmit further than you could without the tuner with a non-resonant antenna?
5. THIS “ANTENNA TUNER” IS NOT THE PROPER TERM FOR A DEVICE THAT MATCHES A SOURCE TO A LOAD. WE KNOW THAT WHEN Z SOURCE = Z LOAD, MAX. POWER TRANSFER OCCURS. WE DONT TUNE AN ANTENNA, MORE
ACCURATELY, WE MATCH THE IMPEDANCE OF THE ANTENNA “SYSTEM”. EVERYTHING BETWEEN THE SO-239 TO THE ANTENNA. A BETTER TERM FOR THESE IS, ANTENNA COUPLER, MATCHBOX,TRANSMATCH, ANYTHING BUT TUNER..
CUT A 65’ DIPOLE AND YOU HAVE TUNED IT TO 7.2 MHZ. THE ONLY WAY TO TUNE, RETUNE, DETUNE, MISTUNE AN ANTENNA IS TO ALTER THE PHYSICAL DIMENSIONS OF THE RADIATING (DRIVEN) ELEMENT. MAKE THAT
DIPOLE’S LENGTH 65’ 11”. AND IT IS TUNED TO 7.1 MHZ. THE BOX ON YOUR DESK SAYING ANTENNA TUNER CANT DO THAT. AN “AUTO TUNER” CANT EITHER. WHEN YOU CHANGED THE LENGTH OF YOUR RADIATING ELEMENT
(DIPOLE), YOU TUNED IT. THE JOB OF MATCHING SOURCE TO LOAD IMPEDANCE IS WHAT IS PERFORMED BY AN ANTENNA TUNER, A TRANSMATCH, COUPLER ETC. THERE IS NO MAGIC, JUST EQUALIZING THE REACTANCES AS BEST
AS POSSIBLE TO A NET ZERO. THATS IT.
Rounding a Price to the Nearest Dollar using Excel Formula in Python
In this guide, we will learn how to round a price to the nearest dollar using an Excel formula in Python. Rounding a price is a common task in financial calculations and can be easily achieved using
the ROUND function. This formula allows you to round a number to the nearest whole number, which in this case, is the nearest dollar.
To round a price to the nearest dollar, you can use the following Excel formula:
The ROUND function takes two arguments: the number to be rounded and the number of decimal places to round to. In this case, we want to round the price to the nearest dollar, so the second argument
is set to 0.
To use this formula, you need to reference the cell that contains the price you want to round. In the example formula, the price is referenced by cell A1. You can change this reference to the cell
that contains your price.
The ROUND function rounds the number to the nearest whole number. If the decimal portion is 0.5 or greater, the number is rounded up. If the decimal portion is less than 0.5, the number is rounded down.
Let's look at some examples to better understand how this formula works:
If we apply the formula =ROUND(A2, 0) to these prices, we would get the following results:
• 10.2 would be rounded to 10
• 15.7 would be rounded to 16
• 20.9 would be rounded to 21
• 25.1 would be rounded to 25
In conclusion, the Excel formula =ROUND(A1, 0) allows you to round a price to the nearest dollar in Python. This formula is useful for financial calculations where you need to work with whole
numbers. By following the step-by-step explanation and examples provided, you can easily implement this formula in your Python code.
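One caveat for a literal Python port: Python 3's built-in round() uses banker's rounding (round(2.5) == 2), while Excel's ROUND rounds halves away from zero. A sketch that matches Excel's behavior uses the decimal module:

```python
from decimal import Decimal, ROUND_HALF_UP

def excel_round(value, digits=0):
    # ROUND_HALF_UP rounds ties away from zero, matching Excel's ROUND;
    # Python's built-in round() would send 2.5 to 2 instead of 3.
    quantum = Decimal(1).scaleb(-digits)
    return float(Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP))

for price in [10.2, 15.7, 20.9, 25.1]:
    print(excel_round(price))  # 10.0, 16.0, 21.0, 25.0
```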
An Excel formula
Formula Explanation
This formula uses the ROUND function to round a price to the nearest dollar.
Step-by-step explanation
1. The ROUND function takes two arguments: the number to be rounded and the number of decimal places to round to.
2. In this case, we want to round the price to the nearest dollar, so the second argument is set to 0.
3. The number to be rounded is referenced by cell A1. You can change this reference to the cell that contains the price you want to round.
4. The ROUND function rounds the number to the nearest whole number. If the decimal portion is 0.5 or greater, the number is rounded up. If the decimal portion is less than 0.5, the number is
rounded down.
For example, if we have the following prices in column A:
| A |
|------|
| 10.2 |
| 15.7 |
| 20.9 |
| 25.1 |
The formula =ROUND(A2, 0) would round the price 10.2 to 10, 15.7 to 16, 20.9 to 21, and 25.1 to 25.
What is center of gravity in simple words?
The center of gravity (CG) of an object is the point at which weight is evenly dispersed and all sides are in balance. A human’s center of gravity can change as he takes on different positions, but
in many other objects, it’s a fixed location.
What is centre of gravity class 11 physics?
The centre of gravity of a body is that point where the total gravitational torque on the body is zero.
What is centre of gravity 7th class?
The centre of gravity of an object is the point where the entire weight of the object appears to act. The centre of gravity of a geometrically shaped object lies in the geometric centre of the object.
What is centre of gravity in physics class 10?
The centre of gravity of a body is the point about which the algebraic sum of moments of weights of all the particles constituting the body is zero.
What is centre of gravity with example?
The definition of center of gravity is the place in a system or body where the weight is evenly dispersed and all sides are in balance. An example of center of gravity is the middle of a seesaw.
What is the formula of centre of gravity?
To find the CG of a two dimensional object, use the formula Xcg = ∑xW/∑W to find the CG along the x-axis and Ycg = ∑yW/∑W to find the CG along the y-axis. The point at which they intersect is the
center of gravity.
What is center of gravity for Class 9?
The centre of gravity is a point in an object where the distribution of weight is equal in all directions, and it does depend on the gravitational field. However, an object’s centre of mass and
centre of gravity lies at the same point in a uniform gravitational field.
What is centre of gravity of an object?
The center of gravity (CG) location is the average location of all the weight of an object. The center of gravity is the balance point of an object, also expressed as the point where all the mass
appears to be located.
Why center of gravity is important?
The Centre of gravity is a theoretical point in the body where the body’s total weight is thought to be concentrated. It is important to know the centre of gravity because it predicts the behaviour
of a moving body when acted on by gravity. It is also useful in designing static structures such as buildings and bridges.
What is the unit of centre of gravity?
The center of gravity (CG) is the point of action of the entire weight (Force) of an object. If the location of the center of gravity is exactly known, then one can balance the object on a pin point
at this location….
Weight (oz) Distance (inches)
2.0 5.0
1.0 12.0
1.0 23.0
.5 (stick) 18.0
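Applying the Xcg = ∑xW/∑W formula from above to this table:

```python
weights = [2.0, 1.0, 1.0, 0.5]       # oz
distances = [5.0, 12.0, 23.0, 18.0]  # inches

# Xcg = sum(x * W) / sum(W): the weight-weighted average position
xcg = sum(w * d for w, d in zip(weights, distances)) / sum(weights)
print(xcg)  # 12.0 -- the assembly balances at the 12-inch mark
```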
What is centre of gravity Wikipedia?
The center of gravity of a body is the point around which the resultant torque due to gravity forces vanishes.
Practical Machine Learning
Random Selection with Average
The goal of this project was to “randomly” select numbers from a predefined set, with replacement, in a way that the mean of the selected numbers would equal (or come close to) a specified number.
As an example (and the original motivation), the task was to select 100 numbers from the set: $$ X = \left\{0, 0.1, 0.25, 0.5, 0.75, 0.8, 1.0\right\} $$ so that the mean of the selected numbers was $\approx 0.75$.
Said a different way, given $ n \in \mathbb{N}$, $X = \left\{x_1, x_2, \ldots, x_m\right\}$ and $ \mu $ find $ a_1, a_2, \ldots, a_m \in \mathbb{N}$ such that: $$ \frac{1}{n}\sum_{i = 1}^m a_i x_i \
approx \mu \quad \text{and} \quad \sum_{i = 1}^m a_i = n $$
$a_i$ will tell us how many times to select each $x_i$.
Now obviously, this isn't something that can be solved deterministically, and there might be many different ways of selecting our $a_i$. For example, consider $X = \left\{0, 0.5, 1.0\right\}$, $\mu = 0.5$, and $n = 20$. We would be right to select 20 "0.5"s, or 10 "0"s and 10 "1.0"s. Both methods would create a mean exactly equal to 0.5. Therefore, it would be nice to have some sort of parameter that determines the "shape" of our selection: whether the selection is all from the extremes, or tightly grouped around the mean.
In order to solve both these issues, I decided to randomly select numbers from a probability distribution with finite support on $[\min(X), \max(X)]$ and round to the nearest $x \in X$.
Consider the beta distribution
$$ f(x;\alpha,\beta) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{\int_0^1 u^{\alpha-1} (1-u)^{\beta-1}\, du} $$
where $$ \alpha > 0 $$ and $$ \beta > 0 $$. The beta distribution is a nice choice for this problem for two reasons. First is because of its finite support on the interval $$ [0, 1] $$. Second, its
mean is very easy to calculate.
$$ \mu = \frac{\alpha}{\alpha + \beta} $$
This means we can intelligently select $$ \alpha $$ and $$ \beta $$ (or select one and fix the other using the above), select random numbers, round to the nearest $$x \in X $$ and the average should
be pretty close to $$ \mu $$. This turned out to be good enough for my purposes, and the program to do this is written below.
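The program itself is not reproduced in this excerpt; the following is a minimal sketch of the approach as described (the default alpha and the fixed seed are illustrative choices, not the author's):

```python
import random

def select_with_mean(values, mu, n, alpha=5.0, seed=0):
    # Draw from a Beta distribution whose mean hits the target, scaled to
    # [min(values), max(values)], then round each draw to the nearest value.
    lo, hi = min(values), max(values)
    target = (mu - lo) / (hi - lo)        # desired Beta mean in (0, 1)
    beta = alpha * (1 - target) / target  # so that alpha / (alpha + beta) = target
    rng = random.Random(seed)
    picks = []
    for _ in range(n):
        u = lo + (hi - lo) * rng.betavariate(alpha, beta)
        picks.append(min(values, key=lambda v: abs(v - u)))
    return picks

X = [0, 0.1, 0.25, 0.5, 0.75, 0.8, 1.0]
picks = select_with_mean(X, mu=0.75, n=100)
print(sum(picks) / len(picks))  # typically close to 0.75
```

Larger alpha concentrates the draws near the mean; smaller alpha spreads them toward the extremes, which is exactly the "shape" parameter discussed above.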
How to Make Apple’s Mac Pro Holes
June 16, 2019 – 11:15 am
Apple’s recently introduced Mac Pro features a distinctive pattern of holes on the front grill. I’m not likely to own one anytime soon (prices for a well configured machine approach a new car), but
that pattern is very appealing, and re-creating it is a fun exercise.
The best clue about the pattern comes from this page pitching the product. About halfway down, by the heading “More air than metal” is a short video clip showing how the hemispherical holes are
milled to create the pattern.
Let’s start with a screen grab of the holes from the front. The holes are laid out with their centers equally spaced apart and the tops of the lower circles fall on the same line as the bottom of the
circles above them. So the circles are spaced 2r apart vertically, where r is the radius of the circle.
The horizontal spacing is a bit more work. The angles of the equilateral triangle formed by the centers are 180°/3 = 60° (or π/3 as they say in the ‘hood). If you draw a vertical line from the center
of the top circle to the line connecting the centers of the bottom circles, that line (as you see above) is 2r long. With a bit of trig, you can find half the horizontal spacing x by using the right
triangle formed by that line, x and the side of the equilateral triangle. The angle from the vertical center line to the equilateral triangle edge is half of π/3, π/6. So,
\[x=2r\tan \frac{\pi }{6}\]
and 2x is the horizontal spacing of the circles.
The hemispherical holes are milled into both sides of the plate, but the holes on the other side are offset so the hole centers on one side fall exactly in the middle of the triangle formed by the
hole centers on the other side:
You already know the horizontal offset for the centers from one side to the other is x, but how far up do you go to hit the center of that triangle? Let’s call that h.
You’ll use the same tan(π/6) trick we used above, this time using the triangle formed by x and h. Like the triangle used to find x, the angle here is also π/6. So:
\[h=x\tan \frac{\pi }{6}\]
Let’s clean this up a bit:
\[h=x\tan \frac{\pi }{6}=2r\tan \frac{\pi }{6}\tan \frac{\pi }{6}\]
\[\tan \frac{\pi }{6}=\frac{1}{\sqrt{3}}\] so…
\[h=2r\cdot\frac{1}{3}=\frac{2r}{3}\]
There’s still the issue of how thick the plate is, relative to the size of the holes. I took screen grabs of the film clip and compared them by counting pixels:
Examining the images, the thickness was about 101, with the diameter (2r) of the holes coming in at 176. Now, these numbers aren’t at all precise, because of the perspective introduced by whatever
animation software was used. But I can’t help but notice the following coincidence:
\[ \frac{101}{176}=0.573\approx \tan \frac{\pi }{6}=\frac{1}{\sqrt{3}}=0.577\]
Yeah. The ratio of the plate thickness to the hole diameter is just like the ratio of the hole horizontal spacing to the hole diameter. So let’s turn this around, and summarize by saying for a plate
of any thickness t, use:
\[x=2r\tan \frac{\pi }{6}=\frac{2r}{\sqrt{3}}=t\]
Where r is the radius (half the diameter) of the spheres and 2x is the horizontal spacing of the sphere centers on a given row. For the next row, the centers are offset by x horizontally from the
centers of the previous row. The rows are spaced 2r apart vertically, from sphere center to center. The same grid of spheres carved into the back side is displaced by x horizontally, and h vertically
from the spheres in the front. The centers of the front spheres are on the front surface of the plate, the back spheres on the back.
So to CAD this up, all you need to do is start with a rectangular block of thickness t, and use the formulas above to place the centers of the spheres (with diameter 2r) on the front and back of the plate.
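As a sketch of that layout step (the grid size is illustrative), the sphere centers for both faces can be generated from r alone:

```python
import math

def hole_centers(r, cols, rows):
    # From the derivation above: x = 2r*tan(pi/6) = 2r/sqrt(3) (also the
    # plate thickness t); rows are 2r apart; the back grid is offset by
    # (x, h) with h = 2r/3.
    x = 2 * r / math.sqrt(3)
    h = 2 * r / 3
    front, back = [], []
    for j in range(rows):
        row_shift = x if j % 2 else 0.0  # alternate rows shift by x
        for i in range(cols):
            cx = 2 * x * i + row_shift
            cy = 2 * r * j
            front.append((cx, cy))         # sphere centers on the front face
            back.append((cx + x, cy + h))  # offset centers on the back face
    return front, back

front, back = hole_centers(r=1.0, cols=3, rows=2)
```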
If you just want to quickly print or look at the result in 3D, I’ve posted some sample STL files on Thingiverse.
Descriptive and Inferential Statistics. Choose and Apply
Scientists produce terabytes of data every day, but without proper analysis and tools, the numbers are irrelevant. It’s the data scientist’s task to take the data and turn it into a story worth
sharing. Two of the basic techniques used by social study researchers and other scientists are descriptive and inferential statistics. While both are used to turn data points into a story, the
approaches and results differ. Today we’ll cover both methods to help you choose the appropriate way to understand your data set.
Descriptive Statistics
If you have conducted experiments or surveys, compiled and prepared the data for analysis, descriptive statistical methods are your best choice to turn numbers into an impactful story. Descriptive
statistics does not produce new information. Instead, its tools provide you a chance to understand the compiled data and draw conclusions. The most common of descriptive techniques include:
• Average values of the data set, such as the median, mode, and mean.
• Spread defined by the standard deviation or the range.
• Correlation outlining relationships between pairs of values.
A graphical representation is often the most potent and descriptive tool of the data scientist. Histograms, pie charts, maps, and graphs enable you and other people to assess the data at a glance,
define relationships and spread of the data points. It’s no wonder infographics have taken the Internet by storm and turned into a significant trend that will not go out of style any time soon.
Inferential Statistics
Unlike descriptive methods, inferential statistics uses the data sample to draw conclusions or make inferences about the larger population. For example, if you ask 1,000 students whether they are
happy with their Statistics professor, you might be able to learn whether other learners enjoy the professor’s classes.
Considering the purpose of the inferential approach, its tools differ greatly from the descriptive ones. Standard techniques include linear correlation analysis, regression analysis, logistic
regression analysis, structural equation modeling, and more.
Unlike descriptive methods that return an absolute value of a median, standard deviation, or other parameters, inferential statistics returns a range and a probability. The results of inferential
calculations can never be 100% accurate, so the results reflect a probability distribution. For example, before the ultrasound machines had been invented, pregnant women knew the probability of
giving birth to a girl was 50%, the same as the probability of having a boy.
In statistical hypothesis testing, the significance level is a critical value that defines the probability of the calculation results being caused by chance, i.e., occurring under the null hypothesis. In social studies, the significance level is usually set at 0.05 or 5%. However, other fields of study use lower significance levels, depending on the number of tests performed.
Despite the differences between descriptive and inferential statistics, you should always remember that statistic is the function of the sample data. If your research or experiments render faulty
data points, no amount of calculations, manipulations, and charts will be able to make a positive difference. Therefore, be very careful with documenting your data aggregation methods and prepare the
data, cleaning it out and normalizing, before using it to describe the results or make inferences. Otherwise, neither your graphs nor your probability distributions will make sense.
Calculating the mean or the median, building a pie chart or a histogram may not seem complicated when the data set is small and problem-free. However, most Statistics assignments come with a twist to
make the student’s life miserable. If you feel out of your depth with either inferential or descriptive statistics, don’t be afraid to ask for assistance. We have been providing Statistics assignment
help for years and have experienced data scientists on our side. High grades are within your reach: just contact us and get help.
A Simple Method to Determine Consumer Preference
October 2003 // Volume 41 // Number 5 // Tools of the Trade // 5TOT4
Statistically significant consumer preference determinations are possible by Extension personnel in the field using available clientele and without complicated statistical analysis. Clientele such as
shoppers at farmers' markets can provide ratings for sensory attributes such as look, feel, taste, or smell of a particular treatment. The statistical analysis used involves comparing the rank means
of the raw rating data. This procedure factors out consumer variation. The example given uses SAS to complete the analysis.
Extension field trials often involve consumer preference. This may be the look of a turf grass; the feel of a textile; the taste of a cooked, raw, or processed food product; or the smell of a particular treatment.
Statistical analysis of consumer preference often requires a trained consumer panel to show significant results. Even then, simple statistical procedures, such as analysis of variance, can be
inappropriate for this type of data due to panelist variation: e.g., sensory preferences, personality differences, and variation in the use of the rating scale. Use of a 1 to 10 preference scale
often varies, even among trained panelists. Some panelists use the lower range, some the higher, and some rate all choices somewhere around the middle of the scale.
Thus, analysis of preference data often is complex, using techniques such as multiplicative mixed models (Smith, 2003) or principal components analysis (M. McDaniel, personal communication, August 27, 2003). These advanced statistical methods make analysis more precise by factoring out much of the panelist variation interfering with analysis of the data using a simple analysis of variance (M. McDaniel, personal communication, August 27, 2003).
In Extension field trials, trained test panels are not available. Advanced statistical methods require help from statisticians, often located some distance from the field site. A simple, practical
consumer preference technique is needed to evaluate field data.
Consumers at field tours, local Master Gardeners, farmers' market shoppers, 4-H members, parents and leaders, and commodity producers are available to extension personnel, and willingly volunteer
their services to rate products. Using these consumers, statistically valid tests for preferences can be conducted using the method described here.
Collection of Data
Prepare and present samples to be evaluated in an identical manner. Provide evaluation forms and pencils for the consumers to record their preferences. Tables and chairs arranged around the central
distribution area can make the evaluation process comfortable for the consumers.
Collection Example
An example uses specialty potato cultivars sliced and boiled in an identical manner. Cultivars were identified by number and placed on paper plates for sampling by consumers (shoppers at a local
farmers' market). Evaluation form instructions read, "On a scale from 1 to 10 (10 being the best, 1 being the worst), rate the following potato cultivars. Please take a sip of water between samples."
Paper cups and cold water were provided.
Analysis of Data
This procedure uses SAS statistical software (SAS Institute, 1999) for analysis, which is easily installed on an Extension office computer.
Enter raw rating data into a spreadsheet such as Excel. Number each consumer (replication), and record the rating for each product attribute (texture, flavor, and appearance in the potato example)
for each treatment (each cultivar in the potato example). These data must be saved as a text file to be read by the SAS program.
Use a nonparametric statistical technique to analyze the mean ratings for each treatment. The rank of each rating is computed. By ranking the 1 to 10 ratings of the evaluators, the variation due to
differences in use of the rating scale is minimized.
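To see why ranking neutralizes scale-use differences, consider a small illustration (the ratings are invented, and the `ranks` helper is a plain-Python stand-in for what SAS's PROC RANK does):

```python
def ranks(ratings):
    """Rank ratings from 1 (lowest) upward; tied ratings share an average rank."""
    order = sorted(range(len(ratings)), key=lambda i: ratings[i])
    r = [0.0] * len(ratings)
    i = 0
    while i < len(order):
        j = i
        # extend j across a run of tied ratings
        while j + 1 < len(order) and ratings[order[j + 1]] == ratings[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

# Panelist A rates three cultivars on the low end of the 1-10 scale,
# panelist B on the high end -- yet both prefer the cultivars in the same order.
print(ranks([2, 4, 3]))  # panelist A
print(ranks([7, 9, 8]))  # panelist B -- identical ranks
```

After ranking, the two panelists contribute identical information, so their different uses of the 1 to 10 scale no longer inflate the error variance.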
Next, employ a rank sum test to assess statistical differences between treatments. The resulting chi-square statistic will indicate the level of significance of the data. This will tell you if the mean ranks for
each attribute of the treatments are significantly different.
To perform mean comparisons on significantly different attributes, use the standard error of the mean ranks for each attribute. The mean rank, plus or minus two standard errors, approximates a 95%
confidence interval. Therefore, any two means differing by more than two standard errors are implied to be significantly different. You can then state which treatments are statistically significantly
different. In the potato example, after data analysis, such statements as the following can be made:
• "At the Twin Falls (Idaho) Farmers' Market, consumers preferred the texture of 'Caribe' and 'Huckleberry' potatoes over all other cultivars tested."
• "Taste test results indicate that 'Caribe,' 'German Butterball,' 'Yukon Gold,' 'Viking Red,' and 'NorDonna' rank high" (Olsen et al., 2003).
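The two-standard-error rule is easy to apply in any language. The following Python sketch illustrates it with invented per-consumer rank data (these numbers are not from the potato trial; the helper name `mean_and_se` is ours):

```python
from math import sqrt

def mean_and_se(xs):
    """Mean and standard error of the mean for a list of ranks."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)  # sample variance
    return m, sqrt(var / n)

ranks_caribe = [8, 9, 7, 9, 8, 10, 9, 8]  # invented per-consumer ranks
ranks_other = [3, 4, 2, 5, 3, 4, 2, 3]

m1, se1 = mean_and_se(ranks_caribe)
m2, se2 = mean_and_se(ranks_other)
# Per the rule above: declare a difference when the mean ranks differ
# by more than two standard errors (an approximate 95% criterion).
different = abs(m1 - m2) > 2 * max(se1, se2)
print(m1, m2, different)
```

With these invented numbers the mean ranks differ by far more than two standard errors, so the cultivars would be declared significantly different for this attribute.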
Analysis Example
Using SAS, the following nonparametric procedure was applied to the specialty potato data. Where:
• Location is either 1 or 2 since the test was performed at two separate farmers' markets.
• Cultivar is the specialty potato type (these are the treatments)
• Texture is the mouth feel of the boiled sample
• Flavor is how the boiled sample tasted
• Appearance is how the boiled sample looked
• Rep is each individual consumer completing the entire evaluation
• A:\Potatotaste00.txt is the location and the name of the text data file
• The procedure NPAR1WAY calculates the mean ranks
The SAS program is as follows.
Data Pottaste2;
Infile 'A:\Potatotaste00.txt' delimiter='09'x;
Input location rep cultivar texture flavor appearance;
Proc sort;
by location cultivar texture flavor appearance rep;
Proc MEANS;
by location cultivar;
Proc NPAR1WAY WILCOXON;
by location;
Class cultivar;
OUTPUT OUT=DataT Wilcoxon;
VAR texture flavor appearance;
Proc rank out=rankdata data=pottaste2;
var texture flavor appearance;
ranks t f a;
by location;
Proc sort data=rankdata;
by location cultivar;
Proc means mean stderr;
var t f a;
by location cultivar;
run;
The results of this SAS program provide, for each attribute at each location for each treatment:
1. The rating mean--the raw score averaged for each attribute
2. The rank mean--the rank scores averaged for each attribute
Also provided are:
1. The chi-square for the ranked means. This will tell you if there are significant differences between the rank means.
2. The standard error for the rank means. If the chi-square shows significance, the standard error will tell you which rank means are different from each other.
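For readers without SAS access, the chi-square reported by PROC NPAR1WAY WILCOXON for more than two treatments is, to our understanding, the Kruskal-Wallis statistic computed on the ranks. A minimal Python sketch with invented ratings (ties are ignored here for simplicity; SAS applies a tie correction):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H for a list of rating lists, one list per treatment."""
    data = sorted((x, g) for g, xs in enumerate(groups) for x in xs)
    n_total = len(data)
    rank_sums = [0.0] * len(groups)
    for rank, (_, g) in enumerate(data, start=1):  # ranks 1..N (no tie correction)
        rank_sums[g] += rank
    # Standard Kruskal-Wallis formula: H = 12/(N(N+1)) * sum(R_i^2/n_i) - 3(N+1)
    return 12.0 / (n_total * (n_total + 1)) * sum(
        rs ** 2 / len(groups[g]) for g, rs in enumerate(rank_sums)
    ) - 3 * (n_total + 1)

# Three cultivars, three consumers each (invented texture ratings).
texture = [[8, 9, 7], [4, 3, 5], [6, 5, 7]]
h = kruskal_h(texture)
print(h)  # compare against a chi-square with (number of treatments - 1) df
```

A value of H larger than the chi-square critical value with (number of treatments − 1) degrees of freedom indicates that at least one treatment's mean rank differs from the others.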
This consumer preference technique and statistical method allow Extension professionals in remote field situations to measure sensory attributes of products using available consumer clientele, with
statistical analysis possible on an office computer.
Thanks to William Price, Statistician, Statistical Programs, College of Agriculture and Life Sciences, University of Idaho, Moscow, Idaho for help with SAS programming and interpretation.
Olsen, N., Robbins, J., Brandt, T., Lanting, R., Parr, J., Jayo, C., & Falen, C. (2003). Specialty potato production and marketing in southern Idaho. University of Idaho College of Agricultural and
Life Sciences CIS 1110.
SAS Institute Inc. (1999). SAS OnlineDoc®, Version 8. Cary, NC.
Smith, A., Cullis, B., Brockhoff, P., & Thompson, R. (2003). Multiplicative mixed models for the analysis of sensory evaluation data. Food Quality and Preference, 14(5/6), 387-395.
Bi-covering: Covering edges with two small subsets of vertices
We study the following basic problem, called Bi-Covering. Given a graph G(V, E), find two (not necessarily disjoint) sets A ⊆ V and B ⊆ V such that A ∪ B = V and such that every edge e belongs to
either the graph induced by A or the graph induced by B. The goal is to minimize max{|A|, |B|}. This is the simplest case of the Channel Allocation problem [R. Gandhi et al., Networks, 47 (2006),
pp. 225-236]. A solution that outputs V, ∅ gives ratio at most 2. We show that under a strong Unique Games Conjecture by Bansal and Khot [Optimal long code test with one free bit, in
Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science, FOCS'09, IEEE, 2009, pp. 453-462] there is no (2 - ϵ)-ratio algorithm for the problem, for any constant ϵ > 0. Given a
bipartite graph, Max-Bi-Clique is the problem of finding the largest k × k complete bipartite subgraph. For the Max-Bi-Clique problem, a constant factor hardness was known under a random 3-SAT
hypothesis of Feige [Relations between average case complexity and approximation complexity, in Proceedings of the 34th Annual ACM Symposium on Theory of Computing, ACM, 2002, pp. 534-543] and also
under the assumption that NP ⊈ ∩_{ϵ>0} DTIME(2^{n^ϵ}) [S. Khot, SIAM J. Comput., 36 (2006), pp. 1025-1071]. It was an open problem in [C. Ambühl, M. Mastrolilli, and O. Svensson, SIAM J. Comput., 40
(2011), pp. 567-596] to prove inapproximability of Max-Bi-Clique assuming a weaker conjecture. Our result implies a similar hardness result assuming the Strong Unique Games Conjecture. On the
algorithmic side, we also give better-than-2 approximations for Bi-Covering on numerous special graph classes. In particular, we get a 1.876 approximation for chordal graphs, an exact algorithm for
interval graphs, 1 + o(1) for minor-free graphs, 2 - 4δ/3 for graphs with minimum degree δn, 2/(1 + δ²/8) for δ-vertex expanders, 8/5 for split graphs, 2 - (6/5)·(1/d) for graphs with minimum constant
degree d, etc. Our algorithmic results are quite nontrivial. In achieving these results, we use various known structural results about the graphs, combined with techniques that we develop, tailored
to getting better than 2 approximation.
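To make the problem statement concrete, the feasibility condition and objective are straightforward to check. The following small Python verifier is our own illustration, not code from the paper:

```python
def is_bi_cover(vertices, edges, a, b):
    """True iff A and B cover all vertices and every edge lies inside A or inside B."""
    a, b = set(a), set(b)
    if a | b != set(vertices):
        return False
    return all((u in a and v in a) or (u in b and v in b) for u, v in edges)

def objective(a, b):
    """The Bi-Covering objective: max{|A|, |B|}."""
    return max(len(set(a)), len(set(b)))

# A 4-cycle: two overlapping 3-vertex "paths" cover every edge,
# beating the trivial solution A = B = V (objective 3 vs. 4).
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1)]
A, B = [1, 2, 3], [3, 4, 1]
print(is_bi_cover(V, E, A, B), objective(A, B))
```

Note that disjoint halves [1, 2] and [3, 4] are infeasible here, since the edges (2, 3) and (4, 1) lie inside neither induced subgraph.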
Bibliographical note
Publisher Copyright:
© 2017 Society for Industrial and Applied Mathematics.
• Bi-covering
• Max-Bi-clique
• Unique games
Reconstruction and Prediction from Three Images of Uncalibrated Cameras
(1995) Proceedings of 9th Conference on Theory and Applications of Image Analysis II p.113-126
This paper deals with the problem of reconstructing the locations of a number of points in space from three different images taken by uncalibrated cameras. It is assumed that the correspondences
between the points in the different images are known. In the case of six points this paper shows that there are in general three solutions to the problem of determining the shape of the object,
but some of them may be complex and some may not be physically realisable (e.g. points behind the camera). The solutions are given by a third degree polynomial with coefficients depending on the
coordinates of the points in the image. It is also shown how a priori information about the object, such as planarity of subsets of the points, can be used to make the reconstruction. In this case the
reconstruction is unique and it is obtained by a linear method. Furthermore, it is shown how additional points in the first two images can be used to predict the location of the corresponding
point in the third image, without calculating the epipoles. Finally, a linear method for the reconstruction in the case of at least seven point matches is given.
publishing date
Chapter in Book/Report/Conference proceeding
publication status
host publication
[Host publication title missing]
Borgefors, G.
113 - 126
World Scientific Publishing
conference name
Proceedings of 9th Conference on Theory and Applications of Image Analysis II
conference location
Uppsala, Sweden
conference dates
LU publication?
c4a6650f-6396-4869-9238-2a0d88d46102 (old id 787277)
date added to LUP
2016-04-04 11:34:07
date last changed
2021-01-25 10:54:24
abstract = {{This paper deals with the problem of reconstructing the locations of a number of points in space from three different images taken by uncalibrated cameras. It is assumed that the correspondences between the points in the different images are known. In the case of six points this paper shows that there are in general three solutions to the problem of determining the shape of the object, but some of them may be complex and some may not be physically realisable (e.g. points behind the camera). The solutions are given by a third degree polynomial with coefficients depending on the coordinates of the points in the image. It is also shown how a priori information of the object, such as planarity of subsets of the points, can be used to make reconstruction. In this case the reconstruction is unique and it is obtained by a linear method. Furthermore it is shown how additional points in the first two images can be used to predict the location of the corresponding point in the third image, without calculating the epipoles. Finally, a linear method for the reconstruction in the case of at least seven point matches are given}},
author = {{Heyden, Anders}},
booktitle = {{[Host publication title missing]}},
editor = {{Borgefors, G.}},
isbn = {{981 02 2448 6}},
keywords = {{computer vision; image reconstruction; matrix algebra; stereo image processing}},
language = {{eng}},
pages = {{113--126}},
publisher = {{World Scientific Publishing}},
title = {{Reconstruction and Prediction from Three Images of Uncalibrated Cameras}},
year = {{1995}},
Measure properties of 3-D volumetric image regions
stats = regionprops3(BW,properties) measures a set of properties for each connected component (object) in the 3-D volumetric binary image BW. The output stats is a table with one row of property measurements for each object.
regionprops3 finds unique objects in volumetric binary images using 26-connected neighborhoods. For more information, see Pixel Connectivity. To find objects using other types of connectivity,
instead use bwconncomp to create the connected components, and then pass the result to regionprops3 using the CC argument.
For all syntaxes, you can omit the properties argument. In this case, regionprops3 returns the "Volume", "Centroid", and "BoundingBox" measurements.
stats = regionprops3(CC,properties) measures a set of properties for each connected component (object) in CC, which is a structure returned by bwconncomp.
stats = regionprops3(L,properties) measures a set of properties for each labeled region in the 3-D label image L.
stats = regionprops3(___,V,properties) measures a set of properties for each labeled region in the 3-D volumetric grayscale image V. The first input (BW, CC, or L) identifies the regions in V.
Estimate Centers and Radii of Objects in 3-D Volumetric Image
Create a binary image with two spheres.
[x,y,z] = meshgrid(1:50,1:50,1:50);
bw1 = sqrt((x-10).^2 + (y-15).^2 + (z-35).^2) < 5;
bw2 = sqrt((x-20).^2 + (y-30).^2 + (z-15).^2) < 10;
bw = bw1 | bw2;
Get the centers and radii of the two spheres.
s = regionprops3(bw,"Centroid","PrincipalAxisLength");
centers = s.Centroid
centers = 2×3
diameters = mean(s.PrincipalAxisLength,2)
diameters = 2×1
radii = diameters/2
radii = 2×1
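The same measurements are easy to reproduce outside MATLAB. The following Python sketch is an illustration (not part of the toolbox): it computes the centroid and the equivalent diameter, (6*Volume/pi)^(1/3), for a discretized ball built the same way as bw1 above:

```python
import math

def region_stats(voxels):
    """Centroid and equivalent diameter of a list of (x, y, z) 'on' voxels."""
    n = len(voxels)
    centroid = tuple(sum(v[i] for v in voxels) / n for i in range(3))
    equiv_diameter = (6 * n / math.pi) ** (1 / 3)  # sphere of equal volume
    return centroid, equiv_diameter

# Ball of radius 5 centered at (10, 15, 35) on a 50^3 grid, like bw1 above.
ball = [(x, y, z)
        for x in range(1, 51) for y in range(1, 51) for z in range(1, 51)
        if (x - 10) ** 2 + (y - 15) ** 2 + (z - 35) ** 2 < 25]
centroid, d = region_stats(ball)
print(centroid, d)  # centroid at (10, 15, 35); diameter close to 10
```

Discretization makes the equivalent diameter only approximately 10, just as the MATLAB example's mean principal-axis lengths approximate the true sphere diameters.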
Get All Statistics for Cube Within a Cube
Make a 9-by-9 cube of 0s that contains a 3-by-3 cube of 1s at its center.
innercube = ones(3,3,3);
cube_in_cube = padarray(innercube,[3 3],0,'both');
Get all statistics on the cube within the cube.
stats = regionprops3(cube_in_cube,'all')
stats=1×18 table
Volume Centroid BoundingBox SubarrayIdx Image EquivDiameter Extent VoxelIdxList VoxelList PrincipalAxisLength Orientation EigenVectors EigenValues ConvexHull ConvexImage ConvexVolume Solidity SurfaceArea
______ ___________ ______________________________________ ___________________________________ _______________ _____________ ______ _____________ _____________ _______________________ ___________ ____________ ____________ _____________ _______________ ____________ ________ ___________
27 5 5 2 3.5 3.5 0.5 3 3 3 {[4 5 6]} {[4 5 6]} {[1 2 3]} {3x3x3 logical} 3.7221 1 {27x1 double} {27x3 double} 3.873 3.873 3.873 0 0 0 {3x3 double} {3x1 double} {24x3 double} {3x3x3 logical} 27 1 41.07
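The EquivDiameter of 3.7221 in the table above follows directly from the formula (6*Volume/pi)^(1/3) with Volume = 27. A quick check in Python (illustrative, not part of the MATLAB docs):

```python
import math

volume = 27  # number of 'on' voxels in the inner cube
equiv_diameter = (6 * volume / math.pi) ** (1 / 3)
print(equiv_diameter)
```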
Input Arguments
BW — Volumetric binary image
3-D logical array
Volumetric binary image, specified as a 3-D logical array.
regionprops3 sorts the objects in the volumetric binary image from left to right based on the top-left extremum of each component. When multiple objects have the same horizontal position, the
function then sorts those objects from top to bottom, and again along the third dimension. regionprop3 returns the measured properties, stats, in the same order as the sorted objects.
Data Types: logical
CC — Connected components
Connected components of a 3-D volumetric image, specified as a structure returned by bwconncomp using a 3-D connectivity value, such as 6, 18, or 26. CC.ImageSize must be a 1-by-3 vector.
Data Types: struct
L — Label image
3-D numeric array | 3-D categorical array
Label image, specified as one of the following.
• A 3-D numeric array. Voxels labeled 0 are the background. Voxels labeled 1 make up one object; voxels labeled 2 make up a second object; and so on. regionprops3 treats negative-valued voxels as
background and rounds down input voxels that are not integers. You can get a numeric label image from labeling functions such as watershed or labelmatrix.
• A 3-D categorical array. Each category corresponds to a different region.
Data Types: single | double | int8 | int16 | int32 | uint8 | uint16 | uint32 | categorical
properties — Type of measurement
"basic" (default) | comma-separated list of strings or character vectors | cell array of strings or character vectors | "all"
Type of measurement, specified as a comma-separated list of strings or character vectors, a cell array of strings or character vectors, "all" or "basic".
• If you specify "all", then regionprops3 computes all the shape measurements. If you also specify a grayscale image, then regionprops3 returns all of the voxel value measurements.
• If you specify "basic" or do not specify the properties argument, then regionprops3 computes only the "Volume", "Centroid", and "BoundingBox" measurements.
The following table lists all the properties that provide shape measurements. The Voxel Value Measurements table lists additional properties that are valid only when you specify a grayscale image.
Shape Measurements
Property Name Description Code
"BoundingBox" Smallest cuboid containing the region, returned as a 1-by-6 vector of the form [ulf_x ulf_y ulf_z width_x width_y width_z]. ulf_x, ulf_y, and ulf_z specify the upper-left front corner of the cuboid. width_x, width_y, and width_z specify the width of the cuboid along each dimension. Yes
"Centroid" Center of mass of the region, returned as a 1-by-3 vector. The three elements specify the (x, y, z) coordinates of the center of mass. Yes
"ConvexHull" Smallest convex polygon that can contain the region, returned as a p-by-3 matrix. Each row of the matrix contains the x-, y-, and z-coordinates of one vertex of the polygon. No
"ConvexImage" Image of the convex hull, returned as a volumetric binary image with all voxels within the hull filled in (set to on). The image is the size of the bounding box of the region. No
"ConvexVolume" Number of voxels in ConvexImage, returned as a scalar. No
"EigenValues" Eigenvalues of the voxels representing a region, returned as a 3-by-1 vector. regionprops3 uses the eigenvalues to calculate the principal axes lengths. Yes
"EigenVectors" Eigenvectors of the voxels representing a region, returned as a 3-by-3 vector. regionprops3 uses the eigenvectors to calculate the orientation of the ellipsoid that has the same normalized second central moments as the region. Yes
"EquivDiameter" Diameter of a sphere with the same volume as the region, returned as a scalar. Computed as (6*Volume/pi)^(1/3). Yes
"Extent" Ratio of voxels in the region to voxels in the total bounding box, returned as a scalar. Computed as the value of Volume divided by the volume of the bounding box [Volume/(bounding box width * bounding box height * bounding box depth)]. Yes
"Image" Bounding box of the region, returned as a volumetric binary image that is the same size as the bounding box of the region. The on voxels correspond to the region, and all other voxels are off. Yes
"Orientation" Euler angles [2], returned as a 1-by-3 vector. The angles are based on the right-hand rule. regionprops3 interprets the angles by looking at the origin along the x-, y-, and z-axis, representing roll, pitch, and yaw, respectively. A positive angle represents a rotation in the counterclockwise direction. Rotation operations are not commutative, so they must be applied in the correct order to have the intended effect. Yes
"PrincipalAxisLength" Length (in voxels) of the major axes of the ellipsoid that have the same normalized second central moments as the region, returned as a 1-by-3 vector. regionprops3 sorts the values from highest to lowest. Yes
"Solidity" Proportion of the voxels in the convex hull that are also in the region, returned as a scalar. Computed as Volume/ConvexVolume. No
"SubarrayIdx" Indices used to extract elements inside the object bounding box, returned as a cell array such that L(idx{:}) extracts the elements of L inside the object bounding box. Yes
"SurfaceArea" Distance around the boundary of the region [1], returned as a scalar. No
"Volume" Count of the actual number of on voxels in the region, returned as a scalar. Volume represents the metric or measure of the number of voxels in the regions within the volumetric binary image, BW. Yes
"VoxelIdxList" Linear indices of the voxels in the region, returned as a p-element vector. Yes
"VoxelList" Locations of voxels in the region, returned as a p-by-3 matrix. Each row of the matrix has the form [x y z] and specifies the coordinates of one voxel in the region. Yes
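Several of the formulas in the table above are simple to verify by hand. This Python sketch (our own illustration, not toolbox code) reproduces the BoundingBox and Extent definitions for a voxel list, using the 3-by-3-by-3 inner cube from the cube-in-cube example:

```python
def bounding_box_and_extent(voxels):
    """BoundingBox as (ulf_x, ulf_y, ulf_z, width_x, width_y, width_z);
    Extent = region volume / bounding-box volume."""
    xs, ys, zs = zip(*voxels)
    widths = [max(c) - min(c) + 1 for c in (xs, ys, zs)]
    # Upper-left front corner sits half a voxel before the minimum coordinate.
    bbox = (min(xs) - 0.5, min(ys) - 0.5, min(zs) - 0.5, *widths)
    extent = len(voxels) / (widths[0] * widths[1] * widths[2])
    return bbox, extent

# Inner cube of 'on' voxels: x, y in 4..6 and z in 1..3, so the region
# exactly fills its bounding box and Extent is 1.
cube = [(x, y, z) for x in range(4, 7) for y in range(4, 7) for z in range(1, 4)]
bbox, extent = bounding_box_and_extent(cube)
print(bbox, extent)
```

The computed bounding box matches the BoundingBox values shown in the cube-in-cube table (3.5, 3.5, 0.5, 3, 3, 3), and the Extent of 1 matches as well.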
The voxel value measurement properties in the following table are valid only when you specify a grayscale volumetric image, V.
Voxel Value Measurements
Property Name Description Code
"MaxIntensity" Value of the voxel with the greatest intensity in the region, returned as a scalar. Yes
"MeanIntensity" Mean of all the intensity values in the region, returned as a scalar. Yes
"MinIntensity" Value of the voxel with the lowest intensity in the region, returned as a scalar. Yes
"VoxelValues" Value of the voxels in the region, returned as a p-by-1 vector, where p is the number of voxels in the region. Each element in the vector contains the value of a voxel in the region. Yes
"WeightedCentroid" Center of the region based on location and intensity value, returned as a p-by-3 vector of coordinates. The first element of WeightedCentroid is the horizontal coordinate (or x-coordinate) of the weighted centroid. The second element is the vertical coordinate (or y-coordinate). The third element is the planar coordinate (or z-coordinate). Yes
Data Types: char | string | cell
V — Volumetric grayscale image
3-D numeric array
Volumetric grayscale image, specified as a 3-D numeric array. The size of the image must match the size of the binary image BW, connected component structure CC, or label matrix L.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32
Output Arguments
stats — Measurement values
Measurement values, returned as a table. The number of rows in the table corresponds to the number of objects in BW, CC.NumObjects, or max(L(:)). The variables (columns) in each table row denote the
properties calculated for each region, as specified by properties. If the input image is a categorical label image L, then stats includes an additional variable with the property "LabelName".
The order of the measurement values in stats is the same as the sorted objects in binary image BW, or the ordered objects specified by CC or L.
[1] Lehmann, Gaetan, and David Legland. "Efficient N-Dimensional surface estimation using Crofton formula and run-length encoding." The Insight Journal, 2012.
[2] Shoemake, Ken, Graphics Gems IV. Edited by Paul S. Heckbert, Morgan Kaufmann, 1994, pp. 222–229.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™. (since R2023a)
Usage notes and limitations:
• regionprops3 supports the generation of C code (requires MATLAB® Coder™). Note that if you choose the generic MATLAB Host Computer target platform, regionprops3 generates code that uses a
precompiled, platform-specific shared library. Use of a shared library preserves performance optimizations but limits the target platforms for which code can be generated. For more information,
see Types of Code Generation Support in Image Processing Toolbox.
• Supports only binary images or numeric label images. Input label images of data type categorical are not supported.
• Passing a cell array of properties is not supported. Use a comma-separated list instead.
• All properties are supported except "ConvexVolume", "ConvexHull", "ConvexImage", "Solidity", and "SurfaceArea".
Thread-Based Environment
Run code in the background using MATLAB® backgroundPool or accelerate code with Parallel Computing Toolbox™ ThreadPool.
This function fully supports thread-based environments. For more information, see Run MATLAB Functions in Thread-Based Environment.
Version History
Introduced in R2017b
R2023a: Support for C code generation
regionprops3 now supports the generation of C code (requires MATLAB Coder).
R2022b: Support for thread-based environments
regionprops3 now supports thread-based environments.