| content | meta |
|---|---|
If you are not a professional athlete, training does not affect body weight as much as is commonly believed, and proper nutrition provides a much greater effect on weight loss. You can organize it by
counting calories.
Mifflin-St. Jeor Formula
Developed in 1990 by American nutritionists led by Mifflin and St. Jeor, the formula allows you to calculate the required number of calories with high accuracy. It takes into account all the main
factors: height, weight, age and gender. You can find out the required amount of energy per day using a simple formula:
• For men: BMR = (m × 10) + (h × 6.25) − (t × 5) + 5.
• For women: BMR = (m × 10) + (h × 6.25) − (t × 5) − 161.
Here m is weight in kilograms, h is height in centimetres, and t is age in years. For example, if you are a 36-year-old man with a height of 175 centimetres and a weight of 95 kilograms, your basal rate will be: (95 × 10) + (175 × 6.25) − (36 × 5) + 5 = 1868.75 kilocalories. The resulting number is multiplied by an activity factor. For a sedentary lifestyle it is 1.2, which makes the final figure 2242.5 kilocalories.
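As a sketch, the formula translates directly into a few lines of Python; the call below reproduces the article's example:

```python
def mifflin_st_jeor(weight_kg, height_cm, age_years, male=True, activity=1.2):
    """Daily calorie needs via Mifflin-St. Jeor, scaled by an activity factor."""
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age_years + (5 if male else -161)
    return bmr * activity

# The article's example: a 36-year-old man, 175 cm, 95 kg, sedentary (factor 1.2).
print(round(mifflin_st_jeor(95, 175, 36, male=True, activity=1.2), 1))  # 2242.5
```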
Harris-Benedict formula
One of the earliest calorie-counting methods is a formula developed in 1919 by James Arthur Harris and Francis Gano Benedict at the Carnegie Institution of Washington. It lets you determine how much energy you need to maintain weight or lose weight, and distinguishes active metabolism (AMR) from basal metabolism (BMR). Basal is calculated first:
• For men (over 20): BMR = 66.4730 + (13.7516 × m) + (5.0033 × h) − (6.7550 × t).
• For women (over 20): BMR = 655.0955 + (9.5634 × m) + (1.8496 × h) − (4.6756 × t).
Having obtained the BMR, you calculate the active metabolism (AMR) by multiplying by an activity factor: 1.2 for a sedentary lifestyle, 1.375 for moderate activity, 1.725 for high activity, and 1.9 for athletes.
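A corresponding sketch for Harris-Benedict, reusing the article's 36-year-old, 175 cm, 95 kg example:

```python
def harris_benedict_1919(weight_kg, height_cm, age_years, male=True):
    """Original 1919 Harris-Benedict basal metabolic rate, in kcal/day."""
    if male:
        return 66.4730 + 13.7516 * weight_kg + 5.0033 * height_cm - 6.7550 * age_years
    return 655.0955 + 9.5634 * weight_kg + 1.8496 * height_cm - 4.6756 * age_years

# AMR = BMR x activity factor (1.2 sedentary, 1.375 moderate, 1.725 high, 1.9 athlete).
bmr = harris_benedict_1919(95, 175, 36, male=True)
amr = bmr * 1.2
print(round(bmr, 2), round(amr, 2))  # 2005.27 2406.33
```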
Katch-McArdle Formula
A simpler way to calculate calories was proposed by researchers Katch and McArdle, whose formula considers only lean body mass (and therefore body-fat percentage). The method is especially useful for athletes and for people leading an active lifestyle.
• BMR = (21.6 × LBM) + 370,
where LBM is lean body mass (body weight without fat). In turn, LBM is calculated as:
• LBM = (m × (100 − % fat)) / 100.
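A sketch of the two-step calculation (the 25% body-fat figure is an invented example):

```python
def katch_mcardle(weight_kg, body_fat_percent):
    """Katch-McArdle BMR in kcal/day, from lean body mass only."""
    lbm = weight_kg * (100 - body_fat_percent) / 100  # LBM: body weight without fat
    return 21.6 * lbm + 370

# e.g. a 95 kg person at 25% body fat has LBM = 71.25 kg:
print(round(katch_mcardle(95, 25), 1))  # 1909.0
```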
The Katch-McArdle formula does not take into account a person's age and gender, and cannot be applied to overweight people, since it requires a known body-fat percentage. Its main purpose is to select a diet for professional athletes.
Harris-Benedict formula (revised)
Developed in 1919 by James Harris and Francis Benedict, the formula was revised in 1984.
• For men: BMR = 88.362 + (13.397 × m) + (4.799 × h) − (5.677 × t).
• For women: BMR = 447.593 + (9.247 × m) + (3.098 × h) − (4.330 × t).
Today, both options are used, but the modified one is considered more accurate.
Schofield formula
An advanced calorie-counting method was published in 1985 by researcher W. N. Schofield, whose equations were derived from a large pool of earlier BMR measurements. Tables were compiled for different age intervals, taking into account gender and body weight. For men aged 18-30, the formula looks like this:
• BMR = 63 × w + 2896 (kJ/day), with a standard error of estimate (SEE) of 641 kJ.
For women aged 18 to 30, BMR is calculated differently:
• BMR = 62 × w + 2036 (kJ/day), with a standard error of estimate (SEE) of 497 kJ.
Similar formulas are given for both sexes across the age intervals. The results are then multiplied by an activity coefficient from 1.3 to 2.4.
It is important to understand that it is much easier not to consume excess calories than to try to burn them off in training. For example, running at 8 km/h, you would need roughly:
• 20 minutes to burn the calories from one half-litre bottle of beer.
• 28 minutes to burn the calories from one cheeseburger.
• 41 minutes to burn the calories from a litre of Coca-Cola or Pepsi.
• 43 minutes to burn the calories from an 85 g pack of Lays chips.
• 50 minutes to burn the calories from a Snickers Super bar.
It is also important to monitor the amounts of protein, fat and carbohydrate in the diet. Ideally, protein should make up 30-35%, fat 10-15%, and carbohydrates 50-60%. The protein/fat/carbohydrate breakdown is, as a rule, indicated on product packaging.
The required number of calories from food can then be calculated using online calculators or the formulas listed above. They let you determine with high accuracy how much energy you need per day, taking into account age, height, gender, weight and other parameters. Watch your diet and stay healthy!
|
{"url":"https://caloriecalculator.zone/","timestamp":"2024-11-01T20:26:25Z","content_type":"text/html","content_length":"57655","record_id":"<urn:uuid:0ad965f9-25f5-4ae4-bb8d-daca3c5037b8>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00132.warc.gz"}
|
A Quick Overview of Regression Algorithms in Machine Learning
The media shown in this article are not owned by Analytics Vidhya and are used at the Author's discretion.
Let’s start with a most often used algorithm type for simple output predictions which is Regression, a supervised learning algorithm.
We basically train machines so as to include some kind of automation in it. In machine learning, we use various kinds of algorithms to allow machines to learn the relationships within the data
provided and make predictions using them. So, the kind of model prediction where we need the predicted output is a continuous numerical value, it is called a regression problem.
Regression analysis revolves around simple algorithms, often used in finance, investing and elsewhere, and establishes the relationship between a single dependent variable and one or more independent ones. For example, predicting the price of a house or the salary of an employee are among the most common regression problems.
We will first discuss the types of regression algorithms in short and then move to an example. These algorithms may be linear as well as non-linear.
Linear ML algorithms
source: unsplash
Linear Regression
It is a commonly used algorithm and can be imported from the LinearRegression class. A single input variable (the most significant one) is used to predict the output. It is represented as:
y = b*x + c
where y is the dependent variable, x the independent one, b the slope of the best-fit line, and c its intercept. Unless there is an exact line relating the dependent and independent variables, there will be some error in the output, usually measured as the square of the difference between the predicted and actual values, i.e. the loss function.
When you use more than one independent variable to predict the output, it is termed multiple linear regression; here the input variables are assumed not to be strongly correlated with each other. This kind of model assumes a linear relationship between the given features and the output, which is its limitation.
Ridge Regression-The L2 Norm
This algorithm is an extension of linear regression that tries to reduce overfitting, and it also works with multiple-regression data. Its coefficients are not estimated by ordinary least squares (OLS) but by the ridge estimator, which is biased yet has lower variance than the OLS estimator, so the coefficients are shrunk. With this kind of model we can also reduce model complexity.
Even though the coefficients shrink, they are never driven exactly to zero. Hence, your final model will still include all the features.
Lasso Regression -The L1 Norm
LASSO stands for the Least Absolute Shrinkage and Selection Operator. It penalizes the sum of the absolute values of the coefficients to minimize prediction error, which causes the regression coefficients of some variables to shrink exactly to zero. It can be constructed using the Lasso class. One of the advantages of the lasso is built-in feature selection, which helps minimize prediction loss. On the other hand, lasso tends to pick only one variable from a group of correlated ones, and when there are more features than samples it can select at most as many features as there are samples before saturating.
Both lasso and ridge are regularisation methods.
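Both penalties can be illustrated with a small NumPy sketch on synthetic data (the closed form below is ridge only; lasso has no closed form and is usually fitted by coordinate descent in the library classes mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                    # 50 samples, 3 features
true_coefs = np.array([2.0, -1.0, 0.5])
y = X @ true_coefs + 0.1 * rng.normal(size=50)  # linear signal plus noise

def ridge_coefs(X, y, alpha):
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'y.
    # The L2 penalty biases the estimate but lowers its variance.
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

ols    = ridge_coefs(X, y, alpha=0.0)    # alpha = 0 recovers ordinary least squares
shrunk = ridge_coefs(X, y, alpha=100.0)  # a large penalty shrinks the coefficients

# Shrinkage pulls coefficients toward zero but never exactly to zero.
print(np.abs(shrunk).sum() < np.abs(ols).sum(), bool(np.all(shrunk != 0)))  # True True
```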
source: Unsplash
Let us go through some examples :
Suppose we have data on the years of experience and salary of different employees. Our aim is to create a model that predicts an employee's salary from years of experience. Since it contains one independent and one dependent variable, we can use simple linear regression for this problem.
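With invented numbers, the closed-form least-squares slope and intercept look like this:

```python
# A least-squares fit of salary on years of experience, from the closed-form
# slope/intercept formulas (the numbers below are made up for illustration).
years  = [1, 2, 3, 4, 5]
salary = [40, 45, 52, 58, 63]  # in thousands

n = len(years)
mean_x = sum(years) / n
mean_y = sum(salary) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, salary)) / \
    sum((x - mean_x) ** 2 for x in years)
c = mean_y - b * mean_x
print(f"salary = {b:.2f} * years + {c:.2f}")  # slope 5.90, intercept 33.90
```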
Non-Linear ML algorithms
Decision Tree Regression
It breaks a data set down into smaller and smaller subsets by repeated splitting, producing a tree with decision nodes and leaf nodes, which is then used to predict a value for any new data point. How each split is made is determined by the chosen parameters and algorithm, and splitting stops when further splits would add too little information. Decision trees often yield good results, but even a slight change in the data can change the whole structure, meaning the models are unstable.
source: unsplash
Let us take a case of house-price prediction: given a set of 13 features and around 500 rows, you need to predict the price of a house. With a considerable number of samples and features like this, trees or other non-linear methods are a reasonable choice for predicting values.
Random Forest
The idea behind random forest regression is that it uses multiple decision trees to find the output. The steps involved are:
– Pick K random data points from the training set.
– Build a decision tree associated with these data points
– Choose the number of trees we need to build and repeat the above steps(provided as argument)
– For a new data point, make each of the trees predict values of the dependent variable for the input given.
– Assign the average value of the predicted values to the actual final output.
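The steps above can be sketched with crude one-split "trees" (this toy uses median splits instead of real decision trees, purely to show the bootstrap-and-average structure):

```python
import random

random.seed(0)
data = [(x, 2 * x + random.uniform(-1, 1)) for x in range(20)]  # toy (x, y) pairs

def fit_stump(sample):
    """A deliberately crude 'tree': split at the median x, predict the mean y per side."""
    xs = sorted(x for x, _ in sample)
    split = xs[len(xs) // 2]
    left = [y for x, y in sample if x < split] or [y for _, y in sample]
    right = [y for x, y in sample if x >= split] or [y for _, y in sample]
    l, r = sum(left) / len(left), sum(right) / len(right)
    return lambda x, s=split, l=l, r=r: l if x < s else r

# Steps 1-3: draw K bootstrap samples and fit one tree per sample.
forest = [fit_stump(random.choices(data, k=len(data))) for _ in range(25)]

# Steps 4-5: every tree predicts, and the forest output is the average.
prediction = sum(tree(10) for tree in forest) / len(forest)
print(round(prediction, 1))
```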
This is similar to guessing the number of balls in a box: we note the guesses given by many people and then average them to make a decision. Random forest, as we have seen, uses multiple decision trees, but precisely because it has many trees it requires considerable training time and computational power, which remains a drawback.
K Nearest Neighbors(KNN model)
It can be used via the KNeighborsRegressor class. It is simple and easy to implement. For a new input, the k nearest neighbours are the k most similar instances in the training set, and either the average or the median of their target values is taken as the prediction.
source: unsplash
The distance metric can be given as an argument; the default is "minkowski", which generalizes both the "euclidean" (p = 2) and "manhattan" (p = 1) distances.
Predictions can be slow when the data is large or of poor quality. Since each prediction must consider all the data points, the model also takes up more space: it stores the entire training set.
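A minimal sketch of the averaging variant, on made-up one-dimensional data:

```python
def knn_predict(train, x_new, k=3):
    """k-nearest-neighbours regression: average the targets of the k closest points."""
    neighbours = sorted(train, key=lambda pt: abs(pt[0] - x_new))[:k]
    return sum(y for _, y in neighbours) / k

train = [(1, 10), (2, 12), (3, 15), (4, 19), (5, 24)]
print(knn_predict(train, 3.4))  # (15 + 19 + 12) / 3 ≈ 15.33
```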
Support Vector Machines(SVM)
It can solve both linear and non-linear regression problems. We create an SVM regression model using the SVR class. In a multi-dimensional space, where more than one variable determines the output, each data point is treated as a vector rather than a point in 2D. The fit is found via the max-margin concept (a hyperplane), with predictions tolerated within an error tube around it. Note that SVMs are not well suited to predicting values for large training sets, and SVM performance degrades when the data contains more noise.
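A didactic sketch of the idea behind SVR: a hypothetical from-scratch linear version trained by subgradient descent on the epsilon-insensitive loss (in practice you would use the SVR class; the data and hyperparameters here are invented):

```python
def fit_linear_svr(data, epsilon=0.1, C=1000.0, lr=0.005, epochs=2000):
    """Linear SVR by subgradient descent; a teaching sketch, not production code."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            # Points inside the epsilon tube contribute no loss gradient.
            g = 0.0 if abs(err) <= epsilon else (1.0 if err > 0 else -1.0)
            w -= lr * (w / C + g * x)  # small L2 term plus loss subgradient
            b -= lr * g
    return w, b

data = [(0, 0.1), (1, 1.9), (2, 4.1), (3, 5.9)]  # roughly y = 2x
w, b = fit_linear_svr(data)
print(f"w = {w:.2f}, b = {b:.2f}")  # slope near 2
```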
source: unsplash
If the training data is much larger than the number of features, KNN is better than SVM. SVM outperforms KNN when there are more features and less training data.
Well, we have come to the end of this article, where we discussed the main kinds of regression algorithms (theory) in brief. This is Surabhi; I am a B.Tech undergrad. Do check out my LinkedIn profile and get connected. Hope you enjoyed reading this. Thank you.
In summary, knowing about regression algorithms is essential in machine learning. Linear regression is the basic building block, and ridge/lasso add regularisation on top of it. Other tools like decision trees, random forest, KNN, and SVM make it possible to model and predict more complex relationships. It's like having a toolbox with different tools for different jobs in machine learning!
Frequently Asked Questions
Q1.What is regression and classification?
Regression is a machine learning task that aims to predict a numerical value from input data, like guessing a number on a scale. Classification, on the other hand, predicts which category or group something belongs to, like sorting things into different buckets.
Q2.What is an example of regression in ML?
Imagine predicting the price of a house based on factors like size, location, and number of bedrooms. That's a classic example of regression in machine learning. You're trying to estimate a specific value (the price) using various input features.
Q3. Where is regression used in ML?
Regression is used in many real-world scenarios. For instance, it helps predict stock prices, sales trends, or weather forecasts. In essence, regression in machine learning comes in handy when
predicting a numerical outcome.
Responses From Readers
The graphics do not render correctly.
|
{"url":"https://www.analyticsvidhya.com/blog/2021/01/a-quick-overview-of-regression-algorithms-in-machine-learning/","timestamp":"2024-11-11T17:56:44Z","content_type":"text/html","content_length":"370770","record_id":"<urn:uuid:4580d652-2f9c-4ab7-87c3-0ce662acb72e>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00089.warc.gz"}
|
How to apply and interpret linear regression in R - Data Tricks
|
{"url":"https://datatricks.co.uk/how-to-apply-and-interpret-linear-regression-in-r","timestamp":"2024-11-02T17:27:26Z","content_type":"text/html","content_length":"108799","record_id":"<urn:uuid:69d46f23-044d-45f4-b129-ccfd2a6dc189>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00601.warc.gz"}
|
Mass of the Earth.
I had a phone-call from a friend the other morning, who was driving her kids to school, aged 8 and 11. They wanted to know, "if you could weigh the Earth, how heavy would it be?" I recalled the mass
to be about 6 x 10^21 tonnes, which when I checked is about right, and so I said, "it's 6 thousand, million, million, million tonnes... No, million, million, million; not million million," and then I
said, "that's the trouble, once numbers get bigger than a few thousand, we can't imagine what they mean!" Then the 11 year old asked, "how do you know how heavy it is?" and I said, "you can measure
it from its gravity," which seemed to suffice for that moment. It's a good question, though, and the answer provides a sense of perspective regarding the planet.
In the case of the Earth, we can estimate its mass because we know the acceleration due to gravity at some point near the Earth's surface, g = 9.8 m s^-2. Combining this with the gravitational constant, G = 6.67 x 10^-11 m^3 kg^-1 s^-2, and the (mean) radius of the Earth, r = 6.37 x 10^6 m, we have:
GmM/r^2 = mg,
where M = Earth's mass and m = some smaller mass close to the Earth's surface. By cancelling the terms, m, and solving for M, we get:
M = gr^2/G
= 9.8 m s^-2 x (6.37 x 10^6 m)^2 / 6.67 x 10^-11 m^3 kg^-1 s^-2 = 5.96 x 10^24 kg (i.e. about 6 x 10^21 tonnes).
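The arithmetic above can be checked in a couple of lines:

```python
g = 9.8        # m s^-2, acceleration due to gravity at the surface
G = 6.67e-11   # m^3 kg^-1 s^-2, gravitational constant
r = 6.37e6     # m, mean radius of the Earth

M = g * r ** 2 / G  # from GmM/r^2 = mg
print(f"M = {M:.2e} kg")  # ~5.96e+24 kg
```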
Another approach to the problem is to use the "satellite method", which in the present case refers to the Earth-Moon system, but is used by astronomers to determine the masses of the other planets,
the Sun, distant stars in binary systems, the Milky Way galaxy and even entire clusters of galaxies. We can express (according Newton's Law):
F(gravity) = GMm/r^2,
where G is the gravitational constant, M is the Earth's mass and m is the mass of the satellite (Moon), with r being the distance between the centres of the two bodies. We can further express for a
simple circular orbit, the centrifugal force (which acts in opposition to the gravitational force):
F(centrifugal) = mv^2/r,
where v is the orbital speed of the satellite. For a stable stationary orbit to exist, the two forces must be equal and opposite, and so we can write that F(gravity) = F(centrifugal), and hence:
GMm/r^2 = mv^2/r. By, once more, cancelling the terms, m, and rearranging, we get:
M = v^2 r/G.
Assuming a circular orbit, the mean orbital speed, v, is the circumference of the orbit divided by the time (t) taken for the satellite to complete that orbit, i.e. v = 2 pi r/t, and so if we
substitute for v, we find:
M = 4 pi^2 r^3/G t^2.
Since the mean distance between the Earth-Moon centres is 384,000 km and the orbital period is 27.32 days ( = 2.36 x 10^6 seconds),
M = 4 pi^2 (3.84 x 10^8 m)^3/6.67 x 10^-11 m^3 kg^-1 s^-2 (2.36 x 10^6 s)^2 = 6.02 x 10^24 kg.
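And the satellite-method value, with the same constants:

```python
import math

G = 6.67e-11        # m^3 kg^-1 s^-2, gravitational constant
r = 3.84e8          # m, mean Earth-Moon distance
t = 27.32 * 86400   # s, sidereal orbital period (27.32 days)

M = 4 * math.pi ** 2 * r ** 3 / (G * t ** 2)
print(f"M = {M:.2e} kg")  # ~6.02e+24 kg
```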
Thus the methods agree pretty well. In a posting "Carbon in the Sky" (January 6th 2007), I worked out that the mass of the Earth's atmosphere is about 5.3 x 10^18 kg, and so we can deduce that the
relative mass of the atmosphere to the total mass of our blue planet Earth is 1/1,136,000 (i.e. less than one millionth of it), a value that might easily be thought insignificant...
but not from our point of view!
Related Reading.
(1) http://www.astronomycafe.net/qadir/q1223.html
(2) Nelkon and Parker, Advanced Level Physics, 4th Edition, Heinemann Educational Books, London, 1978.
|
{"url":"https://ergobalance.blogspot.com/2007/10/mass-of-earth.html","timestamp":"2024-11-14T07:25:31Z","content_type":"text/html","content_length":"107843","record_id":"<urn:uuid:3edd0992-235a-4237-818a-fc0bb33c0450>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00299.warc.gz"}
|
too little, too late
Recently, CIP
noticed that our mountains were missing
and wondered if "the simulation we live in had glitched, wiping out all those pixels". But later he came up with the explanation that "violent winds had stirred enough of our desert dust to obscure
everything further away".
Obviously, Occam's razor suggests going with his first explanation, which needs only one simple software glitch, while the second assumes the somewhat coordinated movement of gazillions of molecules, which seems quite unlikely.
Of course, an even sharper razor suggests that CIP is not real at all and the blog about his opinions is in fact written by one of Google's AI bots - in other words fake news.
Meanwhile, in comments to Scott A., people
wondered about clouds
, because "if you’re a cloud, there’s a fairly high chance that you’re living inside a 5-day forecast weather simulation."
I am not so sure about that
How can a Google AI bot be really sure of anything?
If you select an answer to this question at random from the 5 choices below (*), what is the probability that you will be correct?
A: 20%
B: 40%
C: 0%
D: 20%
E: none of the above
(*) uniform probability distribution, "none of the above" includes "the question makes no sense"
added later: It is important that A and D are both 20% for the paradox to work. But assume that we change D to e.g. 30%, this would significantly change the puzzle; but would it be less paradoxical?
In the following I shall use W0, W1, ... to denote different possible worlds; with "possible" I mean "compatible with physics as we know it".
In other words, each Wi corresponds to a particular 4-geometry and matter content.
Let me assume that "physics as we know it" does allow the creation of baby universes; see also this.
In the following I will use [Wb] to denote a world which created Wb as a baby universe; obviously many different worlds would be able to create the same world Wb and I leave it up to you if [] picks
a particular one or represents all of them (it does not really matter for the argument I am trying to make).
We can then generalize this notation so that [Wa, Wb, ...] denotes a world which creates several baby universes Wa, Wb, ... and
[[Wc]] denotes a world which creates a baby universe which then creates Wc as its baby universe. And so on and so forth.
If you read my previous blog post, then you already know where I am going with this:
Let us assume that we can count all possible worlds in the set S = {W1, W2, W3, ...}.
It is then clear that every subset {W1, {W2, W3}} etc. corresponds to a possible world [W1, [W2, W3]] etc. etc.
so it immediately follows that S cannot be countable, because the powerset of S has higher cardinality than S itself.
It actually follows that S is not a proper set imho.
So how can a believer of the many worlds interpretation define a wavefunction of the universe over all possible worlds?
There is a different way to arrive at the same conclusion: If one considers a path integral Z over all 4-geometries (after some necessary but currently little understood regularization 8-) as the
wavefunction of the universe, then the assumption of "baby universes" is equivalent to a sum over 'not simply connected' 4-manifolds; but this sum Z does not really exist, due to Goedel's theorem.
Notice that the problem does not go away even if we keep the 'not simply connected' 4-geometries (quasi)classical and only consider the quantum matter to trigger the creation of baby universes (or
CIP did not post anything since February.
Jester is silent since September.
Cosma has not updated his pinboard for weeks.
But Dave is posting again (mostly book reviews).
And SSC is churning out interesting posts like a machine...
added later: CIP is back.
|
{"url":"https://wbmh.blogspot.com/2017/03/","timestamp":"2024-11-13T19:33:19Z","content_type":"application/xhtml+xml","content_length":"54297","record_id":"<urn:uuid:38e2cc17-848f-48f8-ab2b-72919367f42b>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00497.warc.gz"}
|
Questioning the Solution for Elevator vs Rock Problem
• Thread starter Darkmisc
• Start date
In summary, the thread questions when the rock meets the descending lift. The given solution notes that the rock has a velocity of -15 m/s when it returns to the origin, while the lift descends at 2.5 m/s, and it starts the clock when the rock is thrown. The poster suggests the solution lets the lift move for about 10 s, so that it reaches -25.15 m while the rock rises to its peak and returns to x = 0, and then resets the clock to t = 0.
Homework Statement
An elevator descends at 2.5 m/s. Seven seconds after it starts, a rock is thrown upwards at 15 m/s. g = 9.8 m/s^2. When will the rock hit the elevator?
Relevant Equations
v = 15 -9.8t.
Hi everyone
I have solutions to this problem, but I am not sure if the premise behind them is correct.
I think this is the reasoning behind the solution for question d.
- Work out how far the elevator travels in the 7 s before the rock is thrown: 7 × 2.5 = 17.5 m.
- Work out how far the elevator travels in the time it takes for the rock to reach its peak and then fall back to X = 0: 150/49 × 2.5 ≈ 7.65 m.
- This means the elevator will have a head start of 25.15 m on the rock when the rock returns to X = 0.
- After this point, the position of the elevator can be given by XL = -25.15 - 2.5t.
The part of the solution that I'm not sure about is where they give the position of the rock as XR = 15t - 4.9t^2.
It seems like they are starting the clock with XL at -25.15 and XR at 0. However, at this point the rock is not starting with its initial upward velocity: it has already risen and fallen, spending 75/49 s falling back from its peak. I'm not sure the equation XR = 15t - 4.9t^2 accounts for this.
When the rock returns to X =0, its velocity should be 15 - 150/49*9.8 = -15m/s. Its velocity at subsequent moments should be given by v=-15 -9.8t. Integrating this should give XR= -15t -4.9t^2. I
think it is this equation for XR that should be equated with -25.15 -2.5t to get the correct value for t.
Is my reasoning correct? Or are the solutions correct?
I can't view the attachments and I can't follow what you are saying. The best idea is to start the clock when the rock is thrown up.
I can't view the attachments either.
Is the rock above or below the elevator when it is projected upwards? How far is it from the elevator when the elevator starts moving? That information is important for finding when the rock hits the elevator.
Sorry, not sure what went wrong with the attachments
Parts (a) and (b) look fine.
I disagree with the author of the problem about part (c). Here is why.
Both rock and lift start at point O. This is the origin from which positions are measured.
Time ##t## is measured with respect to a clock that starts when the rock is thrown.
At that time, the lift is at position ##D=-2.5~\text{(m/s)}\times 7~\text{s}=-17.5~\text{m}## relative to the origin. I don't see where the ##x_L(t)=-25.15-2.5t## is coming from. It predicts that at
time ##t=0## the lift is at ##-25.15## m which would be its position if it started moving about 10 s before the rock. Am I missing something?
Part (d) is consistent with part (c).
Well, it says in the question that ##t = 0## when the lift starts to move and the rock is thrown at ##t = 7s##. That doesn't seem to be respected in the solution.
kuruman said:
Parts (a) and (b) look fine.
I disagree with the author of the problem about part (c). Here is why.
Both rock and lift start at point O. This is the origin from which positions are measured.
Time ##t## is measured with respect to a clock that starts when the rock is thrown.
At that time, the lift is at position ##D=-2.5~\text{(m/s)}\times 7~\text{s}=-17.5~\text{m}## relative to the origin. I don't see where the ##x_L(t)=-25.15-2.5t## is coming from. It predicts that
at time ##t=0## the lift is at ##-25.15## m which would be its position if it started moving about 10 s before the rock. Am I missing something?
Part (d) is consistent with part (c).
I think the solution lets the clock run for about 10 s so that the elevator can get to -25.15 m and the rock can reach its peak and return to X = 0. It then resets the clock to t = 0. I think this was the author's way of dealing with the 7 s difference between the elevator's descent and the rock being thrown.
I also don't agree with the author's approach. It confused me, and I think they jumped between time frames without making it clear that's what they did. The answer they gave is 4.67 s, which is less than the time it took the elevator to reach -25.15 m. I'm guessing they meant to add 4.67 s to the ~10 s, but even then I think it would be the wrong answer.
Darkmisc said:
I think the solution lets the clock run for about 10s so that the elevator can get to -25.15m and the rock can reach its peak and return to X =0.
You can't reverse-engineer the problem statement from the solution and try to justify the answer this way. The problem statement should be written in such a way that the solution can be found without
ambiguity. It seems to me that the author of the problem changed the input numbers and forgot to change the equation in part (c) to match the new numbers.
For future reference, the way to handle problems is to write separate equations for each moving object in which time starts when the object starts moving and use subscripts to keep the objects sorted
out. For the lift, the equation is $$x_L=V_Lt_L$$ For the rock it is $$x_R=v_0t_R-\frac{1}{2}gt_R^2.$$Now the rock clock is ##t_0=7## s behind the lift clock. In other words, ##t_R=t_L-t_0## which
makes the rock equation $$x_R=v_0t_L-\frac{1}{2}g(t_L-t_0)^2.$$ At specific time ##t_L=T## the lift and the rock are at the same place, i.e. the rock collides with the lift. This means that $$v_0T-\
frac{1}{2}g(T-t_0)^2=V_LT.$$ At this point you solve the quadratic in ##T## with ##v_0=15## m/s, ##V_L=-2.5## m/s, ##t_0=7## s and ##g=9.8## m/s
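As a numerical check of this single-clock approach, a short script can expand and solve the quadratic, writing the rock's position on the lift clock as v0(T - t0) - ½g(T - t0)² and the lift's as VL·T:

```python
import math

v0, VL, t0, g = 15.0, -2.5, 7.0, 9.8  # m/s, m/s, s, m/s^2

# Collision condition: v0*(T - t0) - (g/2)*(T - t0)^2 = VL*T.
# In tau = T - t0 this becomes (g/2)*tau^2 - (v0 - VL)*tau + VL*t0 = 0.
a = g / 2
b = -(v0 - VL)
c = VL * t0
tau = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # positive root
T = tau + t0
print(round(tau, 2), round(T, 2))  # 4.39 s after the throw, 11.39 s on the lift clock
```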
FAQ: Questioning the Solution for Elevator vs Rock Problem
1. What is the "Elevator vs Rock Problem"?
The "Elevator vs Rock Problem" is a thought experiment that poses the question of whether it is better to be in a falling elevator or to jump out of the elevator before it hits the ground, assuming
both options result in certain death.
2. What is the solution to the "Elevator vs Rock Problem"?
The solution to the "Elevator vs Rock Problem" is that it is better to stay in the elevator and try to cushion the impact by lying flat on the floor. This is because the elevator provides some
protection and cushioning from the impact, whereas jumping out would result in a higher velocity and a more direct impact with the ground.
3. Are there any real-life examples of the "Elevator vs Rock Problem"?
There have been a few real-life examples of the "Elevator vs Rock Problem", such as the 1945 Empire State Building elevator incident where a B-25 bomber crashed into the building and caused an
elevator to fall from the 75th floor. In this case, the passengers who stayed in the elevator survived while those who tried to jump out did not.
4. What factors should be considered when evaluating the "Elevator vs Rock Problem"?
When evaluating the "Elevator vs Rock Problem", factors such as the height of the fall, the weight and density of the objects involved, and the impact surface should be considered. These factors can
affect the velocity and force of impact, and therefore the outcome of the situation.
5. Is there a definitive answer to the "Elevator vs Rock Problem"?
No, there is no definitive answer to the "Elevator vs Rock Problem" as it ultimately depends on the specific circumstances and variables involved. However, the general consensus among scientists is
that staying in the elevator and trying to cushion the impact is the better option.
|
{"url":"https://www.physicsforums.com/threads/questioning-the-solution-for-elevator-vs-rock-problem.1045293/","timestamp":"2024-11-03T14:03:06Z","content_type":"text/html","content_length":"120366","record_id":"<urn:uuid:80887631-3382-49f4-8f44-c8809cba7a09>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00366.warc.gz"}
|
Mathematics and optics
On number and vision
Mathematics and Optics
Many sons of Francis recognized the importance of mathematics well before the scientific vision of the world emerged. For Roger Bacon, one of the greatest Franciscan thinkers of the Middle Ages,
mathematics opened the doors to understanding the laws of nature and the mutability of phenomena, and constituted the key to building solid scientific knowledge in every field.
Medieval mathematics: between East and West
In medieval universities, the treatise De institutione arithmetica by Severinus Boethius was widely used, here displayed in an incunabulum; but for teaching the basics of calculation, simpler
treatises were also used, such as the so-called algorismus, named after the 9th-century Arab mathematician al-Khuwārizmī. The texts contain numerous marginal annotations to facilitate the reader’s
orientation, a sign of intense study activity and perhaps a trace of the preparation for exams by anonymous medieval students.
Classical mathematical thought: Euclid
Among the various classics of mathematical thought, Euclid’s Elements stand out, an indispensable reference for the study of the discipline for over two millennia. The work is a precious and
systematic reorganization of all the mathematical knowledge accumulated between the 5th and 4th centuries BC, here displayed in an edition published in Venice in 1509 by one of the greatest
mathematicians of the Renaissance: the Franciscan Luca Pacioli.
Fra Luca recognized the extraordinary practical utility of mathematics, learned in the Venetian mercantile environment, and, faithful to the Franciscan ideal of total and loving openness to the world,
disseminated it in the vernacular to facilitate understanding by students and practitioners.
Optics and perspectiva
In the second part of the section, we also find works on optics or perspectiva, that is, the science of light, studied both from a physical-mathematical and physiological point of view: Roger Bacon,
Bartolomeo da Bologna, and John Peckham are just some of the friars who were keenly interested in this discipline, blending theological concepts and scientific theories on light in their treatises.
Noteworthy is a “small” but famous manuscript: the Tractatus de Perspectiva by Peckham, richly accompanied by diagrams and geometric drawings useful for explaining optical phenomena; it is likely a
part of the Perspectiva communis, one of the most well-known optics manuals until the 17th century.
The Arab and Latin traditions in the Renaissance
Finally, printed works are displayed that testify to the friars’ interest in the classics of the Arab and Latin traditions. In a volume published in 1572, two works are intentionally united: a
treatise on optics, known in Latin as De aspectibus, widely used during the Middle Ages and the Renaissance, by Alhazen, an Arab doctor, philosopher, mathematician, and astronomer; and the Perspectiva
by Witelo (Vitellione), a Polish philosopher and scientist who lived in the 13th century, particularly appreciated by scholars well beyond the Middle Ages, counting among its famous readers the
“great” Kepler.
|
{"url":"https://laudatosie.com/sezione5/","timestamp":"2024-11-06T17:32:10Z","content_type":"text/html","content_length":"60457","record_id":"<urn:uuid:ea13239d-25de-4532-9133-30b0a9074834>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00046.warc.gz"}
|
Difference between the CAPM and the single index model
While the CAPM is a single-factor model, the APT allows for multiple factors, so an asset can be exposed to several different systematic risk factors. The discussion below covers the analysis of the
single index model (SIM) and the evaluation of the capital asset pricing model (CAPM); note that physical and financial asset returns are two different concepts.
The single factor model assumes that actual returns deviate from expectations due to macro events and firm-specific events; the single index model simply replaces the macro event with a broad market
index. Neither of these deals with the risk-free rate: the CAPM models expected returns in excess of the risk-free rate based on the security market line, and the international capital asset pricing
model extends the concept of the CAPM to international investments. There is no reason to assume that a good factor model for one period will be a good one for the next period; key factors change, as
in the effect of energy prices on security markets in the 1970s and, more recently, during the war in the Persian Gulf. The CAPM itself is a cornerstone of financial economics.
In fact, the single index model is just a statistical technique, because you can replace $r_m$ with any other variable you think fits best to explain a stocks return. The CAPM however is an economic
model in equilibrium, where the market-portfolio return $r_m$ is a clearly determined portfolio (of all risky assets, investments, also human-capital).
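To make the contrast concrete, here is a minimal sketch (entirely our own, with made-up return series rather than market data) that estimates the single-index beta by ordinary least squares and then plugs that beta into the CAPM security market line; the risk-free rate and expected market return are assumed inputs:

```python
# Single-index model: r_i = alpha + beta * r_m + e, fitted by OLS.
# CAPM: E[r_i] = r_f + beta * (E[r_m] - r_f), an equilibrium statement.

market = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]   # illustrative market returns
stock  = [0.03, -0.02, 0.05, 0.01, -0.03, 0.06]   # illustrative stock returns

n = len(market)
mean_m = sum(market) / n
mean_s = sum(stock) / n
cov = sum((m - mean_m) * (s - mean_s) for m, s in zip(market, stock)) / n
var_m = sum((m - mean_m) ** 2 for m in market) / n

beta = cov / var_m                 # slope of the index regression
alpha = mean_s - beta * mean_m     # intercept

r_f = 0.005                        # assumed risk-free rate per period
expected_market = 0.015            # assumed E[r_m]
capm_expected = r_f + beta * (expected_market - r_f)   # security market line

print(round(beta, 3), round(alpha, 4), round(capm_expected, 4))
```

The regression step is pure statistics, exactly as the text says: any explanatory variable could stand in for the market return. Only the last line uses the CAPM's equilibrium interpretation of beta.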
So both the CAPM and the single-index model suggest that the market portfolio is the optimal risky portfolio, but they do so from different starting assumptions. The equilibrium model above for
portfolio analysis is called the Capital Asset Pricing Model (CAPM); for a single asset $i$ it gives $\bar{r}_i - r_f = \beta_i(\bar{r}_M - r_f)$, and a common choice of market index is the Standard &
Poor's 500-stock index (S&P 500), made up of 500 stocks. Sharpe's index model simplifies the process of the Markowitz model by reducing the data requirements: Sharpe first made a single index model,
and the difference lies in how the Markowitz covariance model and the Sharpe index model calculate their inputs. Several different estimating strategies have also been employed to explore ex-post
portfolio performance within the CAPM and a single index model approach.
A uni-factor model like the CAPM may not predict returns satisfactorily, and multi-factor models can have better explanatory power than the single-factor CAPM; this is the main point addressed by the
APT. The Single Index Model relates the returns on each security to the returns on a market index. Interestingly, the same formula is used to calculate the rate of return under the CAPM; however, the
difference lies in the use of a single non-company factor and a single measure of the relationship between the asset price and that factor in the case of the CAPM, whereas the APT uses many factors
and different measures of the relationship between the asset price and each factor.
The Single Index Model (SIM) is an asset pricing model according to which the returns on a security are driven by a single market factor, while the factor used in the CAPM is the difference between
the market return and the risk-free rate. Note also that when we derive the CAPM (i.e. find equations for the capital market line and the security market line), we nowhere assume that the individual ...
CAPM, which stands for the capital asset pricing model, divides an investor's portfolio into two groups. The first group consists of a single, riskless asset, and the second group consists of a
portfolio of all risky assets. The latter is called the tangent portfolio. It is also assumed that all investors hold the same tangent portfolio.
The capital asset pricing model (CAPM) is an idealized portrayal of how financial markets price securities: a stock's systematic risk rises and falls at the same percentage as a broad market index.
The famous CAPM is a single factor model in which the factor is the difference between the market and risk-free rates of return, $(r_M - r_{RF})$. The CAPM advocates a single, market-wide risk factor,
while the APT allows several; this is the underlying difference between the Capital Asset Pricing Model and Arbitrage Pricing Theory. Some interesting index models include the single index model and
the three-index model; the single index model corresponds to the standard Capital Asset Pricing Model of Sharpe (1964).
|
{"url":"https://bestbinaryhoqu.netlify.app/lalich25282noc/difference-capm-and-single-index-model-ha","timestamp":"2024-11-05T05:56:27Z","content_type":"text/html","content_length":"35436","record_id":"<urn:uuid:6c9ca36a-4dc8-4fb8-a9e9-a9edd68cca6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00651.warc.gz"}
|
The linear stability of the Schwarzschild solution to gravitational perturbations
We prove in this paper the linear stability of the celebrated Schwarzschild family of black holes in general relativity: Solutions to the linearisation of the Einstein vacuum equations (“linearised
gravity”) around a Schwarzschild metric arising from regular initial data remain globally bounded on the black hole exterior, and in fact decay to a linearised Kerr metric. We express the equations
in a suitable double null gauge. To obtain decay, one must in fact add a residual pure gauge solution which we prove to be itself quantitatively controlled from initial data. Our result a fortiori
includes decay statements for general solutions of the Teukolsky equation (satisfied by gauge-invariant null-decomposed curvature components). These latter statements are in fact deduced in the
course of the proof by exploiting associated quantities shown to satisfy the Regge–Wheeler equation, for which appropriate decay can be obtained easily by adapting previous work on the linear scalar
wave equation. The bounds on the rate of decay to linearised Kerr are inverse polynomial, suggesting that dispersion is sufficient to control the non-linearities of the Einstein equations in a
potential future proof of non-linear stability. This paper is self-contained and includes a physical-space derivation of the equations of linearised gravity around Schwarzschild from the full
non-linear Einstein vacuum equations expressed in a double null gauge.
|
{"url":"https://collaborate.princeton.edu/en/publications/the-linear-stability-of-the-schwarzschild-solution-to-gravitation","timestamp":"2024-11-11T17:50:22Z","content_type":"text/html","content_length":"53019","record_id":"<urn:uuid:d50ae8ef-b040-4965-9949-5184d0ce369f>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00105.warc.gz"}
|
[EM] Thoughts on Burial
fsimmons at pcc.edu
Fri Jul 23 17:46:03 PDT 2010
You guys have come up with some interesting ideas about the likelihood of sincere cycles, but my idea
is not that complicated:
Usually in the high stakes elections that I have witnessed there are just a few issues that most voters
feel strongly about, and opinions on these issues are highly correlated (or anti-correlated) so that the
voter distribution in issue space is basically cigar shaped.
Perpendicular to the long axis of that cigar find a plane that divides the voters into two equal subsets
(plus or minus one). The candidate closest to that plane is very likely a Condorcet candidate.
But this Condorcet candidate can be buried as easily as a Condorcet candidate can be buried in a
precisely one dimensional issue space.
I like Condorcet methods that discourage burial in one dimensional cases. I don't care so much about
the case where the candidates are distributed on the vertices of an acute triangle, i.e. the triangle is
close to equilateral. In that case burial may serve a useful purpose of decreasing the probability of
winning for a low utility Condorcet candidate.
In particular, the sincere profile
40 A>C>>B
30 B>C>A
30 C>A>>B
could easily come from a one-dimensional or cigar-shaped issue space. Any Condorcet method that doesn't
make burial of C risky for the A faction in this context is going to end up with more artificial cycles than
real ones.
Note that random ballot on Smith is adequate for preventing the burial without any defensive strategy on
the part of the C supporters.
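To make the pairwise arithmetic concrete, here is a small sketch of our own (treating ">>" simply as ">") that tallies the sincere profile above against the version where the A faction buries C:

```python
def condorcet_winner(profile):
    """profile maps a strict ranking (tuple of candidates) to a voter count."""
    cands = sorted({c for ranking in profile for c in ranking})
    total = sum(profile.values())
    for c in cands:
        beats_all = all(
            sum(n for r, n in profile.items() if r.index(c) < r.index(d)) > total / 2
            for d in cands if d != c)
        if beats_all:
            return c
    return None  # no Condorcet winner: the pairwise relation has a cycle

sincere = {("A", "C", "B"): 40, ("B", "C", "A"): 30, ("C", "A", "B"): 30}
buried  = {("A", "B", "C"): 40, ("B", "C", "A"): 30, ("C", "A", "B"): 30}

print(condorcet_winner(sincere))  # C  (beats A 60-40 and B 70-30)
print(condorcet_winner(buried))   # None: burial creates the cycle A>B, B>C, C>A
```

The burial flips only the B-versus-C contest (from 30-70 to 70-30), which is exactly what manufactures the artificial cycle.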
On the other hand the profile
40 A>>C>B
30 B>>C>A
30 C>>A>B
could not arise from a one-dimensional or cigar-shaped issue space. And candidate C has such low
utility, it wouldn't be bad if A got a share of the probability through a burial of C.
Random Ballot Smith doesn't discourage burial in this case, in which C retains only 30% of the
probability. Without more detailed information it would be impossible to prove that C deserved more than
that amount.
|
{"url":"http://lists.electorama.com/pipermail/election-methods-electorama.com/2010-July/124727.html","timestamp":"2024-11-13T15:18:42Z","content_type":"text/html","content_length":"4812","record_id":"<urn:uuid:8a365eba-e528-4813-9e99-884e91bea412>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00632.warc.gz"}
|
The Stacks project
Definition 12.23.6. Let $\mathcal{A}$ be an abelian category. Let $(K, F, d)$ be a filtered differential object of $\mathcal{A}$. We say the spectral sequence associated to $(K, F, d)$
1. weakly converges to $H(K)$ if $\text{gr}H(K) = E_{\infty }$ via Lemma 12.23.5,
2. abuts to $H(K)$ if it weakly converges to $H(K)$ and we have $\bigcap F^ pH(K) = 0$ and $\bigcup F^ pH(K) = H(K)$,
|
{"url":"https://stacks.math.columbia.edu/tag/012I","timestamp":"2024-11-14T03:47:11Z","content_type":"text/html","content_length":"14178","record_id":"<urn:uuid:b19190a7-5164-40b5-a655-554ee7c019fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00135.warc.gz"}
|
Part IA, 2014, Paper 4
Jump to course
Dynamics and Relativity
A particle of mass $m$ has charge $q$ and moves in a constant magnetic field $\mathbf{B}$. Show that the particle's path describes a helix. In which direction is the axis of the helix, and what is the
particle's rotational angular frequency about that axis?
What is a 4-vector? Define the inner product of two 4-vectors and give the meanings of the terms timelike, null and spacelike. How do the four components of a 4-vector change under a Lorentz
transformation of speed $v$ ? [Without loss of generality, you may take the velocity of the transformation to be along the positive $x$-axis.]
Show that a 4-vector that is timelike in one frame of reference is also timelike in a second frame of reference related by a Lorentz transformation. [Again, you may without loss of generality
take the velocity of the transformation to be along the positive $x$-axis.]
Show that any null 4-vector may be written in the form $a(1, \hat{\mathbf{n}})$ where $a$ is real and $\hat{\mathbf{n}}$ is a unit 3-vector. Given any two null 4-vectors that are future-pointing,
that is, which have positive time-components, show that their sum is either null or timelike.
Define the 4-momentum of a particle and describe briefly the principle of conservation of 4-momentum.
A photon of angular frequency $\omega$ is absorbed by a particle of rest mass $m$ that is stationary in the laboratory frame of reference. The particle then splits into two equal particles, each
of rest mass $\alpha m$.
Find the maximum possible value of $\alpha$ as a function of $\mu=\hbar \omega / m c^{2}$. Verify that as $\mu \rightarrow 0$, this maximum value tends to $\frac{1}{2}$. For general $\mu$, show
that when the maximum value of $\alpha$ is achieved, the resulting particles are each travelling at speed $c /\left(1+\mu^{-1}\right)$ in the laboratory frame.
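The 4-momentum bookkeeping behind this question can be checked numerically (units with $c = 1$; `alpha_max` and `cm_speed` are our own helper names for the quantities the question asks about):

```python
from math import isclose, sqrt

# Units with c = 1. A photon of energy mu*m (mu = hbar*omega/(m*c^2)) hits a
# mass m at rest, so the total 4-momentum is (E, p) = (m*(1 + mu), m*mu) and
# the invariant mass is M = sqrt(E^2 - p^2) = m*sqrt(1 + 2*mu).

def alpha_max(mu):
    """Largest alpha: both fragments at rest in the CM frame, so 2*alpha*m = M."""
    return sqrt(1 + 2 * mu) / 2

def cm_speed(mu):
    """Speed of the CM frame in the lab: p/E = mu/(1 + mu) = 1/(1 + 1/mu)."""
    return mu / (1 + mu)

assert isclose(alpha_max(1e-9), 0.5, rel_tol=1e-8)   # the mu -> 0 limit
mu = 0.7
assert isclose(cm_speed(mu), 1 / (1 + 1 / mu))       # = c/(1 + mu^{-1})
print(alpha_max(mu), cm_speed(mu))
```

At maximum $\alpha$ the fragments sit at rest in the centre-of-momentum frame, so in the lab each moves at the CM-frame speed, reproducing the $c/(1+\mu^{-1})$ stated in the question.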
A thin flat disc of radius $a$ has density (mass per unit area) $\rho(r, \theta)=\rho_{0}(a-r)$ where $(r, \theta)$ are plane polar coordinates on the disc and $\rho_{0}$ is a constant. The disc
is free to rotate about a light, thin rod that is rigidly fixed in space, passing through the centre of the disc orthogonal to it. Find the moment of inertia of the disc about the rod.
The section of the disc lying in $r \geqslant \frac{1}{2} a,-\frac{\pi}{13} \leqslant \theta \leqslant \frac{\pi}{13}$ is cut out and removed. Starting from rest, a constant torque $\tau$ is
applied to the remaining part of the disc until its angular speed about the axis reaches $\Omega$. Show that this takes a time
$\frac{3 \pi \rho_{0} a^{5} \Omega}{32 \tau}$
After this time, no further torque is applied and the partial disc continues to rotate at constant angular speed $\Omega$. Given that the total mass of the partial disc is $k \rho_{0} a^{3}$,
where $k$ is a constant that you need not determine, find the position of the centre of mass, and hence its acceleration. From where does the force required to produce this acceleration arise?
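The time quoted in the question can be spot-checked numerically. The sketch below (our own) integrates the moment of inertia of the full disc with midpoint sums, removes the $1/13$ angular fraction of the annulus $\frac{1}{2}a \leqslant r \leqslant a$, and compares the result with $3\pi\rho_0 a^5/32$, the value implied by $t = I\Omega/\tau$:

```python
from math import pi, isclose

rho0, a = 1.0, 1.0
n = 100_000
dr = a / n

def ring_inertia(r):
    """Contribution r^2 dm of a thin ring: density rho0*(a - r), area 2*pi*r*dr."""
    return rho0 * (a - r) * (2 * pi * r * dr) * r ** 2

# Full disc, then the annulus a/2 <= r <= a (midpoint rule).
I_full = sum(ring_inertia((i + 0.5) * dr) for i in range(n))
I_annulus = sum(ring_inertia((i + 0.5) * dr) for i in range(n // 2, n))

# The cut removes the angular fraction (2*pi/13)/(2*pi) = 1/13 of that annulus.
I_partial = I_full - I_annulus / 13

print(isclose(I_partial, 3 * pi * rho0 * a ** 5 / 32, rel_tol=1e-6))  # True
```

The unusual $\pi/13$ in the question is what makes the removed sector exactly one thirteenth of the annulus, so the answer comes out with small integer coefficients.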
A reference frame $S^{\prime}$ rotates with constant angular velocity $\boldsymbol{\omega}$ relative to an inertial frame $S$ that has the same origin as $S^{\prime}$. A particle of mass $m$ at
position vector $\mathbf{x}$ is subject to a force $\mathbf{F}$. Derive the equation of motion for the particle in $S^{\prime}$.
A marble moves on a smooth plane which is inclined at an angle $\theta$ to the horizontal. The whole plane rotates at constant angular speed $\omega$ about a vertical axis through a point $O$
fixed in the plane. Coordinates $(\xi, \eta)$ are defined with respect to axes fixed in the plane: $O \xi$ horizontal and $O \eta$ up the line of greatest slope in the plane. Ensuring that you
account for the normal reaction force, show that the motion of the marble obeys
\begin{aligned} \ddot{\xi} &=\omega^{2} \xi+2 \omega \dot{\eta} \cos \theta, \\ \ddot{\eta} &=\omega^{2} \eta \cos ^{2} \theta-2 \omega \dot{\xi} \cos \theta-g \sin \theta \end{aligned}
By considering the marble's kinetic energy as measured on the plane in the rotating frame, or otherwise, find a constant of the motion.
[You may assume that the marble never leaves the plane.]
A rocket of mass $m(t)$, which includes the mass of its fuel and everything on board, moves through free space in a straight line at speed $v(t)$. When its engines are operational, they burn fuel
at a constant mass rate $\alpha$ and eject the waste gases behind the rocket at a constant speed $u$ relative to the rocket. Obtain the rocket equation
$m \frac{d v}{d t}-\alpha u=0$
The rocket is initially at rest in a cloud of space dust which is also at rest. The engines are started and, as the rocket travels through the cloud, it collects dust which it stores on board for
research purposes. The mass of dust collected in a time $\delta t$ is given by $\beta \delta x$, where $\delta x$ is the distance travelled in that time and $\beta$ is a constant. Obtain the new equations of motion
\begin{aligned} \frac{d m}{d t} &=\beta v-\alpha \\ m \frac{d v}{d t} &=\alpha u-\beta v^{2} \end{aligned}
By eliminating $t$, or otherwise, obtain the relationship
$m=\lambda m_{0} u \sqrt{\frac{(\lambda u-v)^{\lambda-1}}{(\lambda u+v)^{\lambda+1}}},$
where $m_{0}$ is the initial mass of the rocket and $\lambda=\sqrt{\alpha / \beta u}$.
If $\lambda>1$, show that the fuel will be exhausted before the speed of the rocket can reach $\lambda u$. Comment on the case when $\lambda<1$, giving a physical interpretation of your answer.
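The closed-form relation can be spot-checked by integrating $dm/dv = (\beta v - \alpha)m/(\alpha u - \beta v^2)$, obtained by eliminating $t$ between the two equations of motion. The parameter values below are illustrative choices of ours (giving $\lambda = 2$), not from the question:

```python
from math import isclose, sqrt

# Illustrative parameters: u = 1, beta = 1, alpha = 4, m0 = 10 => lambda = 2.
u, beta, alpha, m0 = 1.0, 1.0, 4.0, 10.0
lam = sqrt(alpha / (beta * u))

def m_closed(v):
    """The m(v) relation claimed in the question."""
    return lam * m0 * u * sqrt((lam * u - v) ** (lam - 1) / (lam * u + v) ** (lam + 1))

def dm_dv(v, m):
    """dm/dv = (dm/dt)/(dv/dt) from the two equations of motion."""
    return (beta * v - alpha) * m / (alpha * u - beta * v * v)

# Fourth-order Runge-Kutta from (v, m) = (0, m0) up to v = 1.
n, m = 10_000, m0
h = 1.0 / n
for i in range(n):
    v = i * h
    k1 = dm_dv(v, m)
    k2 = dm_dv(v + h / 2, m + h * k1 / 2)
    k3 = dm_dv(v + h / 2, m + h * k2 / 2)
    k4 = dm_dv(v + h, m + h * k3)
    m += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

print(isclose(m, m_closed(1.0), rel_tol=1e-6))  # True
```

Note that `m_closed(0.0)` reduces to $m_0$, as it must, and that $m \to 0$ as $v \to \lambda u$, which is the fuel-exhaustion behaviour the final part asks about.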
Numbers and Sets
Define the binomial coefficients $\left(\begin{array}{l}n \\ k\end{array}\right)$, for integers $n, k$ satisfying $n \geqslant k \geqslant 0$. Prove directly from your definition that if $n>k \geqslant 0$ then
$\left(\begin{array}{l} n \\ k \end{array}\right)+\left(\begin{array}{c} n \\ k+1 \end{array}\right)=\left(\begin{array}{c} n+1 \\ k+1 \end{array}\right)$
and that for every $m \geqslant 0$ and $n \geqslant 0$,
$\sum_{k=0}^{m}\left(\begin{array}{c} n+k \\ k \end{array}\right)=\left(\begin{array}{c} n+m+1 \\ m \end{array}\right)$
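Both identities are easy to sanity-check numerically; here is a quick sketch of ours using Python's `math.comb` (a spot check, of course, not the proof the question asks for):

```python
from math import comb

# Pascal's rule: C(n, k) + C(n, k+1) == C(n+1, k+1) for n > k >= 0.
assert all(comb(n, k) + comb(n, k + 1) == comb(n + 1, k + 1)
           for n in range(1, 20) for k in range(n))

# Hockey-stick identity: sum_{k=0}^{m} C(n+k, k) == C(n+m+1, m).
assert all(sum(comb(n + k, k) for k in range(m + 1)) == comb(n + m + 1, m)
           for n in range(10) for m in range(10))

print("both identities hold on the tested range")
```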
Use Euclid's algorithm to determine $d$, the greatest common divisor of 203 and 147 , and to express it in the form $203 x+147 y$ for integers $x, y$. Hence find all solutions in integers $x, y$
of the equation $203 x+147 y=d$.
How many integers $n$ are there with $1 \leqslant n \leqslant 2014$ and $21 n \equiv 25(\bmod 29) ?$
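Both parts can be checked mechanically. The sketch below runs the extended Euclidean algorithm on 203 and 147, and then counts the solutions of the congruence by brute force (verifying the answers rather than deriving them):

```python
def extended_gcd(a: int, b: int):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # back-substitute: b*x + (a % b)*y == g  =>  a*y + b*(x - (a//b)*y) == g
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(203, 147)
print(g, x, y)            # 7 8 -11, i.e. 203*8 + 147*(-11) = 7
# All integer solutions: x = 8 + 21t, y = -11 - 29t (since 147/7 = 21, 203/7 = 29).

count = sum(1 for n in range(1, 2015) if 21 * n % 29 == 25)
print(count)  # 69
```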
(i) State and prove the Inclusion-Exclusion Principle.
(ii) Let $n>1$ be an integer. Denote by $\mathbb{Z} / n \mathbb{Z}$ the integers modulo $n$. Let $X$ be the set of all functions $f: \mathbb{Z} / n \mathbb{Z} \rightarrow \mathbb{Z} / n \mathbb
{Z}$ such that for every $j \in \mathbb{Z} / n \mathbb{Z}, f(j)-f(j-1) \not\equiv j$ $(\bmod n)$. Show that
$|X|= \begin{cases}(n-1)^{n}+1-n & \text { if } n \text { is odd } \\ (n-1)^{n}-1 & \text { if } n \text { is even }\end{cases}$
(i) What does it mean to say that a set $X$ is countable? Show directly that the set of sequences $\left(x_{n}\right)_{n \in \mathbb{N}}$, with $x_{n} \in\{0,1\}$ for all $n$, is uncountable.
(ii) Let $S$ be any subset of $\mathbb{N}$. Show that there exists a bijection $f: \mathbb{N} \rightarrow \mathbb{N}$ such that $f(S)=2 \mathbb{N}$ (the set of even natural numbers) if and only
if both $S$ and its complement are infinite.
(iii) Let $\sqrt{2}=1 \cdot a_{1} a_{2} a_{3} \ldots$ be the binary expansion of $\sqrt{2}$. Let $X$ be the set of all sequences $\left(x_{n}\right)$ with $x_{n} \in\{0,1\}$ such that for
infinitely many $n, x_{n}=0$. Let $Y$ be the set of all $\left(x_{n}\right) \in X$ such that for infinitely many $n, x_{n}=a_{n}$. Show that $Y$ is uncountable.
(i) State and prove the Fermat-Euler Theorem.
(ii) Let $p$ be an odd prime number, and $x$ an integer coprime to $p$. Show that $x^{(p-1) / 2} \equiv \pm 1(\bmod p)$, and that if the congruence $y^{2} \equiv x(\bmod p)$ has a solution then
$x^{(p-1) / 2} \equiv 1(\bmod p)$.
(iii) By arranging the residue classes coprime to $p$ into pairs $\{a, b\}$ with $a b \equiv x(\bmod p)$, or otherwise, show that if the congruence $y^{2} \equiv x(\bmod p)$ has no solution
then $x^{(p-1) / 2} \equiv-1(\bmod p) .$
(iv) Show that $5^{5^{5}} \equiv 5(\bmod 23)$.
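Part (iv) is quick to verify with fast modular exponentiation: by the Fermat-Euler theorem one only needs $5^5 \bmod 22$, but Python's three-argument `pow` also checks the full claim directly.

```python
# Direct check of 5^(5^5) ≡ 5 (mod 23) via fast modular exponentiation.
print(pow(5, 5**5, 23))  # 5

# The Fermat-Euler route: the order of 5 divides phi(23) = 22, and
# 5^5 = 3125 ≡ 1 (mod 22), so 5^(5^5) ≡ 5^1 (mod 23).
print(5**5 % 22)  # 1

# Parts (ii)-(iii), Euler's criterion for p = 23: x^((p-1)/2) ≡ +1 (mod p)
# exactly when x is a square mod p.
squares = {x * x % 23 for x in range(1, 23)}
assert all((pow(x, 11, 23) == 1) == (x in squares) for x in range(1, 23))
```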
What does it mean to say that the sequence of real numbers $\left(x_{n}\right)$ converges to the limit $x ?$ What does it mean to say that the series $\sum_{n=1}^{\infty} x_{n}$ converges to $s$?
Let $\sum_{n=1}^{\infty} a_{n}$ and $\sum_{n=1}^{\infty} b_{n}$ be convergent series of positive real numbers. Suppose that $\left(x_{n}\right)$ is a sequence of positive real numbers such that
for every $n \geqslant 1$, either $x_{n} \leqslant a_{n}$ or $x_{n} \leqslant b_{n}$. Show that $\sum_{n=1}^{\infty} x_{n}$ is convergent.
Show that $\sum_{n=1}^{\infty} 1 / n^{2}$ is convergent, and that $\sum_{n=1}^{\infty} 1 / n^{\alpha}$ is divergent if $\alpha \leqslant 1$.
Let $\left(x_{n}\right)$ be a sequence of positive real numbers such that $\sum_{n=1}^{\infty} n^{2} x_{n}^{2}$ is convergent. Show that $\sum_{n=1}^{\infty} x_{n}$ is convergent. Determine (with
proof or counterexample) whether or not the converse statement holds.
|
{"url":"https://questions.tripos.org/part-ia/2014/4/","timestamp":"2024-11-09T23:13:42Z","content_type":"text/html","content_length":"192273","record_id":"<urn:uuid:7fb441c3-32bb-4d6d-ad78-9b49411872c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00817.warc.gz"}
|
Agmon-Motzkin-Schoenberg algorithm
Linear superiorization considers linear programming problems but instead of attempting to solve them with linear optimization methods it employs perturbation resilient feasibility-seeking algorithms
and steers them toward reduced (not necessarily minimal) target function values. The two questions that we set out to explore experimentally are (i) Does linear superiorization provide a feasible
point whose linear … Read more
|
{"url":"https://optimization-online.org/tag/agmon-motzkin-schoenberg-algorithm/","timestamp":"2024-11-05T04:08:35Z","content_type":"text/html","content_length":"83912","record_id":"<urn:uuid:9934058e-0c0f-44b0-bcf0-2d0335a6d86a>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00282.warc.gz"}
|
Buy ruskovce.eu ?
Products related to Similarity:
• Florenzyme Capsules - 16 g
Nutritional supplement with bacteria culture (LAB2PRO TM), Alpha-Amylase and Protease. Vegan. Florenzyme capsules are an innovative nutritional supplement that combines selected bacterial
cultures with valuable digestive enzymes (alpha-amylase and protease). A special capsule technology ensures that the ingredients are protected from the acids in the stomach and reach the
digestive tract in a functional way. In this way, they can contribute to a natural, desirable digestion and intestinal flora. In terms of targeted nutritional supplementation, we recommend taking
one Florenzyme capsule daily over a longer period of time with or after a meal.
Price: 21.86 £ | Shipping*: 14.50 £
• Organic-Spelt Grass-Powder - 300 g
Spelt grass is a natural, vegetable dietary enrichment. Spelt, also referred to as husk or Swabian corn, is a close relative of modern-day wheat. Spelt was already grown and highly valued in
central and northern Europe thousands of years ago. Village names such as Dinkelsbühl or Dinkelscherben testify the former relevance of this cereal as a foodstuff. Our spelt grass powder is
obtained by gently drying and grinding young, organically-cultivated spelt plants. At the time of harvesting, the nutrient content in the young stalks and green shoots of the spelt grass is
particularly high. Organic-spelt grass-powder tastes pleasantly aromatic and can simply be stirred into water, juices, soups or other food and enjoyed. Purely plant-based, vegan.
Price: 13.45 £ | Shipping*: 14.50 £
• Traditional Sweets Sour lemon flavour - 170 g
Traditional sweets with a delicious, refreshingly sour lemon flavour. No artificial flavours or colours. Produced in line with ancient confectionery tradition using copper vessels over a fire and
produced by hand. Taste as delicious as Grandma's very own! Sure to evoke childhood memories...
Price: 3.59 £ | Shipping*: 14.50 £
• Traditional Candies Raspberry - 170 g
Traditional Candies with a delicious raspberry flavor. We brought back the tradition of candymaking!Our candies are cooked in old copper kettles over the fire and are made by hand. They taste
like your grandmother made it! It brings back childhood memories ...... With natural fruit and plant extracts, without artificial flavors or artificial colors.
Price: 3.59 £ | Shipping*: 14.50 £
• In South America, "Inkatee" has a long tradition. It is made from the inner, reddish-brown bark of the tropical Lapacho tree. Its typical, fine aroma with woody notes and light vanilla character
makes it a tasty drink for all day long. Very good to enjoy sweetened with a little honey, or even cold.
Price: 7.56 £ | Shipping*: 14.50 £
• Traditional Candies Propolis and Pine Honey - 170 g
Traditional candies with propolis and pine honey. Soothes neck and throat. We brought back the tradition of candymaking!Our candies are cooked in old copper kettles over the fire and are made by
hand. They taste like your grandmother made it! It brings back childhood memories ...... With natural plant extracts, no artificial flavors or artificial colors added.
Price: 3.92 £ | Shipping*: 14.50 £
• Traditional Candies Sage Forest-honey - 170 g
Traditional Candies with pure forest-honey, sage leaves and sage oil extract. Soothing to the throat. We brought back the tradition of candymaking!Our candies are cooked in old copper kettles
over the fire and are made by hand. They taste like your grandmother made it! It brings back childhood memories ...... With natural plant extracts, no artificial flavors or artificial colors
Price: 3.92 £ | Shipping*: 14.50 £
• Traditional Herbal Cough Candies - 170 g
Natural remedy for cough and voice hoarseness. Traditional Herbal Cough Candies made after a classic recipe. We brought back the tradition of candymaking!Our candies are cooked in old copper
kettles over the fire and are made by hand. They taste like your grandmother made it! It brings back childhood memories ...... With natural plant extracts, without artificial flavours or
artificial colours.
Price: 3.92 £ | Shipping*: 14.50 £
• Traditional Candies Organic Ginger-Orange - 170 g
With the sharp and spicy taste of the finest organic ginger root from controlled organic cultivation and the tangy freshness of sun-ripened oranges. Sweet production in the Kräuterhaus. Produced
in line with ancient confectionery tradition using copper vessels over a fire and produced by hand. Taste as delicious as Grandma's very own! Sure to evoke childhood memories... With natural
fruit and plant extracts, no artificial flavours or colours.
Price: 3.92 £ | Shipping*: 14.50 £
• Traditional Candies Eucalyptus - 170 g
Traditional sweets made according to a classic recipe. The unique combination of eucalyptus oil, mint oil and menthol has a soothing effect on the throat. We brought back the tradition of candymaking! Our candies are cooked in old copper kettles over the fire and are made by hand. They taste like your grandmother made it! It brings back childhood memories... With natural plant extracts, no artificial flavours or artificial colours added.
Price: 3.92 £ | Shipping*: 14.50 £
• Boswellia serrata Tablets - 103 g
The Indian frankincense tree (Boswellia serrata) is one of the oldest known cultivated plants. The remarkably long tradition of frankincense use in Indian culture reflects its reputed health-promoting effects in many areas. The air-dried resin that exudes after the bark is cut is used; this is known as Indian frankincense. The boswellic acids contained in the resin are particularly appreciated. Each Boswellia tablet contains 400 mg of Boswellia extract containing 70% boswellic acids.
Price: 27.48 £ | Shipping*: 14.50 £
• Traditional Candies Sea Buckthorn with Vitamin C - 170 g
Traditional candy with a delicious fruity sea buckthorn flavor and an extra portion of vitamin C to help keep your body's defenses healthy. We brought back the tradition of candymaking! Our candies are cooked in old copper kettles over the fire and are made by hand. They taste like your grandmother made it! It brings back childhood memories... With natural fruit and plant extracts, no artificial flavors or artificial colors added.
Price: 3.92 £ | Shipping*: 14.50 £
Similar search terms for Similarity:
• What are similarity ratios?
Similarity ratios are ratios that compare the corresponding sides of two similar figures. They help us understand the relationship between the sides of similar shapes. The ratio of corresponding
sides in similar figures is always the same, which means that if you know the ratio of one pair of sides, you can use it to find the ratio of other pairs of sides. Similarity ratios are important
in geometry and are used to solve problems involving similar figures.
• What is the difference between similarity theorem 1 and similarity theorem 2?
Similarity theorem 1, also known as the Angle-Angle (AA) similarity theorem, states that if two angles of one triangle are congruent to two angles of another triangle, then the triangles are
similar. On the other hand, similarity theorem 2, also known as the Side-Angle-Side (SAS) similarity theorem, states that if two sides of one triangle are proportional to two sides of another
triangle and the included angles are congruent, then the triangles are similar. The main difference between the two theorems is the criteria for establishing similarity - AA theorem focuses on
angle congruence, while SAS theorem focuses on both side proportionality and angle congruence.
• How can one calculate the similarity factor to determine the similarity of triangles?
The similarity factor can be calculated by comparing the corresponding sides of two triangles: divide the length of one side of the first triangle by the length of the corresponding side of the second triangle, and repeat this for all three pairs of corresponding sides. If the three ratios are equal, the triangles are similar, and that common ratio is the similarity factor (a factor of 1 means the triangles are congruent). Note that the ratio of the triangles' areas is the square of the similarity factor, not the factor itself.
• How can the similarity factor for determining the similarity of triangles be calculated?
The similarity factor for determining the similarity of triangles can be calculated by comparing the corresponding sides of the two triangles. If the ratio of the lengths of the corresponding
sides of the two triangles is the same, then the triangles are similar. This ratio can be calculated by dividing the length of one side of a triangle by the length of the corresponding side of
the other triangle. If all three ratios of corresponding sides are equal, then the triangles are similar. This common ratio is known as the similarity factor and is used to determine the similarity of triangles.
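The procedure described above — divide corresponding sides and check that the ratios agree — can be sketched in Python (the function name and tolerance are mine, not from the text):

```python
def similarity_factor(tri_a, tri_b, tol=1e-9):
    """Return the common side ratio if the triangles are similar, else None.

    tri_a, tri_b: side lengths of two triangles. Sorting each triple
    pairs the shortest side with the shortest, and so on, which is the
    correct correspondence for similar triangles.
    """
    ratios = [a / b for a, b in zip(sorted(tri_a), sorted(tri_b))]
    if max(ratios) - min(ratios) < tol:
        return ratios[0]
    return None

# A 3-4-5 triangle and a 6-8-10 triangle are similar with factor 0.5.
print(similarity_factor((3, 4, 5), (6, 8, 10)))   # 0.5
print(similarity_factor((3, 4, 5), (5, 6, 7)))    # None
```

A tolerance is used because floating-point division rarely yields exactly equal ratios.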
• Do you see the similarity?
Yes, I see the similarity between the two concepts. Both share common characteristics and features that make them comparable. The similarities can be observed in their structure, function, and
behavior. These similarities help in understanding and drawing parallels between the two concepts.
• 'How do you prove similarity?'
Similarity between two objects can be proven using various methods. One common method is to show that the corresponding angles of the two objects are congruent, and that the corresponding sides
are in proportion to each other. Another method is to use transformations such as dilation, where one object can be scaled up or down to match the other object. Additionally, if the ratio of the
lengths of corresponding sides is equal, then the two objects are similar. These methods can be used to prove similarity in geometric figures such as triangles or other polygons.
• What is similarity in mathematics?
In mathematics, similarity refers to the relationship between two objects or shapes that have the same shape but are not necessarily the same size. This means that the objects are proportional to
each other, with corresponding angles being equal and corresponding sides being in the same ratio. Similarity is often used in geometry to compare and analyze shapes, allowing for the transfer of
properties and measurements from one shape to another.
• What is the similarity ratio?
The similarity ratio is a comparison of the corresponding sides of two similar figures. It is used to determine how the dimensions of one figure compare to the dimensions of another figure when
they are similar. The ratio is calculated by dividing the length of a side of one figure by the length of the corresponding side of the other figure. This ratio remains constant for all pairs of
corresponding sides in similar figures.
• Is mirroring allowed in similarity?
Yes, mirroring is allowed in similarity. A figure and its mirror image (reflection) are similar: reflections preserve angles and the ratios of corresponding side lengths, which is all the definition of similarity requires. Similarity transformations are therefore usually taken to include reflections along with translations, rotations, and scalings.
• Is there a similarity here?
Yes, there is a similarity here. Both situations involve individuals or groups facing challenges and obstacles, and needing to find creative solutions to overcome them. In both cases, there is a
need for resilience, determination, and adaptability in order to succeed. Additionally, both situations highlight the importance of teamwork and collaboration in achieving a common goal.
• What is the similarity ratio 3:1?
The similarity ratio 3:1 refers to the ratio of corresponding sides of two similar figures. In other words, if two figures are similar with ratio 3:1, each side of the first figure is three times the length of the corresponding side of the second. This ratio describes the relationship between the sides of similar figures and can be used to find missing side lengths in similar figures.
• What is the similarity theorem 31?
Similarity theorem 31 states that if a line is drawn parallel to one side of a triangle, it creates a new triangle that is similar to the original triangle. This means that the corresponding
angles of the two triangles are congruent, and the corresponding sides are in proportion. This theorem is useful in geometry for proving the similarity of triangles and finding unknown side
lengths or angles in similar triangles.
* All prices are inclusive of VAT and, if applicable, plus shipping costs. The offer information is based on the details provided by the respective shop and is updated through automated processes.
Real-time updates do not occur, so deviations can occur in individual cases.
Number Explained
A number is a mathematical object used to count, measure, and label. The most basic examples are the natural numbers 1, 2, 3, 4, and so forth.^[1] Numbers can be represented in language with number
words. More universally, individual numbers can be represented by symbols, called numerals; for example, "5" is a numeral that represents the number five. As only a relatively small number of symbols
can be memorized, basic numerals are commonly organized in a numeral system, which is an organized way to represent any number. The most common numeral system is the Hindu–Arabic numeral system,
which allows for the representation of any non-negative integer using a combination of ten fundamental numeric symbols, called digits.^[2] In addition to their use in counting and measuring, numerals
are often used for labels (as with telephone numbers), for ordering (as with serial numbers), and for codes (as with ISBNs). In common usage, a numeral is not clearly distinguished from the number
that it represents.
More types of numbers include real numbers, such as the square root of 2, and complex numbers, which extend the real numbers with a square root of −1 (and its combinations with real numbers by adding or subtracting its multiples). Calculations with numbers are done with arithmetical operations, the most familiar being addition, subtraction, multiplication, and division. Their study or usage is called arithmetic, a term which may also refer to number theory, the study of the properties of numbers.
Besides their practical uses, numbers have cultural significance throughout the world.^[7] ^[8] For example, in Western society, the number 13 is often regarded as unlucky, and "a million" may
signify "a lot" rather than an exact quantity. Though it is now regarded as pseudoscience, belief in a mystical significance of numbers, known as numerology, permeated ancient and medieval thought.^
[9] Numerology heavily influenced the development of Greek mathematics, stimulating the investigation of many problems in number theory which are still of interest today.
During the 19th century, mathematicians began to develop many different abstractions which share certain properties of numbers, and may be seen as extending the concept. Among the first were the
hypercomplex numbers, which consist of various extensions or modifications of the complex number system. In modern mathematics, number systems are considered important special examples of more
general algebraic structures such as rings and fields, and the application of the term "number" is a matter of convention, without fundamental significance.^[10]
First use of numbers
See main article: History of ancient numeral systems. Bones and other artifacts have been discovered with marks cut into them that many believe are tally marks.^[11] These tally marks may have been
used for counting elapsed time, such as numbers of days, lunar cycles or keeping records of quantities, such as of animals.
A tallying system has no concept of place value (as in modern decimal notation), which limits its representation of large numbers. Nonetheless, tallying systems are considered the first kind of
abstract numeral system.
The first known system with place value was the Mesopotamian base 60 system, and the earliest known base 10 system dates to 3100 BC in Egypt.^[12]
See main article: Numeral system. Numbers should be distinguished from numerals, the symbols used to represent numbers. The Egyptians invented the first ciphered numeral system, and the Greeks
followed by mapping their counting numbers onto Ionian and Doric alphabets.^[13] Roman numerals, a system that used combinations of letters from the Roman alphabet, remained dominant in Europe until
the spread of the superior Hindu–Arabic numeral system around the late 14th century, and the Hindu–Arabic numeral system remains the most common system for representing numbers in the world today.^
[14] The key to the effectiveness of the system was the symbol for zero, which was developed by ancient Indian mathematicians around 500 AD.
The first known documented use of zero dates to AD 628, and appeared in the Brāhmasphuṭasiddhānta, the main work of the Indian mathematician Brahmagupta. He treated 0 as a number and discussed
operations involving it, including division. By this time (the 7th century) the concept had clearly reached Cambodia as Khmer numerals, and documentation shows the idea later spreading to China and
the Islamic world.
Brahmagupta's Brāhmasphuṭasiddhānta is the first book that mentions zero as a number, hence Brahmagupta is usually considered the first to formulate the concept of zero. He gave rules of using zero
with negative and positive numbers, such as "zero plus a positive number is a positive number, and a negative number plus zero is the negative number". The Brāhmasphuṭasiddhānta is the earliest known
text to treat zero as a number in its own right, rather than as simply a placeholder digit in representing another number as was done by the Babylonians or as a symbol for a lack of quantity as was
done by Ptolemy and the Romans.
The use of 0 as a number should be distinguished from its use as a placeholder numeral in place-value systems, which appears in many ancient texts, including Babylonian and Egyptian ones. Egyptians used the word nfr to denote zero balance in double-entry accounting. Indian texts used the Sanskrit word shunya ("void") to refer to the concept of void; in mathematics texts this word often refers to the number zero.^[15] In a similar vein, Pāṇini (5th century BC) used the null (zero) operator in the Ashtadhyayi, an early example of an algebraic grammar for the Sanskrit language
(also see Pingala).
There are other uses of zero before Brahmagupta, though the documentation is not as complete as it is in the Brāhmasphuṭasiddhānta.
Records show that the Ancient Greeks seemed unsure about the status of 0 as a number: they asked themselves "How can 'nothing' be something?" leading to interesting philosophical and, by the Medieval
period, religious arguments about the nature and existence of 0 and the vacuum. The paradoxes of Zeno of Elea depend in part on the uncertain interpretation of 0. (The ancient Greeks even questioned
whether 1 was a number.)
The late Olmec people of south-central Mexico began to use a symbol for zero, a shell glyph, in the New World, possibly earlier but certainly by 40 BC, which became an integral part of Maya numerals
and the Maya calendar. Maya arithmetic used base 4 and base 5 written as base 20. George I. Sánchez in 1961 reported a base 4, base 5 "finger" abacus.^[16]
By 130 AD, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for 0 (a small circle with a long overbar) within a sexagesimal numeral system otherwise using alphabetic Greek
numerals. Because it was used alone, not as just a placeholder, this Hellenistic zero was the first documented use of a true zero in the Old World. In later Byzantine manuscripts of his Syntaxis
Mathematica (Almagest), the Hellenistic zero had morphed into the Greek letter Omicron (otherwise meaning 70).
Another true zero was used in tables alongside Roman numerals by 525 (first known use by Dionysius Exiguus), but as a word, Latin: nulla meaning nothing, not as a symbol. When division produced 0 as
a remainder, Latin: nihil, also meaning nothing, was used. These medieval zeros were used by all future medieval computists (calculators of Easter). An isolated use of their initial, N, was used in a
table of Roman numerals by Bede or a colleague about 725, a true zero symbol.
Negative numbers
The abstract concept of negative numbers was recognized as early as 100–50 BC in China. The Nine Chapters on the Mathematical Art contains methods for finding the areas of figures; red rods were used
to denote positive coefficients, black for negative.^[17] The first reference in a Western work was in the 3rd century AD in Greece. Diophantus referred to the equation equivalent to 4x + 20 = 0 (the solution is negative) in Arithmetica, saying that the equation gave an absurd result.
During the 600s, negative numbers were in use in India to represent debts. Diophantus' earlier reference was discussed more explicitly by the Indian mathematician Brahmagupta in Brāhmasphuṭasiddhānta in 628, who used negative numbers to produce the general form of the quadratic formula that remains in use today. However, in the 12th century in India, Bhaskara gives negative roots for quadratic equations but says the negative value "is in this case not to be taken, for it is inadequate; people do not approve of negative roots".
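The negative roots that Bhaskara set aside fall straight out of the quadratic formula. A short sketch (the equation is my own example, not one from the historical texts):

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                      # no real roots
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

# x**2 - x - 6 = 0 has roots 3 and -2; only the positive root
# would have been "approved of" in Bhaskara's time.
print(quadratic_roots(1, -1, -6))   # (3.0, -2.0)
```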
European mathematicians, for the most part, resisted the concept of negative numbers until the 17th century, although Fibonacci allowed negative solutions in financial problems where they could be
interpreted as debts (chapter 13 of Liber Abaci, 1202) and later as losses (in Latin: Flos). René Descartes called them false roots as they cropped up in algebraic polynomials yet he found a way to swap true
roots and false roots as well. At the same time, the Chinese were indicating negative numbers by drawing a diagonal stroke through the right-most non-zero digit of the corresponding positive number's
numeral.^[18] The first use of negative numbers in a European work was by Nicolas Chuquet during the 15th century. He used them as exponents, but referred to them as "absurd numbers".
As recently as the 18th century, it was common practice to ignore any negative results returned by equations on the assumption that they were meaningless.
Rational numbers
It is likely that the concept of fractional numbers dates to prehistoric times. The Ancient Egyptians used their Egyptian fraction notation for rational numbers in mathematical texts such as the
Rhind Mathematical Papyrus and the Kahun Papyrus. Classical Greek and Indian mathematicians made studies of the theory of rational numbers, as part of the general study of number theory.^[19] The
best known of these is Euclid's Elements, dating to roughly 300 BC. Of the Indian texts, the most relevant is the Sthananga Sutra, which also covers number theory as part of a general study of mathematics.
The concept of decimal fractions is closely linked with decimal place-value notation; the two seem to have developed in tandem. For example, it is common for the Jain math sutra to include
calculations of decimal-fraction approximations to pi or the square root of 2. Similarly, Babylonian math texts used sexagesimal (base 60) fractions with great frequency.
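Decimal-fraction approximations of the kind mentioned above can be reproduced today with integer arithmetic alone. A sketch (the function name is mine):

```python
from math import isqrt

def sqrt2_decimal(digits):
    """First `digits` decimal digits of sqrt(2) after the decimal point,
    using only integers: isqrt(2 * 10**(2*digits)) equals
    floor(sqrt(2) * 10**digits)."""
    n = isqrt(2 * 10 ** (2 * digits))
    s = str(n)
    return s[0] + "." + s[1:]

print(sqrt2_decimal(10))   # 1.4142135623
```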
Irrational numbers
The earliest known use of irrational numbers was in the Indian Sulba Sutras composed between 800 and 500 BC.^[20] The first existence proof of irrational numbers is usually attributed to Pythagoras,
more specifically to the Pythagorean Hippasus of Metapontum, who produced a (most likely geometrical) proof of the irrationality of the square root of 2. The story goes that Hippasus discovered
irrational numbers when trying to represent the square root of 2 as a fraction. However, Pythagoras believed in the absoluteness of numbers, and could not accept the existence of irrational numbers.
He could not disprove their existence through logic, but he could not accept irrational numbers, and so, allegedly and frequently reported, he sentenced Hippasus to death by drowning, to impede
spreading of this disconcerting news.^[21]
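The argument attributed to Hippasus can be stated compactly in modern notation (a standard reconstruction, not a quotation from any ancient source):

```latex
% Irrationality of sqrt(2), by contradiction.
Suppose $\sqrt{2} = p/q$ with integers $p, q$ in lowest terms. Then
\[ p^2 = 2q^2, \]
so $p^2$ is even and hence $p$ is even; write $p = 2r$. Substituting,
\[ 4r^2 = 2q^2 \quad\Longrightarrow\quad q^2 = 2r^2, \]
so $q$ is even as well. Then $p$ and $q$ share the factor $2$,
contradicting the assumption of lowest terms. Hence $\sqrt{2}$ is irrational.
```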
The 16th century brought final European acceptance of negative integral and fractional numbers. By the 17th century, mathematicians generally used decimal fractions with modern notation. It was not,
however, until the 19th century that mathematicians separated irrationals into algebraic and transcendental parts, and once more undertook the scientific study of irrationals. It had remained almost
dormant since Euclid. In 1872, the publication of the theories of Karl Weierstrass (by his pupil E. Kossak), Eduard Heine,^[22] Georg Cantor,^[23] and Richard Dedekind^[24] was brought about. In
1869, Charles Méray had taken the same point of departure as Heine, but the theory is generally referred to the year 1872. Weierstrass's method was completely set forth by Salvatore Pincherle (1880),
and Dedekind's has received additional prominence through the author's later work (1888) and endorsement by Paul Tannery (1894). Weierstrass, Cantor, and Heine base their theories on infinite series,
while Dedekind founds his on the idea of a cut (Schnitt) in the system of real numbers, separating all rational numbers into two groups having certain characteristic properties. The subject has
received later contributions at the hands of Weierstrass, Kronecker,^[25] and Méray.
The search for roots of quintic and higher-degree equations was an important development: the Abel–Ruffini theorem (Ruffini 1799, Abel 1824) showed that they could not be solved by radicals (formulas involving only arithmetical operations and roots). Hence it was necessary to consider the wider set of algebraic numbers (all solutions to polynomial equations). Galois (1832) linked polynomial
equations to group theory giving rise to the field of Galois theory.
Continued fractions, closely related to irrational numbers (and due to Cataldi, 1613), received attention at the hands of Euler,^[26] and at the opening of the 19th century were brought into
prominence through the writings of Joseph Louis Lagrange. Other noteworthy contributions have been made by Druckenmüller (1837), Kunze (1857), Lemke (1870), and Günther (1872). Ramus^[27] first
connected the subject with determinants, resulting, with the subsequent contributions of Heine,^[28] Möbius, and Günther,^[29] in the theory of continued-fraction determinants.
Transcendental numbers and reals
The existence of transcendental numbers^[30] was first established by Liouville (1844, 1851). Hermite proved in 1873 that e is transcendental and Lindemann proved in 1882 that π is transcendental.
Finally, Cantor showed that the set of all real numbers is uncountably infinite but the set of all algebraic numbers is countably infinite, so there is an uncountably infinite number of
transcendental numbers.
Infinity and infinitesimals
The earliest known conception of mathematical infinity appears in the Yajur Veda, an ancient Indian script, which at one point states, "If you remove a part from infinity or add a part to infinity,
still what remains is infinity." Infinity was a popular topic of philosophical study among the Jain mathematicians c. 400 BC. They distinguished between five types of infinity: infinite in one and
two directions, infinite in area, infinite everywhere, and infinite perpetually. The symbol ∞ is often used to represent an infinite quantity.
Aristotle defined the traditional Western notion of mathematical infinity. He distinguished between actual infinity and potential infinity—the general consensus being that only the latter had true
value. Galileo Galilei's Two New Sciences discussed the idea of one-to-one correspondences between infinite sets. But the next major advance in the theory was made by Georg Cantor; in 1895 he
published a book about his new set theory, introducing, among other things, transfinite numbers and formulating the continuum hypothesis.
In the 1960s, Abraham Robinson showed how infinitely large and infinitesimal numbers can be rigorously defined and used to develop the field of nonstandard analysis. The system of hyperreal numbers
represents a rigorous method of treating the ideas about infinite and infinitesimal numbers that had been used casually by mathematicians, scientists, and engineers ever since the invention of
infinitesimal calculus by Newton and Leibniz.
A modern geometrical version of infinity is given by projective geometry, which introduces "ideal points at infinity", one for each spatial direction. Each family of parallel lines in a given
direction is postulated to converge to the corresponding ideal point. This is closely related to the idea of vanishing points in perspective drawing.
Complex numbers
The earliest fleeting reference to square roots of negative numbers occurred in the work of the mathematician and inventor Heron of Alexandria in the 1st century AD, when he considered the volume of an impossible
frustum of a pyramid. They became more prominent when in the 16th century closed formulas for the roots of third and fourth degree polynomials were discovered by Italian mathematicians such as
Niccolò Fontana Tartaglia and Gerolamo Cardano. It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of
negative numbers.
This was doubly unsettling since they did not even consider negative numbers to be on firm ground at the time. When René Descartes coined the term "imaginary" for these quantities in 1637, he
intended it as derogatory. (See imaginary number for a discussion of the "reality" of complex numbers.) A further source of confusion was that the equation (√−1)² = √−1·√−1 = −1 seemed capriciously inconsistent with the algebraic identity √a·√b = √(ab), which is valid for positive real numbers a and b, and was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity, and the related identity 1/√a = √(1/a), in the case when both a and b are negative even bedeviled Euler. This difficulty eventually led him to the convention of using the special symbol i in place of √−1 to guard against this mistake.
The 18th century saw the work of Abraham de Moivre and Leonhard Euler. De Moivre's formula (1730) states:
(cos θ + i sin θ)^n = cos nθ + i sin nθ,
while Euler's formula of complex analysis (1748) gave us:
e^{iθ} = cos θ + i sin θ.
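Both identities are easy to check numerically with Python's complex-number support (the angle and exponent below are arbitrary values of my own choosing):

```python
import cmath
import math

theta, n = 0.7, 5

# De Moivre: (cos t + i sin t)**n == cos(n*t) + i sin(n*t)
lhs = complex(math.cos(theta), math.sin(theta)) ** n
rhs = complex(math.cos(n * theta), math.sin(n * theta))
print(abs(lhs - rhs) < 1e-12)   # True

# Euler: e**(i*t) == cos t + i sin t
euler = cmath.exp(1j * theta) - complex(math.cos(theta), math.sin(theta))
print(abs(euler) < 1e-12)       # True
```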
The existence of complex numbers was not completely accepted until Caspar Wessel described the geometrical interpretation in 1799. Carl Friedrich Gauss rediscovered and popularized it several years
later, and as a result the theory of complex numbers received a notable expansion. The idea of the graphic representation of complex numbers had appeared, however, as early as 1685, in Wallis's De
algebra tractatus.
In the same year, Gauss provided the first generally accepted proof of the fundamental theorem of algebra, showing that every polynomial over the complex numbers has a full set of solutions in that
realm. Gauss studied complex numbers of the form a + bi, where a and b are integers (now called Gaussian integers) or rational numbers. His student, Gotthold Eisenstein, studied the type a + bω, where ω is a complex root of x³ − 1 = 0 (now called Eisenstein integers). Other such classes (called cyclotomic fields) of complex numbers derive from the roots of unity x^k − 1 = 0 for higher values of k. This generalization is
largely due to Ernst Kummer, who also invented ideal numbers, which were expressed as geometrical entities by Felix Klein in 1893.
In 1850 Victor Alexandre Puiseux took the key step of distinguishing between poles and branch points, and introduced the concept of essential singular points. This eventually led to the concept of
the extended complex plane.
Prime numbers
Prime numbers have been studied throughout recorded history. They are positive integers that are divisible only by 1 and themselves. Euclid devoted one book of the Elements to the theory of primes;
in it he proved the infinitude of the primes and the fundamental theorem of arithmetic, and presented the Euclidean algorithm for finding the greatest common divisor of two numbers.
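The Euclidean algorithm mentioned above is still the standard way to compute a greatest common divisor. A minimal sketch:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)
    until the remainder is 0; the last nonzero value is the gcd."""
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))   # 21
print(gcd(17, 5))      # 1
```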
In 240 BC, Eratosthenes used the Sieve of Eratosthenes to quickly isolate prime numbers. But most further development of the theory of primes in Europe dates to the Renaissance and later eras.
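The Sieve of Eratosthenes translates directly into code: mark off multiples of each prime, and whatever remains unmarked is prime. A sketch:

```python
def sieve(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]              # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross out multiples of p, starting at p*p (smaller
            # multiples were already crossed out by smaller primes).
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, ok in enumerate(is_prime) if ok]

print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```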
In 1796, Adrien-Marie Legendre conjectured the prime number theorem, describing the asymptotic distribution of primes. Other results concerning the distribution of the primes include Euler's proof
that the sum of the reciprocals of the primes diverges, and the Goldbach conjecture, which claims that any sufficiently large even number is the sum of two primes. Yet another conjecture related to
the distribution of prime numbers is the Riemann hypothesis, formulated by Bernhard Riemann in 1859. The prime number theorem was finally proved by Jacques Hadamard and Charles de la Vallée-Poussin
in 1896. Goldbach and Riemann's conjectures remain unproven and unrefuted.
Main classification
See also: List of types of numbers. Numbers can be classified into sets, called number sets or number systems, such as the natural numbers and the real numbers. The main number systems are as follows:
• Natural numbers: 0, 1, 2, 3, 4, 5, ... or 1, 2, 3, 4, 5, ... (definitions with and without 0 are both in use)
• Integers: ..., −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, ...
• Rational numbers: a/b, where a and b are integers and b is not 0
• Real numbers: the limit of a convergent sequence of rational numbers
• Complex numbers: a + bi, where a and b are real numbers and i is a formal square root of −1
Each of these number systems is a subset of the next one. So, for example, a rational number is also a real number, and every real number is also a complex number. This can be expressed symbolically as N ⊂ Z ⊂ Q ⊂ R ⊂ C.
A more complete list of number sets appears in the following diagram.
Natural numbers
See main article: Natural number. The most familiar numbers are the natural numbers (sometimes called whole numbers or counting numbers): 1, 2, 3, and so on. Traditionally, the sequence of natural
numbers started with 1 (0 was not even considered a number by the Ancient Greeks). However, in the 19th century, set theorists and other mathematicians started including 0 (cardinality of the empty
set, i.e. 0 elements, where 0 is thus the smallest cardinal number) in the set of natural numbers.^[32] Today, different mathematicians use the term to describe both sets, including 0 or not. The
mathematical symbol for the set of all natural numbers is N, also written ℕ, and sometimes ℕ₀ or ℕ₁ when it is necessary to indicate whether the set should start with 0 or 1, respectively.
In the base 10 numeral system, in almost universal use today for mathematical operations, the symbols for natural numbers are written using ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The radix or
base is the number of unique numerical digits, including zero, that a numeral system uses to represent numbers (for the decimal system, the radix is 10). In this base 10 system, the rightmost digit
of a natural number has a place value of 1, and every other digit has a place value ten times that of the place value of the digit to its right.
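The place-value rule above — each digit is worth the base times the digit to its right — is exactly how a digit sequence is recombined into a number. A sketch (the function name is mine):

```python
def from_digits(digits, base=10):
    """Recombine a list of digits into a number using place value:
    each step multiplies the running value by the base, so every
    earlier digit ends up worth `base` times the digit to its right."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

print(from_digits([4, 0, 7]))         # 407
print(from_digits([1, 0, 1, 1], 2))   # 11  (binary 1011)
```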
In set theory, which is capable of acting as an axiomatic foundation for modern mathematics,^[33] natural numbers can be represented by classes of equivalent sets. For instance, the number 3 can be
represented as the class of all sets that have exactly three elements. Alternatively, in Peano Arithmetic, the number 3 is represented as sss0, where s is the "successor" function (i.e., 3 is the
third successor of 0). Many different representations are possible; all that is needed to formally represent 3 is to inscribe a certain symbol or pattern of symbols three times.
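The Peano representation described above can be mimicked directly: a numeral is just a nesting of successor applications around a zero symbol. A sketch (the encoding as tuples is my own choice):

```python
ZERO = "0"

def S(n):
    """Peano successor: wrap another layer of S around the numeral."""
    return ("S", n)

def to_int(n):
    """Count the successor layers to recover the ordinary integer."""
    count = 0
    while n != ZERO:
        _, n = n        # peel off one "S"
        count += 1
    return count

three = S(S(S(ZERO)))   # the numeral "sss0" from the text
print(to_int(three))    # 3
```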
See main article: Integer. The negative of a positive integer is defined as a number that produces 0 when it is added to the corresponding positive integer. Negative numbers are usually written with a negative sign (a minus sign). As an example, the negative of 7 is written −7, and 7 + (−7) = 0. When the set of negative numbers is combined with the set of natural numbers (including 0), the result is defined as the set of integers, Z, also written ℤ. Here the letter Z comes from the German Zahl, "number". The set of integers forms a ring with the operations addition and multiplication.
The natural numbers form a subset of the integers. As there is no common standard for the inclusion or not of zero in the natural numbers, the natural numbers without zero are commonly referred to as
positive integers, and the natural numbers with zero are referred to as non-negative integers.
Rational numbers
See main article: Rational number. A rational number is a number that can be expressed as a fraction with an integer numerator and a positive integer denominator. Negative denominators are allowed,
but are commonly avoided, as every rational number is equal to a fraction with positive denominator. Fractions are written as two integers, the numerator and the denominator, with a dividing bar
between them. The fraction m/n represents m parts of a whole divided into n equal parts. Two different fractions may correspond to the same rational number; for example 1/2 and 2/4 are equal, that is:
1/2 = 2/4.
In general,
m1/n1 = m2/n2 if and only if m1 × n2 = m2 × n1.
If the absolute value of m is greater than n (supposed to be positive), then the absolute value of the fraction is greater than 1. Fractions can be greater than, less than, or equal to 1 and can also be positive, negative, or 0. The set of all rational numbers includes the integers, since every integer can be written as a fraction with denominator 1. For example, −7 can be written −7/1. The symbol for the rational numbers is Q (for quotient), also written ℚ.
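The cross-multiplication criterion for fraction equality (m1/n1 = m2/n2 exactly when m1 × n2 = m2 × n1) is what Python's standard `fractions` module implements; a minimal sketch (the helper `same_rational` is hypothetical):

```python
from fractions import Fraction

# Two different fractions can denote the same rational number:
assert Fraction(1, 2) == Fraction(2, 4)

# The general criterion: m1/n1 == m2/n2 iff m1*n2 == m2*n1.
def same_rational(m1, n1, m2, n2):
    return m1 * n2 == m2 * n1

assert same_rational(1, 2, 3, 6)
assert not same_rational(1, 2, 2, 3)

# Negative denominators are normalized away to a positive one:
assert Fraction(3, -4) == Fraction(-3, 4)

# Every integer is a rational number with denominator 1:
assert Fraction(-7) == Fraction(-7, 1)
```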
Real numbers
See main article: Real number.
The symbol for the real numbers is R, also written as ℝ.
They include all the measuring numbers. Every real number corresponds to a point on the
number line
. The following paragraph will focus primarily on positive real numbers. The treatment of negative real numbers is according to the general rules of arithmetic and their denotation is simply
prefixing the corresponding positive numeral by a minus sign, e.g. −123.456.
Most real numbers can only be approximated by decimal numerals, in which a decimal point is placed to the right of the digit with place value 1. Each digit to the right of the decimal point has a place value one-tenth of the place value of the digit to its left. For example, 123.456 represents 123456/1000, or, in words, one hundred, two tens, three ones, four tenths, five hundredths, and six thousandths.
A real number can be expressed by a finite number of decimal digits only if it is rational and its fractional part has a denominator whose prime factors are 2 or 5 or both, because these are the
prime factors of 10, the base of the decimal system. Thus, for example, one half is 0.5, one fifth is 0.2, one-tenth is 0.1, and one fiftieth is 0.02. Representing other real numbers as decimals
would require an infinite sequence of digits to the right of the decimal point. If this infinite sequence of digits follows a pattern, it can be written with an ellipsis or another notation that
indicates the repeating pattern. Such a decimal is called a repeating decimal. Thus 1/3 can be written as 0.333..., with an ellipsis to indicate that the pattern continues. Forever repeating 3s are also written with an overline, as 0.3̄.^[34]
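The repeating pattern of 0.333... can be made visible with Python's standard `decimal` module (a sketch; the module produces a rounded finite approximation, never the infinite tail itself):

```python
from decimal import Decimal, getcontext

getcontext().prec = 20  # number of significant digits to keep
third = Decimal(1) / Decimal(3)
print(third)  # 0.33333333333333333333

# Every digit after the decimal point is a 3 -- the repeating block:
digits = str(third).split(".")[1]
assert digits == "3" * 20

# Fractions whose denominator has only the prime factors 2 and 5 terminate:
assert Decimal(1) / Decimal(2) == Decimal("0.5")
assert Decimal(1) / Decimal(50) == Decimal("0.02")
```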
It turns out that these repeating decimals (including the repetition of zeroes) denote exactly the rational numbers, i.e., all rational numbers are also real numbers, but it is not the case that
every real number is rational. A real number that is not rational is called irrational. A famous irrational real number is the number π (pi), the ratio of the circumference of any circle to its diameter. When pi is written as
3.14159265358979...,
as it sometimes is, the ellipsis does not mean that the decimals repeat (they do not), but rather that there is no end to them. It has been proved that π is irrational. Another well-known number, proven to be an irrational real number, is the square root of 2, that is, the unique positive real number whose square is 2. Both these numbers have been approximated (by computer) to trillions of digits.
Not only these prominent examples but almost all real numbers are irrational and therefore have no repeating patterns and hence no corresponding decimal numeral. They can only be approximated by
decimal numerals, denoting rounded or truncated real numbers. Any rounded or truncated number is necessarily a rational number, of which there are only countably many. All measurements are, by their
nature, approximations, and always have a margin of error. Thus 123.456 is considered an approximation of any real number greater or equal to 123.4555 and strictly less than 123.4565 (rounding to 3 decimals), or of any real number greater or equal to 123.456 and strictly less than 123.457 (truncation after the 3rd decimal). Digits that suggest a greater accuracy than the measurement itself does should be removed. The remaining digits are then called significant digits. For example, measurements with a ruler can seldom be made without a margin of error of at least 0.001 m. If the sides of a rectangle are measured as 1.23 m and 4.56 m, then multiplication gives an area for the rectangle between 5.603011 m² and 5.614591 m². Since not even the second digit after the decimal place is preserved, the following digits are not significant. Therefore, the result is usually rounded to 5.61.
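The rectangle example can be checked directly: with the stated margin of error of 0.001 m on each side, the interval of areas consistent with the measurements is wider than 0.01 m², so only two decimals survive (a sketch, assuming the ±0.001 m error bound given above):

```python
err = 0.001         # assumed measurement error per side, in metres
a, b = 1.23, 4.56   # measured sides

lo = (a - err) * (b - err)  # smallest area consistent with the measurements
hi = (a + err) * (b + err)  # largest area consistent with the measurements
print(f"area between {lo:.6f} and {hi:.6f} m^2")

# The interval is wider than 0.01 m^2, so the reported value is just 5.61:
print(f"rounded: {a * b:.2f} m^2")
```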
Just as the same fraction can be written in more than one way, the same real number may have more than one decimal representation. For example, 0.999..., 1.0, 1.00, 1.000, ..., all represent the
natural number 1. A given real number has only the following decimal representations: an approximation to some finite number of decimal places, an approximation in which a pattern is established that
continues for an unlimited number of decimal places or an exact value with only finitely many decimal places. In this last case, the last non-zero digit may be replaced by the digit one smaller
followed by an unlimited number of 9s, or the last non-zero digit may be followed by an unlimited number of zeros. Thus the exact real number 3.74 can also be written 3.7399999999... and
3.74000000000.... Similarly, a decimal numeral with an unlimited number of 0s can be rewritten by dropping the 0s to the right of the rightmost nonzero digit, and a decimal numeral with an unlimited
number of 9s can be rewritten by increasing by one the rightmost digit less than 9, and changing all the 9s to the right of that digit to 0s. Finally, an unlimited sequence of 0s to the right of a
decimal place can be dropped. For example, 6.849999999999... = 6.85 and 6.850000000000... = 6.85. Finally, if all of the digits in a numeral are 0, the number is 0, and if all of the digits in a
numeral are an unending string of 9s, you can drop the nines to the right of the decimal place, and add one to the string of 9s to the left of the decimal place. For example, 99.999... = 100.
The real numbers also have an important but highly technical property called the least upper bound property.
It can be shown that any ordered field, which is also complete, is isomorphic to the real numbers. The real numbers are not, however, an algebraically closed field, because they do not include a solution (often called a square root of minus one) to the algebraic equation x² + 1 = 0.
Complex numbers
See main article: Complex number. Moving to a greater level of abstraction, the real numbers can be extended to the complex numbers. This set of numbers arose historically from trying to find closed
formulas for the roots of cubic and quadratic polynomials. This led to expressions involving the square roots of negative numbers, and eventually to the definition of a new number: a square root of
−1, denoted by i, a symbol assigned by Leonhard Euler, and called the imaginary unit. The complex numbers consist of all numbers of the form
a + bi,
where a and b are real numbers. Because of this, complex numbers correspond to points on the complex plane, a vector space of two real dimensions. In the expression a + bi, the real number a is called the real part and b is called the imaginary part. If the real part of a complex number is 0, then the number is called an imaginary number or is referred to as purely imaginary; if the imaginary part is 0, then the number is a real number. Thus the real numbers are a subset of the complex numbers. If the real and imaginary parts of a complex number are both integers, then the number is called a Gaussian integer. The symbol for the complex numbers is C, also written ℂ.
The fundamental theorem of algebra asserts that the complex numbers form an algebraically closed field, meaning that every polynomial with complex coefficients has a root in the complex numbers. Like
the reals, the complex numbers form a field, which is complete, but unlike the real numbers, it is not ordered. That is, there is no consistent meaning assignable to saying that i is greater than 1,
nor is there any meaning in saying that i is less than 1. In technical terms, the complex numbers lack a total order that is compatible with field operations.
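Python's built-in `complex` type realizes this arithmetic, and it also rejects the ordering comparisons the text rules out:

```python
i = complex(0, 1)  # the imaginary unit, written 1j in Python literals

assert i * i == -1            # the defining property of i
assert (3 + 4j).real == 3.0   # real part
assert (3 + 4j).imag == 4.0   # imaginary part

# Every real number embeds as a complex number with zero imaginary part:
assert complex(5) == 5 + 0j

# i solves the equation x**2 + 1 = 0, which has no real solution:
assert i ** 2 + 1 == 0

# The complex numbers carry no total order compatible with the field
# operations; Python mirrors this by refusing to compare them:
try:
    _ = i < 1
except TypeError:
    print("complex numbers are not ordered")
```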
Subclasses of the integers
Even and odd numbers
See main article: Even and odd numbers. An even number is an integer that is "evenly divisible" by two, that is, divisible by two without remainder; an odd number is an integer that is not even. (The old-fashioned term "evenly divisible" is now almost always shortened to "divisible".) Any odd number n may be constructed by the formula n = 2k + 1 for a suitable integer k. Starting with k = 0, the first non-negative odd numbers are 1, 3, 5, 7, .... Any even number m has the form m = 2k, where k is again an integer. Similarly, the first non-negative even numbers are 0, 2, 4, 6, ....
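The formulas n = 2k + 1 (odd) and m = 2k (even) generate exactly these sequences; a minimal sketch:

```python
# First non-negative odd numbers: n = 2k + 1 for k = 0, 1, 2, ...
odds = [2 * k + 1 for k in range(5)]
assert odds == [1, 3, 5, 7, 9]

# First non-negative even numbers: m = 2k for k = 0, 1, 2, ...
evens = [2 * k for k in range(5)]
assert evens == [0, 2, 4, 6, 8]

# "Divisible by two without remainder" is the remainder test itself:
assert all(m % 2 == 0 for m in evens)
assert all(n % 2 == 1 for n in odds)
```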
Prime numbers
See main article: Prime number. A prime number, often shortened to just prime, is an integer greater than 1 that is not the product of two smaller positive integers. The first few prime numbers are
2, 3, 5, 7, and 11. There is no such simple formula as for odd and even numbers to generate the prime numbers. The primes have been widely studied for more than 2000 years and have led to many
questions, only some of which have been answered. The study of these questions belongs to number theory. Goldbach's conjecture is an example of a still unanswered question: "Is every even number the
sum of two primes?"
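Although no proof of Goldbach's conjecture is known, it is easy to test for small cases by brute force; a sketch (the helper names are hypothetical):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n: int):
    """Return primes (p, q) with p + q == n, or None if no pair exists.
    The conjecture asks whether every even n > 2 has such a pair."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Every even number from 4 to 100 decomposes in at least one way:
assert all(goldbach_pair(n) is not None for n in range(4, 101, 2))
print(goldbach_pair(28))  # (5, 23)
```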
One answered question, as to whether every integer greater than one is a product of primes in only one way, except for a rearrangement of the primes, was confirmed; this proven claim is called the
fundamental theorem of arithmetic. A proof appears in Euclid's Elements.
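The unique factorization the theorem guarantees can be computed by trial division; a sketch for small integers (the function name is hypothetical):

```python
from math import prod

def prime_factors(n: int) -> list[int]:
    """Return the prime factorization of n > 1 as a sorted list of primes
    with multiplicity; unique by the fundamental theorem of arithmetic."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

assert prime_factors(360) == [2, 2, 2, 3, 3, 5]   # 360 = 2^3 * 3^2 * 5
assert prime_factors(97) == [97]                  # a prime is its own factorization

# Multiplying the factors back together recovers the original number:
assert prod(prime_factors(360)) == 360
```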
Other classes of integers
Many subsets of the natural numbers have been the subject of specific studies and have been named, often after the first mathematician who studied them. Examples of such sets of integers are the Fibonacci numbers and the perfect numbers. For more examples, see Integer sequence.
Subclasses of the complex numbers
Algebraic, irrational and transcendental numbers
Algebraic numbers are those that are a solution to a polynomial equation with integer coefficients. Real numbers that are not rational numbers are called irrational numbers. Complex numbers which are
not algebraic are called transcendental numbers. The algebraic numbers that are solutions of a monic polynomial equation with integer coefficients are called algebraic integers.
Constructible numbers
Motivated by the classical problems of constructions with straightedge and compass, the constructible numbers are those complex numbers whose real and imaginary parts can be constructed using
straightedge and compass, starting from a given segment of unit length, in a finite number of steps.
Computable numbers
See main article: Computable number. A computable number, also known as a recursive number, is a real number such that there exists an algorithm which, given a positive number n as input, produces the
first n digits of the computable number's decimal representation. Equivalent definitions can be given using μ-recursive functions, Turing machines or λ-calculus. The computable numbers are stable for
all usual arithmetic operations, including the computation of the roots of a polynomial, and thus form a real closed field that contains the real algebraic numbers.
The computable numbers may be viewed as the real numbers that may be exactly represented in a computer: a computable number is exactly represented by its first digits and a program for computing
further digits. However, the computable numbers are rarely used in practice. One reason is that there is no algorithm for testing the equality of two computable numbers. More precisely, there cannot
exist any algorithm which takes any computable number as an input, and decides in every case if this number is equal to zero or not.
The set of computable numbers has the same cardinality as the natural numbers. Therefore, almost all real numbers are non-computable. However, it is very difficult to produce explicitly a real number
that is not computable.
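The definition can be demonstrated with √2: integer arithmetic yields its first n decimal digits exactly, which is precisely the kind of digit-producing algorithm the definition requires (a sketch; the function name is hypothetical):

```python
from math import isqrt

def sqrt2_digits(n: int) -> str:
    """Return the first n digits of sqrt(2) after the decimal point,
    computed exactly as the integer square root of 2 * 10**(2n)."""
    s = isqrt(2 * 10 ** (2 * n))  # floor(sqrt(2) * 10**n), exact integer math
    return str(s)[1:]             # drop the leading "1" before the point

print("1." + sqrt2_digits(10))  # 1.4142135623

# The program together with a digit count pins the number down arbitrarily
# closely -- the sense in which a computable number is "exactly represented".
assert sqrt2_digits(5) == "41421"
```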
Extensions of the concept
p-adic numbers
See main article: ''p''-adic number. The p-adic numbers may have infinitely long expansions to the left of the decimal point, in the same way that real numbers may have infinitely long expansions to
the right. The number system that results depends on what base is used for the digits: any base is possible, but a prime number base provides the best mathematical properties. The set of the p-adic
numbers contains the rational numbers, but is not contained in the complex numbers.
The elements of an algebraic function field over a finite field and algebraic numbers have many similar properties (see Function field analogy). Therefore, they are often regarded as numbers by
number theorists. The p-adic numbers play an important role in this analogy.
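One concrete handle on the p-adic viewpoint is the p-adic valuation, the exponent of p dividing a number, which underlies the p-adic absolute value |x|_p = p^(−v_p(x)). The sketch below is standard number theory, not a full p-adic arithmetic implementation, and the helper name is hypothetical:

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational: the exponent of p in x."""
    num, den = x.numerator, x.denominator
    v = 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

# High divisibility by p means p-adically *small*, since |x|_p = p**(-vp(x)):
assert vp(Fraction(250), 5) == 3      # 250 = 2 * 5**3
assert vp(Fraction(1, 25), 5) == -2   # denominators give negative valuation
assert vp(Fraction(3), 5) == 0        # 3 is a 5-adic unit

# The ultrametric property: vp(a + b) >= min(vp(a), vp(b)).
a, b = Fraction(25), Fraction(5)
assert vp(a + b, 5) >= min(vp(a, 5), vp(b, 5))
```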
Hypercomplex numbers
See main article: hypercomplex number. Some number systems that are not included in the complex numbers may be constructed from the real numbers in a way that generalizes the construction of the complex numbers. They are sometimes called hypercomplex numbers. They include the quaternions H, introduced by Sir William Rowan Hamilton, in which multiplication is not commutative; the octonions, in which multiplication is not associative in addition to not being commutative; and the sedenions, in which multiplication is not alternative, in addition to being neither associative nor commutative.
Transfinite numbers
See main article: transfinite number. For dealing with infinite sets, the natural numbers have been generalized to the ordinal numbers and to the cardinal numbers. The former gives the ordering of
the set, while the latter gives its size. For finite sets, both ordinal and cardinal numbers are identified with the natural numbers. In the infinite case, many ordinal numbers correspond to the same
cardinal number.
Nonstandard numbers
Hyperreal numbers are used in non-standard analysis. The hyperreals, or nonstandard reals (usually denoted as *R), denote an ordered field that is a proper extension of the ordered field of real
numbers R and satisfies the transfer principle. This principle allows true first-order statements about R to be reinterpreted as true first-order statements about *R.
Superreal and surreal numbers extend the real numbers by adding infinitesimally small numbers and infinitely large numbers, but still form fields.
External links
• Jonathan Tallant, "Do Numbers Exist?", Numberphile (Brady Haran), 6 April 2013.
• "Negative Numbers", In Our Time, BBC Radio 4, 9 March 2006.
• Robin Wilson, "4000 Years of Numbers", lecture at Gresham College, 7 November 2007.
• Robert Krulwich, "What's the World's Favorite Number?", NPR, 22 July 2011; "Cuddling With 9, Smooching With 8, Winking At 7", NPR, 21 August 2011.
Notes and References
1. "number, n.", OED Online, Oxford University Press, retrieved 16 May 2017.
2. "numeral, adj. and n.", OED Online, Oxford University Press, retrieved 16 May 2017.
3. John Matson, "The Origin of Zero", Scientific American, retrieved 16 May 2017.
4. Luke Hodgkin, A History of Mathematics: From Mesopotamia to Modernity, OUP Oxford, 2005, ISBN 978-0-19-152383-0, pp. 85–88.
5. Mathematics Across Cultures: The History of Non-Western Mathematics, Kluwer Academic, Dordrecht, 2000, ISBN 1-4020-0260-2, pp. 410–411.
6. René Descartes, La Géométrie: The Geometry of René Descartes with a facsimile of the first edition (1637), Dover, 1954, ISBN 0-486-60068-8.
7. Thomas E. Gilsdorf, Introduction to Cultural Mathematics: With Case Studies in the Otomies and the Incas, Wiley, Hoboken, N.J., 2012, ISBN 978-1-118-19416-4.
8. Sal P. Restivo, Mathematics in Society and History: Sociological Inquiries, Dordrecht, 1992, ISBN 978-94-011-2944-2.
9. Øystein Ore, Number Theory and Its History, Dover, New York, 1988, ISBN 0-486-65620-9.
10. Fernando Q. Gouvêa, The Princeton Companion to Mathematics, Chapter II.1, "The Origins of Modern Mathematics", p. 82, Princeton University Press, 2008: "Today, it is no longer that easy to decide what counts as a 'number.' The objects from the original sequence of 'integer, rational, real, and complex' are certainly numbers, but so are the p-adics. The quaternions are rarely referred to as 'numbers,' on the other hand, though they can be used to coordinatize certain mathematical notions."
11. Alexander Marshack, The Roots of Civilization: The Cognitive Beginnings of Man's First Art, Symbol, and Notation, McGraw-Hill, New York, 1971, ISBN 0-07-040535-2.
12. "Egyptian Mathematical Papyri – Mathematicians of the African Diaspora", Math.buffalo.edu, retrieved 30 January 2012.
13. Stephen Chrisomalis, "The Egyptian origin of the Greek alphabetic numerals", Antiquity 77 (297), 2003, pp. 485–496, doi:10.1017/S0003598X00092541.
14. Richard Bulliet, Pamela Crossley, Daniel Headrick, Steven Hirsch and Lyman Johnson, The Earth and Its Peoples: A Global History, Volume 1, Cengage Learning, 2010, ISBN 978-1-4390-8474-8, p. 192: "Indian mathematicians invented the concept of zero and developed the 'Arabic' numerals and system of place-value notation used in most parts of the world today."
15. "Historia Matematica Mailing List Archive: Re: [HM] The Zero Story: a question", Sunsite.utk.edu, 26 April 1999, retrieved 30 January 2012.
16. George I. Sánchez, Arithmetic in Maya, self-published, Austin, Texas, 1961.
17. Ronald Staszkow and Robert Bradshaw, The Mathematical Palette (3rd ed.), Brooks Cole, 2004, p. 41, ISBN 0-534-40365-4.
18. David Eugene Smith, History of Modern Mathematics, Dover Publications, 1958, p. 259, ISBN 0-486-20429-4.
19. "Classical Greek culture (article)", Khan Academy, retrieved 4 May 2022.
20. Helaine Selin (ed.), Mathematics Across Cultures: The History of Non-Western Mathematics, Kluwer Academic Publishers, 2000, p. 451, ISBN 0-7923-6481-3.
21. Bernard Frischer, "Horace and the Monuments: A New Interpretation of the Archytas Ode", in D. R. Shackleton Bailey (ed.), Harvard Studies in Classical Philology 83, Harvard University Press, 1984, ISBN 0-674-37935-7.
22. Eduard Heine, "Die Elemente der Functionenlehre", [Crelle's] Journal für die reine und angewandte Mathematik, No. 74 (1872): 172–188.
23. Georg Cantor, "Ueber unendliche, lineare Punktmannichfaltigkeiten", pt. 5, Mathematische Annalen, 21, 4 (1883): 545–591.
24. Richard Dedekind, Stetigkeit und irrationale Zahlen (Braunschweig: Friedrich Vieweg & Sohn, 1872); subsequently published in Gesammelte mathematische Werke, ed. Robert Fricke, Emmy Noether and Öystein Ore (Braunschweig: Friedrich Vieweg & Sohn, 1932), vol. 3, pp. 315–334.
25. L. Kronecker, "Ueber den Zahlbegriff", [Crelle's] Journal für die reine und angewandte Mathematik, No. 101 (1887): 337–355.
26. Leonhard Euler, "Conjectura circa naturam aeris, pro explicandis phaenomenis in atmosphaera observatis", Acta Academiae Scientiarum Imperialis Petropolitanae, 1779, 1 (1779): 162–187.
27. Ramus, "Determinanternes Anvendelse til at bestemme Loven for de convergerende Bröker", in Det Kongelige Danske Videnskabernes Selskabs naturvidenskabelige og mathematiske Afhandlinger (Kjoebenhavn: 1855), p. 106.
28. Eduard Heine, "Einige Eigenschaften der Laméschen Funktionen", [Crelle's] Journal für die reine und angewandte Mathematik, No. 56 (Jan. 1859): 87–99 at 97.
29. Siegmund Günther, Darstellung der Näherungswerthe von Kettenbrüchen in independenter Form (Erlangen: Eduard Besold, 1873); "Kettenbruchdeterminanten", in Lehrbuch der Determinanten-Theorie: Für Studirende (Erlangen: Eduard Besold, 1875), c. 6, pp. 156–186.
30. A. Bogomolny, "What's a number?", Interactive Mathematics Miscellany and Puzzles (Cut-the-Knot), retrieved 11 July 2010.
31. Alberto A. Martínez, "Euler's 'mistake'? The radical product rule in historical perspective", The American Mathematical Monthly 114 (4), 2007, pp. 273–285, doi:10.1080/00029890.2007.11920416.
32. "natural number", Merriam-Webster.com, retrieved 4 October 2014.
33. Patrick Suppes, Axiomatic Set Theory, Courier Dover Publications, 1972, p. 1, ISBN 0-486-61630-4.
34. Eric W. Weisstein, "Repeating Decimal", Wolfram MathWorld, retrieved 23 July 2020.
definable groupoid
In a model $M$ of a first-order theory $T$ a definable set $G$ might have additional algebraic structure (e.g. that of a group, ring, groupoid, category, etc.) also given by definable functions and
predicates. Since this set with extra structure is given by some collection of formulas in the language of $T$, it is interpreted in every model of $T$, and is hence an invariant of the syntactic
category (walking model) $\mathbf{Def}(T)$ of $T$.
To make this precise:
• A definable group is just a group object in $\mathbf{Def}(T)$.
• A definable groupoid is just a groupoid object, i.e. an internal groupoid in $\mathbf{Def}(T)$.
• A definable ring is just a semigroup object in the category of abelian group objects of $\mathbf{Def}(T)$.
• A definable category is just a category object, i.e. an internal category in $\mathbf{Def}(T)$.
Since groups, groupoids, rings, and categories can be given by algebraic theories, a definition (modulo having EI) of one of these in $T$ is just an interpretation of one of those theories in $T$, i.e. a logical functor from the walking model of one of these to $\mathbf{Def}(T)$.
• Evaluating the unit of the Makkai duality adjunction at $T$ yields $\mathbf{Def}(T) \simeq \operatorname{Hom}_{\mathbf{Ult}}(\mathbf{Mod}(T), \mathbf{Set})$ the category of ultrafunctors? from
the category of models (logical functors $\mathbf{Def}(T) \to \mathbf{Set}$) to $\mathbf{Set}$ viewed as ultracategories, so that one may identify a definable set (and definable sets with extra
definable structure on them) $X \in \mathbf{Def}(T)$ with a functor of points $M \mapsto X(M)$ on the category of models.
• The fact that the external axiom of choice (all epimorphisms split) holds in $\mathbf{Set}$ if and only if every fully faithful essentially surjective functor between small categories is a true
equivalence of categories can be word-for-word internalized to $\mathbf{Def}(T)$. This means: if the theory has two constants, then $T$ has definable Skolem functions if and only if every fully
faithful essentially surjective definable functor between definable categories admits a pseudoinverse?.
• With a suitable amount of choice (definable Skolem functions, for example) Freyd's general adjoint functor theorem also carries over word-for-word to the setting of internal categories in $\mathbf{Def}(T)$.
• In particular, there is much studied case of definable groups, cf. e.g. (Peterzil-Pillay)
There is a bijective correspondence between internal imaginary sorts of $T$ and definable concrete groupoids with a single isomorphism class, up to bi-interpretability over $T$ for the internal
imaginary sorts and Hrushovski-equivalence for the definable concrete groupoids.
This is (Hrushovski 2006, Th.3.2).
• Y. Peterzil, A. Pillay, Generic sets in definably compact groups, Fundamenta Mathematicae 193 (2007), pp. 153–170, MR2282713, doi
Hrushovski’s correspondence between definable connected groupoids in a theory $T$ and internal generalised imaginary sorts of $T$ is extended to an equivalence of categories in
• Levon Haykazyan, Rahim Moosa, Functoriality and uniformity in Hrushovski’s groupoid-cover correspondence, Annals of Pure and Applied Logic 169:8 (2018) 705– 730 arXiv/1711.03531 doi
On higher categorical analogues of definable groupoids and internal covers:
Last revised on April 19, 2023 at 19:06:18. See the history of this page for a list of all contributions to it.
Analytic Geometry Honors
This course was terminated at the end of School Year 2014–2015.
General Course Information and Notes
General Information
Course Number: 1206330
Course Path:
Abbreviated Title: ANLY GEO HON
Course Length: Semester (S)
Course Status: Terminated
Grade Level(s): 9,10,11,12
Educator Certifications
One of these educator certification options is required to teach this course.
Student Resources
Vetted resources students can use to learn the concepts and skills in this course.
Original Student Tutorials
Graphing Linear Functions Part 1: Table of Values:
Learn how to graph linear functions by creating a table of values based on the equation in this interactive tutorial.
This is part 1 of a series of tutorials on linear functions.
Type: Original Student Tutorial
Quadratic Function Part 2: Launches:
Learn about different formats of quadratic equations and their graphs with experiments involving launching and shooting of balls in this interactive tutorial.
This is part 2 of a two-part series: Click HERE to open part 1.
Type: Original Student Tutorial
Quadratic Functions Part 1: Ball Games:
Join us as we watch ball games and explore how the height of a ball bounce over time is represented by quadratic functions, which provides opportunities to interpret key features of the function in
this interactive tutorial.
This is part 1 of a two-part series: Click HERE to open part 2.
Type: Original Student Tutorial
Multistep Factoring: Quadratics:
Learn how to use multistep factoring to factor quadratics in this interactive tutorial.
This is part 5 in a five-part series. Click below to open the other tutorials in this series.
• Part 5: Multistep Factoring: Quadratics (current tutorial)
Type: Original Student Tutorial
Factoring Polynomials when "a" Does Not Equal 1, Snowflake Method:
Learn to factor quadratic trinomials when the coefficient a does not equal 1 by using the Snowflake Method in this interactive tutorial.
This is part 4 in a five-part series. Click below to open the other tutorials in this series.
Type: Original Student Tutorial
Solving Systems of Linear Equations Part 6: Writing Systems from Context:
Learn how to create systems of linear equations to represent contextual situations in this interactive tutorial.
This is part 6 in a 7-part series. Click below to explore the other tutorials in the series.
• Part 7: Solving Systems of Linear Equations: Word Problems (Coming soon)
Type: Original Student Tutorial
Factoring Quadratics When the Coefficient a Does Not Equal 1: The Box Method:
Learn how to factor quadratic polynomials when the leading coefficient (a) is not 1 by using the box method in this interactive tutorial.
This is part 3 in a five-part series. Click below to open the other tutorials in this series.
Type: Original Student Tutorial
The Diamond Game: Factoring Quadratics when a = 1:
Learn how to factor quadratics when the coefficient a = 1 using the diamond method in this game show-themed, interactive tutorial.
This is part 1 in a five-part series. Click below to open the other tutorials in this series.
Type: Original Student Tutorial
Highs and Lows Part 2: Completing the Square:
Learn the process of completing the square of a quadratic function to find the maximum or minimum to discover how high a dolphin jumped in this interactive tutorial.
This is part 2 of a 2 part series. Click HERE to open part 1.
Type: Original Student Tutorial
Highs and Lows Part 1: Completing the Square:
Learn the process of completing the square of a quadratic function to find the maximum or minimum to discover how high a dolphin jumped in this interactive tutorial.
This is part 1 of a 2 part series. Click HERE to open Part 2.
Type: Original Student Tutorial
Identifying Parts of Linear Expressions:
Learn to identify and interpret parts of linear expressions in terms of mathematical or real-world contexts in this original tutorial.
Type: Original Student Tutorial
Exponential Functions Part 3: Decay:
Learn about exponential decay as you calculate the value of used cars by examining equations, graphs, and tables in this interactive tutorial.
Type: Original Student Tutorial
Linear Functions: Jobs:
Learn how to interpret key features of linear functions and translate between representations of linear functions through exploring jobs for teenagers in this interactive tutorial.
Type: Original Student Tutorial
Exponential Functions Part 2: Growth:
Learn about exponential growth in the context of interest earned as money is put in a savings account by examining equations, graphs, and tables in this interactive tutorial.
Type: Original Student Tutorial
Exponential Functions Part 1:
Learn about exponential functions and how they are different from linear functions by examining real world situations, their graphs and their tables in this interactive tutorial.
Type: Original Student Tutorial
Dilations...The Effect of k on a Graph:
Visualize the effect of using a value of k in both kf(x) or f(kx) when k is greater than zero in this interactive tutorial.
Type: Original Student Tutorial
Solving Inequalities and Graphing Solutions Part 2:
Learn how to solve and graph compound inequalities and determine if solutions are viable in part 2 of this interactive tutorial series.
Click HERE to open Part 1.
Type: Original Student Tutorial
Solving Inequalities and Graphing Solutions: Part 1:
Learn how to solve and graph one variable inequalities, including compound inequalities, in part 1 of this interactive tutorial series.
Click HERE to open Part 2.
Type: Original Student Tutorial
Reflections...The Effect of k on a Graph:
Learn how reflections of a function are created and tied to the value of k in the mapping of f(x) to -1f(x) in this interactive tutorial.
Type: Original Student Tutorial
Translations...The Effect of k on the Graph:
Explore translations of functions on a graph that are caused by k in this interactive tutorial. GeoGebra and interactive practice items are used to investigate linear, quadratic, and exponential
functions and their graphs, and the effect of a translation on a table of values.
Type: Original Student Tutorial
Introduction to Polynomials, Part 2 - Adding and Subtracting:
Learn how to add and subtract polynomials in this online tutorial. You will learn how to combine like terms and then use the distributive property to subtract polynomials.
This is part 2 of a two-part lesson. Click below to open part 1.
Type: Original Student Tutorial
Introduction to Polynomials: Part 1:
Learn how to identify monomials and polynomials and determine their degree in this interactive tutorial.
This is part 1 in a two-part series. Click here to open Part 2.
Type: Original Student Tutorial
The Radical Puzzle:
Learn to rewrite products involving radicals and rational exponents using properties of exponents in this interactive tutorial.
Type: Original Student Tutorial
Changing Rates:
Learn how to calculate and interpret an average rate of change over a specific interval on a graph in this interactive tutorial.
Type: Original Student Tutorial
Graphing Quadratic Functions:
Follow as we discover key features of a quadratic equation written in vertex form in this interactive tutorial.
Type: Original Student Tutorial
Factoring Polynomials Using Special Cases:
Learn how to factor quadratic polynomials that follow special cases, difference of squares and perfect square trinomials, in this interactive tutorial.
This is part 2 in a five-part series. Click below to open the other tutorials in this series.
Type: Original Student Tutorial
Educational Games
Timed Algebra Quiz:
In this timed activity, students solve linear equations (one- and two-step) or quadratic equations of varying difficulty depending on the initial conditions they select. This activity allows students
to practice solving equations while the activity records their score, so they can track their progress. This activity includes supplemental materials, including background information about the
topics covered, a description of how to use the application, and exploration questions for use with the Java applet.
Type: Educational Game
Algebra Four:
In this activity, two students play a simulated game of Connect Four, but in order to place a piece on the board, they must correctly solve an algebraic equation. This activity allows students to
practice solving equations of varying difficulty: one-step, two-step, or quadratic equations and using the distributive property if desired. This activity includes supplemental materials, including
background information about the topics covered, a description of how to use the application, and exploration questions for use with the Java applet.
Type: Educational Game
Perspectives Video: Experts
Jumping Robots and Quadratics:
Jump to it and learn more about how quadratic equations are used in robot navigation problem solving!
Type: Perspectives Video: Expert
Perspectives Video: Professional/Enthusiasts
Base 16 Notation in Computing:
Listen in as a computing enthusiast describes how hexadecimal notation is used to express big numbers in just a little space.
Download the CPALMS Perspectives video student note taking guide.
Type: Perspectives Video: Professional/Enthusiast
Quadratic Equations and Robots:
Get in gear with robotics as this engineer explains how quadratic equations are used in programming robotic navigation.
Type: Perspectives Video: Professional/Enthusiast
Problem-Solving Tasks
Quadrupling Leads to Halving:
Students explore the structure of the operation s/√n. This question provides students with an opportunity to see expressions as constructed out of a sequence of operations: first taking the square root of n, then dividing the result of that operation into s.
Type: Problem-Solving Task
Dilating a Line:
This task asks students to make deductions about a line after it has been dilated by a factor of 2.
Type: Problem-Solving Task
As the Wheel Turns:
In this task, students use trigonometric functions to model the movement of a point around a wheel and, through space. Students also interpret features of graphs in terms of the given real-world
Type: Problem-Solving Task
The Circle and The Line:
Although this task is fairly straightforward, it is worth noticing that it does not explicitly tell students to look for intersection points when they graph the circle and the line. Thus, in addition
to assessing whether they can solve the system of equations, it is assessing a simple but important piece of conceptual understanding, namely the correspondence between intersection points of the two
graphs and solutions of the system.
Type: Problem-Solving Task
Population and Food Supply:
In this task students use verbal descriptions to construct and compare linear and exponential functions and to find where the two functions intersect (F-LE.2, F-LE.3, A-REI.11).
Type: Problem-Solving Task
Braking Distance:
This task provides an exploration of a quadratic equation by descriptive, numerical, graphical, and algebraic techniques. Based on its real-world applicability, teachers could use the task as a way
to introduce and motivate algebraic techniques like completing the square, en route to a derivation of the quadratic formula.
Type: Problem-Solving Task
A Linear and Quadratic System:
This task asks students to consider the linear and quadratic functions shown on a graph, and use quadratic functions to find the coordinates.
Type: Problem-Solving Task
Cash Box:
The given solutions for this task involve the creation and solving of a system of two equations and two unknowns, with the caveat that the context of the problem implies that we are interested only in non-negative integer solutions. Indeed, in the first solution, we must further restrict our attention to the case that one of the variables is even. This aspect of the task is illustrative of the mathematical practice of modeling with mathematics, and crucial as the system has an integer solution for both situations, that is, whether we include the dollar on the floor in the cash box or not.
Type: Problem-Solving Task
A Cubic Identity:
Solving this problem with algebra requires factoring a particular cubic equation (the difference of two cubes) as well as a quadratic equation. An alternative solution using prime numbers and
arithmetic is presented.
Type: Problem-Solving Task
Two Squares are Equal:
This classroom task is meant to elicit a variety of different methods of solving a quadratic equation (A-REI.4). Some are straightforward (for example, expanding the square on the right and
rearranging the equation so that we can use the quadratic formula); some are simple but clever (reasoning from the fact that x and (2x - 9) have the same square); some use tools (using a graphing
calculator to graph the functions f(x) = x^2 and g(x) = (2x-9)^2 and looking for values of x at which the two functions intersect). Some solution methods will work on an arbitrary quadratic
equation, while others (such as the last three) may have difficulty or fail if the quadratic equation is not given in a particular form, or if the solutions are not rational numbers.
Type: Problem-Solving Task
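As an aside not part of the original task, the sample equation it describes, x^2 = (2x - 9)^2, can be checked numerically; this sketch expands and rearranges it into standard form and applies the quadratic formula:

```python
import math

# Expanding x^2 = (2x - 9)^2 gives x^2 = 4x^2 - 36x + 81, which
# rearranges to x^2 - 12x + 27 = 0; solve it with the quadratic formula.
a, b, c = 1.0, -12.0, 27.0
disc = b * b - 4 * a * c
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]

for x in roots:
    # Both sides of the original equation agree at each root.
    assert math.isclose(x ** 2, (2 * x - 9) ** 2)
print(sorted(roots))  # → [3.0, 9.0]
```

Note that both roots here are rational; as the description points out, graphing-based methods get harder when they are not.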
Exponential growth versus linear growth I:
The purpose of this task is to have students discover how (and how quickly) an exponentially increasing quantity eventually surpasses a linearly increasing quantity. Students' intuitions will
probably have them favoring Option A for much longer than is actually the case, especially if they are new to the phenomenon of exponential growth. Teachers might use this surprise as leverage to
segue into a more involved task comparing linear and exponential growth.
Type: Problem-Solving Task
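The crossover the task builds toward can be sketched in a few lines of Python. The payment amounts below are illustrative assumptions (a flat daily payment versus a doubling penny), not the values from the task itself:

```python
# Compare a linear payout against an exponential one and find the first
# day the exponential option pulls ahead. Values are illustrative only.
def linear(day):
    return 100 * day          # cumulative dollars: flat $100 per day

def exponential(day):
    return 0.01 * (2 ** day)  # cumulative dollars: a penny that doubles daily

day = 1
while exponential(day) <= linear(day):
    day += 1
print(day)  # → 18 : the exponential option is behind for over two weeks
```

The long stretch where the linear option looks better matches the intuition trap the task description mentions.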
Finding Parabolas through Two Points:
This problem-solving task challenges students to find all quadratic functions described by a given equation and coordinates, and to describe how the graphs of those functions are related to one another.
Type: Problem-Solving Task
Exponential growth versus polynomial growth:
This problem solving task shows that an exponential function takes larger values than a cubic polynomial function provided the input is sufficiently large. This resource also includes standards
alignment commentary and annotated solutions.
Type: Problem-Solving Task
Warming and Cooling:
This task is meant to be a straightforward assessment task of graph reading and interpreting skills. This task helps reinforce the idea that when a variable represents time, t = 0 is chosen as an
arbitrary point in time and positive times are interpreted as times that happen after that.
Type: Problem-Solving Task
Throwing Baseballs:
This task could be used for assessment or for practice. It allows students to compare characteristics of two quadratic functions that are each represented differently, one as the graph of a quadratic
function and one written out algebraically. Specifically, students are asked to determine which function has the greatest maximum and the greatest non-negative root.
Type: Problem-Solving Task
Average Cost:
This task asks students to find the average, write an equation, find the domain, and create a graph of the cost of producing DVDs.
Type: Problem-Solving Task
Springboard Dive:
The problem presents a context where a quadratic function arises. Careful analysis, including graphing of the function, is closely related to the context. The student will gain valuable experience
applying the quadratic formula and the exercise also gives a possible implementation of completing the square.
Type: Problem-Solving Task
The High School Gym:
This task asks students to consider functions in regard to temperatures in a high school gym.
Type: Problem-Solving Task
Telling a Story with Graphs:
In this task students are given graphs of quantities related to weather. The purpose of the task is to show that graphs are more than a collection of coordinate points; they can tell a story about
the variables that are involved, and together they can paint a very complete picture of a situation, in this case the weather. Features in one graph, like maximum and minimum points, correspond to
features in another graph. For example, on a rainy day, the solar radiation is very low, and the cumulative rainfall graph is increasing with a large slope.
Type: Problem-Solving Task
Oakland Coliseum:
This deceptively simple task asks students to find the domain and range of a function from a given context. The function is linear and if simply looked at from a formulaic point of view, students
might find the formula for the line and say that the domain and range are all real numbers.
Type: Problem-Solving Task
Logistic Growth Model, Explicit Version:
This problem introduces a logistic growth model in the concrete settings of estimating the population of the U.S. The model gives a surprisingly accurate estimate and this should be contrasted with
linear and exponential models.
Type: Problem-Solving Task
Logistic Growth Model, Abstract Version:
This task is for instructional purposes only and students should already be familiar with some specific examples of logistic growth functions. The goal of this task is to have students appreciate how
different constants influence the shape of a graph.
Type: Problem-Solving Task
How Is the Weather?:
This task can be used as a quick assessment to see if students can make sense of a graph in the context of a real world situation. Students also have to pay attention to the scale on the vertical
axis to find the correct match. The first and third graphs look very similar at first glance, but the function values are very different since the scales on the vertical axes are very different. The
task could also be used to generate a group discussion on interpreting functions given by graphs.
Type: Problem-Solving Task
Equations and Formulas:
In this task, students will use inverse operations to solve the equations for the unknown variable or for the designated variable if there is more than one.
Type: Problem-Solving Task
Bernardo and Sylvia Play a Game:
This task presents a simple but mathematically interesting game whose solution is a challenging exercise in creating and reasoning with algebraic inequalities. The core of the task involves
converting a verbal statement into a mathematical inequality in a context in which the inequality is not obviously presented, and then repeatedly using the inequality to deduce information about the
structure of the game.
Type: Problem-Solving Task
Regular Tessellations of the Plane:
This task examines the ways in which the plane can be covered by regular polygons in a very strict arrangement called a regular tessellation. These tessellations are studied here using algebra, which
enters the picture via the formula for the measure of the interior angles of a regular polygon (which should therefore be introduced or reviewed before beginning the task). The goal of the task is to
use algebra in order to understand which tessellations of the plane with regular polygons are possible.
Type: Problem-Solving Task
Building a quadratic function from f(x) = x^2:
This task aims for students to understand the quadratic formula in a geometric way in terms of the graph of a quadratic function.
Type: Problem-Solving Task
Building an explicit quadratic function by composition:
This task is intended for instruction and to motivate "Building a general quadratic function." This task assumes that the students are familiar with the process of completing the square.
Type: Problem-Solving Task
Checking a Calculation of a Decimal Exponent:
In this example, students use properties of rational exponents and other algebraic concepts to compare and verify the relative size of two real numbers that involve decimal exponents.
Type: Problem-Solving Task
Harvesting the Fields:
This is a challenging task, suitable for extended work, and reaching into a deep understanding of units. Students are given a scenario and asked to determine the number of people required to complete the amount of work in the time described. The task requires students to make sense of problems and persevere in solving them. An algebraic solution is possible but complicated; a numerical solution is both simpler and more sophisticated, requiring skilled use of units and quantitative reasoning. Thus the task aligns with either MAFS.912.A-CED.1.1 or MAFS.912.N-Q.1.1, depending on the approach.
Type: Problem-Solving Task
Calculating the Square Root of 2:
This task is intended for instructional purposes so that students can become familiar and confident with using a calculator and understanding what it can and cannot do. This task gives an opportunity
to work on the notion of place value (in parts [b] and [c]) and also to understand part of an argument for why the square root of 2 is not a rational number.
Type: Problem-Solving Task
Throwing a Ball:
Students manipulate a given equation to find specified information.
Type: Problem-Solving Task
Paying the Rent:
Students solve problems tracking the balance of a checking account used only to pay rent. This simple conceptual task focuses on what it means for a number to be a solution to an equation, rather
than on the process of solving equations.
Type: Problem-Solving Task
Buying a Car:
Students extrapolate the list price of a car given a total amount paid in states with different tax rates. The emphasis in this task is not on complex solution procedures. Rather, the progression of
equations, from two that involve different values of the sales tax, to one that involves the sales tax as a parameter, is designed to foster the habit of looking for regularity in solution
procedures, so that students don't approach every equation as a new problem but learn to notice familiar types.
Type: Problem-Solving Task
Planes and Wheat:
In this resource, students refer to given information which defines 5 variables in the context of real world government expenses. They are then asked to write equations based upon specific known
values for some of the variables. The emphasis is on setting up, rather than solving, the equations.
Type: Problem-Solving Task
Building a General Quadratic Function:
In this resource, a method of deriving the quadratic formula from a theoretical standpoint is demonstrated. This task is for instructional purposes only and builds on "Building an explicit quadratic function."
Type: Problem-Solving Task
Sum of Even and Odd:
Students explore and manipulate expressions based on the following statement:
A function f defined for -a < x < a is even if f(-x) = f(x) and is odd if f(-x) = -f(x) when -a < x < a. In this task we assume f is defined on such an interval, which might be the full real line (i.e., a = ∞).
Type: Problem-Solving Task
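The even/odd definition this task works from can be tested mechanically. A minimal sketch, with sample functions and tolerance chosen for illustration rather than taken from the task:

```python
# Numerically test f(-x) = f(x) (even) and f(-x) = -f(x) (odd)
# on a sample of points in (-a, a).
def is_even(f, a, samples=100):
    xs = [-a + 2 * a * k / (samples + 1) for k in range(1, samples + 1)]
    return all(abs(f(-x) - f(x)) < 1e-9 for x in xs)

def is_odd(f, a, samples=100):
    xs = [-a + 2 * a * k / (samples + 1) for k in range(1, samples + 1)]
    return all(abs(f(-x) + f(x)) < 1e-9 for x in xs)

print(is_even(lambda x: x ** 2, 5))  # → True  (x^2 is even)
print(is_odd(lambda x: x ** 3, 5))   # → True  (x^3 is odd)
print(is_even(lambda x: x ** 3, 5))  # → False (x^3 is not even)
```

A finite sample can only support, not prove, the symmetry; the task itself asks for the algebraic argument.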
Graphs of Quadratic Functions:
Students compare graphs of different quadratic functions, then produce equations of their own to satisfy given conditions.
This exploration can be done in class near the beginning of a unit on graphing parabolas. Students need to be familiar with intercepts, and need to know what the vertex is. It is effective after
students have graphed parabolas in vertex form (y=a(x–h)^2+k), but have not yet explored graphing other forms.
Type: Problem-Solving Task
Equivalent Expressions:
This is a standard problem phrased in a non-standard way. Rather than asking students to perform an operation, expanding, it expects them to choose the operation for themselves in response to a
question about structure. Students must understand the need to transform the factored form of the quadratic expression (a product of sums) into a sum of products in order to easily see a, the coefficient of the x^2 term; k, the coefficient of the x term; and n, the constant term.
Type: Problem-Solving Task
Radius of a Cylinder:
Students are asked to interpret the effect on the value of an expression given a change in value of one of the variables.
Type: Problem-Solving Task
Mixing Fertilizer:
Students examine and answer questions related to a scenario similar to a "mixture" problem involving two different mixtures of fertilizer. In this example, students determine and then compare
expressions that correspond to concentrations of various mixtures. Ultimately, students generalize the problem and verify conclusions using algebraic rather than numerical expressions.
Type: Problem-Solving Task
Mixing Candies:
Students are asked to interpret expressions and equations within the context of the amounts of caramels and truffles in a box of candy.
Type: Problem-Solving Task
Kitchen Floor Tiles:
This problem asks students to consider algebraic expressions calculating the number of floor tiles in given patterns. The purpose of this task is to give students practice in reading, analyzing, and
constructing algebraic expressions, attending to the relationship between the form of an expression and the context from which it arises. The context here is intentionally thin; the point is not to
provide a practical application to kitchen floors, but to give a framework that imbues the expressions with an external meaning.
Type: Problem-Solving Task
Extending the Definitions of Exponents, Variation 2:
The goal of this task is to develop an understanding of rational exponents (MAFS.912.N-RN.1.1); however, it also raises important issues about distinguishing between linear and exponential behavior
(MAFS.912.F-LE.1.1c) and it requires students to create an equation to model a context (MAFS.912.A-CED.1.2).
Type: Problem-Solving Task
Delivery Trucks:
This resource describes a simple scenario which can be represented by the use of variables. Students are asked to examine several variable expressions, interpret their meaning, and describe what
quantities they each represent in the given context.
Type: Problem-Solving Task
Animal Populations:
In this task students interpret the relative size of variable expressions involving two variables in the context of a real world situation. All given expressions can be interpreted as quantities that
one might study when looking at two animal populations.
Type: Problem-Solving Task
Computations with Complex Numbers:
This resource involves simplifying algebraic expressions that involve complex numbers and various algebraic operations.
Type: Problem-Solving Task
Operations with Rational and Irrational Numbers:
This task has students experiment with the operations of addition and multiplication, as they relate to the notions of rationality and irrationality.
Type: Problem-Solving Task
Seeing Dots:
The purpose of this task is to identify the structure in the two algebraic expressions by interpreting them in terms of a geometric context. Students will have likely seen this type of process
before, so the principal source of challenge in this task is to encourage a multitude and variety of approaches, both in terms of the geometric argument and in terms of the algebraic manipulation.
Type: Problem-Solving Task
Graphs of Power Functions:
This task requires students to recognize the graphs of different (positive) powers of x.
Type: Problem-Solving Task
Transforming the Graph of a Function:
This problem solving task examines, in a graphical setting, the impact of adding a scalar, multiplying by a scalar, and making a linear substitution of variables on the graph of the function f. This
resource also includes standards alignment commentary and annotated solutions.
Type: Problem-Solving Task
Medieval Archer:
The task addresses the first part of standard MAFS.912.F-BF.2.3: "Identify the effect on the graph of replacing f(x) by f(x) + k, kf(x), f(kx), and f(x + k) for specific values of k (both positive
and negative)."
Type: Problem-Solving Task
Kimi and Jordan:
In the middle grades, students have lots of experience analyzing and comparing linear functions using graphs, table, symbolic expressions, and verbal descriptions. In this task, students may choose a
representation that suits them and then reason from within that representation.
Type: Problem-Solving Task
The Canoe Trip, Variation 2:
The primary purpose of this task is to lead students to a numerical and graphical understanding of the behavior of a rational function near a vertical asymptote, in terms of the expression defining
the function.
Type: Problem-Solving Task
The Canoe Trip, Variation 1:
The purpose of this task is to give students practice constructing functions that represent a quantity of interest in a context, and then interpreting features of the function in the light of the
context. It can be used as either an assessment or a teaching task.
Type: Problem-Solving Task
Student Center Activity
Multiplying Complex Numbers:
This video demonstrates how to multiply complex numbers using the distributive property and the FOIL method.
Type: Tutorial
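The FOIL expansion this video covers can be cross-checked against Python's built-in complex arithmetic. The sample factors below are my own, not taken from the video:

```python
# FOIL expansion of (a + bi)(c + di) = (ac - bd) + (ad + bc)i,
# cross-checked against Python's built-in complex type.
def foil(a, b, c, d):
    return (a * c - b * d, a * d + b * c)

a, b, c, d = 3, 2, 1, 4  # (3 + 2i)(1 + 4i), arbitrary sample values
re, im = foil(a, b, c, d)
builtin = complex(a, b) * complex(c, d)
assert (re, im) == (builtin.real, builtin.imag)
print(re, im)  # → -5 14, i.e. -5 + 14i
```

The key step FOIL encodes is that the bd term picks up a sign change because i * i = -1.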
Addition and Subtraction of Polynomials:
This video tutorial shows students: the standard form of a polynomial, how to identify polynomials, how to determine the degree of a polynomial, how to add and subtract polynomials, and how to
represent the area of a shape as an addition or subtraction of polynomials.
Type: Tutorial
Systems of Equations Word Problems Example 1:
This video demonstrates solving a word problem by creating a system of linear equations that represents the situation and solving them using elimination.
Type: Tutorial
Squaring a Binomial:
This video covers squaring a binomial with two variables. Students will be given the area of a square.
Type: Tutorial
Constructing Equations with Two Variables - Yoga Plan:
This video provides a real-world scenario and step-by-step instructions to constructing equations using two variables. Possible follow-up videos include Plotting System of Equations - Yoga Plan,
Solving System of Equations with Substitution - Yoga Plan, and Solving System of Equations with Elimination - Yoga Plan.
Type: Tutorial
Graphing Quadratic Equations:
This tutorial helps learners graph the equation of a quadratic function using the coordinates of the vertex of a parabola and its x-intercepts.
Type: Tutorial
Graphing Exponential Equations:
This tutorial will help you to learn about exponential functions by graphing various equations representing exponential growth and decay.
Type: Tutorial
What is a variable?:
Our focus here is understanding that a variable is just a letter or symbol (usually a lower case letter) that can represent different values in an expression. We got this. Just watch.
Type: Tutorial
Power of a Power Property:
This tutorial demonstrates how to use the power of a power property with both numerals and variables.
Type: Tutorial
Special Products of Binomials:
This video tutorial discusses two special polynomial multiplications: squaring a binomial, and the product of a sum and a difference.
Type: Tutorial
Multiplying Binomials:
Binomials are polynomials with two terms. This tutorial will help students learn about the multiplication of binomials. In multiplication, we need to make sure that each term in the first set
of parentheses multiplies each term in the second set.
Type: Tutorial
Refraction of Light:
This resource explores the electromagnetic spectrum and waves by allowing the learner to observe the refraction of light as it passes from one medium to another, study the relation between refraction
of light and the refractive index of the medium, select from a list of materials with different refractive indices, and change the light beam from white to monochromatic and observe the difference.
Type: Tutorial
Human Eye Accommodation:
• Observe how the eye's muscles change the shape of the lens in accordance with the distance to the object being viewed
• Indicate the parts of the eye that are responsible for vision
• View how images are formed in the eye
Type: Tutorial
Concave Spherical Mirrors:
• Learn how a concave spherical mirror generates an image
• Observe how the size and position of the image changes with the object distance from the mirror
• Learn the difference between a real image and a virtual image
• Learn some applications of concave mirrors
Type: Tutorial
Convex Spherical Mirrors:
• Learn how a convex mirror forms the image of an object
• Understand why convex mirrors form small virtual images
• Observe the change in size and position of the image with the change in object's distance from the mirror
• Learn some practical applications of convex mirrors
Type: Tutorial
Color Temperature in a Virtual Radiator:
• Observe the change of color of a black body radiator upon changes in temperature
• Understand that at 0 Kelvin or Absolute Zero there is no molecular motion
Type: Tutorial
Solar Cell Operation:
This resource explains how a solar cell converts light energy into electrical energy. The user will also learn about the different components of the solar cell and observe the relationship between
photon intensity and the amount of electrical energy produced.
Type: Tutorial
Electromagnetic Wave Propagation:
• Observe that light is composed of oscillating electric and magnetic waves
• Explore the propagation of an electromagnetic wave through its electric and magnetic field vectors
• Observe the difference in propagation of light of different wavelengths
Type: Tutorial
Basic Electromagnetic Wave Properties:
• Explore the relationship between wavelength, frequency, amplitude and energy of an electromagnetic wave
• Compare the characteristics of waves of different wavelengths
Type: Tutorial
Geometrical Construction of Ray Diagrams:
• Learn to trace the path of propagating light waves using geometrical optics
• Observe the effect of changing parameters such as focal length, object dimensions and position on image properties
• Learn the equations used in determining the size and locations of images formed by thin lenses
Type: Tutorial
Will an Ice Cube Melt Faster in Freshwater or Saltwater?:
With an often unexpected outcome from a simple experiment, students can discover the factors that cause and influence thermohaline circulation in our oceans. In two 45-minute class periods, students
complete activities where they observe the melting of ice cubes in saltwater and freshwater, using basic materials: clear plastic cups, ice cubes, water, salt, food coloring, and thermometers. There
are no prerequisites for this lesson but it is helpful if students are familiar with the concepts of density and buoyancy as well as the salinity of seawater. It is also helpful if students
understand that dissolving salt in water will lower the freezing point of water. There are additional follow up investigations that help students appreciate and understand the importance of the
ocean's influence on Earth's climate.
Type: Video/Audio/Animation
Relations and Functions:
This video demonstrates how to determine if a relation is a function and how to identify the domain.
Type: Video/Audio/Animation
Roots and Unit Fraction Exponents:
Exponents are not only integers. They can also be fractions. Using the rules of exponents, we can see why a number raised to the power "one over n" is equivalent to the nth root of that number.
Type: Video/Audio/Animation
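The claim in this entry, that x raised to the power 1/n is the nth root of x, is easy to spot-check numerically. The sample values are arbitrary:

```python
import math

# For x > 0, x ** (1/n) should be the nth root of x: raising the
# result back to the nth power recovers x (up to floating-point error).
for x in (2.0, 10.0, 81.0):
    for n in (2, 3, 4):
        root = x ** (1.0 / n)
        assert math.isclose(root ** n, x, rel_tol=1e-12)

print(81 ** 0.25)  # fourth root of 81, which should be very close to 3
```

This works because the exponent rule (x^(1/n))^n = x^(n/n) = x^1 forces x^(1/n) to behave exactly as an nth root.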
Rational Exponents:
Exponents are not only integers and unit fractions. An exponent can be any rational number expressed as the quotient of two integers.
Type: Video/Audio/Animation
Simplifying Radical Expressions:
Radical expressions can often be simplified by moving factors which are perfect roots out from under the radical sign.
Type: Video/Audio/Animation
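The factor-extraction idea this entry describes can be sketched as a small routine of my own (not the resource's code): pull every perfect-square factor out from under the radical.

```python
# Simplify sqrt(n) to outside * sqrt(inside) by repeatedly dividing out
# perfect-square factors f*f and moving f outside the radical.
def simplify_sqrt(n):
    outside, inside = 1, n
    f = 2
    while f * f <= inside:
        while inside % (f * f) == 0:
            inside //= f * f
            outside *= f
        f += 1
    return outside, inside

print(simplify_sqrt(72))  # → (6, 2), i.e. sqrt(72) = 6*sqrt(2)
```

For example, 72 = 36 * 2, so the 36 leaves the radical as 6 and only the square-free part 2 remains inside.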
Solving Mixture Problems with Linear Equations:
Mixture problems can involve mixtures of things other than liquids. This video shows how Algebra can be used to solve problems involving mixtures of different types of items.
Type: Video/Audio/Animation
Using Systems of Equations Versus One Equation:
When should a system of equations with multiple variables be used to solve an Algebra problem, instead of using a single equation with a single variable?
Type: Video/Audio/Animation
Systems of Linear Equations in Two Variables:
The points of intersection of two graphs represent common solutions to both equations. Finding these intersection points is an important tool in analyzing physical and mathematical systems.
Type: Video/Audio/Animation
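The intersection-point idea this entry describes can be sketched with elimination (Cramer's rule) for two lines in standard form; the example coefficients are mine:

```python
# Intersection of a1*x + b1*y = c1 and a2*x + b2*y = c2 via Cramer's rule.
def intersect(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel or coincident lines: no unique intersection
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# x + y = 3 and x - y = 1 meet where both equations hold at once.
print(intersect(1, 1, 3, 1, -1, 1))  # → (2.0, 1.0)
```

A zero determinant corresponds exactly to the graphical case where the two lines never cross at a single point.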
Point-Slope Form:
The point-slope form of the equation for a line can describe any non-vertical line in the Cartesian plane, given the slope and the coordinates of a single point which lies on the line.
Type: Video/Audio/Animation
Two Point Form:
The two point form of the equation for a line can describe any non-vertical line in the Cartesian plane, given the coordinates of two points which lie on the line.
Type: Video/Audio/Animation
Solving Literal Equations:
Literal equations are formulas for calculating the value of one unknown quantity from one or more known quantities. Variables in the formula are replaced by the actual or 'literal' values
corresponding to a specific instance of the relationship.
Type: Video/Audio/Animation
MIT BLOSSOMS - Fabulous Fractals and Difference Equations:
This learning video introduces students to the world of Fractal Geometry through the use of difference equations. As a prerequisite to this lesson, students would need two years of high school
algebra (comfort with single variable equations) and motivation to learn basic complex arithmetic. Ms. Zager has included a complete introductory tutorial on complex arithmetic with homework
assignments downloadable here. Also downloadable are some supplemental challenge problems. Time required to complete the core lesson is approximately one hour, and materials needed include a
blackboard/whiteboard as well as space for students to work in small groups. During the in-class portions of this interactive lesson, students will brainstorm on the outcome of the chaos game and
practice calculating trajectories of difference equations.
Type: Video/Audio/Animation
Graphing Lines 1:
Khan Academy video tutorial on graphing linear equations: "Algebra: Graphing Lines 1"
Type: Video/Audio/Animation
This Khan Academy video tutorial introduces averages and algebra problems involving averages.
Type: Video/Audio/Animation
Virtual Manipulatives
Solving Quadratics By Taking The Square Root:
This resource can be used to assess students' understanding of solving quadratic equations by taking the square root. A great resource to view prior to this is "Solving quadratic equations by square root" by Khan Academy.
Type: Virtual Manipulative
Slope Slider:
In this activity, students adjust slider bars which adjust the coefficients and constants of a linear function and examine how their changes affect the graph. The equation of the line can be in
slope-intercept form or standard form. This activity allows students to explore linear equations, slopes, and y-intercepts and their visual representation on a graph. This activity includes
supplemental materials, including background information about the topics covered, a description of how to use the application, and exploration questions for use with the java applet.
Type: Virtual Manipulative
Graphing Equations Using Intercepts:
This resource provides linear functions in standard form and asks the user to graph it using intercepts on an interactive graph below the problem. Immediate feedback is provided, and for incorrect
responses, each step of the solution is thoroughly modeled.
Type: Virtual Manipulative
Graphing Lines:
Allows students access to a Cartesian Coordinate System where linear equations can be graphed and details of the line and the slope can be observed.
Type: Virtual Manipulative
Data Flyer:
Using this virtual manipulative, students are able to graph a function and a set of ordered pairs on the same coordinate plane. The constants, coefficients, and exponents can be adjusted using slider
bars, so the student can explore the effect on the graph as the function parameters are changed. Students can also examine the deviation of the data from the function. This activity includes
supplemental materials, including background information about the topics covered, a description of how to use the application, and exploration questions for use with the java applet.
Type: Virtual Manipulative
Function Matching:
This is a graphing tool/activity for students to deepen their understanding of polynomial functions and their corresponding graphs. This tool is to be used in conjunction with a full lesson on
graphing polynomial functions; it can be used either before an in-depth lesson to prompt students to make inferences and connections between the coefficients in polynomial functions and their
corresponding graphs, or as a practice tool after a lesson in graphing the polynomial functions.
Type: Virtual Manipulative
Function Flyer:
In this online tool, students input a function to create a graph where the constants, coefficients, and exponents can be adjusted by slider bars. This tool allows students to explore graphs of
functions and how adjusting the numbers in the function affect the graph. Using tabs at the top of the page you can also access supplemental materials, including background information about the
topics covered, a description of how to use the application, and exploration questions for use with the java applet.
Type: Virtual Manipulative
Curve Fitting:
With a mouse, students will drag data points (with their error bars) and watch the best-fit polynomial curve form instantly. Students can choose the type of fit: linear, quadratic, cubic, or quartic.
Best fit or adjustable fit can be displayed.
Type: Virtual Manipulative
Equation Grapher:
This interactive simulation investigates graphing linear and quadratic equations. Users are given the ability to define and change the coefficients and constants in order to observe resulting changes
in the graph(s).
Type: Virtual Manipulative
Parent Resources
Vetted resources caregivers can use to help students learn the concepts and skills in this course.
Solid State Physics Past Papers | T4Tutorials.com
Solid State Physics Past Papers
Subject: Solid State Physics-I
Time Allowed: 15 Minutes
Maximum Marks: 10
NOTE: Attempt this Paper on this Question Sheet only. Please encircle the correct option. Division of marks is given in front of each question. This Paper will be collected back after expiry of time
limit mentioned above.
Part-I Encircle the right answer, cutting and overwriting are not allowed. (10)
The coordination number of CsCl structure is
a) 1
b) 7
c) 8
d) 14
Which combination of the following crystal structures are close-packed structures?
a) FCC and SC
b) BCC and SC
c) BCC and HCP
d) HCP and FCC
The space lattice of cesium chloride (CsCl) structure is:
a) Simple cubic
b) Body centered cubic
c) Face-centered cubic
d) None of these
Reciprocal of face centered cubic (FCC) lattice is
a) FCC lattice
b) BCC lattice
c) SC lattice
d) HCP lattice
e) none of these
For p atoms in a primitive cell, which of the following combinations of acoustical and optical phonon branches in sequence is true?
a) (3p-3, 3)
b) (3, 3p-3)
c) (3, 3)
d) none of these
According to the classical model of lattice heat capacity (C[v]), C[v] for all solids
a) depends on temperature
b) does not depend on temperature
c) remains constant at all temperatures
d) both b) and c)
e) none of these
Van der Waals interactions in inert gas crystals are always
a) repulsive
b) attractive
c) neither attractive nor repulsive
d) zero
e) none of these
At low temperatures, phonon heat capacity C[v] (according to the Debye model) varies as:
a) T^3
b) T^3/2
c) T^2
d) T
e) None of these
In a monatomic lattice, the frequency of the wave at long wavelengths varies with k as:
a) k
b) k^2
c) k^3
d) independent of wave-vector k
In cubic crystals, the [111] crystallographic direction relative to the (111) crystal plane is always
a) Parallel
b) Perpendicular
c) neither parallel nor perpendicular
d) none of these
Subject: Solid State Physics-I
Time Allowed: 2 Hours 45 Minutes
Maximum Marks: 50
Part-II Give short notes on following, each question carries equal marks. (20)
Q#1: Draw (111), (200), (100) and (110) crystallographic planes in a cubic unit cell.
Q#2: Differentiate between Bravais and non-Bravais lattice with the help of diagrams.
Q#3: Explain primitive and non-primitive unit cell. Differentiate by sketching diagrams.
Q#4: Show that reciprocal of FCC lattice is a BCC lattice.
Q#5: Calculate the packing fraction of face-centered cubic (FCC) lattice.
Part-III Give detailed answers, each question carries equal marks. (30)
Q#1: Consider a linear chain of diatomic atoms of masses m[1] and m[2] (m[1] > m[2]) with repeat distance a and interatomic force constant c.
1. i) Establish the equations of motion of two atoms and derive the dispersion relation for a diatomic linear lattice by taking into account nearest neighbor interaction only.
2. ii) Plot the dispersion curve and distinguish optical and acoustical phonon branches in dispersion curve.
Q#2: What kind of interaction exists between atoms of inert gas crystals? Discuss briefly. Show that the interaction between two identical inert gas atoms at separation R varies as −C/R^6.
Q#3: Derive an expression for lattice heat capacity of solids on the basis of classical model. Explain graphically the discrepancies of classical model in explaining the experimental observations for
low and high temperature limits.
Tree Data Structure | Tree Terminologies | Tree Traversal
Tree Data Structure
A tree is a nonlinear data structure in which the elements are arranged hierarchically rather than sequentially. Trees are used to represent a hierarchical relationship that exists among several data items.
Trees are very flexible, important, versatile, and powerful data structures.
Tree Terminologies
Each data item in a tree is called a node. A node is an entity that contains a key or a value and has pointers to its child nodes.
Leaf Node
The last nodes of each path are called leaf nodes or external nodes; they do not contain a link/pointer to child nodes, i.e. they have no child nodes.
Internal Node
A node having at least one child node is called an internal node.
Edge
An edge is the line drawn from one node to another; it is the link between any two nodes.
Root
The root is the topmost node of a tree. It is the first in the hierarchical arrangement of data items, at the top of the tree.
Height of a Node
The height of a node is the number of edges from the node to the deepest leaf. Height is the longest path from the node to a leaf node.
Depth of a Node
The depth of a node is the number of edges from the root to the node. The depth of the deepest node gives the depth of the tree.
Terminal Node
A node with degree zero is called a terminal node or a leaf. It is a node that has no child nodes.
Siblings
All the children nodes of a given parent node are called siblings. They are also called brothers.
• B and C are siblings of parent node A.
• E and F are siblings of parent node D.
Path
The path is a sequence of consecutive edges from the source node to the destination node. In the figure above, the path between A and E is given by the node pairs (A, B), (B, E).
Height of a Tree
The height of a Tree is the height of the root node or the depth of the deepest node.
Degree of a Node
The degree of a node is the total number of branches of that node.
Forest
A collection of disjoint trees is called a forest. You can create a forest by cutting the root of a tree.
Types of Trees
• Binary Tree
• Binary Search Tree
• AVL Tree
• B-Tree
Tree Traversal
In order to perform any operation on a tree, you need to reach a specific node. So visiting the nodes in a particular pattern is called tree traversal.
Here are some tree traversal algorithms which traverse the tree nodes in a particular order.
1. Pre Order
2. In Order
3. Post Order
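The three orders listed above can be sketched in Python on a small example binary tree (the tree shape and labels are arbitrary illustrations):

```python
# Minimal binary tree node plus the three classic traversal orders.
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def preorder(n):   # visit root, then left subtree, then right subtree
    return [n.key] + preorder(n.left) + preorder(n.right) if n else []

def inorder(n):    # visit left subtree, then root, then right subtree
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

def postorder(n):  # visit left subtree, then right subtree, then root
    return postorder(n.left) + postorder(n.right) + [n.key] if n else []

#       A
#      / \
#     B   C
#    / \
#   D   E
root = Node("A", Node("B", Node("D"), Node("E")), Node("C"))
print(preorder(root))   # → ['A', 'B', 'D', 'E', 'C']
print(inorder(root))    # → ['D', 'B', 'E', 'A', 'C']
print(postorder(root))  # → ['D', 'E', 'B', 'C', 'A']
```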
Tree Applications
• Decision-tree algorithms used in machine learning are built on the tree structure.
• Database indexing uses tree data structures.
• Binary Search Trees (BSTs) are used to quickly check whether an element is present in a set or not.
• Heap is a kind of tree that is used for heap sort.
• A modified version of a tree called a trie is used in modern routers to store routing information.
• The DNS system also uses trees.
• Most popular databases use B-Trees and T-Trees, variants of the tree structure we learned above, to store their data.
• Compilers use a syntax tree to validate the syntax of every program you write.
Why Tree Data Structure?
Because the other data structures such as Arrays, Linked Lists, Stack, and Queue are linear data structures that store data sequentially.
In linear data structures, the time complexity of operations increases with the size of the data, so to overcome this problem we use nonlinear data structures such as trees and graphs.
These data structures allow quicker and easier access to the data. Because the time complexity of tree traversal is better than any linear data structure.
What is a Tree Data Structure?
The tree is a nonlinear data structure. It is a collection of nodes that are related to each other.
What is a branch in tree data structure?
The lines connecting elements or nodes are called branches or edges.
What is tree traversal?
Visiting the nodes of a tree in a particular order is called tree traversal.
What are the types of Tree Traversal?
There are 3 main types of tree traversal.
1. Pre Order
2. In Order
3. Post Order
Replacing BatchNorm | Tom Tumiel
A sneak-peek into the results. Both a 4 layer CNN and a ResNet18 are trained for 3 epochs with different normalisation schemes.
BatchNorm^1 is a technique for normalising the outputs of intermediate layers inside a neural network using the batch's statistics. The method was originally designed to "reduce internal covariate
shift" - to force all of the intermediate layers to have zero mean and unit variance so that the outputs do not explode (especially in deep networks) and keep everything nice and centered around
zero. And BatchNorm has proved vital to getting networks to train, enabling larger learning rates, faster training and results that generalise better. But since the algorithm's original publishing,
much more work has been done identifying why BatchNorm works. And it's not because it reduces covariate shift (mostly), but rather because it has a profound impact on the loss landscape^2 that the
algorithm must traverse in order to find an optimal solution. BatchNorm smooths the loss landscape so that our optimizers can find good solutions much easier. But with this in mind, couldn't we
design a better normalisation strategy? One that uses our better understanding of why it works the way it does. Well that's what I hope to explain in this article.
As a brief reminder, BatchNorm normalises batches using the mean and variance of the batch during training according to the following formula:

y = γ·(x − μ_B)/√(σ²_B + ε) + β

where μ_B and σ²_B are the batch mean and variance, ε is a small constant for numerical stability, and γ and β are learnable scale and shift parameters.
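A minimal numpy sketch of this training-mode forward pass (simplified: no running statistics are tracked, and normalisation is per feature over the batch axis):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Training-mode BatchNorm over the batch axis (axis 0).

    gamma and beta are the learnable scale and shift parameters.
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # whiten each feature
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(128, 16))  # batch of 128, 16 features
y = batchnorm_forward(x, gamma=1.0, beta=0.0)
# y now has roughly zero mean and unit variance in every feature.
```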
Why Should we Replace BatchNorm?
BatchNorm has proven instrumental to creating and training increasingly deeper networks, providing stability to training and reducing training times. Nevertheless, there are a few drawbacks to the method:
1. One of the main reasons that researchers look for alternatives to BN is when training a network with a small batch size. Since BatchNorm generates statistics across the batch, if the batch size
is small, the standard deviation is likely to be very small for at least one batch of data, leading to numerical instability in the normalisation. But with the increasing model and input sizes
used in modern networks, the batch size must be small to fit into memory, leading to a trade off between the size of the architecture and the size of the batch.
2. Additionally, the original premise behind BatchNorm proved to be somewhat incorrect^2 - but the technique still worked (like a lot of other bugs in ML). With this improved knowledge, could we not further improve the normalisation by redesigning it? It's a bit like, after learning how an oven works, creating a new cake recipe to use this knowledge.
3. BatchNorm creates trouble for different domains when using transfer learning. When transferring from one domain to another, particularly with little data, the input has a different distribution.
As a result, the normalization is particularly skewed and may ruin the training. In particular, this happens when just training the head of the network - a common practice in transfer learning.
As a result, we have to train the BatchNorm layers (or make sure that they are in inference mode) in order for the body of the network to produce sensible results to train the head on.
4. BatchNorm has different training and testing phases, making the generalisation slightly more unpredictable. The network can't dynamically adjust to an input that has a completely different
distribution than the ones it has previously encountered, which may lead to strange performance^3.
Setting the Baseline
For most of this exploration, I will compare two networks: a small 4 layer convolutional network and a ResNet18, both trained on Imagenette, and averaged over 3 training runs each. For full
details of the training process that I used, please see this notebook. Each result is run over a small grid search of parameters (batchsize: 128, learning rate: 0.01).
The baseline below shows each network without normalization. Click on the tabs to switch between networks. The best performing result is in bold.
bs lr Accuracy Loss
128 0.01 0.47 ± 0.01 1.59 ± 0.02
128 0.001 0.35 ± 0.02 1.91 ± 0.04
2 0.01 0.46 ± 0.01 1.62 ± 0.03
2 0.001 0.51 ± 0.01 1.50 ± 0.02
With BatchNorm
Adding BatchNorm to the same baselines above leads to the following results. BatchNorm improves performance overall, particularly at the higher learning rate and large batch setting. We can see how
training does not perform as well when the batch size is small.
bs lr Accuracy Loss
128 0.01 0.59 ± 0.00 1.28 ± 0.01
128 0.001 0.44 ± 0.01 1.70 ± 0.02
2 0.01 0.43 ± 0.04 2.78 ± 0.77
2 0.001 0.44 ± 0.02 1.77 ± 0.10
Managing Small Batch Sizes
How can we manage small batch sizes? Well the first possible solution would be to increase the epsilon parameter in the divisor of the normalisation step which is used to prevent numerical
instability. If we increase it to 0.1 then the output of the normalisation will be constrained to at most 1/√ε ≈ 3.2 times the (centred) input. While this is a reasonable stop-gap solution, it doesn't quite fix the problem, but
in practice can lead to some performance increase.
bs lr Accuracy Loss
128 0.01 0.57 ± 0.01 1.31 ± 0.02
128 0.001 0.41 ± 0.01 1.73 ± 0.02
2 0.01 0.47 ± 0.01 1.68 ± 0.04
2 0.001 0.44 ± 0.01 1.83 ± 0.02
So while increasing epsilon is a reasonable first attempt, how can this be further improved? Since the goal of BatchNorm is to normalize by the dataset statistics, we could use the running statistics
at training time as well as test time.
bs lr Accuracy Loss
128 0.01 0.57 ± 0.01 1.31 ± 0.02
128 0.001 0.42 ± 0.01 1.71 ± 0.01
2 0.01 0.48 ± 0.02 1.73 ± 0.13
2 0.001 0.41 ± 0.05 2.14 ± 0.35
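A sketch of the running-statistics idea might look like the following (a simplified numpy version, not the exact implementation used in these experiments):

```python
import numpy as np

class RunningBatchNorm:
    """Normalise with running statistics at train *and* test time.

    In a real framework the running stats would be detached from the
    autograd graph, which is exactly why the backward pass differs from
    standard BatchNorm.
    """
    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        self.mu = np.zeros(num_features)
        self.var = np.ones(num_features)
        self.momentum, self.eps = momentum, eps

    def __call__(self, x):
        # Update the running estimates from the current batch...
        m = self.momentum
        self.mu = (1 - m) * self.mu + m * x.mean(axis=0)
        self.var = (1 - m) * self.var + m * x.var(axis=0)
        # ...then normalise with the *running* values, not the batch's own.
        return (x - self.mu) / np.sqrt(self.var + self.eps)

rng = np.random.default_rng(0)
bn = RunningBatchNorm(4)
for _ in range(200):            # running stats converge over many batches
    out = bn(rng.normal(5.0, 3.0, size=(32, 4)))
# After many batches the output is approximately standardised.
```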
This seems like a reasonable attempt to fix the normalisation and results in reasonable performance, however, the performance is not the same as the original BatchNorm - and in some cases is worse
than increasing epsilon above. Why is this? While the normalisation step is much the same, and thus the forward pass results in the same numbers, the backward pass is very different. Since the
running statistics are detached from the batch statistics (there is no connection between them and thus they are treated as constants), the gradient is very different between BatchNorm and
RunningBatchNorm. The gradient no longer takes into account the gradient of the batch statistics. And this effect turned out to be more important than normalising the internal layers^2.
Comparing the Gradients
The original authors of BatchNorm left the "precise effect of Batch Normalization on gradient propagation" as a further area of study. So, after some time, the effect of BN on the loss landscape was
studied^2. It turns out that the whitening in BatchNorm using the batch statistics smooths the loss landscape. This means the gradients are more predictive - if you take a step in the direction of
the current gradient, it is quite likely that you will continue moving in the same direction for the next gradient step. So if you doubled the learning rate, then things are just fine.
A smooth loss landscape prevents exploding and vanishing gradients and reduces the reliance on a good intialisation and a tuned, small learning rate. And while BatchNorm reparametrizes the loss using
the batch statistics, it still keeps the same minima, since the parameters γ and β can always be set to undo the whitening transform.
We can write out the gradients using backpropagation through a batchnorm layer and compare those with the RunningBN layer to see what changes.
The forward (blue) and backward (red) pass through a BatchNorm layer. The nodes contain the value at each step in the forward pass while the arrows show the transform between steps. Click on image
for full size.
Backpropping through the BatchNorm layer attaches the gradients of the batch statistics to the gradient of the input. For a batch of size m with normalised activations x̂, the simplified gradient of the loss with respect to the input of BatchNorm thus looks like this:

∂L/∂x_i = (1/(m·σ_B)) · [ m·∂L/∂x̂_i − Σ_j ∂L/∂x̂_j − x̂_i · Σ_j (∂L/∂x̂_j · x̂_j) ]
There are many explanations on the backwards pass of BatchNorm.
See here
for a full explanation of the computational graph,
and here
for a more detailed explanation of the simplified gradient.
The authors directly apply this expression to show that BatchNorm actually smooths the gradients, making training much easier. For further explanation on the properties of smoothness displayed by
these gradients, see the extra note titled Lipschitz and Gradients, below. Comparing this to the RunningBN backprop graph, we can see that the statistics don't contribute to the gradient of the layer
(except for the scaling by a constant - the running standard deviation).
The forward and backward pass of a RunningBatchNorm layer shows that the gradients aren't affected by the batch statistics.
Lipschitz and Gradients
A lot of papers that explore the effects of BatchNorm and other adjustments to network architectures use something called Lipschitz continuity to describe the effect on smoothness of a function and
thus help prove convergence for many gradient descent based algorithms. This gives us a mathematical framework to compare different architectures instead of looking at things empirically. But since
it is a little extra math, it's just an aside.
An L-Lipschitz function is defined as follows:

|f(x₁) − f(x₂)| ≤ L·‖x₁ − x₂‖ for all x₁, x₂

This, in words, means the greatest change in the function is bounded by L. To take this further, the slope of the function between any 2 points is never greater than L. L, here, is called the Lipschitz constant. So this property is basically an upper estimate of the gradient of the function. Since we want stable gradients, a smaller L is preferred.
Beta smoothness is the exact same Lipschitz property, applied to the gradient of the function, with a Lipschitz constant β. β-smoothness will thus bound the second derivative (the Hessian) of the function. If β is small, our gradients don't really change very much from place to place, while a large β value says that the gradients are completely different if you move just a tiny bit from where you are now. So to make sure that our gradients don't change very much after consecutive gradient steps, we want a small β value. This will enable us to take larger gradient descent steps (increase the learning rate) and feel more confident that our gradient is 'predictive' (that it actually is indicative of a good local minimum).
So to summarize, L-Lipschitzness is a method of bounding the change in a function. To get good convergence with a first-order method like gradient descent, we want the β-smoothness (the Lipschitz constant of the gradient) to be small.
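As a toy illustration of what a Lipschitz constant bounds, we can estimate one empirically by sampling (sin is a convenient test function since its derivative is bounded by 1; this is a sketch, not a proof):

```python
import numpy as np

# Estimate a Lipschitz constant for f = sin on [0, 2*pi] by sampling
# many pairs of points and taking the largest slope between them.
rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 2.0 * np.pi, size=10_000)
x2 = rng.uniform(0.0, 2.0 * np.pi, size=10_000)
ratios = np.abs(np.sin(x1) - np.sin(x2)) / np.abs(x1 - x2)
L_est = ratios.max()
# sin is 1-Lipschitz (|cos| <= 1), so no sampled ratio can exceed 1.
```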
What does this mean for BatchNorm?
BatchNorm reduces the Lipschitz constant of the gradients of the loss, the β-smoothness, making the loss landscape smoother, and more resilient to exploding/vanishing gradients, initialisations and
learning rates^2. As a result, the gradients are more predictive of finding a good minimum and thus we can increase the learning rate.
(BN also reduces the Lipschitz constant of the loss landscape itself, bounding the change in loss and making the gradients smaller and more stable, but this is almost a secondary effect to the β-smoothness.)
The diagram shows the gradient predictiveness and beta smoothness from a network with and without batchnorm. The predictiveness is calculated by taking the L2 norm of the difference between the
gradient at the current step with the gradient after taking that step. This indicates if the gradient stays relatively the same (if the difference is small) or changes significantly after subsequent
steps (if the difference is large). The second graph shows the "effective" beta-smoothness, which calculates the beta-Lipschitz value along the gradient direction using the L2 norm. If the value is
large, the gradients are not smooth.
Since BN can undo the effects of the whitening using the parameters γ and β, the method preconditions the optimisation - the optima remain the same but the landscape and the path that gradient descent
takes to achieve an optimum are different.
A final word on eigenvalues
We've seen that the hessian is vital to optimisation convergence. If the eigenvalues of the hessian are all positive (a positive semi-definite hessian) then the function is convex (and at that point
we can sit back and relax). While neural networks can be locally convex, they will not be globally convex due to non-linearities. By looking at the smallest (most negative) and largest eigenvalues of
the hessian at different parts of the loss landscape, we can see how smooth the optimisation is and how close it is to convexity^4. The Loss Landscape paper^4 uses this technique to show how residual
connections smooth the landscape, but we can also use it to see how BatchNorm changes it.
Here we have a bit of eye candy from the Loss Landscape paper showing the ratio of min and max eigenvalues of the hessian of a network using residual connections and without.
Using the same technique as above, we can look at the eigenvalue ratio with and without BatchNorm (of the small CNN). While the results aren't as clear cut as the resnet, there does seem to be better
convexity in the BatchNorm plot.
So if deep neural networks that don't have BN are more non-convex then they contain lots of ups and downs in the gradients, leading to vanishing or exploding gradients, and heavy reliance on good
initialisation and learning rate tuning, to ensure they don't end up in some untrainable state. But with a much smoother landscape and gradients that "just get the job done" - they're not too big or
small - the optimisation does not need to rely so heavily on initialisation and learning rate. In fact, an even larger learning rate will just get you there faster.
Alright, let's get back to it.
Empirical Gradients
As we saw above, the gradients propagated through a BN layer and a RunningBN layer differ significantly. But how does this show up in the gradients empirically, when training a network? Here we plot
the L2 norm of the gradients throughout training for all the convolutional layers in the networks without and with BatchNorm and using RunningBatchNorm.
Sidenote In all of the empirical comparisons, like the gradient norm, I use the best performing hyperparameter combinations.
What we expect to see is how using BatchNorm results in gradients that don't explode or vanish. As we go deeper into the network, if the norm of the gradients keeps increasing or decreasing, then
training becomes harder for the network. So ideally, we want to see the norms of the gradients remain roughly constant across iterations and layers. Additionally, we want fairly stable gradients that
don't have any large drops or rises. In the non-normalised graph, we can see that there is a steep dropoff at the beginning, followed by a continual rise. The BatchNorm graph is much more stable.
Furthermore, the L2 norm of the BN and RunningBN graphs are very similar, however, the L2 norm doesn't quite tell us enough about the quality of the gradients.
L2 norm of gradient while training. Take note of different scales and see hyperparameter combination in image title. Top: Gradient L2 norm without any normalisation layer. Middle: Gradient L2 norm
using BatchNorm. Bottom: Gradient L2 norm with RunningBatchNorm.
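As a rough sketch of how such per-layer gradient norms can be collected, here is a toy two-layer linear network in numpy (not the networks trained in this post; all shapes and values are arbitrary illustrations):

```python
import numpy as np

# Toy two-layer linear net trained on random data, logging the L2 norm
# of each layer's gradient at every step.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, size=(8, 4))
W2 = rng.normal(0.0, 0.1, size=(4, 1))
X = rng.normal(size=(64, 8))
y_true = rng.normal(size=(64, 1))

norms = {"W1": [], "W2": []}
lr = 0.01
for step in range(50):
    h = X @ W1                        # forward pass
    y = h @ W2
    dy = 2.0 * (y - y_true) / len(X)  # gradient of the MSE loss w.r.t. y
    dW2 = h.T @ dy                    # backward pass
    dW1 = X.T @ (dy @ W2.T)
    norms["W1"].append(np.linalg.norm(dW1))
    norms["W2"].append(np.linalg.norm(dW2))
    W1 -= lr * dW1                    # SGD update
    W2 -= lr * dW2
# A healthy run keeps the two norms at comparable magnitudes across
# steps, with no spikes (exploding) or collapse to zero (vanishing).
```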
Where are BatchNorm's Improvements Coming From?
So if it's all about the gradients, then why are there other things in BN? The scaling and biasing, and the particular whitening (first and second moments)? Which parts of batchnorm cause the biggest
improvements in the validation scores? We can separate BN into 2 parts: a normalisation step and a linear weighting step. The normalisation step uses the batch statistics to normalise across
channels. We can also pull out the effect on the gradients by detaching the batch statistics from the normalisation step. By doing this, we can observe the effect of using the gradients in the
backward pass. The linear weighting step multiplies each channel by a constant and adds a bias.
Here we compare the impact of each part of the BN algorithm on the validation loss, and we can see that the effect of normalisation on the gradients is most important, yielding the majority of the improvement. The scaling and whitening themselves don't actually contribute much to BN's performance. And this is why I particularly enjoy good ablation studies (such as the SqueezeExcite paper^5), so that you know exactly where the improvements come from.
Comparing Activations
We can also compare the activations (the outputs) of the convolutional layers across normalisations. Similar to the original BN paper, we see that BN reduces the large variability in the activations
and training seems to progress much smoother. What we want is a nice and stable training trajectory, without any large bumps that can kick us off the manifold of a good model that generalizes well.
Without normalisation, the activations become biased with the last layer growing significantly and the others remaining small. The activations of the BatchNorm and RunningBN layers are very similar,
showing how the forward pass remains essentially unchanged, despite BatchNorm performing better.
Graphs of the activations of the convolutional layers while training the network with various normalisation methods.
Other Normalisations
Different Normalisation Methods
There have been several other attempts at creating a different normalisation layer to handle small batches which have mostly performed worse than BatchNorm except at the smallest batch sizes. Each
attempt simply adjusts the dimensions over which to normalise.
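The schemes differ only in which axes the statistics are computed over; a simplified numpy sketch (no learnable scale/shift, and the group count of 2 is an arbitrary choice):

```python
import numpy as np

def normalize(x, axes, eps=1e-5):
    """Zero-mean, unit-variance normalisation over the given axes."""
    mu = x.mean(axis=axes, keepdims=True)
    var = x.var(axis=axes, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

# Activations shaped (batch N, channels C, height H, width W).
x = np.random.default_rng(0).normal(size=(8, 4, 5, 5))

batch_norm = normalize(x, axes=(0, 2, 3))   # over the batch, per channel
layer_norm = normalize(x, axes=(1, 2, 3))   # over each sample's whole layer
instance_norm = normalize(x, axes=(2, 3))   # per sample, per channel
# GroupNorm: split the channels into groups, then normalise each group
# within each sample (here 2 groups of 2 channels).
g = x.reshape(8, 2, 2, 5, 5)
group_norm = normalize(g, axes=(2, 3, 4)).reshape(x.shape)
```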
LayerNorm^6 normalises each layer individually, for every input. However, this prevents the network from learning to distinguish inputs that actually have a different distribution since each layer
has the same normalised distribution.
In LayerNorm, if there are 2 images, with one image having a higher contrast than the other, then LayerNorm will normalize both images to the same level, preventing the network from learning anything
based on the level of contrast. (This doesn't just apply to contrast but to any feature that occurs across layers.)
bs lr Accuracy Loss
128 0.01 0.54 ± 0.02 1.43 ± 0.05
128 0.001 0.38 ± 0.02 1.81 ± 0.04
2 0.01 0.61 ± 0.01 1.22 ± 0.02
2 0.001 0.55 ± 0.00 1.38 ± 0.01
GroupNorm^7 is a generalisation of LayerNorm. The method selects a number of groups, and normalisation occurs within each group along the channel axis, ignoring batch information. At one extreme is
LayerNorm, with a single group across the entire layer; at the other extreme is a group for every channel, called InstanceNorm (which is typically only used for style transfer). In between, GroupNorm
allows different numbers of groups, and each group is normalised, letting the network compensate for the lack of batch information by grouping channels (more on this below).
| bs  | lr    | Accuracy    | Loss        |
|-----|-------|-------------|-------------|
| 128 | 0.01  | 0.57 ± 0.01 | 1.32 ± 0.01 |
| 128 | 0.001 | 0.40 ± 0.00 | 1.76 ± 0.02 |
| 2   | 0.01  | 0.62 ± 0.01 | 1.19 ± 0.02 |
| 2   | 0.001 | 0.59 ± 0.01 | 1.27 ± 0.02 |
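The grouping idea can be sketched in a toy 1-D form (real GroupNorm does the same thing along the channel axis of a 4-D activation tensor):

```python
import statistics

def group_norm(channels, num_groups):
    """Normalise channel values within each group (toy 1-D version)."""
    group_size = len(channels) // num_groups
    out = []
    for g in range(num_groups):
        group = channels[g * group_size:(g + 1) * group_size]
        mu = statistics.fmean(group)
        sd = statistics.pstdev(group)
        out.extend((v - mu) / sd for v in group)
    return out

# Two pairs of channels on wildly different scales.
channels = [1.0, 3.0, 100.0, 300.0]

# num_groups == 1 is LayerNorm; num_groups == len(channels) would be
# InstanceNorm (degenerate here, since one value has zero variance).
print(group_norm(channels, 2))  # each pair is normalised on its own scale
```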
So while these methods include the gradients of the sample statistics at least across some dimensions, they do not include batch information, and often perform worse than BatchNorm unless the batch
size is small.
Weight Standardisation and Batch-Channel Normalisation
So BatchNorm smooths the loss landscape, which results in a bunch of nice properties: faster training, larger learning rates, better generalisation, less reliance on good initialisation. All of these
properties come from the effect that normalising with the batch statistics has on the gradient. Now that this is known, could we adjust the normalisation so that the gradient gets the same smoothing,
without normalising across the batch of activations? The Weight Standardisation paper^3 points out that the activations are simply one step removed from the weights of the network, and the weights
are what actually receive the gradient. So we can standardise the weights instead of the activations, and achieve the same smoothing effect (a reduction in the Lipschitz constant, which
is proved in the paper in a similar manner to the way it's proved for BatchNorm).
Now you may notice that if the weights are standardised, the activations after our convolution will be huge. So the authors assume the use of an activation normalisation scheme like GN after
this to shift the activations back to a regular range. The effects of this, they claim, are additive on the smoothness of the landscape.
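The core operation is small — sketched here on a toy 2-D weight matrix (the paper standardises 4-D conv kernels per output channel; this is just the arithmetic):

```python
import statistics

def standardise_weights(weight_rows):
    """Zero-mean, unit-variance standardisation per output channel (row)."""
    out = []
    for row in weight_rows:
        mu = statistics.fmean(row)
        sd = statistics.pstdev(row)
        out.append([(w - mu) / sd for w in row])
    return out

# Two output channels with very different scales...
weights = [[0.1, 0.2, 0.3],
           [5.0, 10.0, 15.0]]

# ...end up on the same scale after standardisation.
print(standardise_weights(weights))
```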
The WS paper also notices another property that comes as a corollary to BatchNorm: it avoids parts of the network where neurons are completely zero and have no gradient (such as the negative half
after a ReLU activation) and are thus eliminated from training. The authors call this an "elimination singularity"^3:
Elimination singularities refer to the points along the training trajectory where neurons in the networks get eliminated. Eliminable neurons waste computations and decrease the effective model
complexity. Getting closer to them will harm the training speed and the final performances. By forcing each neuron to have zero mean and unit variance, BN keeps the networks at far distances from
elimination singularities caused by non-linear activation functions.
BatchNorm reduces elimination singularities (in ReLU-based networks) by ensuring that at least some of the activations of each channel/neuron in a layer are positive and receive a gradient. It also
ensures that no channel is underrepresented, since each channel is normalised across the batch. Channel-based methods (GN, LN) normalise across channels instead, which can leave some channels with
mostly negative values that get eliminated, and other channels underrepresented. GroupNorm, however, somewhat helps this problem, as each group will have some positive neurons that aren't eliminated.
So without access to batch information, we come closer to elimination singularities, but in practice, collecting batch information is fairly easy - just use running BatchNorm like above. So we can
combine batch info with GN by adding in RunningBN (which the authors call Batch Channel Norm) which typically results in even better performance.
| bs  | lr    | Accuracy    | Loss        |
|-----|-------|-------------|-------------|
| 128 | 0.01  | 0.59 ± 0.00 | 1.26 ± 0.01 |
| 128 | 0.001 | 0.39 ± 0.01 | 1.76 ± 0.00 |
| 2   | 0.01  | 0.65 ± 0.00 | 1.08 ± 0.01 |
| 2   | 0.001 | 0.58 ± 0.01 | 1.27 ± 0.03 |
Weight Standardisation Gradients
Comparing the gradients of WS with GN and BCN shows an interesting peak at the start of training. I don't have a good explanation as to its presence but I don't think it is particularly good for
training. Nevertheless, the gradients thereafter are particularly stable, remaining around 1 throughout with fewer spikes than BatchNorm.
BatchNorm owes most of its success to the effect it has on the gradients. By smoothing the loss landscape and making the gradients more stable, BatchNorm doesn't need to rely so heavily on good
initialisation and learning rate tuning to avoid vanishing or exploding gradients. By considering how the gradients are affected by BatchNorm, we can create a new layer to perform equally to
BatchNorm without the constraint on batch size. By applying the standardisation to the weights instead of activations, we can achieve the same effect on the gradients. Additionally, changing to
weight standardisation often leads to improved network performance.
The loss and accuracy results (with standard deviation error bar) of the various normalisation methods.
• Training was fairly short (only 3 epochs), so we don't see how things progress further on.
• I only did 3 runs per test, although the results are fairly stable.
• There are other methods that can take the place of normalization that I have not mentioned here. For example, SELU^8, Fixup^9, and network deconvolution^10.
• For all the experiments, I ignored wall time, which is approximately 5x longer for the small batch networks.
If you're still reading and if you enjoyed the article (or didn't), please feel free to send me a note about what you liked and what you didn't. I'd love to improve and your feedback means a lot! See
about page for contact.
Network Security
2. Problem • Computer networks are typically a shared resource used by many applications representing different interests. • The Internet is particularly widely shared, being used by competing
businesses, mutually antagonistic governments, and opportunistic criminals. • Unless security measures are taken, a network conversation or a distributed application may be compromised by an
adversary.
3. Problem • Consider some threats to secure use of, for example, the World Wide Web. • Suppose you are a customer using a credit card to order an item from a website. • An obvious threat is that an
adversary would eavesdrop on your network communication, reading your messages to obtain your credit card information. • It is possible and practical, however, to encrypt messages so as to
prevent an adversary from understanding the message contents. A protocol that does so is said to provide confidentiality. • Taking the concept a step farther, concealing the quantity or
destination of communication is called traffic confidentiality
4. Problem • Even with confidentiality there still remain threats for the website customer. • An adversary who can’t read the contents of your encrypted message might still be able to change a few
bits in it, resulting in a valid order for, say, a completely different item or perhaps 1000 units of the item. • There are techniques to detect, if not prevent, such tampering. • A protocol that
detects such message tampering provides data integrity. • The adversary could alternatively transmit an extra copy of your message in a replay attack.
5. Problem • To the website, it would appear as though you had simply ordered another of the same item you ordered the first time. • A protocol that detects replays provides originality. •
Originality would not, however, preclude the adversary intercepting your order, waiting a while, then transmitting it—in effect, delaying your order. • The adversary could thereby arrange for the
item to arrive on your doorstep while you are away on vacation, when it can be easily snatched. A protocol that detects such delaying tactics is said to provide timeliness. • Data integrity,
originality, and timeliness are considered aspects of the more general property of integrity.
6. Problem • Another threat to the customer is unknowingly being directed to a false website. • This can result from a DNS attack, in which false information is entered in a Domain Name Server or
the name service cache of the customer’s computer. • This leads to translating a correct URL into an incorrect IP address—the address of a false website. • A protocol that ensures that you really
are talking to whom you think you’re talking is said to provide authentication. • Authentication entails integrity since it is meaningless to say that a message came from a certain participant if
it is no longer the same message.
7. Problem • The owner of the website can be attacked as well. Some websites have been defaced; the files that make up the website content have been remotely accessed and modified without
authorization. • That is an issue of access control: enforcing the rules regarding who is allowed to do what. Websites have also been subject to Denial of Service (DoS) attacks, during which
would-be customers are unable to access the website because it is being overwhelmed by bogus requests. • Ensuring a degree of access is called availability.
8. Problem • In addition to these issues, the Internet has notably been used as a means for deploying malicious code that exploits vulnerabilities in end-systems. • Worms, pieces of self-replicating
code that spread over networks, have been known for several decades and continue to cause problems, as do their relatives, viruses, which are spread by the transmission of “infected” files. •
Infected machines can then be arranged into botnets which can be used to inflict further harm, such as launching DoS attacks.
9. Chapter Outline • Cryptographic Building Blocks • Key Pre Distribution • Authentication Protocols • Example Systems • Firewalls
10. Cryptographic Building Blocks • We introduce the concepts of cryptography-based security step by step. • The first step is the cryptographic algorithms—ciphers and cryptographic hashes •
Cryptographic algorithms are parameterized by keys
11. Cryptographic Building Blocks Symmetric-key encryption and decryption
12. Cryptographic Building Blocks • Principles of Ciphers • Encryption transforms a message in such a way that it becomes unintelligible to any party that does not have the secret of how to reverse
the transformation. • The sender applies an encryption function to the original plaintext message, resulting in a ciphertext message that is sent over the network. • The receiver applies a secret
decryption function–the inverse of the encryption function–to recover the original plaintext.
13. Cryptographic Building Blocks • Principles of Ciphers • The ciphertext transmitted across the network is unintelligible to any eavesdropper, assuming she doesn’t know the decryption function. •
The transformation represented by an encryption function and its corresponding decryption function is called a cipher. • The basic requirement for an encryption algorithm is that it turn
plaintext into ciphertext in such a way that only the intended recipient—the holder of the decryption key—can recover the plaintext.
14. Cryptographic Building Blocks • Principles of Ciphers • It is important to realize that when a potential attacker receives a piece of ciphertext, he may have more information at his disposal than
just the ciphertext itself. • Known-plaintext attack • Ciphertext-only attack • Chosen-plaintext attack
15. Cryptographic Building Blocks • Principles of Ciphers • Most ciphers are block ciphers: they are defined to take as input a plaintext block of a certain fixed size, typically 64 to 128 bits. •
Using a block cipher to encrypt each block independently—known as electronic codebook (ECB) mode encryption—has the weakness that a given plaintext block value will always result in the same
ciphertext block. • Hence recurring block values in the plaintext are recognizable as such in the ciphertext, making it much easier for a cryptanalyst to break the cipher.
16. Cryptographic Building Blocks • Principles of Ciphers • Block ciphers are always augmented to make the ciphertext for a block vary depending on context. Ways in which a block cipher may be
augmented are called modes of operation.
17. Cryptographic Building Blocks • Block Ciphers • A common mode of operation is cipher block chaining (CBC), in which each plaintext block is XORed with the previous block’s ciphertext before being
encrypted. • The result is that each block’s ciphertext depends in part on the preceding blocks, i.e. on its context. Since the first plaintext block has no preceding block, it is XORed with a
random number. • That random number, called an initialization vector (IV), is included with the series of ciphertext blocks so that the first ciphertext block can be decrypted.
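The chaining is easy to sketch with a toy one-byte "cipher" (a bare XOR with the key — utterly insecure, purely to show the mode of operation; real systems use a vetted block cipher such as AES):

```python
def toy_encrypt_block(block, key):
    return block ^ key          # stand-in for a real block cipher

def toy_decrypt_block(block, key):
    return block ^ key

def cbc_encrypt(plaintext_blocks, key, iv):
    prev, out = iv, []
    for block in plaintext_blocks:
        # XOR with the previous ciphertext block (the IV for the first block).
        c = toy_encrypt_block(block ^ prev, key)
        out.append(c)
        prev = c
    return out

def cbc_decrypt(ciphertext_blocks, key, iv):
    prev, out = iv, []
    for c in ciphertext_blocks:
        out.append(toy_decrypt_block(c, key) ^ prev)
        prev = c
    return out

blocks = [7, 7, 7]                        # identical plaintext blocks...
ct = cbc_encrypt(blocks, key=0x5A, iv=0x13)
print(ct)                                 # ...no longer repeat block-for-block
assert cbc_decrypt(ct, key=0x5A, iv=0x13) == blocks
```

Compare this with ECB mode, where the three identical plaintext blocks would produce three identical ciphertext blocks.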
18. Cryptographic Building Blocks • Block Ciphers Cipher block chaining (CBC).
19. Cryptographic Building Blocks • Symmetric Key Ciphers • In a symmetric-key cipher, both participants in a communication share the same key. In other words, if a message is encrypted using a
particular key, the same key is required for decrypting the message.
20. Cryptographic Building Blocks • Symmetric Key Ciphers • The U.S. National Institute of Standards and Technology (NIST) has issued standards for a series of symmetric-key ciphers. • Data Encryption
Standard (DES) was the first, and it has stood the test of time in that no cryptanalytic attack better than brute force search has been discovered. • Brute force search, however, has gotten
faster. DES’s keys (56 independent bits) are now too small given current processor speeds.
21. Cryptographic Building Blocks • Symmetric Key Ciphers • NIST also standardized the cipher Triple DES (3DES), which leverages the cryptanalysis resistance of DES while in effect increasing the key
size. • A 3DES key has 168 (= 3 × 56) independent bits, and is used as three DES keys; • let’s call them DES-key1, DES-key2, and DES-key3. • 3DES-encryption of a block is performed by first
DES-encrypting the block using DES-key1, then DES-decrypting the result using DES-key2, and finally DES-encrypting that result using DES-key3. • Decryption involves decrypting using DES-key3,
then encrypting using DES-key2, then decrypting using DES-key1
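The encrypt-decrypt-encrypt ordering can be sketched with a toy one-byte cipher standing in for DES (addition mod 256 — insecure, purely illustrative of the structure):

```python
def toy_encrypt(b, key):
    return (b + key) % 256      # stand-in for DES encryption

def toy_decrypt(b, key):
    return (b - key) % 256      # stand-in for DES decryption

def triple_encrypt(b, k1, k2, k3):
    # Encrypt with key1, decrypt with key2, encrypt with key3 (E-D-E).
    return toy_encrypt(toy_decrypt(toy_encrypt(b, k1), k2), k3)

def triple_decrypt(b, k1, k2, k3):
    # Reverse order: decrypt with key3, encrypt with key2, decrypt with key1.
    return toy_decrypt(toy_encrypt(toy_decrypt(b, k3), k2), k1)

c = triple_encrypt(42, 11, 22, 33)
assert triple_decrypt(c, 11, 22, 33) == 42

# Setting all three keys equal reduces the scheme to a single encryption —
# the E-D-E ordering was chosen so 3DES hardware stays compatible with DES.
assert triple_encrypt(42, 9, 9, 9) == toy_encrypt(42, 9)
```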
22. Cryptographic Building Blocks • Symmetric Key Ciphers • 3DES is being superseded by the Advanced Encryption Standard (AES) standard issued by NIST in 2001. • The cipher selected to become that
standard (with a few minor modifications) was originally named Rijndael (pronounced roughly like “Rhine dahl”) based on the names of its inventors, Daemen and Rijmen. • AES supports key lengths
of 128, 192, or 256 bits, and the block length is 128 bits.
23. Cryptographic Building Blocks • Public Key Ciphers • An alternative to symmetric-key ciphers is asymmetric, or public-key, ciphers. • Instead of a single key shared by two participants, a
public-key cipher uses a pair of related keys, one for encryption and a different one for decryption. • The pair of keys is “owned” by just one participant. • The owner keeps the decryption key
secret so that only the owner can decrypt messages; that key is called the private key.
24. Cryptographic Building Blocks • Public Key Ciphers • The owner makes the encryption key public, so that anyone can encrypt messages for the owner; that key is called the public key. • Obviously,
for such a scheme to work it must not be possible to deduce the private key from the public key. • Consequently any participant can get the public key and send an encrypted message to the owner
of the keys, and only the owner has the private key necessary to decrypt it.
25. Cryptographic Building Blocks • Public Key Ciphers Public-key encryption
26. Cryptographic Building Blocks • Public Key Ciphers • An important additional property of public-key ciphers is that the private “decryption” key can be used with the encryption algorithm to
encrypt messages so that they can only be decrypted using the public “encryption” key. • This property clearly wouldn’t be useful for confidentiality since anyone with the public key could
decrypt such a message. • This property is, however, useful for authentication since it tells the receiver of such a message that it could only have been created by the owner of the keys.
27. Cryptographic Building Blocks • Public Key Ciphers Authentication using public keys
28. Cryptographic Building Blocks • Public Key Ciphers • The concept of public-key ciphers was first published in 1976 by Diffie and Hellman. • The best-known public-key cipher is RSA, named after its
inventors: Rivest, Shamir, and Adleman. • RSA relies on the high computational cost of factoring large numbers. • Another public-key cipher is ElGamal. • Like RSA, it relies on a mathematical
problem, the discrete logarithm problem, for which no efficient solution has been found, and requires keys of at least 1024 bits.
29. Cryptographic Building Blocks • Authenticator • An authenticator is a value, to be included in a transmitted message, that can be used to verify simultaneously the authenticity and the data
integrity of a message. • One kind of authenticator combines encryption and a cryptographic hash function. • Cryptographic hash algorithms are treated as public knowledge, as with cipher
algorithms. • A cryptographic hash function (also known as a cryptographic checksum) is a function that outputs sufficient redundant information about a message to expose any tampering.
30. Cryptographic Building Blocks • Authenticator • Just as a checksum or CRC exposes bit errors introduced by noisy links, a cryptographic checksum is designed to expose deliberate corruption of
messages by an adversary. • The value it outputs is called a message digest and, like an ordinary checksum, is appended to the message. • All the message digests produced by a given hash have the
same number of bits regardless of the length of the original message. • Since the space of possible input messages is larger than the space of possible message digests, there will be different
input messages that produce the same message digest, like collisions in a hash table.
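Python's standard hashlib module shows the fixed-size property directly (SHA-256 is used here, since MD5 and SHA-1 are no longer considered collision-resistant):

```python
import hashlib

short = hashlib.sha256(b"hi").hexdigest()
long_ = hashlib.sha256(b"x" * 1_000_000).hexdigest()

# SHA-256 always outputs 256 bits (64 hex characters), regardless of input size.
print(len(short), len(long_))   # 64 64

# Any change to the message changes the digest.
assert hashlib.sha256(b"hello").hexdigest() != hashlib.sha256(b"hello!").hexdigest()
```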
31. Cryptographic Building Blocks • Authenticator • There are several common cryptographic hash algorithms, including MD5 (for Message Digest 5) and Secure Hash Algorithm 1 (SHA-1). MD5 outputs a
128-bit digest, and SHA-1 outputs a 160-bit digest • A digest encrypted with a public key algorithm but using the private key is called a digital signature because it provides nonrepudiation like
a written signature.
32. Cryptographic Building Blocks • Authenticator • Another kind of authenticator is similar, but instead of encrypting a hash, it uses a hash-like function that takes a secret value (known to only
the sender and the receiver) as a parameter. • Such a function outputs an authenticator called a message authentication code (MAC). • The sender appends the MAC to her plaintext message. • The
receiver recomputes the MAC using the plaintext and the secret value, and compares that recomputed MAC to the received MAC.
33. Cryptographic Building Blocks • Authenticator • A common variation on MACs is to apply a cryptographic hash (such as MD5 or SHA-1) to the concatenation of the plaintext message and the secret
value. • The resulting digest is called a hashed message authentication code (HMAC) since it is essentially a MAC. • The HMAC, but not the secret value, is appended to the plaintext message. •
Only a receiver who knows the secret value can compute the correct HMAC to compare with the received HMAC.
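Python's standard hmac module implements the standardized HMAC construction (which nests two keyed hash passes rather than hashing a bare concatenation, to resist length-extension attacks):

```python
import hashlib
import hmac

secret = b"shared-secret"
message = b"order: 1 unit of item 42"

# Sender computes the tag and appends it to the plaintext message.
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag from the plaintext and the shared secret,
# comparing in constant time to avoid timing side channels.
recomputed = hmac.new(secret, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(recomputed, tag)

# A tampered message produces a different tag.
bad = hmac.new(secret, b"order: 1000 units of item 42", hashlib.sha256).hexdigest()
assert bad != tag
```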
34. Cryptographic Building Blocks • Authenticator Computing a MAC versus computing an HMAC
35. Key Pre Distribution • To use ciphers and authenticators, the communicating participants need to know what keys to use. • In the case of a symmetric-key cipher, how does a pair of participants
obtain the key they share? • In the case of a public-key cipher, how do participants know what public key belongs to a certain participant? • The answer differs depending on whether the keys are
short-lived session keys or longer-lived pre-distributed keys.
36. Key Pre Distribution • A session key is a key used to secure a single, relatively short episode of communication: a session. • Each distinct session between a pair of participants uses a new
session key, which is always a symmetric-key key for speed. • The participants determine what session key to use by means of a protocol—a session key establishment protocol. • A session key
establishment protocol needs its own security (so that, for example, an adversary cannot learn the new session key); that security is based on the longer-lived pre-distributed keys.
37. Key Pre Distribution • There are several motivations for this division of labor between session keys and pre-distributed keys: • Limiting the amount of time a key is used results in less time for
computationally intensive attacks, less ciphertext for cryptanalysis, and less information exposed should the key be broken. • Pre-distribution of symmetric keys is problematic. • Public key
ciphers are generally superior for authentication and session key establishment but too slow to use encrypting entire messages for confidentiality.
38. Key Pre Distribution • Pre-Distribution of Public Keys • The algorithms to generate a matched pair of public and private keys are publicly known, and software that does it is widely available. •
So if Alice wanted to use a public key cipher, she could generate her own pair of public and private keys, keep the private key hidden, and publicize the public key. • But how can she publicize
her public key— assert that it belongs to her—in such a way that other participants can be sure it really belongs to her?
39. Key Pre Distribution • Pre-Distribution of Public Keys • A complete scheme for certifying bindings between public keys and identities— what key belongs to who—is called a Public Key
Infrastructure (PKI). • A PKI starts with the ability to verify identities and bind them to keys out of band. By “out of band,” we mean something outside the network and the computers that
comprise it, such as in the following scenarios. • If Alice and Bob are individuals who know each other, then they could get together in the same room and Alice could give her public key to Bob
directly, perhaps on a business card.
40. Key Pre Distribution • Pre-Distribution of Public Keys • If Bob is an organization, Alice the individual could present conventional identification, perhaps involving a photograph or fingerprints.
• If Alice and Bob are computers owned by the same company, then a system administrator could configure Bob with Alice’s public key. • A digitally signed statement of a public key binding is
called a public key certificate, or simply a certificate
41. Key Pre Distribution • Pre-Distribution of Public Keys • One of the major standards for certificates is known as X.509. This standard leaves a lot of details open, but specifies a basic
structure. A certificate clearly must include • the identity of the entity being certified • the public key of the entity being certified • the identity of the signer • the digital signature • a
digital signature algorithm identifier (which cryptographic hash and which cipher)
42. Key Pre Distribution • Pre-Distribution of Public Keys • Certification Authorities • A certification authority or certificate authority (CA) is an entity claimed (by someone) to be trustworthy
for verifying identities and issuing public key certificates. • There are commercial CAs, governmental CAs, and even free CAs. • To use a CA, you must know its own key. You can learn that CA’s
key, however, if you can obtain a chain of CA-signed certificates that starts with a CA whose key you already know. • Then you can believe any certificate signed by that new CA
43. Key Pre Distribution • Pre-Distribution of Symmetric Keys • If Alice wants to use a secret-key cipher to communicate with Bob, she can’t just pick a key and send it to him because, without
already having a key, they can’t encrypt this key to keep it confidential and they can’t authenticate each other. • As with public keys, some pre-distribution scheme is needed. • Pre-distribution
is harder for symmetric keys than for public keys for two obvious reasons: • While only one public key per entity is sufficient for authentication and confidentiality, there must be a symmetric
key for each pair of entities who wish to communicate. If there are N entities, that means N(N − 1)/2 keys. • Unlike public keys, secret keys must be kept secret.
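The quadratic growth in the number of pairwise symmetric keys is easy to check:

```python
from math import comb

def pairwise_keys(n):
    # One symmetric key per unordered pair of entities: N(N - 1) / 2.
    return n * (n - 1) // 2

assert pairwise_keys(1000) == comb(1000, 2) == 499_500
print(pairwise_keys(10))   # 45 keys for just 10 entities — versus 10
                           # public/private key pairs in a public-key scheme
```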
44. Key Pre Distribution • Pre-Distribution of Symmetric Keys • Authentication Protocols A challenge-response protocol
45. Key Pre Distribution • Pre-Distribution of Symmetric Keys • Public Key Authentication Protocols A public-key authentication protocol that depends on synchronization
46. Key Pre Distribution • Pre-Distribution of Symmetric Keys • Public Key Authentication Protocols A public-key authentication protocol that does not depend on synchronization. Alice checks her own
timestamp against her own clock, and likewise for Bob.
47. Key Pre Distribution • Pre-Distribution of Symmetric Keys • Symmetric Key Authentication Protocols The Needham-Schroeder authentication protocol
48. Key Pre Distribution • Pre-Distribution of Symmetric Keys • Symmetric Key Authentication Protocols Kerberos Authentication
49. Key Pre Distribution • Pre-Distribution of Symmetric Keys • Diffie-Hellman Key Agreement • The Diffie-Hellman key agreement protocol establishes a session key without using any pre-distributed
keys. • The messages exchanged between Alice and Bob can be read by anyone able to eavesdrop, and yet the eavesdropper won’t know the session key that Alice and Bob end up with. • On the other
hand, Diffie-Hellman doesn’t authenticate the participants. • Since it is rarely useful to communicate securely without being sure whom you’re communicating with, Diffie-Hellman is usually
augmented in some way to provide authentication. • One of the main uses of Diffie-Hellman is in the Internet Key Exchange (IKE) protocol, a central part of the IP Security (IPSEC) architecture
50. Key Pre Distribution • Pre-Distribution of Symmetric Keys • Diffie-Hellman Key Agreement • The Diffie-Hellman protocol has two parameters, p and g, both of which are public and may be used by all
the users in a particular system. • Parameter p must be a prime number. The integers mod p (short for modulo p) are 0 through p − 1, since x mod p is the remainder after x is divided by p; the
nonzero ones form what mathematicians call a group under multiplication. • Parameter g (usually called a generator) must be a primitive root of p: for every number n from 1 through p − 1 there must
be some value k such that n = g^k mod p.
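The exchange itself is a few lines of modular arithmetic (a toy prime here; real deployments use primes of 2048 bits or more):

```python
# Public parameters: a prime p and a generator g of the multiplicative group mod p.
p, g = 23, 5

# Each party picks a private exponent and publishes g^x mod p.
alice_secret, bob_secret = 6, 15
alice_public = pow(g, alice_secret, p)   # 5^6  mod 23 = 8
bob_public   = pow(g, bob_secret, p)     # 5^15 mod 23 = 19

# Each side raises the other's public value to its own secret exponent,
# so both arrive at g^(ab) mod p without ever transmitting it.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared   = pow(alice_public, bob_secret, p)

assert alice_shared == bob_shared
print(alice_shared)   # 2
```

An eavesdropper sees p, g, and both public values, but recovering a secret exponent from them is the discrete logarithm problem.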
How to Design a Fundamentals-Based Strategy that Really Works, Part Three: Principles of Backtesting - Portfolio123 Blog (2024)
This is the third article in a series; here is the first (on factor design) and here is the second (on designing ranking systems).
I’ve been using a fundamentals- and ranking-based strategy for stock picking since 2015, and since then my compound average growth rate is 48%. So I can attest that a fundamentals-based strategy can
really work. My previous articles in this series discussed a) designing factors for multifactor strategies; and b) applying rules and creating ranking systems. In these articles, I’ve been advocating
creating ranking systems as a good alternative or addition to traditional screening. I use a subscription-based website called Portfolio123 for this; ranking can also be done with spreadsheets or
other data management tools, but that involves a huge amount of work and is beyond the scope of this series of articles.
• Past Performance and Future Results
• Laws and Theories
• Laws of Portfolio Performance
• Theories of Portfolio Performance
• The Correlation Between Past Performance and Future Results
• Running a Correlation Study
• Optimizing Your Strategy
• Optimizing Portfolio Parameters
• Optimizing Factors
• Optimizing a Ranking System
• Common Backtesting Mistakes
• Stress-Testing Your Systems
Past Performance and Future Results
We always say that “past performance is no guarantee of future results”—or, even worse, “past performance is not indicative of future results,” which implies that there’s no relationship at all
between them. If this were true, backtesting would be a complete waste of time. The whole idea behind backtesting is that there is a relationship between past performance and future results.
Laws and Theories
If you throw a ball into the air a hundred times and it always falls back down, you can assume that it will continue to do so every time you throw it into the air. Why? Because there’s a law behind
it: the law of gravity. Similarly, there are absolutely reliable laws that govern portfolio performance, laws derived from statistics and mathematics.
Portfolio performance is governed not only by laws, but by theories. There are thousands of those, and they can be tested. But they’re not immutable or completely reliable. They could pertain to
certain time periods and not to others. It’s important to recognize the difference.
Laws of Portfolio Performance
There are many of these, but the ones that I’ve found most useful are the laws of diversification, regression to the mean, outliers, and alpha and beta.
• The law of diversification can be stated as follows: The standard deviation of a portfolio’s returns is more likely to decrease than to increase as more imperfectly correlated assets are added to
it. (This law can be derived from the law of large numbers, which states that the average of a sample converges in probability toward an expected value as the sample gets larger.) In plain
English, the greater the number of stocks you hold, the less fluctuation you’ll see in your returns, so long as the stocks are relatively uncorrelated.
• The law of regression to the mean can be stated as follows: As long as there is a meaningful average of values, an extreme value will be more likely to become less extreme over time than to
continue to be extreme. For example, if a particular industry has outperformed or underperformed other industries, it will be more likely to have average performance in the future than to
continue to outperform or underperform. The same is true for factors, stocks, price returns, and so on. Regression to the mean can be overcome by certain tendencies, as one sees with stock
momentum. But the law still pertains. It explains why, for instance, companies with very low free cash flow yields are more likely to have higher free cash flow yields in the future, while
companies with very high yields are more likely to have lower yields in the future.
• The law of outliers can be stated as follows: The impact of extremely large and small numbers upon results is likely to make them difficult to replicate. This law applies to averages, standard
deviation, compounding, and regressions. For example, the average of 2, 3, 4, 5, 6, 7, 8, and 100 is close to 17, which is not at all representative of the sample. Similarly, a strategy that has
a high return due to one or two outperforming stocks or periods is less likely to have its return replicated in another time period than one whose return is relatively unaffected by such stocks
or periods. This law does not apply to medians, percentiles, and similar measures. Measures that are relatively impervious to outliers are called robust.
• The law of alpha and beta is one I came up with and proved myself. It can be stated as follows: alpha and beta are inversely correlated so long as the market return tends to be positive.
(There’s a corollary: alpha and beta are positively correlated in the rare case that the market return tends to be negative, and are uncorrelated when the market return is neutral.) This only
applies when you calculate alpha and beta using a benchmark that constitutes the average returns of the securities you're choosing among (as is usually the case). This is a
mathematical law that applies to all asset classes. However, this inverse correlation, while provable mathematically and testable empirically, is relatively weak. The takeaway, in plain English,
is that low-beta portfolios are likely to outperform high-beta portfolios.
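Two of these laws are easy to see numerically. Below is a toy simulation (returns drawn as i.i.d. Gaussians, an assumption rather than a market model) showing volatility shrink as uncorrelated assets are added, plus the mean and median of the outlier sample above:

```python
import random
import statistics

random.seed(0)

def portfolio_std(n_assets, n_periods=500):
    """Std dev of an equal-weight portfolio of n uncorrelated assets."""
    returns = []
    for _ in range(n_periods):
        # Each asset's period return is an independent draw.
        asset_returns = [random.gauss(0.01, 0.05) for _ in range(n_assets)]
        returns.append(sum(asset_returns) / n_assets)
    return statistics.stdev(returns)

# Law of diversification: volatility falls roughly as 1/sqrt(n).
for n in (1, 10, 100):
    print(f"{n:3d} uncorrelated assets -> std {portfolio_std(n):.4f}")

# Law of outliers: the mean is dragged by one extreme value; the median,
# a robust measure, barely notices it.
sample = [2, 3, 4, 5, 6, 7, 8, 100]
print(statistics.mean(sample), statistics.median(sample))  # 16.875 and 5.5
```

With correlated assets the decline in volatility is slower and bottoms out sooner, which is exactly what the "imperfectly correlated" qualifier in the law is about.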
Theories of Portfolio Performance
There are legions of these. A few examples that come quickly to mind are:
• Stocks with low prices relative to their fundamentals tend to outperform stocks with high prices.
• The stock prices of small companies tend to be more volatile than those of large companies.
• Companies with growing sales and shrinking inventory tend to outperform companies with shrinking sales and growing inventory.
• Companies that pay dividends regularly are likely to continue to pay dividends regularly.
All of these, and many thousands of others, are ready for testing to see if they can be backed by empirical data.
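As an illustration of the mechanics of such a test, here is a minimal quintile sort for the first theory. The data is entirely synthetic and the factor loading is assumed, so only the procedure, not the printed result, carries over to real data:

```python
import random

random.seed(1)

# Synthetic universe: an earnings-yield-like "cheapness" factor and a
# forward return that (by construction) loads weakly on it.
n = 500
earnings_yield = [random.uniform(0.01, 0.15) for _ in range(n)]
fwd_return = [0.5 * ey + random.gauss(0, 0.10) for ey in earnings_yield]

# Sort by the factor and compare the cheapest and most expensive quintiles.
ranked = sorted(range(n), key=lambda i: earnings_yield[i])
q = n // 5
expensive, cheap = ranked[:q], ranked[-q:]

mean_ret = lambda idx: sum(fwd_return[i] for i in idx) / len(idx)
spread = mean_ret(cheap) - mean_ret(expensive)
print(f"cheap-minus-expensive quintile spread: {spread:.2%}")
```

On real data, the hard part is not this sort but everything around it: the universe, the rebalancing schedule, and whether the in-sample spread says anything about the future, which is the subject of the next section.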
The Correlation Between Past Performance and Future Results
In order to figure out how best to backtest, you have to answer the following question to your satisfaction: Under what conditions does past (in-sample) performance best correlate with future
(out-of-sample) results?
Further questions that this question raises include:
• How many years should be backtested?
• How large a portfolio should be backtested (i.e. should a portfolio consisting of the same number of stocks that will constitute your out-of-sample portfolio be backtested, or would a more
diverse one be better?)
• What performance measures should I use?
• Should backtested results be adjusted for outliers?
• How will optimization affect the correlation?
The best way to backtest is the way that maximizes the correlation between past performance and future results.
Some people think that there is no correlation between them, and that backtesting just gives you false hope. People warn about data mining and misleading backtests. That’s why it’s so important to
establish a correlation baseline.
For example, let’s say an academic study shows that a certain strategy gets a very good Sharpe ratio over the past fifty years, and it involves buying the top quintile of stocks in the S&P 500 and
shorting the bottom quintile.
What the study does not answer is this question: if one buys the top quintile of stocks in the S&P 500 and shorts the bottom quintile for fifty years according to, say, one hundred different
strategies, does the rank of these different strategies according to their Sharpe ratio correlate with the out-of-sample Sharpe ratio of exactly the same strategies, using the same methods, over the
next, say, twenty years? Of course, this would have to be tested over various fifty-year periods and the following twenty years.
I have no answer to this question. I honestly have no idea if the answer is “Yes” or “No.” And if the answer is that there is no correlation, then all these academic studies are useless.
Furthermore, there is no standard for testing. Some academic studies use the top decile minus the bottom decile, some the top tercile minus the bottom tercile, some ignore the bottom quantile
altogether. Some test over fifty years and some over less than twenty. They rely on a huge variety of different universes. Some use the Sharpe ratio as a performance measure, others use CAGR, others
use risk-adjusted alpha.
It astonishes me that I have not seen any such correlation study pertaining to academic methods such as these. They have been the standard for academic backtesting over the past fifty years, yet
nobody has actually tried to see if the results are meaningful in a general way. In my opinion, prior to using a testing method, one must determine if that testing method will generally yield
meaningful results.
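For what it's worth, the core measurement in such a study is a rank correlation across strategies between in-sample and out-of-sample performance. A from-scratch Spearman correlation (no tie handling, which is fine for continuous measures) applied to hypothetical Sharpe ratios might look like this; whether real strategies show the persistence this synthetic data bakes in is precisely the open question:

```python
import random

random.seed(2)

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(xs, ys):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical: 100 strategies, each with a "true" skill level observed
# noisily in both periods. Real data may behave nothing like this.
skill = [random.gauss(0.5, 0.3) for _ in range(100)]
in_sample = [s + random.gauss(0, 0.2) for s in skill]
out_of_sample = [s + random.gauss(0, 0.2) for s in skill]
print(f"rank correlation: {spearman(in_sample, out_of_sample):.2f}")
```

A rank correlation is preferable to a plain Pearson correlation here because one blow-up strategy should not dominate the measurement; it is a robust measure in the sense discussed above.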
Running a Correlation Study
Running a correlation study is a huge amount of work. One has to create a library of strategies, run equity curves, and measure their correlation. Doing this may not be worthwhile for most investors.
So in this section of the article I’ve summarized my own conclusions.
Here’s the way I have run correlation studies—with a view not for writing academic papers but for coming up with a strategy that will work for the way I buy and sell stocks.
1. Create fifty to a hundred different portfolio strategies. These should, ideally, have never been backtested. Get them from somewhere else, not from your own research. It’s especially important
that they have never been optimized over a specific time period. On the other hand, the strategies should not be random. They should make sense, at least to someone. They should ideally be
strategies that someone might use. The strategies should be flexible enough that you can decrease or increase the portfolio size. For example, you could use the various screening strategies that
AAII has developed over the last few decades.
2. Run all the strategies over as long a period as possible. Then rerun them with varying portfolio sizes.
3. Compare in-sample and out-of-sample results. How long do you want your own strategy to last? Do you anticipate changing your strategy every six months, every three years, every ten years? Take
your answer to that question and make that your “out-of-sample” period. Then measure the correlation between the total return of the out-of-sample period for your fifty or one hundred strategies
and the return of the preceding in-sample periods: one year, two years, etc. up to twelve or fifteen years. (For example, a good correlation will show that the same strategies perform poorly in
both periods and the same strategies perform well in both periods.) Ideally there should be a year between the end of the in-sample and the beginning of the out-of-sample period in order to avoid
data leakage. Do this over and over again, as much as your total time period allows. Considering that mass of data, what is the length of the in-sample period that best correlates with the
out-of-sample results? If the in-sample period is too long, the ranks of the strategies will cluster together and be almost indistinguishable; if it’s too short, the ranks may show very little
correlation since regression to the mean is more prevalent than persistence over short periods. My own results are as follows: for a three-year out-of-sample period, a ten-year in-sample period
is the most correlative, though almost anything longer than ten years will still correlate decently well. A five-year in-sample period is the worst—the correlations are often negative.
4. Vary your portfolio size. Now that you’ve determined your ideal in-sample time period, measure the correlation of those in-sample time periods with various portfolio sizes to the out-of-sample
periods with the portfolio size you’re actually going to use. If you double, triple, quadruple, or quintuple the number of positions in your in-sample period, does the correlation of the strategy
ranks with the out-of-sample period improve? In my experience, about two to five times the number of positions gives me better results than using the same number of positions.
5. Determine your portfolio backtesting method. You might find that a top-quintile vs bottom-quintile approach correlates best with your out-of-sample portfolio simulation; or you might find that a
rolling backtest correlates best, or a simple simulation. (For me, the latter has always worked well.)
6. Examine performance measures. If you're interested in getting the highest return in the out-of-sample period, should you follow the strategy with the highest in-sample returns? Or is it better to
use a different measure? What if you’re interested in getting the highest Sharpe ratio? Do Sharpe ratios in different periods have a strong correlation? Or is there another method to use in the
in-sample period that will correlate better with an out-of-sample Sharpe ratio? Personally, I have found that the performance measure with the highest correlation to out-of-sample CAGR is
in-sample alpha after trimming outliers.
Optimizing Your Strategy
One of the warnings I always hear is this: don't optimize your strategy too much; simply run a few backtests to see if something works or doesn't.
The logic of this is pretty clear. The more you optimize your strategy, the closer it will hew to the data that’s available. You will then end up with a strategy that is tailored to work beautifully
for a very specific set of stocks during a very specific time period. Such a strategy may be less likely to work in a subsequent time period, and you will end up with illusory results.
On the other hand, if there is a correlation, even a slight one, between in-sample and out-of-sample returns, the laws of probability say that you will be more likely to do well out-of-sample with a
strategy that performs well in-sample than one that performs badly.
One solution to this conundrum may be to test over various different time periods and various different groups of stocks. I break up my universe of stocks into five equal parts and optimize different
strategies for each one. I then combine those five strategies into one. I also change the beginning and end dates of my backtests; some people (notably James O’Shaughnessy) subdivide their in-sample
testing period into discrete testing periods and only pick strategies that work in most of them.
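The five-subuniverse routine amounts to a random partition plus an average of the surviving weight vectors. Here is a sketch; the tickers and the per-subuniverse "optimized" weights are placeholders for your own universe and optimizer:

```python
import random

random.seed(4)

# Split a stock universe into five roughly equal random subuniverses.
tickers = [f"STK{i:03d}" for i in range(503)]
random.shuffle(tickers)
subuniverses = [tickers[i::5] for i in range(5)]

# Suppose each subuniverse's optimization produced factor weights like
# these (hypothetical values); averaging them damps overfitting to any
# one subuniverse.
optimized = [
    {"value": 0.40, "momentum": 0.35, "quality": 0.25},
    {"value": 0.50, "momentum": 0.30, "quality": 0.20},
    {"value": 0.35, "momentum": 0.40, "quality": 0.25},
    {"value": 0.45, "momentum": 0.30, "quality": 0.25},
    {"value": 0.40, "momentum": 0.35, "quality": 0.25},
]
combined = {f: sum(w[f] for w in optimized) / len(optimized)
            for f in optimized[0]}
print(combined)
```

The strided slice `tickers[i::5]` guarantees the five parts are disjoint and within one ticker of equal size, so no stock is optimized on twice.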
There are a lot of things to optimize for when backtesting. You might want to optimize the rules that govern your universe of stocks. You might want to optimize the various elements of your portfolio
management: how many stocks to hold at a time, how often you buy and sell them, what weights you give them, and so on. If you use a ranking system, like I do, you might want to optimize the weights
of your various factors.
There are some things that backtesting may not tell you. Will a backtest ever give you really realistic slippage? I suggest setting your liquidity limits without backtesting. Instead, use your
experience and look at the actual slippage you’re paying by comparing your transaction records for stocks with varying liquidity.
Your backtesting method may end up favoring a certain industry or sector that outperformed during your in-sample period. Beware of this. Different industries and sectors perform well at different
times, and certain factors can help you concentrate on those. But don’t favor a method that will always choose the same industry/sector simply based on past results. The best investment strategies
will rotate and/or diversify between industries and sectors so that you’re always on top. And you certainly don’t want to exclude certain industries just because they performed poorly during your
in-sample period, or because doing so improves your in-sample backtests. Those may end up being the best industries in the near future.
Optimizing Portfolio Parameters
In my opinion, the best way to optimize your portfolio parameters is to use a factor-based stockpicking strategy that you or someone else developed a while ago and see what portfolio parameters would
have worked best with it in the years since then. The worst way is to use a strategy that you’ve already backtested over the same period that you’re now optimizing.
Here are some questions you should ask:
• How many stocks should be held?
• What buy rules and sell rules should you use?
• What weights should the stocks have?
• Should you have a minimum holding period to reduce slippage?
• Should you set a maximum number of stocks per industry or sector?
• Should you set a maximum weight for a single position?
Optimizing Factors
If you’re using ranking, some factors might benefit from an approach that doesn’t favor the highest or the lowest values, but instead chooses something in-between. For example, my ranking system
works very well for microcaps and small caps but doesn't work very well for nanocaps, mid-caps, and large caps. So I don't want my size factors to be simply "smaller is better," because that would
emphasize nanocaps.
One way to optimize factors like these is to take your ranking system, exclude the factor in question, and test it on subsets of your universe that are split up according to the factor. So, for
example, I could test my ranking system on stocks of various sizes and see which size stocks perform best and which perform worst. Or, if I were optimizing a growth factor, I could test my ranking
system on stocks with various growth rates.
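That subset-testing idea can be sketched as follows: the size factor is excluded from the score, the universe is cut into size quartiles, and the remaining system is graded within each bucket. All data is synthetic, and the "works best under a $1B cap" effect is wired in purely so the mechanics have something to find:

```python
import random

random.seed(5)

# Synthetic universe: market caps (in $M, log-uniform from $10M to $100B)
# and a ranking score that deliberately excludes any size factor.
n = 1000
market_cap = [10 ** random.uniform(1, 5) for _ in range(n)]
score = [random.uniform(0, 100) for _ in range(n)]
# Built-in (assumed) effect: the score predicts returns well below $1B cap.
fwd = [0.001 * s * (1.0 if mc < 1000 else 0.3) + random.gauss(0, 0.05)
       for s, mc in zip(score, market_cap)]

# Split into size quartiles and grade the top-ranked stocks in each.
by_size = sorted(range(n), key=lambda i: market_cap[i])
buckets = [by_size[i * n // 4:(i + 1) * n // 4] for i in range(4)]

results = {}
for label, idx in zip(("nano", "small", "mid", "large"), buckets):
    top = sorted(idx, key=lambda i: score[i])[-50:]  # top-ranked in bucket
    results[label] = sum(fwd[i] for i in top) / len(top)
    print(f"{label:5s}: top-50 avg forward return {results[label]:.2%}")
```

Whatever shape the per-bucket results take (peaked in the middle, monotonic, flat) tells you how the excluded factor should be scored in the ranking system.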
Optimizing a Ranking System
Two questions that will immediately come up are:
• which factors should you use?
• what weights should they be assigned?
Fortunately, those questions can be easily answered at the same time by experimenting with a long list of factors and allowing yourself to assign 0% weights to many of them.
As I wrote above, I subdivide the universe of stocks I’m willing to invest in into five more-or-less random subuniverses and run my tests on each one. I vary my factor weights until I get the ranking
systems that perform best in each of my universes—and the ones that perform best in all of them—and then I average the weights of those outperforming systems.
A tip: don’t vary your factor weights by less than 2%. It’s a waste of time. One way to do this is to have all factor weights divisible by 2%, 2.5%, or 4%.
The amount of backtesting time this process takes can be extreme. Expect to spend weeks on it, or automate the process somehow. However, you really don’t have to be perfect in your optimization. A
very rough approximation is good enough. You’ll find that the highest-ranked stocks are pretty much the same after a certain point.
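Following that tip, candidate weight vectors can be enumerated on a coarse lattice. The helper below is a generic sketch (not any particular platform's optimizer) that lists every way to split 100% across the factors in whole multiples of an integer step:

```python
from itertools import product

def weight_grid(n_factors, step_pct):
    """All weight vectors summing to 100%, each a multiple of step_pct."""
    steps = 100 // step_pct
    grid = []
    for combo in product(range(steps + 1), repeat=n_factors - 1):
        remainder = steps - sum(combo)  # last factor gets what is left
        if remainder >= 0:
            grid.append(tuple(w * step_pct for w in combo)
                        + (remainder * step_pct,))
    return grid

# A 10% step over three factors gives 66 candidates; a 2% step, 1,326.
print(len(weight_grid(3, 10)), len(weight_grid(3, 2)))
```

The counts show why coarse steps matter: the grid grows combinatorially in both the number of factors and the fineness of the step. (For a 2.5% step you would rescale to quarter-percent units first, since this helper assumes an integer step.)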
Common Backtesting Mistakes
• Optimizing only on the entire universe from which you’ll be choosing stocks.
• Optimizing on a short and fixed time period.
• Optimizing using a small number of positions.
• Believing that an optimized result is achievable out-of-sample. (Remember that the word optimize implies that the result is closely tied to a specific sample of stocks in a specific time period.)
• Basing your portfolio parameter backtests on a ranking system already optimized over the same time period.
• Basing your optimization only on CAGR rather than on more robust performance measures.
• Taking into account only overall performance rather than looking at performance during various discrete time periods. (Some robust performance measures can handle this for you.)
Stress-Testing Your Systems
As I wrote in a previous post on stress-testing quantitative models, it’s important not only to build a good strategy but also to try to destroy it. A strategy that succeeds over a variety of stress
tests is ready for the long haul. Backtesting for failure can be just as important as backtesting for success.
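One concrete way to try to destroy a strategy is to block-bootstrap its return series and inspect the spread of outcomes; if the fifth-percentile path is unacceptable, the strategy fails the test. The track record below is hypothetical:

```python
import random

random.seed(6)

# Hypothetical track record: 120 monthly returns (10 years).
monthly = [random.gauss(0.010, 0.045) for _ in range(120)]

def bootstrap_cagr(rets, block=6, n_months=120):
    """Resample six-month blocks (preserving some short-run structure)
    and compound them into one alternate history's CAGR."""
    path = []
    while len(path) < n_months:
        start = random.randrange(len(rets) - block + 1)
        path.extend(rets[start:start + block])
    wealth = 1.0
    for r in path[:n_months]:
        wealth *= 1 + r
    return wealth ** (12 / n_months) - 1

cagrs = sorted(bootstrap_cagr(monthly) for _ in range(1000))
print(f"5th pct {cagrs[50]:+.1%} | median {cagrs[500]:+.1%} "
      f"| 95th pct {cagrs[950]:+.1%}")
```

Other stress tests follow the same pattern: perturb something (start dates, universes, factor weights, transaction costs) and see whether the strategy's edge survives the perturbation.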
IB Mathematics: Analysis and Approaches
Top IB Mathematics: Analysis and Approaches Tutors serving Ankara
Keith: Ankara IB Mathematics: Analysis and Approaches tutor
Certified IB Mathematics: Analysis and Approaches Tutor in Ankara
...people understand something they were struggling with. That's why I hope to tutor as many students as I can handle. The thing I enjoy most about teaching is the opportunity to help students learn.
I enjoy knowledge, both the receiving and the giving thereof, and I'm looking forward to being able to help your student...
Lindsay: Ankara IB Mathematics: Analysis and Approaches tutor
...elementary age to graduate level, as well as working with students with special needs. I particularly enjoy tutoring mathematics, biology, and helping students prepare for standardized tests. I
can also tutor in writing and reading, since I have experience as a professional copy editor. My focus when tutoring is student understanding. I believe academic goals...
Minu: Ankara IB Mathematics: Analysis and Approaches tutor
...love to go on road trips! I love dark chocolate and Thai food anytime, any day. My favorite subject to teach is Math, because I like to work with numbers! I strongly believe in incentives. I
believe that setting small treats for each goal you reach will motivate you to do better, as long as...
Palak: Ankara IB Mathematics: Analysis and Approaches tutor
...how teaching needs to be adapted to meet those ways. For example, including a fun activity for elementary school students is important to keep them engaged in school. While on the other hand, high
school and college students require more of a practical approach and have to work through practice to understand a concept. I’ve...
Jake: Ankara IB Mathematics: Analysis and Approaches tutor
...minor in Theatre. In the fall, I'll be attending law school at Columbia Law, and I hope to pursue cybersecurity law while I'm there. I have a ton of experience tutoring in many different areas,
however my specific specialty areas are Computer Science, Math (up to Calculus 2), and test prep. I love theatre, music,...
Alexander: Ankara IB Mathematics: Analysis and Approaches tutor
...I worked with students between the ages of 16-22, most of whom were proficient in mathematics only at a 4th grade level. It can be very difficult working with students who have conceptual issues
with addition and multiplication, but struggling with that kind of work has furthered my passion to teach mathematics. Mathematics far and...
Rebecca: Ankara IB Mathematics: Analysis and Approaches tutor
...Indiana University and moved to New York to take a gap year before beginning graduate school. As a scholarship student at IU, I majored in economics, cognitive science, and piano performance.
Outside of tutoring, you can find me playing or teaching piano, doing jigsaw puzzles, watching TED talks, running, or trying a new food or...
Calvin: Ankara IB Mathematics: Analysis and Approaches tutor
...strictly mental math from age 4, and since then, math has always been my strongest subject. My passion for mathematics stretches even farther when I tutored other students alongside me in class
during my AP Calculus courses in High School, and even in college when I was taking Calculus 3 and Differential Equations. For me,...
Rosalyn: Ankara IB Mathematics: Analysis and Approaches tutor
...concepts they find difficult by looking at those concepts from different angles. I have done some one-on-one math tutoring, as well as often helping my peers with their math homework in various
levels of math. I tutor many math subjects, including elementary math, pre-Algebra, Algebra 1 and 2, Geometry, pre-Calculus and Calculus 1. Math has...
Bryson: Ankara IB Mathematics: Analysis and Approaches tutor
...I then attended Loyola Marymount University where I received my master's degree in Urban Education. I have worked in education for the past 6 years, beginning as a 2012 Teach For America corps
member in Los Angeles. I have taught nearly every subject in middle school classrooms, and I have taught both biology and algebra...
Nick: Ankara IB Mathematics: Analysis and Approaches tutor
...the largest reason students begin to struggle in a class is because they have fallen behind in some way and therefore lost confidence. My main goal as a tutor will be to restore their confidence
in the subject by providing complete understanding of concepts as well as preparing them for the most complex examples so...
Miguel: Ankara IB Mathematics: Analysis and Approaches tutor
...and valued during all of my sessions. I emphasize a strategy based approach to studying, and keep an updated list of strategies that my students are finding helpful. I prefer to test mastery of
material via a role reversal in which I see if the student can teach me how to correctly solve problems and...
Chase: Ankara IB Mathematics: Analysis and Approaches tutor
...beings in this context, and it is my role as a teacher to effectively, enthusiastically communicate the importance and beauty of math to my students as well as to foster their powers of critical
and mathematical thinking. It is my professional hope that at the end of each tutoring session my students will have learned...
Jared: Ankara IB Mathematics: Analysis and Approaches tutor
...a B.S. in Biological Sciences. I have tutored private clients since 2011 and have experience tutoring students in Biology, Chemistry and Math. I also enjoy helping students prepare for
standardized exams for undergraduate and graduate school admission. In addition to my private tutoring activities, I was also a TA to an introductory science course at...
Gray: Ankara IB Mathematics: Analysis and Approaches tutor
...only grown. I primarily tutor math topics, but I am also available to tutor for various standardized test topics such as SAT, ACT or GRE Reading. I'm a firm believer in the idea that anyone can
understand grade school level math and reading given the proper instruction and effort. I never give up, and if...
Don: Ankara IB Mathematics: Analysis and Approaches tutor
Don has all the usual academic degrees (BA, BS, MS, PhD). He currently specializes in tutoring differential and integral calculus, multivariable and vector calculus, differential equations and
dynamical systems, as well as linear and matrix algebra. He also has lots of experience tutoring students in both high school and college-level algebra, precalculus, and trigonometry courses.
Erin: Ankara IB Mathematics: Analysis and Approaches tutor
...and university students for about 12 years. Similarly, I have 12 years of experience with the SAT, ACT, and ISEE/SSAT, and about 4 years of experience teaching GRE and GMAT. As far as hobbies go, I
enjoy reading science fiction, classic literature, and comic books. Rock-climbing is a big thing for me, as is Brazilian Jiu-Jitsu....
Payal: Ankara IB Mathematics: Analysis and Approaches tutor
...150 hours in Pre-Calculus, Algebra I, Algebra II, Geometry, and Chemistry. I love to teach because it lets me share my enthusiasm for mathematics and science with other students and helps me get a
deeper understanding of material that is useful for my higher level mathematics and physics courses. As a tutor, I break down...
Paul: Ankara IB Mathematics: Analysis and Approaches tutor
...with a new student, I focus on flexibility as we become accustomed to one another. As time progresses and we enact a personalized lesson plan, my expectations increase accordingly as does each
student's accountability. I strongly believe that it behooves all learners to employ a useful toolkit of reasoning and comprehension skills. In addition to...
Eshita: Ankara IB Mathematics: Analysis and Approaches tutor
...I love teaching because I believe I can connect well with the students. I am able to create short-cuts, tricks, and other useful methods that students love because I, myself, am a student.
Therefore, I can relate well with all my students. Since my experiences range far and wide, I am confident that I can...
Private Online IB Mathematics: Analysis and Approaches Tutoring in Ankara
Our interview process, stringent qualifications, and background screening ensure that only the best IB Mathematics: Analysis and Approaches tutors in Ankara work with Varsity Tutors. To assure a
successful experience, you're paired with one of these qualified tutors by an expert director - and we stand behind that match with our money-back guarantee.
Receive personally tailored IB Mathematics: Analysis and Approaches lessons from exceptional tutors in a one-on-one setting. We help you connect with online tutoring that offers flexible scheduling.
Your Personalized Tutoring Program and Instructor
Identify Needs
Our knowledgeable directors help you choose your tutor with your learning profile and personality in mind.
Customize Learning
Your tutor can customize your lessons and present concepts in engaging easy-to-understand-ways.
Increased Results
You can learn more efficiently and effectively because the teaching style is tailored to you.
Online Convenience
With the flexibility of online tutoring, your tutor can be arranged to meet at a time that suits you.
Call us today to connect with a top Ankara IB Mathematics: Analysis and Approaches tutor
(888) 888-0446
Item 1008/1011 | Repositorio CIMAT
Please use this identifier to cite or link to this item: http://cimat.repositorioinstitucional.mx/jspui/handle/1008/1011
CATEGORICAL INDEPENDENCE AND NONCOMMUTATIVE DISTRIBUTIONS FOR SIMPLICIAL COMPLEXES
CARLOS EDUARDO DIAZ AGUILERA
Open Access
BASIC MATHEMATICS
Graphs provide basic and illustrative examples of non-commutative random variables via their adjacency matrices. The main objective of this thesis is to discuss the general question of
assigning distributions to graphs and simplicial complexes, for these to be considered as random variables in some non-commutative probability space. We split the problem into three parts.
First, we need to decide which matrices associated with simplicial complexes we are considering as random variables. Second, we must specify an expectation or state to endow our matrices
with distributions. Third, we may also want to consider collections of compatible conditional expectations, so that we may relate the different distributions for different choices of
matrices. Surprisingly, if we restrict the third question, allowing only the simplest types of operator-valued spaces, i.e. projection-valued spaces, then the joint algebraic
distribution of incidence and boundary matrices and their duals contains the distributions of most of the matrices that are usually considered while studying simplicial complexes. This
includes adjacency matrices and combinatorial Laplace operators, of all dimensions. In addition, the simplicity of the *-algebra generated by the boundary matrix leaves little room for
finding self-adjoint operators. Two of the few canonical choices are naturally endowed with analytic distributions that encode the Betti numbers of the considered simplicial complex. It is worth remarking that a notion of independence is required for this task. In our case, the concept of categorical independence we study was developed in \cite{Ger14}, where it is intended to be the link between all the independence notions in quantum probability spaces. For this work, the categorical independence of orthogonal spaces is a valuable concept because it enables and encourages the separate study of an operator's different orthogonal pieces, which are independent. A second objective is to define a Boolean-type cumulant in higher dimensions, i.e. on simplicial complexes. We hope that these observations will be helpful for finding simple and meaningful examples of independent non-commutative random variables, particularly in those general NCP frameworks still lacking non-artificial realizations.
Master's thesis
Accepted version
Appears in Tesis del CIMAT
The Book of Shaders
The images above were made in different ways. The first one was made by Van Gogh's hand applying layer over layer of paint. It took him hours. The second was produced in seconds by the combination of
four matrices of pixels: one for cyan, one for magenta, one for yellow and one for black. The key difference is that the second image is produced in a non-serial way (that means not step-by-step, but
all at the same time).
This book is about the revolutionary computational technique, fragment shaders, that is taking digitally generated images to the next level. You can think of it as the equivalent of Gutenberg's press
for graphics.
Fragment shaders give you total control over the pixels rendered on the screen at a super fast speed. This is why they're used in all sort of cases, from video filters on cellphones to incredible 3D
video games.
In the following chapters you will discover how incredibly fast and powerful this technique is and how to apply it to your professional and personal work.
Who is this book for?
This book is written for creative coders, game developers and engineers who have coding experience, a basic knowledge of linear algebra and trigonometry, and who want to take their work to an
exciting new level of graphical quality. (If you want to learn how to code, I highly recommend you start with Processing and come back later when you are comfortable with it.)
This book will teach you how to use and integrate shaders into your projects, improving their performance and graphical quality. Because GLSL (OpenGL Shading Language) shaders compile and run on a
variety of platforms, you will be able to apply what you learn here to any environment that uses OpenGL, OpenGL ES or WebGL. In other words, you will be able to apply and use your knowledge with
Processing sketches, openFrameworks applications, Cinder interactive installations, Three.js websites or iOS/Android games.
What does this book cover?
This book will focus on the use of GLSL pixel shaders. First we'll define what shaders are; then we'll learn how to make procedural shapes, patterns, textures and animations with them. You'll learn
the foundations of shading language and apply it to more useful scenarios such as: image processing (image operations, matrix convolutions, blurs, color filters, lookup tables and other effects) and
simulations (Conway's game of life, Gray-Scott's reaction-diffusion, water ripples, watercolor effects, Voronoi cells, etc.). Towards the end of the book we'll see a set of advanced techniques based
on Ray Marching.
There are interactive examples for you to play with in every chapter. When you change the code, you will see the changes immediately. The concepts can be abstract and confusing, so the interactive
examples are essential to helping you learn the material. The faster you put the concepts into motion the easier the learning process will be.
What this book doesn't cover:
What do you need to start?
Not much! If you have a modern browser that can do WebGL (like Chrome, Firefox or Safari) and an internet connection, click the “Next” chapter button at the end of this page to get started.
Alternatively, based on what you have or what you need from this book you can:
Impact of the Grid Resolution and Deterministic Interpolation of Precipitation on Rainfall-Runoff Modeling in a Sparsely Gauged Mountainous Catchment
Faculty of Building Services, Hydro and Environmental Engineering, Warsaw University of Technology, 00-661 Warsaw, Poland
Submission received: 23 December 2020 / Revised: 12 January 2021 / Accepted: 15 January 2021 / Published: 19 January 2021
Precipitation is a key variable in the hydrological cycle and essential input data in rainfall-runoff modeling. Rain gauge data are considered as one of the best data sources of precipitation but
before further use, the data must be spatially interpolated. The process of interpolation is particularly challenging over mountainous areas due to complex orography and a usually sparse network of
rain gauges. This paper investigates two deterministic interpolation methods (inverse distance weighting (IDW), and first-degree polynomial) and their impact on the outputs of semi-distributed
rainfall-runoff modeling in a mountainous catchment. The performed analysis considers the aspect of interpolation grid size, which is often neglected in other than fully-distributed modeling. The
impact of the inverse distance power (IDP) value in the IDW interpolation was also analyzed. It has been found that the best simulation results were obtained using a grid size smaller or equal to 750
m and the first-degree polynomial as an interpolation method. The results indicate that the IDP value in the IDW method has more impact on the simulation results than the grid size. Evaluation of the
results was done using the Kling-Gupta efficiency (KGE), which is considered to be an alternative to the Nash-Sutcliffe efficiency (NSE). It was found that KGE generally tends to provide higher and
less varied values than NSE which makes it less useful for the evaluation of the results.
1. Introduction
Precipitation is one of the major driving forces in the hydrological cycle that affects hydrological processes [
]. Nowadays, precipitation data are mainly acquired from rain gauges, weather radars, and satellites, while the first two are considered as the best data sources for catchment modeling [
]. Even though there are measurement alternatives to rain gauges, the data acquired at in-situ measurements are still frequently used in many hydrological applications as they provide reliable and
measured (not estimated) point information on the precipitation. However, before further use, the rain gauge data must be spatially interpolated, which might significantly affect the accuracy of the
spatial precipitation field [
]. The process of obtaining a reliable interpolated precipitation field is particularly challenging in mountainous environments. The spatial patterns of precipitation over these areas are mainly
affected by the topography of the area and wind direction, which significantly affects runoff modeling in the catchment [
]. Moreover, mountainous areas often face the problem of sparse rain gauge networks, which limits the accessibility of the data and affects the interpolation accuracy [
The amount of precipitation measured by the rain gauge provides local information on the precipitation, not its areal variability [
]. In hydrological applications, it is necessary to acquire the areal height of precipitation. Therefore, the rain gauge data is subject to spatial interpolation to reproduce its spatial variability.
One of the main problems with spatial interpolation of rain gauge data is the small number of measurement points [
], which is often insufficient to correctly reproduce areal precipitation, although the rainfall values measured at a given point are correct [
]. Equal precipitation measurements are expected from gauges located within a few meters of each other, while at distances of several hundred meters this convergence is significantly reduced [
Gridded datasets based on in-situ observations are mainly affected by station density and interpolation methodology [
]. As for interpolation methodology, two aspects are crucial: the choice of the method and the resolution of the interpolation grid. Spatial interpolation techniques can be divided into deterministic
and probabilistic methods. The first ones are based on deterministic interpolation algorithms, as a result of which a continuous or discontinuous precipitation field is created. The latter are based
on algorithms in which it is assumed that the point information from measurement has a deterministic and a spatially correlated random component. Among the deterministic methods, the most commonly
used techniques include inverse distance weighting (IDW), polynomial interpolation, and Thiessen polygons [
]. As for the geostatistical methods that are the most used, the techniques are ordinary kriging and co-kriging [
]. When the number of available rain gauges is very limited, the geostatistical methods will not be effective [
In many previous studies (e.g., [
]) assessments of interpolation methods were done in terms of statistical analysis like cross-validation or minimization of mean absolute error (MAE) or root mean square error (RMSE). However, when
it comes to hydrological modeling, these statistical aspects of precipitation data are important, but their reliable values do not guarantee accurate discharge simulations using the rainfall-runoff model.
It is frequently assumed that fully-distributed models are the best for investigating spatial variables (like precipitation), as they allow input data to be provided in the grid cells and do not
average values over larger areas like lumped and semi-distributed models do [
]. However, the semi-distributed hydrological models are also sensitive to the spatial distribution of variables. That was the subject of an investigation by Cheng et al. [
], where the authors analyzed the impact of three interpolation methods (Thiessen polygons, IDW, and co-kriging) and applied it to a semi-distributed model. Nonetheless, they omitted the aspects of
grid resolution impact on interpolation outputs, as well as the impact of inverse distance power (IDP) value for IDW method. In another paper by Chen et al. [
], three interpolation methods were also analyzed (regression-based scheme, IDW, and multiple linear method) and applied to a semi-distributed model. However, in their study, the authors set up the
grid resolution to 500 × 500 m without any investigation. The impact of the IDP value in the IDW method for the purpose of hydrological modeling is also not sufficiently analyzed. There are some
papers (e.g., [
]) that investigated the effect of the IDP value on the interpolation outputs of precipitation, but the data were not applied to a hydrological model. Most frequently, the IDP value is set equal to two, which seems to be a suitable power value for hydrogeology applications [
]. As for hydrological applications, the IDP is also often assumed to be two, but there is no strong evidence that it is the optimal value.
For hydrological modeling, there is a great need to interpolate precipitation data even when the number of measuring stations is too small to allow the use of any geostatistical method. Taking into
account that the grid size aspects are mostly neglected when interpolating precipitation, the main objective of this paper is to investigate both the impact of grid resolution and deterministic
interpolation technique and evaluate them via rainfall-runoff simulations using the Hydrologic Engineering Center-Hydrologic Modelling System (HEC-HMS) semi-distributed hydrological model over a
sparsely gauged mountainous catchment. Also, for the inverse distance weighting method, the impact of the IDP value on the hydrological simulation results was investigated. The simulations were
performed using two interpolation methods (inverse distance weighting and first-degree polynomial interpolation) and 6 grid sizes (ranging from 250 m to 5000 m). The impact of IDP value in IDW
interpolation and its impact on discharge simulation was also subject to investigation. As the study area, a sparsely rain-gauged mountainous catchment in southern Poland was chosen which frequently
faces flooding events. The simulation results were evaluated using Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE), and percent bias metrics. The KGE criterion is often considered as an
alternative to NSE, and this paper tries to judge which of these two allows for better evaluation of the modeling results.
2. Materials and Methods
2.1. Description of Study Area
The Skawa catchment is located in southern Poland and borders the Czech Republic. Its area is equal to 1600 km² and is entirely located in the Outer Carpathians. The catchment can be divided into two parts, the upper and the lower one [
], where the upper part is more exposed to the risk of flooding. Within the river basin, the elevation varies from 435 to 1038 m a.s.l. To the southwest of the catchment is the Babia Góra massif—the
highest peak of the Polish part of the Carpathian Mountains. The highest rainfall is observed in the Babia Góra region and the lowest in the lower part of the catchment. Most of the catchment area is
dominated by a warm temperate climate (up to approx. 700 m a.s.l.) or a cold temperate climate (at altitudes of 700–1100 m a.s.l.). There are four rain gauges in the catchment area, of which one is directly
located in the investigated study area. The rain gauges are not well distributed over the entire study area, which makes the areal estimation of precipitation based on them more challenging.
In this study, all analyses were limited to the upper part of the Skawa River (area of 240.4 km²), which is particularly at risk of flooding [
]. This part of the catchment consists of 6 sub-catchments. The upper Skawa catchment area is characterized by a dense water network with dominant short streams with large slopes, resulting from the
mountainous nature of the Skawa river [
]. The area of the catchment is dominated by low permeable soils, which is one of the major factors contributing to the formation of flash floods caused by excessive rainfall [
]. Discharge data for the catchment are available at the gauging station in Osielec, which is located downstream.
Figure 1
presents major characteristics of the area in terms of elevation, locations of rain gauges, and gauging station, as well as division into sub-catchments.
The Upper Skawa catchment can be classified as a relatively small mountainous catchment with a quick time of response. The average of the time to peak of the catchment is around 2.5 h [
2.2. Data Collection and Processing
The precipitation data measured with rain gauges were obtained from the telemetric rain gauge network which is operated by the national hydrological and meteorological service—the Institute of
Meteorology and Water Management National Research Institute. Precipitation on the rain gauges was measured in 10-min time-step intervals and for the purpose of this work was aggregated to 1-h
intervals. All of the measurements undergo quality control by verifying their range according to climatological values, as well as spatio-temporal consistency [
]. There were four rain gauges located on-site or close to the study area (
Table 1
The discharge data, at hourly time-step, came from the gauging station in Osielec (
Figure 1
) and were also provided by the Institute of Meteorology and Water Management National Research Institute.
Both precipitation and discharge data were obtained for the years between 2014 and 2019. During this period, there were several flash flood events resulting from excessive rainfall and they were
subject to further investigation. Four events from 2014–2016 were selected for calibration of the hydrological model and another four events from 2017–2019 were chosen for its validation. That was
the maximum of available precipitation events over the analyzed period.
Slope information, which was required in the hydrological model, was acquired from the Digital Elevation Model (DEM) of 100 m resolution that was provided by the Head Office of Geodesy and
Cartography in Poland. As for the land-use data, the CORINE Land Cover Project CLC2012 v.18.5.1 was used. Complex information on the land-cover delimitation over the study area can be found in the
paper by Gilewski and Nawalany [
All the data processing and statistical analysis were performed using R software.
2.3. Hydrological Modeling and Assessment
2.3.1. Selection of the Model
The increasing computer power enables the creation of more and more complex hydrological models considering a range of processes related to the dynamics of water movement and its accumulation in the
catchment area. Fully distributed models allow the incorporation of information on the spatial variability of input data, such as rainfall or land-use. They usually require the availability of input
data characterized by a high degree of quality (reliability), which, given the high resolution of the described variables and hydrological parameters, is a serious scientific and technical challenge.
The computational time and calibration process of models with distributed parameters are longer and more complicated than for lumped or semi-distributed models [
In parallel to the development of models of high complexity, there are many works and research on the simpler models (lumped or semi-distributed) [
]. Simplified models are often used for preliminary analyses since they do not require high computing power and are characterized by a significantly shorter simulation time. One of the frequently
discussed topics in the literature is an attempt to answer the question: are models with distributed parameters better than simplified models since they allow for more detailed modeling of
hydrological processes in the catchment area? Answers that can be found in the literature (e.g., [
]) indicate that simulations performed with semi- and fully-distributed models give similar results.
For the purpose of this study, hydrological rainfall-runoff modeling was performed using the HEC-HMS (Hydrologic Engineering Center-Hydrologic Modelling System) version 4.2.1. This is a freeware
software developed by the US Army Corps of Engineers. It enables modelling continuous and event-based outflows. Depending on the adopted parameters, the model can be either lumped or semi-distributed.
Precipitation and discharge data were acquired into the hydrological model via HEC-DSSVue version 2.0 software.
The main components of the HEC-HMS software are the catchment model and meteorological model.
Table 2
presents the parameters and methods used for modeling in this paper. They were selected in such a way as to be adequate for the event-based simulations in the model with semi-distributed parameters.
As for the meteorological component, in total, 14 different data sources of precipitation were used:
• precipitation fields interpolated using the IDW interpolation method (IDP = 2.0) for 6 different grid sizes (250, 500, 750, 1000, 2500, and 5000 m);
• precipitation fields interpolated using the IDW interpolation method and different IDP values (0.5, 2.0 and 5.0) for 2 grid sizes (250 and 2500 m);
• precipitation fields interpolated using the first-degree polynomial interpolation method for 6 different grid sizes (250, 500, 750, 1000, 2500, and 5000 m).
2.3.2. Model Assessment
Assessments of the calibration and validation results from the hydrological model were conducted separately for each of the meteorological models, as the spatial distribution of precipitation has a
significant impact on the estimation of model parameters. During the calibration process, the parameters for the rainfall loss method (initial abstraction and curve number) and transformation of
effective precipitation (standard lag and peaking coefficient) were calibrated. The simulated model flow, at hourly time-steps, was compared to the observed flow at the gauging station Osielec. As an
objective function during the calibration process, the peak-weighted RMSE metric was applied.
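The peak-weighted RMSE objective can be sketched in a few lines. This is a minimal illustration, assuming the weighting form commonly documented for HEC-HMS, in which squared errors are scaled by (Q_obs + mean(Q_obs)) / (2 · mean(Q_obs)); the function name is illustrative, not from the paper.

```python
import numpy as np

def peak_weighted_rmse(q_obs, q_sim):
    """Peak-weighted RMSE: squared errors are scaled by
    (Q_obs_i + mean(Q_obs)) / (2 * mean(Q_obs)), so deviations near the
    flood peak are penalized more than errors on low flows."""
    q_obs = np.asarray(q_obs, dtype=float)
    q_sim = np.asarray(q_sim, dtype=float)
    w = (q_obs + q_obs.mean()) / (2.0 * q_obs.mean())
    return float(np.sqrt(np.mean(w * (q_obs - q_sim) ** 2)))
```

With this weighting, an error of a given magnitude at the hydrograph peak contributes more to the objective than the same error on the recession limb, which is the intended behaviour for flood-event calibration.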
The assessment criteria were chosen based on the literature to perform a multi-aspect analysis of the model simulation results. It must be noticed that a comprehensive analysis of the results
includes not only evaluation of the performance metrics, but also graphical evaluation of the results. For the purpose of this study, the following metrics were used to assess the performance of the
modeled discharge in relation to the observed flow:
• Nash-Sutcliffe efficiency (NSE)—frequently used metric to determine the relative magnitude of the residual variance in relation to the measured data variance [
]. The NSE values range from −∞ to 1; the closer to 1, the more accurate the model. An NSE value of 0 means that the model predictions are as accurate as the mean value from the observed
data. Values < 0 indicate that the mean value from observed data is a better predictor than the model results. The NSE is defined as:
$NSE = 1 - \frac{\sum_{i=1}^{n} (Q_{obs,i} - Q_{sim,i})^2}{\sum_{i=1}^{n} (Q_{obs,i} - \overline{Q}_{obs})^2}$
where $Q_{sim,i}$ and $Q_{obs,i}$ are, respectively, the simulated and observed river discharge, $\overline{Q}_{obs}$ represents the mean of observed values, and $n$ stands for the number of observations.
• Kling-Gupta efficiency (KGE)—developed by Gupta et al. [
] is one of the alternatives to the NSE criterion [
], which is based on its decomposition (correlation, variability, and mean bias). Similarly to NSE, a KGE value equal to 1 indicates a perfect agreement between model results and observation data, and values < 0 mean that the mean of the observation data serves as a better predictor than the model outputs. The KGE is expressed as follows:
$KGE = 1 - \sqrt{(r - 1)^2 + (\alpha - 1)^2 + (\beta - 1)^2}$
where $r$ is the linear correlation between the observed and simulated river flow, $\alpha$ is a measure of the flow variability error, and $\beta$ represents a bias.
• Percent bias (PBIAS)—this metric is used to assess the model performance regarding the tendency of the simulated flow to be over- or underestimated [
]. If the value of PBIAS is greater than 20%, then it is considered to be unacceptable [
]. The formula for PBIAS is expressed as follows:
$PBIAS = 100\% \cdot \frac{\sum_{i=1}^{n} (Q_{sim,i} - Q_{obs,i})}{\sum_{i=1}^{n} Q_{obs,i}}$
where $Q_{sim,i}$ and $Q_{obs,i}$ are, respectively, the simulated and observed river discharge.
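The three criteria above can be sketched in a few lines of code. This is an illustrative implementation only (NumPy assumed; function names are not from the paper): KGE follows the Gupta et al. decomposition into correlation, variability ratio, and bias ratio, and PBIAS follows the sign convention of the formula above, where positive values indicate overestimation.

```python
import numpy as np

def nse(q_obs, q_sim):
    """Nash-Sutcliffe efficiency: 1 minus residual variance over observed variance."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

def kge(q_obs, q_sim):
    """Kling-Gupta efficiency: decomposition into correlation r,
    variability ratio alpha, and bias ratio beta."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    r = np.corrcoef(q_obs, q_sim)[0, 1]
    alpha = q_sim.std() / q_obs.std()
    beta = q_sim.mean() / q_obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def pbias(q_obs, q_sim):
    """Percent bias: positive values mean overestimation of total volume."""
    q_obs, q_sim = np.asarray(q_obs, float), np.asarray(q_sim, float)
    return 100.0 * np.sum(q_sim - q_obs) / np.sum(q_obs)
```

Note that a simulation equal to the observed mean at every time step yields NSE = 0, which is the reference point mentioned above for judging whether the model outperforms a trivial predictor.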
2.4. Spatial Interpolation of Precipitation
2.4.1. Interpolation Grid Resolutions
The spatial resolution of the precipitation field is one of the most important sources of uncertainty in the gridded data [
]. However, when it comes to semi-distributed hydrological modeling, this aspect is often neglected. Most of the time, only the interpolation method is investigated, but grid resolution is treated as
fixed, like in the paper by Tobin et al. [
] where the authors were using 500 m resolution grid and investigated three interpolation methods (Inverse Distance Weighting, Ordinary Kriging and Kriging with External Drift). In this paper, six
different grid sizes were analyzed (
Figure 2
Considering the complex topography of the study area, it was decided that the initial resolution of the grid would be set to 250 m. Such a resolution is fine enough to represent the spatial
variability of precipitation. Then, the grid sizes were gradually increased up to 5000 m, which, after investigation, seems to be the lowest acceptable resolution that could still be enough to
represent spatial variability. These grids were used to create precipitation fields using the inverse distance weighting and polynomial interpolation methods.
2.4.2. Inverse Distance Weighting
The Inverse Distance Weighting (IDW) interpolation method was developed by Shepard in 1968 [
]. This method has been used for decades, and despite the passage of time and the development of more sophisticated interpolation methods, it is still widely used for spatial interpolation of point
precipitation data [
]. The main advantage of this method is its simplicity in implementation and satisfactory interpolation results confirmed over the years in various works. The general concept behind this method is to
attribute the value over an unsampled location as a weighted average of the known values located around [
]. In the case of the IDW method, the functions of the inverse distance are used where the weights are defined by the inverse of the distance and normalized, so their sum equals one [
The amount of precipitation at a location not covered by the measurement ($\hat{P}_0$) is determined according to the formula [
$\hat{P}_0 = \frac{\sum_{i=1}^{n} P_i / D_{0i}^{p}}{\sum_{i=1}^{n} 1 / D_{0i}^{p}}$
where:
$P_i$—rainfall amount measured at the rain gauge $i$,
$D_{0i}$—distance between the location of the estimated part of the precipitation field and the rain gauge $i$,
$n$—number of rain gauges used to estimate the precipitation amount at the location 0,
$p$—power exponent responsible for assigning significance weights to individual rain gauges.
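A minimal sketch of the IDW estimator defined above, assuming Euclidean distances and NumPy; function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def idw(xy_gauges, values, xy_target, p=2.0):
    """Inverse distance weighting: estimate precipitation at xy_target as a
    weighted mean of gauge values with weights 1 / distance**p.
    A target coincident with a gauge returns that gauge's value exactly."""
    xy_gauges = np.asarray(xy_gauges, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.linalg.norm(xy_gauges - np.asarray(xy_target, dtype=float), axis=1)
    if np.any(d == 0):                      # target sits on a gauge
        return float(values[np.argmin(d)])
    w = 1.0 / d ** p
    return float(np.sum(w * values) / np.sum(w))
```

A larger power exponent p concentrates the weight on the nearest gauge, while a small p pulls the estimate toward the overall mean of the gauges, which is why the choice of p matters as much as the grid size.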
The outputs of IDW interpolation depend on two factors: density of the rain gauges network, which impacts the distance between the rain gauges and the estimated part of the precipitation field, and
also the assumed value of the power parameter ($p$). To minimize the errors related to the distance problem, the density of the rain gauge network should be as high as possible. However, most researchers acquire the data from already existing
measuring networks and have no real impact on the station’s density. The IDW method assumes that the mapped variable decreases in influence with increasing distance from the sampling locations. The
influence depends on the power parameter ($p$), which is always a positive real number, and is usually assumed to be equal to 2 [
] without further investigation. The general indication for mountainous areas is to assume the $p$-value as either 1, 2, or 3 [
]. After investigating different values of the $p$ parameter for the study area, three values were chosen for further analysis: 0.5, 2, and 5.
The above-mentioned values represent a wide enough range to verify their impact on the precipitation estimates. The conducted analysis showed that values greater than 5 seem to
over-generalize the interpolation results when using them over an area of a small catchment.
Some studies show that by including the elevation weighting in the IDW method, better results can be obtained in the regions where the topographic impact on the precipitation is significant [
]. On the other hand, there are studies that indicate that the appropriate choice of the power parameter value is more important than including the elevation as explanatory data [
]. Therefore, when applying the rainfall data into a semi-distributed model, it was decided not to include this parameter, as it would be more reliable to investigate it in a fully-distributed model
that takes into account spatially distributed information on the elevation.
2.4.3. Polynomial Interpolation
Spatial interpolation with the use of polynomial interpolation consists of matching the equation coefficients describing the spatial variability of precipitation so that the approximation error is as
low as possible [
]. The most commonly used are first- and second-degree polynomials expressed successively as [
$\hat{P}_0 = a_1 \cdot X + a_2 \cdot Y + a_3$
$\hat{P}_0 = a_1 \cdot X + a_2 \cdot Y + a_3 \cdot X^2 + a_4 \cdot Y^2 + a_5 \cdot X \cdot Y + a_6$
where:
$\hat{P}_0$—estimated precipitation at the location not covered by the measurement,
$X, Y$—geographical coordinates of the location with unknown precipitation height,
$a_1, \ldots, a_6$—regression function coefficients.
The least-square method is commonly used to determine the regression coefficients found in Equations (5) and (6). Polynomial interpolation allows accurate estimation of values at nodal points,
whereas between them it can lead to the generation of values that have no physical justification [
]. Examples of the use of precipitation fields created by the means of polynomial interpolation can be found in numerous works [
]. When the number of rain gauge locations is small, as in the case of the study area, the second-degree polynomial interpolation leads to large errors and unrealistic results [
]. Therefore, the first-degree interpolation was used in this work.
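The least-squares fit of the first-degree polynomial (Equation (5)) can be sketched as follows. This is an illustrative implementation assuming NumPy, not the authors' code; function names are hypothetical.

```python
import numpy as np

def fit_plane(xy_gauges, values):
    """Fit the first-degree polynomial P(X, Y) = a1*X + a2*Y + a3 to the
    gauge observations by ordinary least squares."""
    xy = np.asarray(xy_gauges, dtype=float)
    # Design matrix: one row [X, Y, 1] per gauge
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)
    return coeffs  # (a1, a2, a3)

def eval_plane(coeffs, x, y):
    """Evaluate the fitted trend surface at a grid point (x, y)."""
    a1, a2, a3 = coeffs
    return a1 * x + a2 * y + a3
```

As noted above, a fitted trend surface can extrapolate to values with no physical justification (including negative precipitation away from the gauges), so in practice estimates are usually clamped at zero before being passed to the hydrological model.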
3. Results and Discussion
3.1. Impact of the Grid Resolution on the IDW Method
Figure 3
presents the comparison of observed and simulated discharge for the calibration events from 2014–2016. All of the simulations were performed at hourly time steps. The results of the evaluation
criteria are shown in
Table 3
Figure 4
shows the comparison of observed and simulated discharge for the validation events from 2017–2019. All of the simulations were performed at hourly time steps. The results of the evaluation criteria
are presented in
Table 4
Considering both the calibration and validation results, it can be noticed that in most cases the least satisfactory results were obtained when using the 750 m and 1000 m interpolation grids. Results for the 250 m and 500 m grids are similar. Very good results were observed when using the 2500 m resolution grid, and in most cases they are much better than those for the lower-resolution 5000 m grid. Taking these results into consideration, only the grids of 250 m and 2500 m were chosen for further investigation regarding the impact of the IDP value on interpolation outputs.
Figure A1
, presented in
Appendix A
, provides exemplary visualizations of precipitation fields applied in the shown above hydrological modeling.
3.2. Impact of the IDP Value on the IDW Method
The next step, after analyzing the impact of grid size on the interpolation outputs, was an investigation of the impact of the value of IDP on the IDW interpolation results. As mentioned in
Section 2.4.2
, three values of the $p$ parameter were chosen for the analysis: 0.5, 2, and 5. All simulations were performed at hourly time-steps.
Figure 5
presents the comparison of observed and simulated discharge for the calibration events from 2014–2016. All of the simulations were performed at hourly time steps. The results of evaluation criteria
for the calibration events are shown in
Table 5
Figure 6
shows the comparison of observed and simulated discharge for the validation events from 2017–2019. All of the simulations were performed at hourly time steps. The results of the evaluation criteria
are presented in
Table 6
Analyzing the calibration results, we can observe that the best results were obtained for the highest value of IDP (equal to 5.0) and the worst for the smallest one (equal to 0.5). The results for the traditionally applied IDP value of 2.0 lie somewhere in between. As for the validation, the best results were obtained when using the precipitation field interpolated with IDP equal to 0.5, though it must be highlighted that the differences in comparison with the other two investigated IDP values are not significant. In both phases, values of IDP other than 2.0 generated better results.
Figure A2
, presented in
Appendix A
, provides exemplary visualizations of precipitation fields applied in the shown above hydrological modeling.
3.3. Impact of the Grid Resolution on the Polynomial Interpolation
As argued in
Section 2.4.3
, the first-degree polynomial interpolation was performed to produce precipitation fields. Similarly to the IDW method, the impact of grid resolution on the interpolation results was
investigated. All simulations were performed at hourly time-steps.
Figure 7
presents the comparison of observed and simulated discharge for the calibration events from 2014–2016. The results of evaluation criteria for the calibration events are shown in
Table 7
Figure 8
shows the comparison of observed and simulated discharge for the validation events from 2017–2019. All of the simulations were performed at hourly time steps. The results of evaluation criteria are presented in
Table 8
The results for the calibration phase are similar between the different interpolation grids. However, it can be noticed that higher grid resolutions (500 m and 750 m) slightly outperform the lower
ones (1000 m, 2500 m, and 5000 m). The same pattern can be found for the validation of the hydrological model. For this phase, in most of the cases, the best results were obtained when the grid
resolution ranged from 250 m to 750 m.
Figure A3
, presented in
Appendix A
, provides exemplary visualizations of precipitation fields applied in the shown above hydrological modeling.
4. Summary and Conclusions
This paper presents a comprehensive investigation of the impact of deterministic interpolation methods of precipitation on rainfall-runoff modeling in a small mountainous catchment characterized by a
quick time of response. When the number of rainfall stations is limited and too small to allow the use of any geostatistical method, the deterministic methods are the only option. However, spatial
interpolation of precipitation over a sparsely gauged mountainous catchment is particularly challenging. The performance of spatial interpolation of precipitation obtained using the inverse distance
weighting and first-degree polynomial interpolation method was evaluated via the semi-distributed rainfall-runoff model. Furthermore, the impact of the grid resolution during the interpolation
process was investigated for 6 grid sizes (ranging from 250 m to 5000 m). The impact of the IDP value in the IDW interpolation was also analyzed.
When using a semi-distributed hydrological model, the aspect of the grid resolution used for the preparation of precipitation data is often neglected. This study shows that for different
interpolation methods, the grid resolutions have a significant impact on the outputs of hydrological modeling.
The following conclusions can be drawn from the analysis outcomes:
• When analyzing
Figure 3
Figure 4
Figure 5
Figure 6
Figure 7
Figure 8
, it must be noticed that the curves for various grid sizes and different IDP values for the IDW method are highly correlated with the curve of the observed flow. Therefore, the choice of a different grid size (or IDP for the IDW method) does not change the picture much with respect to the observed discharge. However, when the data are examined more precisely with statistical analysis, some differences can be detected.
• The impact of the grid resolution is more visible for the IDW method than for the first-degree polynomial interpolation. For the IDW method, the maximum difference in the NSE criterion is 0.26 for both the calibration and validation phases. For the first-degree polynomial method, the maximum differences in the NSE are 0.12 and 0.16, respectively. As the IDW method is frequently used in hydrological applications, the appropriate choice of the interpolation grid is of particular importance.
• Among the analyzed grid resolutions, the best results for the IDW method were obtained for the grids of 250 m and 2500 m (average values of the NSE were 0.62 and 0.65 for the calibration and 0.74 and 0.76 for the validation, respectively). For the first-degree polynomial method, finer grids (smaller than or equal to 750 m) outperformed the coarser ones (greater than or equal to 1000 m). The mean value of the NSE for grids up to 750 m was 0.63 for the calibration phase and 0.67 for the validation; for the coarser grids, the results were 0.60 and 0.65, respectively.
• The applied value of the IDP in the IDW method has a significant impact on the outputs of hydrological modeling. In most cases, more accurate results were obtained using values of the IDP other than the traditionally applied value of 2.0. Therefore, the choice of an appropriate IDP value when using a semi-distributed hydrological model cannot be neglected and should be taken into account.
• The IDP value in the IDW interpolation method has more impact on the simulation results than the grid size, as can be clearly seen when comparing the results presented in Table 6.
• Within the analyzed deterministic interpolation methods, slightly better results were obtained for the first-degree interpolation method than for the IDW interpolation, considering the results of the evaluation criteria presented in Tables 3–8. Tobin et al. [39] reported that the IDW method tends to significantly underestimate rainfall volume, but this study shows that when using the right grid size and an appropriate IDP value, this method can also be effective. It should also be noted (Figure A3) that the first-degree polynomial method can lead to significant underestimation of precipitation over relatively large (horizontal) areas, especially when using low-resolution grids.
• For small mountainous catchments, the best data source on the precipitation field would be rain gauge data interpolated using the first-degree interpolation method with a grid size smaller than or equal to 750 m. This method, unlike the IDW, is more straightforward in application, and does not require subjective investigation of the method’s parameters (the IDP value in the IDW interpolation).
• The Kling-Gupta efficiency (KGE), which is considered one of the alternatives to the Nash-Sutcliffe efficiency (NSE), generally tends to provide higher and less varied values, which makes it less useful for the evaluation of the results.
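The role of the IDP parameter discussed above can be illustrated with a minimal sketch of inverse distance weighting. This is not the code used in the study; the gauge coordinates and precipitation values below are hypothetical.

```python
import numpy as np

def idw_interpolate(xy_gauges, values, xy_grid, idp=2.0):
    """Inverse distance weighting with an adjustable inverse distance power (IDP)."""
    # Distances between every grid point and every gauge
    d = np.linalg.norm(xy_grid[:, None, :] - xy_gauges[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)      # guard against division by zero at gauge locations
    w = 1.0 / d ** idp           # low IDP: near-uniform weights; high IDP: nearest gauge dominates
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Three hypothetical gauges (coordinates in metres) and their 1-h accumulations [mm]
gauges = np.array([[0.0, 0.0], [5000.0, 0.0], [0.0, 5000.0]])
rain = np.array([2.0, 8.0, 4.0])

# Two target grid cells
grid = np.array([[1000.0, 1000.0], [4000.0, 1000.0]])
for idp in (0.5, 2.0, 5.0):
    print(idp, idw_interpolate(gauges, rain, grid, idp=idp))
```

A low IDP spreads the gauge weights almost evenly over the domain, while a high IDP makes each grid cell follow its nearest gauge, which is why the choice of the exponent changes the interpolated precipitation field.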
For future works, it will be interesting to investigate whether incorporating other environmental variables or covariates into the precipitation modeling process will lead to better simulation
results when using a semi-distributed hydrological model. Apart from that, it would be worth considering other factors during the interpolation process, such as the density of meteorological stations
or drainage area. Better simulation results might be obtained when performing validation of the precipitation field before its application into the hydrological model.
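For reference, the evaluation criteria used throughout this study can be computed in a few lines. The sketch below follows the standard definitions of the NSE, the KGE, and the percent bias; note that sign conventions for PBIAS vary between authors, so the convention used here (positive for underestimation) is an assumption, not necessarily the one used in the tables above.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the simulation
    is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency: combines correlation, variability and bias terms."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def pbias(obs, sim):
    """Percent bias; positive means underestimation under this sign convention."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Hypothetical observed and simulated discharge series [m^3/s]
obs = np.array([5.0, 12.0, 30.0, 22.0, 9.0])
sim = np.array([6.0, 10.0, 26.0, 24.0, 8.0])
print(nse(obs, sim), kge(obs, sim), pbias(obs, sim))
```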
Author Contributions
Conceptualization, P.G.; methodology, P.G.; software, P.G.; validation, P.G.; formal analysis, P.G.; investigation, P.G.; resources, P.G.; data curation, P.G.; writing—original draft preparation, P.G.; writing—review and editing, P.G.; visualization, P.G.; supervision, P.G.; project administration, P.G.; funding acquisition, P.G. The author has read and agreed to the published version of the manuscript.
Funding
This research was funded via discount vouchers for reviews provided by the author to the Multidisciplinary Digital Publishing Institute.
Data Availability Statement
Data are available on request due to privacy restrictions on the data provided by the Institute of Meteorology and Water Management, National Research Institute.
Acknowledgments
In memory of my PhD supervisor, Professor Marek Nawalany, who passed away in May 2020, just a few weeks before I finished my PhD thesis.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
The appendix contains visualizations of the precipitation fields obtained using the investigated interpolation methods (inverse distance weighting, and first-degree polynomial). The data used for the
visualizations are the 1-h precipitation accumulation for 4 July 2014, 18:00. At that time, all of the rain gauges available close to the study area registered measurement of precipitation greater
than 0 mm.
Figure A1 refers to Section 3.1, Figure A2 refers to Section 3.2, and Figure A3 refers to Section 3.3.
Figure A1. Comparison of precipitation fields obtained using the Inverse Distance Weighting interpolation method (IDP = 2.0) for different grid sizes: (a) 250 m, (b) 500 m, (c) 750 m, (d) 1000 m, (e)
2500 m, (f) 5000 m.
Figure A2. Comparison of precipitation fields obtained using the Inverse Distance Weighting interpolation method and a grid size of 250 m: (a,c,e) and 2500 m: (b,d,f) for the IDP values of 0.5, 2.0,
and 5.0 consecutively.
Figure A3. Comparison of precipitation fields obtained using the first-degree polynomial as an interpolation method for different grid sizes: (a) 250 m, (b) 500 m, (c) 750 m, (d) 1000 m, (e) 2500 m,
(f) 5000 m.
1. Cheng, M.; Wang, Y.; Engel, B.; Zhang, W.; Peng, H.; Chen, X.; Xia, H.; Cheng, M.; Wang, Y.; Engel, B.; et al. Performance Assessment of Spatial Interpolation of Precipitation for Hydrological
Process Simulation in the Three Gorges Basin. Water 2017, 9, 838. [Google Scholar] [CrossRef] [Green Version]
2. Caracciolo, D.; Arnone, E.; Noto, L.V. Influence of Spatial Precipitation Sampling on Hydrological Response at the Catchment Scale. J. Hydrol. Eng. 2014, 19, 544–553. [Google Scholar] [CrossRef]
[Green Version]
3. Price, K.; Purucker, S.T.; Kraemer, S.R.; Babendreier, J.E.; Knightes, C.D. Comparison of radar and gauge precipitation data in watershed models across varying spatial and temporal scales.
Hydrol. Process. 2014, 28, 3505–3520. [Google Scholar] [CrossRef]
4. Chen, T.; Ren, L.; Yuan, F.; Yang, X.; Jiang, S.; Tang, T.; Liu, Y.; Zhao, C.; Zhang, L. Comparison of spatial interpolation schemes for rainfall data and application in hydrological modeling.
Water 2017, 9, 342. [Google Scholar] [CrossRef] [Green Version]
5. Bell, V.A.; Moore, R.J. The sensitivity of catchment runoff models to rainfall data at different spatial scales. Hydrol. Earth Syst. Sci. 2000, 4, 653–667. [Google Scholar] [CrossRef]
6. Li, J.; Heap, A.D. A review of comparative studies of spatial interpolation methods in environmental sciences: Performance and impact factors. Ecol. Inform. 2011, 6, 228–241. [Google Scholar]
7. Gabella, M.; Speirs, P.; Hamann, U.; Germann, U.; Berne, A. Measurement of precipitation in the Alps using dual-polarization C-Band ground-based radars, the GPM Spaceborne Ku-Band Radar, and rain gauges. Remote Sens. 2017, 9, 1147. [Google Scholar] [CrossRef] [Green Version]
8. Kitchen, M.; Blackall, R.M. Representativeness errors in comparisons between radar and gauge measurements of rainfall. J. Hydrol. 1992. [Google Scholar] [CrossRef]
9. Michaelides, S.; Levizzani, V.; Anagnostou, E.; Bauer, P.; Kasparis, T.; Lane, J.E. Precipitation: Measurement, remote sensing, climatology and modeling. Atmos. Res. 2009, 94, 512–533. [Google
Scholar] [CrossRef]
10. Herrera, S.; Kotlarski, S.; Soares, P.M.M.; Cardoso, R.M.; Jaczewski, A.; Gutiérrez, J.M.; Maraun, D. Uncertainty in gridded precipitation products: Influence of station density, interpolation
method and grid resolution. Int. J. Climatol. 2019, 39, 3717–3729. [Google Scholar] [CrossRef]
11. Kurtzman, D.; Navon, S.; Morin, E. Improving interpolation of daily precipitation for hydrologic modelling: Spatial patterns of preferred interpolators. Hydrol. Process. 2009, 23, 3281–3291. [
Google Scholar] [CrossRef]
12. Ly, S.; Charles, C.; Degré, A. Different methods for spatial interpolation of rainfall data for operational hydrology and hydrological modeling at watershed scale: A review. Biotechnol. Agron.
Soc. Environ. 2013, 17, 392–406. [Google Scholar]
13. Zareian, M.J.; Eslamian, S.; Safavi, H.R. A modified regionalization weighting approach for climate change impact assessment at watershed scale. Theor. Appl. Climatol. 2015, 122, 497–516. [Google
Scholar] [CrossRef]
14. Gilewski, P. Sensitivity of the Catchment Outflow in a Mountainous Region Modeled with the Rainfall-Runoff Hydrological Model to the Spatial-Temporal Distribution of Precipitation. Ph.D. Thesis, Warsaw University of Technology, Warsaw, Poland, 2020. (In Polish). [Google Scholar]
15. Wang, S.; Huang, G.H.; Lin, Q.G.; Li, Z.; Zhang, H.; Fan, Y.R. Comparison of interpolation methods for estimating spatial distribution of precipitation in Ontario, Canada. Int. J. Climatol. 2014,
34, 3745–3751. [Google Scholar] [CrossRef]
16. Foehn, A.; García Hernández, J.; Schaefli, B.; De Cesare, G. Spatial interpolation of precipitation from multiple rain gauge networks and weather radar data for operational applications in Alpine
catchments. J. Hydrol. 2018. [Google Scholar] [CrossRef]
17. Simanton, J.; Osborn, H. Reciprocal-Distance Estimate of Point Rainfall. J. Hydraul. Div. 1980, 106, 1242–1246. [Google Scholar]
18. Tung, Y. Point Rainfall Estimation for a Mountainous Region. J. Hydraul. Eng. 1983. [Google Scholar] [CrossRef]
19. Chen, F.-W.; Liu, C.-W. Estimation of the spatial rainfall distribution using inverse distance weighting (IDW) in the middle of Taiwan. Paddy Water Environ. 2012, 10, 209–222. [Google Scholar]
20. Wu, J.; Zheng, C.; Chien, C.C. Cost-effective sampling network design for contaminant plume monitoring under general hydrogeological conditions. J. Contam. Hydrol. 2005, 77, 41–65. [Google
Scholar] [CrossRef]
21. Malvić, T.; Ivšinović, J.; Velić, J.; Rajić, R. Interpolation of small datasets in the sandstone hydrocarbon reservoirs, case study of the sava depression, Croatia. Geosciences 2019, 9, 201. [
Google Scholar] [CrossRef] [Green Version]
22. Punzet, J. Hydrology of the Skawa river basin. Wiad. Sluz. Hydr. Meteo. 1994, 3–4, 29–40. [Google Scholar]
23. Franczak, P. Rola wielkich wezbrań powodziowych w kształtowaniu życia ludności w zlewni górnej Skawy od XV wieku [The role of major floods in shaping the life of the population in the catchment area of the upper Skawa River since the 15th century]. Wspolczesne Probl. Kierun. Badaw. Geogr. 2014, 2, 117–129. [Google Scholar]
24. Bronstert, A. Rainfall-runoff modelling for assessing impacts of climate and land-use change. Hydrol. Process. 2004, 18, 567–570. [Google Scholar] [CrossRef]
25. Witkowski, K. Evolution of the lower Skawa river bed in regard to hydrotechnical buildings. Acta Sci. Pol. Form. Circumiectus 2015, 14. [Google Scholar] [CrossRef]
26. Gilewski, P.; Nawalany, M. Inter-Comparison of Rain-Gauge, Radar, and Satellite (IMERG GPM) Precipitation Estimates Performance for Rainfall-Runoff Modeling in a Mountainous Catchment in Poland.
Water 2018, 10, 1665. [Google Scholar] [CrossRef] [Green Version]
27. Szturc, J.; Jurczyk, A.; Ośródka, K.; Wyszogrodzki, A.; Giszterowicz, M. Precipitation estimation and nowcasting at IMGW-PIB (SEiNO system). Meteorol. Hydrol. Water Manag. Res. Oper. Appl. 2017, 6, 1–10. [Google Scholar] [CrossRef] [Green Version]
28. Gilewski, P.; Węglarz, A. Impact of land-cover change related urbanization on surface runoff estimation. In Proceedings of the MATEC Web of Conferences, Lille, France, 8–10 October 2018; Volume
196, pp. 1–6. [Google Scholar]
29. Gent, P.R.; Danabasoglu, G.; Donner, L.J.; Holland, M.M.; Hunke, E.C.; Jayne, S.R.; Lawrence, D.M.; Neale, R.B.; Rasch, P.J.; Vertenstein, M.; et al. The Community Climate System Model Version 4.
J. Clim. 2011, 24, 4973–4991. [Google Scholar] [CrossRef]
30. Kirchner, J.W. Catchments as simple dynamical systems: Catchment characterization, rainfall-runoff modeling, and doing hydrology backward. Water Resour. Res. 2009, 45, 1–34. [Google Scholar] [
CrossRef] [Green Version]
31. Koster, R.D.; Mahanama, S.P.; Koster, R.D.; Mahanama, S.P.P. Land Surface Controls on Hydroclimatic Means and Variability. J. Hydrometeorol. 2012, 13, 1604–1620. [Google Scholar] [CrossRef]
32. Orth, R.; Staudinger, M.; Seneviratne, S.I.; Seibert, J.; Zappa, M. Does model performance improve with complexity? A case study with three hydrological models. J. Hydrol. 2015, 523, 147–159. [
Google Scholar] [CrossRef] [Green Version]
33. Pina, R.D.; Ochoa-Rodriguez, S.; Simões, N.E.; Mijic, A.; Marques, A.S.; Maksimović, Č. Semi- vs. Fully-distributed urban stormwater models: Model set up and comparison with two real case
studies. Water 2016, 8, 58. [Google Scholar] [CrossRef] [Green Version]
34. Zhang, H.L.; Wang, Y.J.; Wang, Y.Q.; Li, D.X.; Wang, X.K. Quantitative comparison of semi- and fully-distributed hydrologic models in simulating flood hydrographs on a mountain watershed in
southwest China. J. Hydrodyn. 2013, 25, 877–885. [Google Scholar] [CrossRef]
35. Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models part I—A discussion of principles. J. Hydrol. 1970, 10, 282–290. [Google Scholar] [CrossRef]
36. Gupta, H.; Kling, H.; Yilmaz, K.; Martinez, G. Decomposition of the mean squared error and NSE performance criteria: Implications for improving hydrological modelling. J. Hydrol. 2009, 377,
80–91. [Google Scholar] [CrossRef] [Green Version]
37. Knoben, W.; Freer, J.; Woods, R. Technical note: Inherent benchmark or not? Comparing Nash—Sutcliffe and Kling—Gupta efficiency scores. Hydrol. Earth Syst. Sci. 2019, 23, 4323–4331. [Google
Scholar] [CrossRef] [Green Version]
38. Fang, G.; Yuan, Y.; Gao, Y.; Huang, X.; Guo, Y. Assessing the Effects of Urbanization on Flood Events with Urban Agglomeration Polders Type of Flood Control Pattern Using the HEC-HMS Model in the
Qinhuai River Basin, China. Water 2018, 10, 1003. [Google Scholar] [CrossRef] [Green Version]
39. Tobin, C.; Nicotina, L.; Parlange, M.B.; Berne, A.; Rinaldo, A. Improved interpolation of meteorological forcings for hydrologic applications in a Swiss Alpine region. J. Hydrol. 2011, 401,
77–89. [Google Scholar] [CrossRef]
40. Shepard, D. Two-dimensional interpolation function for irregularly-spaced data. In Proceedings of the 1968 23rd ACM National Conference, New York, NY, USA, 27–29 August 1968; pp. 517–524. [Google Scholar]
41. Garcia, M.; Peters-Lidard, C.D.; Goodrich, D.C. Spatial interpolation of precipitation in a dense gauge network for monsoon storm events in the southwestern United States. Water Resour. Res. 2008
, 44, 1–14. [Google Scholar] [CrossRef] [Green Version]
42. Lu, G.Y.; Wong, D.W. An adaptive inverse-distance weighting spatial interpolation technique. Comput. Geosci. 2008. [Google Scholar] [CrossRef]
43. Ruelland, D.; Ardoin-Bardin, S.; Billen, G.; Servat, E. Sensitivity of a lumped and semi-distributed hydrological model to several methods of rainfall interpolation on a large basin in West
Africa. J. Hydrol. 2008, 361, 96–117. [Google Scholar] [CrossRef]
44. Shen, Z.; Chen, L.; Liao, Q.; Liu, R.; Hong, Q. Impact of spatial rainfall variability on hydrology and nonpoint source pollution modeling. J. Hydrol. 2012, 472, 205–215. [Google Scholar]
45. Masih, I.; Maskey, S.; Uhlenbrook, S.; Smakhtin, V. Assessing the Impact of Areal Precipitation Input on Streamflow Simulations Using the SWAT Model. J. Am. Water Resour. Assoc. 2011. [Google
Scholar] [CrossRef]
46. Szczepanek, R. Spatio-Temporal Structure of Precipitation in Mountainous Catchment. Ph.D. Thesis, Cracow University of Technology, Kraków, Poland, 2003. (In Polish). [Google Scholar]
47. Basistha, A.; Arya, D.S.; Goel, N.K. Spatial Distribution of Rainfall in Indian Himalayas—A Case Study of Uttarakhand Region. Water Resour. Manag. 2008, 22, 1325–1346. [Google Scholar] [CrossRef]
48. Gentilucci, M.; Bisci, C.; Burt, P.; Fazzini, M.; Vaccaro, C. Interpolation of Rainfall Through Polynomial Regression in the Marche Region (Central Italy). In Proceedings of the Annual International Conference on Geographic Information Science, Lund, Sweden, 12–15 June 2018; Springer: Cham, Switzerland, 2018; pp. 55–73. [Google Scholar]
Figure 1. Characteristics of the Upper Skawa River catchment in reference to digital elevation model (DEM).
Figure 2. Comparison of the interpolation grids investigated in the study: (a) 250 m, (b) 500 m, (c) 750 m, (d) 1000 m, (e) 2500 m, (f) 5000 m.
Figure 3. Comparison of observed and simulated hydrographs using different grid sizes and inverse distance weighting (IDW) (inverse distance power (IDP) = 2.0) as an interpolation method for the
calibration events (a) Event 1: May 2014, (b) Event 2: May 2015, (c) Event 3: July 2016, (d) Event 4: October 2016.
Figure 4. Comparison of observed and simulated hydrographs using different grid sizes and IDW (IDP = 2.0) as an interpolation method for the validation events (a) Event 1: April 2017, (b) Event 2:
July 2018, (c) Event 3a: May 2019, (d) Event 3b: May 2019.
Figure 5. Comparison of observed and simulated hydrographs using two grid sizes (250 m and 2500 m) and the IDW interpolation method with three IDP values (0.5, 2.0, 5.0) for the calibration events (a) Event 1: May 2014, (b) Event 2: May 2015, (c) Event 3: July 2016, (d) Event 4: October 2016.
Figure 6. Comparison of observed and simulated hydrographs using two grid sizes (250 m and 2500 m) and the IDW interpolation method with three IDP values (0.5, 2.0, 5.0) for the validation events (a) Event 1: April 2017, (b) Event 2: July 2018, (c) Event 3a: May 2019, (d) Event 3b: May 2019.
Figure 7. Comparison of observed and simulated hydrographs using different grid sizes and first-degree polynomial as an interpolation method for the calibration events (a) Event 1: May 2014, (b)
Event 2: May 2015, (c) Event 3: July 2016, (d) Event 4: October 2016.
Figure 8. Comparison of observed and simulated hydrographs using different grid sizes and first-degree polynomial as an interpolation method for the validation events (a) Event 1: April 2017, (b)
Event 2: July 2018, (c) Event 3a: May 2019, (d) Event 3b: May 2019.
Table 1. Rain gauge stations used in the study.
Rain Gauge Station Station Code Acronym Longitude Latitude Altitude [m a.s.l.]
Maków Podhalański 249190190 RG-1 19°40′36.59″ 49°43′51.29″ 367
Markowe Szczawiny 249190390 RG-2 19°30′58.55″ 49°35′17.05″ 1184
Spytkowice Górne 249190460 RG-3 19°50′0.57″ 49°34′38.78″ 525
Zawoja 249190350 RG-4 19°34′1″ 49°40′1″ 604
Table 2. Methods applied in the Hydrologic Engineering Center-Hydrologic Modelling System (HEC-HMS) hydrological model.
Catchment Model
Rainfall losses: SCS Curve Number
Transformation of effective precipitation: Snyder Unit Hydrograph
Baseflow: Recession baseflow
Routing: Muskingum-Cunge
Meteorological Model
Precipitation: Specified hyetographs for each of the data sources of precipitation
Table 3. The results of the evaluation criteria for the calibration events using different grid sizes and IDW (IDP = 2.0) as an interpolation method.
Nash-Sutcliffe Efficiency (NSE) Kling-Gupta Efficiency (KGE)
Event NSE: 250 500 750 1000 2500 5000 [m] KGE: 250 500 750 1000 2500 5000 [m]
Event 1 0.61 0.56 0.58 0.56 0.60 0.56 0.79 0.64 0.73 0.74 0.80 0.76
Event 2 0.65 0.65 0.64 0.65 0.60 0.59 0.69 0.64 0.62 0.69 0.60 0.67
Event 3 0.52 0.50 0.47 0.47 0.57 0.59 0.55 0.54 0.53 0.53 0.56 0.61
Event 4 0.75 0.80 0.73 0.72 0.77 0.8 0.85 0.88 0.86 0.85 0.88 0.89
Percent bias (PBIAS)
Grid resolution [m]: 250 500 750 1000 2500 5000
Event 1 2.6 −14.8 −5 −3.5 2.6 1.3
Event 2 3.3 12.4 5.8 1.5 14.6 12.2
Event 3 33.4 26 23.2 31.9 27.5 28.3
Event 4 −3.9 −3.3 −3.2 −5.9 −2.7 −4.2
Table 4. The results of the evaluation criteria for the validation events using different grid sizes and IDW (IDP = 2.0) as an interpolation method.
NSE KGE
Event NSE: 250 500 750 1000 2500 5000 [m] KGE: 250 500 750 1000 2500 5000 [m]
Event 1 0.56 0.6 0.65 0.72 0.67 0.46 0.67 0.67 0.67 0.73 0.7 0.62
Event 2 0.64 0.65 0.66 0.64 0.64 0.65 0.8 0.81 0.81 0.8 0.8 0.81
Event 3a 0.84 0.72 0.69 0.78 0.79 0.83 0.87 0.77 0.77 0.82 0.79 0.82
Event 3b 0.93 0.93 0.93 0.93 0.93 0.92 0.95 0.95 0.95 0.95 0.95 0.95
Grid Resolution [m]: 250 500 750 1000 2500 5000
Event 1 −30.4 −32.3 −32.6 −25.6 −29.7 −35.2
Event 2 6.5 5.9 6.3 6 10.6 8.8
Event 3a 4.2 0.5 −0.2 −2 0.2 0.4
Event 3b 1 −0.6 0.1 0.1 1.6 −1.7
Table 5. The results of the evaluation criteria for the calibration events using two grid sizes (250 m and 2500 m) and the IDW method with three IDP values (0.5, 2.0, 5.0).
NSE KGE
Event NSE: 250 m | 2500 m KGE: 250 m | 2500 m
IDP 0.5 2.0 5.0 0.5 2.0 5.0 0.5 2.0 5.0 0.5 2.0 5.0
Event 1 0.52 0.61 0.65 0.54 0.60 0.63 0.56 0.79 0.81 0.60 0.80 0.80
Event 2 0.64 0.65 0.63 0.55 0.60 0.60 0.66 0.69 0.70 0.61 0.60 0.65
Event 3 0.38 0.52 0.53 0.38 0.57 0.54 0.30 0.55 0.58 0.37 0.56 0.57
Event 4 0.69 0.75 0.79 0.72 0.77 0.79 0.84 0.85 0.89 0.82 0.88 0.89
Event Grid Resolution [m]
IDP 0.5 0.5 0.5 0.5 0.5 0.5
Event 1 −23.0 −23.0 −23.0 −23.0 −23.0 −23.0
Event 2 1.2 1.2 1.2 1.2 1.2 1.2
Event 3 2.0 2.0 2.0 2.0 2.0 2.0
Event 4 −6.9 −6.9 −6.9 −6.9 −6.9 −6.9
Table 6. The results of the evaluation criteria for the validation events using two grid sizes (250 m and 2500 m) and the IDW method with three IDP values (0.5, 2.0, 5.0).
NSE KGE
Event NSE: 250 m | 2500 m KGE: 250 m | 2500 m
IDP 0.5 2.0 5.0 0.5 2.0 5.0 0.5 2.0 5.0 0.5 2.0 5.0
Event 1 0.75 0.56 0.62 0.60 0.67 0.63 0.73 0.67 0.65 0.66 0.70 0.69
Event 2 0.68 0.64 0.62 0.68 0.64 0.61 0.83 0.80 0.79 0.83 0.80 0.77
Event 3a 0.81 0.84 0.86 0.83 0.79 0.85 0.77 0.87 0.84 0.81 0.79 0.84
Event 3b 0.92 0.93 0.90 0.92 0.93 0.93 0.96 0.95 0.92 0.96 0.95 0.96
Event Grid Resolution [m]
IDP 0.5 0.5 0.5 0.5 0.5 0.5
Event 1 −26.2 −26.2 −26.2 −26.2 −26.2 −26.2
Event 2 3.7 3.7 3.7 3.7 3.7 3.7
Event 3a −2.1 −2.1 −2.1 −2.1 −2.1 −2.1
Event 3b −1.1 −1.1 −1.1 −1.1 −1.1 −1.1
Table 7. The results of the evaluation criteria for the calibration events using different grid sizes and the first-degree polynomial as an interpolation method.
NSE KGE
Event NSE: 250 500 750 1000 2500 5000 [m] KGE: 250 500 750 1000 2500 5000 [m]
Event 1 0.47 0.52 0.48 0.50 0.48 0.48 0.72 0.73 0.72 0.74 0.72 0.72
Event 2 0.74 0.75 0.75 0.71 0.63 0.63 0.70 0.81 0.75 0.75 0.73 0.73
Event 3 0.48 0.56 0.53 0.52 0.50 0.50 0.56 0.56 0.58 0.55 0.57 0.57
Event 4 0.78 0.78 0.73 0.78 0.75 0.75 0.88 0.89 0.85 0.87 0.87 0.87
Event PBIAS
Grid Resolution [m]: 250 500 750 1000 2500 5000
Event 1 11.06 4.0 12.3 9.2 12.0 12.0
Event 2 −0.2 2.9 9.9 7.8 3.3 3.3
Event 3 21.7 20.3 29.0 23.1 29.2 29.2
Event 4 −6.3 −0.3 −8.7 −0.8 −3.4 −3.4
Table 8. The results of the evaluation criteria for the validation events using different grid sizes and the first-degree polynomial as an interpolation method.
NSE KGE
Event NSE: 250 500 750 1000 2500 5000 [m] KGE: 250 500 750 1000 2500 5000 [m]
Event 1 0.61 0.54 0.68 0.52 0.57 0.57 0.67 0.66 0.70 0.64 0.65 0.65
Event 2 0.72 0.79 0.79 0.74 0.74 0.74 0.78 0.85 0.85 0.83 0.82 0.82
Event 3a 0.38 0.41 0.37 0.38 0.40 0.40 0.65 0.66 0.64 0.66 0.65 0.65
Event 3b 0.92 0.89 0.89 0.91 0.90 0.90 0.91 0.91 0.91 0.91 0.88 0.88
Event PBIAS
Grid Resolution [m]
Event 1 −32.7 −32.9 −30.0 −34.4 −34.0 −34.0
Event 2 −18.3 −11.0 −11.6 −12.5 −14.4 −14.4
Event 3a 19.1 19.6 20.0 17.6 19.4 19.4
Event 3b 1.7 0.5 25.3 0.9 −2.8 −2.8
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://
Gilewski, P. Impact of the Grid Resolution and Deterministic Interpolation of Precipitation on Rainfall-Runoff Modeling in a Sparsely Gauged Mountainous Catchment. Water 2021, 13, 230.
IAGA-AIGA blog
Hello Episode-I’ers, glad that you are back! There’s a trick I learned for thinking which is to go as slow as possible. I’m like a sub-compact car going over a very high hill. I may not be as
powerful as some of the others, but if I exercise my patience and switch to a very low gear, I can get over as high a hill as anyone. Maybe higher, if my gear is really very low. The real challenge
is in finding that very low gear. And in knowing which hill is worth climbing. It is my hope that I have got you started on a good hill, and if it seems a little steep at any point, just slow it down
a little more, and look at the view. Because when you see that it’s beautiful, you really can go extremely slow.
And when you have got to that place, it's time to appreciate that we now have three criteria for electrostatic theory that we should be able to test, if we can just figure out what is meant by these waves in an exact mathematical way. These criteria are given in equation (2) from Episode I, and for convenience let's write them down here as well,

λz ≫ l,   ldz ≫ l,   iY0kz ≈ σP,   (1)

where σP is the usual (zero-frequency) Pedersen conductivity, l is the thickness of the gedanken ionosphere, kz = 2π/λz − i/ldz, λz is the parallel wavelength, ldz is the dissipation scale length, Y0 is the characteristic wave admittance, and we have named the composite quantity iY0kz the wave-Pedersen conductivity, for the (single) propagating electromagnetic wave-mode. So how can we go forth and put some rigor to these quantities?
To begin with, this is physics, and so we will want some equations of motion. For these let’s take the Maxwell equations along with some order of the fluid equations for electrons and one species of
ion. Taking the Fourier transform in space, these electromagnetic fluid equations may be expressed in matrix form as,

∂X⃗/∂t = −iH5X⃗ + F⃗(t),   (2)
where the matrix H5 has been given in previous work, both for the 5-moment fluid equations (Figure A.1 of Cosgrove (2022)), and for the reduced set where the continuity and energy equations are
omitted (Equation 7 of Cosgrove (2016)). F⃗(t) is a time-dependent vector containing the nonlinear terms. And most importantly, X⃗ is a vector containing the physical quantities we want to solve for,
such as electric field, magnetic field, electron velocity, ion velocity, electron density, and etc., depending on how many moments are retained in deriving the fluid equations from the kinetic
As they are for electrostatic theory, the nonlinear terms will be dropped, in which case the solution may be written exactly as,

X⃗(k⃗, t) = Σj cj(k⃗) h⃗j(k⃗) exp(−iωj(k⃗)t),   (3)

where the h⃗j(k⃗) and ωj(k⃗) are the eigenvectors and eigenvalues of H5, respectively, k⃗ is the Fourier transform variable (the wavevector), the sum runs over the modes, and the cj(k⃗) are 16 arbitrary functions of k⃗. The solution is easily verified by direct substitution, and this is something you can do. And if you wanted to set aside a couple of weeks you could also derive the matrix, except, I shouldn't assume your engine is as weak as mine.
Figure 1: Illustration of the calculation of dissipation scale length.
The number 16 arises for the case of the 5-moment fluid equations, which together with the Maxwell equations comprise 16 scalar equations. If we were to omit, for example, the energy equations there
would be only 14 modes, and this says something about the physical meaning of these modes. It is not quite right to equate these with the usual waves that are defined through physical approximations.
Our waves are defined by their role in the exact solution (3) to the (linearized) equations of motion (2). Nevertheless, there is clearly a close relationship between them.
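The structure of this mode-sum solution can be checked numerically on a toy system. The sketch below does not use Cosgrove's H5 (which is 16 × 16 and built from the fluid and Maxwell equations); it uses a small random matrix and assumes the linearized, source-free equations of motion take the form dX⃗/dt = −iHX⃗, so that modes evolve as exp(−iωjt) with Hh⃗j = ωjh⃗j.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 6                                    # toy size; the real H5 is 16 x 16
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Eigen-decomposition: H h_j = omega_j h_j
omega, h = np.linalg.eig(H)

# Expand an arbitrary initial state in the eigenbasis: X0 = sum_j c_j h_j
X0 = rng.normal(size=n) + 1j * rng.normal(size=n)
c = np.linalg.solve(h, X0)

# Mode-sum solution X(t) = sum_j c_j h_j exp(-i omega_j t), compared with the
# direct propagator exp(-i H t) X0 of dX/dt = -i H X
t = 0.7
X_modes = h @ (c * np.exp(-1j * omega * t))
X_direct = expm(-1j * H * t) @ X0
assert np.allclose(X_modes, X_direct)
```

Modes with a nonzero imaginary part of ωj grow or decay exponentially, which is the origin of the decay time scales 1/imag(ωj) mentioned above.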
The solution (3) is the source-free initial value solution, which will decay to zero over time according to the time scales 1/imag(ωj). But as described in Episode I, what we actually want is the
driven steady-state solution for a source that turns on, and then continues operating for a good while. Specifically we need the parallel wavelength (λjz), dissipation scale length (ldjz), and
polarization vector (P⃗j) that describe the driven steady-state solution for each mode. We can obtain these (approximately) from the initial-value solution (3) by recognizing that the latter will
describe the plasma evolution during any period when the source is turned off.
If the source that has been operating for a while were to suddenly turn off, the plasma would initially continue evolving in the same way, since it takes some time for the turn-off effect to
propagate away from the source. So for example the plasma would continue oscillating at the source frequency, ω0, and the solution (3) must predict this. Since the frequency for each mode is ωjr (k⃗)
= real(ωj (k⃗)), this means that ωjr (k⃗⊥, 2π/λjz) = ω0, where k⃗⊥ is the transverse wave vector determined by the source (Episode-I). So this allows us to solve for λjz, numerically at least.
Another quantity that must remain the same immediately after the source turn-off is the polarization vector, and so since the polarization vector for each mode in equation (3) is the eigenvector h⃗j (k⃗), we can use our result for λjz to find the polarization vector as P⃗j = h⃗j (k⃗⊥, 2π/λjz).
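Solving ωjr (k⃗⊥, kz) = ω0 for kz must in general be done numerically, since the ωj come from an eigenvalue computation. As a sketch, the snippet below applies a standard root-finder to a toy dispersion relation (inertial-Alfvén-like in form; the speeds, scales, and the relation itself are illustrative assumptions, not the eigenvalues of H5).

```python
import numpy as np
from scipy.optimize import brentq

# Toy real dispersion relation omega_r(k_perp, k_z), standing in for the
# numerically obtained eigenvalues (purely illustrative parameters)
v_A = 1.0e6        # assumed Alfven speed [m/s]
lam_e = 3.0e3      # assumed electron inertial length [m]

def omega_r(k_perp, k_z):
    return v_A * k_z / np.sqrt(1.0 + (k_perp * lam_e) ** 2)

# Source parameters: transverse wavelength 100 km, source frequency 0.1 Hz
k_perp = 2 * np.pi / 100e3
omega0 = 2 * np.pi * 0.1

# Solve omega_r(k_perp, k_z) = omega0 for k_z, then lambda_z = 2 pi / k_z
k_z = brentq(lambda kz: omega_r(k_perp, kz) - omega0, 1e-12, 1.0)
lam_z = 2 * np.pi / k_z
print(f"parallel wavelength lambda_z = {lam_z / 1e3:.0f} km")
```

Once kz is in hand, evaluating the eigenvector at (k⃗⊥, 2π/λjz) gives the polarization vector, exactly as described above.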
Finally, to get the dissipation scale length, consider that if the transmitter were to turn on and then off again after transmitting for several cycles, then it would transmit a wave-packet such as
is shown in the top panel of Figure 1 (one for each propagating mode). Using the usual understanding of wave-packet propagation, the wave-packets propagate with their group velocity, vgjz = −∂ωjr/∂kz
, while decaying with time scale τj = 1/imag(ωj), which is also illustrated in the panel. Thus if the transmitter were to do this repeatedly, that is, if it did not turn off, there would eventually
result a signal that diminishes away from the antenna with the “dissipation scale length” ldjz = vgjzτj, for each propagating mode. The bottom two panels of Figure 1 illustrate two instants in the
temporal-ascent of the signal to steady-state.
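The wave-packet argument above amounts to two numerical steps, sketched below for a toy complex dispersion relation (again an illustrative assumption): differentiate the real part of ω with respect to kz to get the group velocity, take τ = 1/imag(ω) for the decay time, and form ldz = vgz·τ.

```python
import numpy as np

# Toy complex dispersion relation omega(k_z) for a weakly damped propagating
# mode (illustrative; the real omega_j come from the eigenvalues of H5)
v_A = 1.0e6     # assumed phase speed [m/s]
nu = 0.05       # assumed damping rate [1/s]

def omega(k_z):
    return v_A * k_z + 1j * nu      # real part linear in k_z, constant damping

k_z = 6.3e-7    # an example parallel wavenumber [1/m]
dk = 1e-9

# Group velocity from a centred finite difference of the real part
v_g = (omega(k_z + dk).real - omega(k_z - dk).real) / (2 * dk)

# Decay time tau = 1/imag(omega), and dissipation scale length l_dz = v_g * tau
tau = 1.0 / abs(omega(k_z).imag)
l_dz = v_g * tau
print(f"v_g = {v_g:.3g} m/s, tau = {tau:.1f} s, l_dz = {l_dz / 1e3:.0f} km")
```

The sign convention for vgjz in the text differs (vgjz = −∂ωjr/∂kz); the sketch simply takes the magnitude of the slope, which is what sets the scale length.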
Figure 2: Illustration of magnetosphere-ionosphere coupling with electromagnetic ionosphere.
Since a propagating wave can proceed in either of two opposing directions for any k⃗, equation (3) shows that the solution consists of a sum of up to 8 such “wave-modes,” which are paired modes with ωkr = −ωjr. The usual transmission line theory applies to the case where there is only one such wave-mode, and physical transmission lines are designed to ensure there is only one. But for our
ionospheric application we do not have this luxury, and so we will want to examine the properties of the modes to find out which must be retained, and which can be discarded. Once the matrix H5 has
been found, the eigenvalues and eigenvectors can be obtained using standard numerical tools, so that the properties of the modes can be determined.
We may discard modes that are not capable of propagating at the source frequency, that is, which cannot satisfy ωjr (k⃗⊥, kz) = ω0 for any kz, based on typical ionospheric ranges for frequency and
transverse scale (ω0 and k⃗⊥). In analyzing the eigenvectors and eigenvalues, we find two “evanescent” modes with ωjr (k⃗⊥, kz) = 0, and we find the X-, O-, and Z-modes known from radio-frequency
applications, which are way too high in frequency. Discarding these we are left with 4 wave-modes that can potentially contribute to ionospheric science, which we call the Whistler, Alfvén, Ion, and
Thermal waves.
Of these, we may discard any that are not capable of transmitting energy over any significant distance, based on the credo that we should give electrostatic theory its best possible chance, and
coupling into such modes would obviously prevent the electric field from mapping through the ionosphere. This failure-mode is embodied in the electrostatic criteria ldz ≫ l from equation (1). In
analyzing the four we find that the Ion and Thermal waves have dissipation scale lengths (ldjz) less than 10 km, and generally much less. (Here it is important to note that we are only considering
scales above a couple of hundred meters, and no conclusion is intended for smaller scales!) Therefore, there are only two wave-modes that need to be included in our transmission line for the
ionosphere, the Whistler and Alfvén waves.
Figure 3: Wave-Pedersen conductivity for the Alfvén and Whistler waves, compared to the usual (zero- frequency) Pedersen conductivity (λ⊥ = 100 km, ne = 10^11 m^-3).
Of these two, the Whistler wave only propagates in the E region. So a signal arriving from the magnetosphere must arrive in the form of an Alfvén wave, and the magnetosphere-ionosphere coupling
problem for our gedankenexperiment (Episode I) takes on the form shown in Figure 2. An ideal (non-collisional) Alfvén wave is incident from the magnetosphere, and in order to satisfy all the usual
boundary conditions, the transmitted signal will generally be composed of both the (collisional) Alfvén and Whistler waves.
Figure 4: Panels a and b: Calculation of phase rotation and dissipation through the vertically-inhomogeneous ionosphere. Panel c: Total phase rotation versus transverse wavelength, for two different
electron density profiles, and the same collision-frequency profiles as Figure 3.
Now we are in a position to test the electrostatic criteria (1). Figure 3 shows the wave-Pedersen conductivity (iY0kz) for both modes along with the usual (zero-frequency) Pedersen conductivity (σP),
plotted versus altitude for a typical ionospheric-profile of collision-frequencies (profiles given in Cosgrove (2016), and for this example we take λ⊥ = 100 km and ne = 10^11 m^-3). Neither mode
agrees very well at all, and so the third electrostatic criterion is not satisfied for either wave-mode.
To examine the first electrostatic criterion from equation (1) we calculate the minimum possible phase-rotation for a signal traversing the ionosphere. That is, using the numerical results obtained from setting ωjr (k⃗⊥, kz) = ω0, we integrate the inverse of the vertical wavelength, λz = 2π/kz, over altitude, from 400 km down to 100 km, while choosing the longest wavelength mode at each altitude. This “best case” calculation is illustrated in Panel a of Figure 4, where the vertical
dashed-black line indicates the altitude for switching from the Alfvén to the Whistler wave.
This phase rotation is analogous to the argument for the tangent function that appears in the electrostatic criteria (1), except that the ionosphere is now vertically inhomogeneous. The results are
summarized in panel c of Figure 4, by plotting the best-case phase rotation versus transverse wavelength for two different electron density (ne) profiles. It is found that the phase rotation can
exceed 90°, even for transverse wavelengths as long as 100 km. Since tan 90° = ∞, this represents a complete failure for the first of the electrostatic criteria.
With respect to the second electrostatic criterion (ldz ≫ l), a similar analysis (Panel b of Figure 4) finds that this criterion is nearly satisfied, that is, it is satisfied by the Whistler wave, and
with the exception of the lower E region at longer wavelengths, it is also satisfied by the Alfvén wave. This result gives rise to the idea that the Whistler wave dominates in the lower E region, and
the Alfvén wave in the region above, such that there may be an important transitional effect.
Thus we have answered the first three questions posed in the last paragraph of Episode I, and in all three cases the answers speak against electrostatic theory in a very strong way. However, we might
continue to wonder whether a more general form of electrostatic theory, such as electrostatic wave theory, might be sufficient to salvage the situation. This question turns out to be tied up with the
question of the actual, vertical-inhomogeneity of the ionosphere, and of the interaction between the two modes that it causes. We will address both these questions in Episode III, and in the process
we will recover the most rigorous model ever for the ionospheric conductance, which must needs be an electromagnetic model.
by Russell Bonner Cosgrove
Cosgrove, R. B. (2016), Does a localized plasma disturbance in the ionosphere evolve to electrostatic equilibrium? Evidence to the contrary, J. Geophys. Res., 121, doi: https://doi.org/10.1002/
Cosgrove, R. B. (2022), An electromagnetic calculation of ionospheric conductance that seems to override the field line integrated conductivity, Zenodo and ArXiv, doi: 10.48550/ARXIV.2211.10818.
Division VI of the International Association of Geomagnetism and Aeronomy (IAGA) has organised a workshop on electromagnetic induction in the Earth and other planetary bodies every two years since 1972. This year, the 26th ElectroMagnetic Induction Workshop (EMIW24) will be held in Beppu, on Japan's southern island of Kyushu, from 7th to 13th September 2024. Registration and abstract submission are currently open, and financial assistance is also available for students and early career researchers.
The registration fee includes an ice breaker event on the first day and a farewell dinner on the last, as well as an excursion. The deadline for both financial support and early bird registration is 15th April 2024. The program comprises multiple sessions, such as instrumentation, modelling, and inversion results for global and planetary EM fields. In addition, there are two special programs: Instrument Demonstration and Women Networking. The week-long workshop is packed with exciting events and science.
Head over to the website for more information about the workshop and details on how to attend!
If you are a scientist interested in outreach work, there are two open funding calls you can apply to!
1) IUGG Grants Program 2024-2027
Proposals will be funded for 1-2 years on topics pertaining to the geosciences. You need to contact the Secretary General of the association you belong to under IUGG and have the support of another association. In addition, there must be a supporting applicant in the proposal. You can apply for funding of up to USD 20,000. For more information, visit the webpage here.
2) IAGA Outreach Projects
You can now also submit proposals for IAGA science outreach to the Executive body. A project can be awarded a maximum of EUR 5,000 and can last up to 2 years. A wide variety of projects can be applied for; however, only one project will be selected each year, in June. To learn more about the funding opportunity and project proposal, click here.
You can visit our older blogs under the blog series section 'Outreach Projects' to learn about previous outreach projects that were awarded under the two grants.
A severity of a particular type is triggered whenever the value returned from the check is higher than the configured threshold, the baseline.
The notification threshold / baseline can be defined in several ways.
| Setting | Description |
|---|---|
| Method | Constant / Average / Median / Deviation |
| Percentage | Percentage threshold based on the calculated value. |
| Deviation | Number of standard deviations. |
| Period | Time period for historical results to include in the calculation. |
| Offset | Constant to add to the calculated value. |
The static method uses a constant threshold value. Dynamic methods calculate the threshold value on the fly, based on a combination of the type (percentage or deviation), the method (average, median, deviation), the period (the time frame of historical data to use), and an offset, which is added to the calculated value.
| Method | Calculation | Type | Example |
|---|---|---|---|
| Constant | A fixed value. | Static | 25 |
| Average | Average. | Dynamic | 120% of the average returned value for the last 2 hours + offset 50. |
| Median | Median. | Dynamic | 120% of the median value for the last 2 hours + offset 50. |
| Deviation | Standard deviation. | Dynamic | 2 standard deviations + average value for the last 2 hours + offset 50. |
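The calculation scheme in the table can be sketched as follows. This is a hypothetical illustration, not part of the product's code; the function name `threshold` and its parameters are invented for the example.

```python
import statistics

def threshold(history, method="average", value=0, percentage=120,
              deviations=2, offset=0):
    """Hypothetical helper mirroring the settings described above.

    `history` holds the check results returned during the configured period.
    """
    if method == "constant":
        return value  # static: a fixed threshold
    if method == "average":
        return statistics.mean(history) * percentage / 100 + offset
    if method == "median":
        return statistics.median(history) * percentage / 100 + offset
    if method == "deviation":
        return statistics.mean(history) + deviations * statistics.stdev(history) + offset
    raise ValueError(f"unknown method: {method}")

# Average method, as in the table: 120% of the average for the period + offset 50
print(threshold([100, 110, 90, 105, 95], "average", percentage=120, offset=50))  # 170.0
```

With this reading, a severity is raised whenever the latest check result exceeds the value the chosen method produces.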
A golf ball is hit from a tee of negligible height at a point A with a speed of 24.2 m/s at an angle of elevation of 30° above the horizontal. The golf ball travels from A and lands in a hole at point C on a grassy slope which is inclined at 45° to the horizontal. The point A is located 5.5 m from point B, which is at the bottom of the slope.
i. The vertical displacement of the golf ball, y, is related to the horizontal displacement, x, by the equation y = ax + bx², where a and b are constants to be found; x and y are measured in metres. Determine the values of a and b. The grassy slope may be modelled as a straight line with equation y = cx + d, where c and d are constants to be found. Determine the equation of the slope.
ii. Find the horizontal displacement of the golf ball from the instant it is launched at A to the point at which it lands on the slope at C. [10 Marks]
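The constants and the landing point can be sketched numerically. This is a hedged worked sketch, not a model answer: it assumes standard projectile motion with no air resistance, takes g = 9.8 m/s² (the question does not state a value), and takes the slope to rise at 45° from B = (5.5, 0), giving y = x − 5.5 (c = 1, d = −5.5).

```python
import math

# Assumptions: no air resistance, g = 9.8 m/s^2, slope rises at 45 degrees
# from B = (5.5, 0), i.e. y = x - 5.5.
g, v, theta = 9.8, 24.2, math.radians(30)

a = math.tan(theta)                         # trajectory: y = a*x + b*x^2
b = -g / (2 * v**2 * math.cos(theta)**2)

c, d = 1.0, -5.5                            # slope: y = c*x + d

# Landing point C: intersection of trajectory and slope for x > 0,
# i.e. the positive root of b*x^2 + (a - c)*x - d = 0.
disc = (a - c)**2 + 4 * b * d
x_land = ((c - a) - math.sqrt(disc)) / (2 * b)
print(round(a, 4), round(b, 5), round(x_land, 2))
```

Under these assumptions a = tan 30° ≈ 0.577 and b is a small negative constant, and the horizontal displacement to C comes out a little over 10 m; with g = 9.81 m/s² the numbers shift slightly.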
P/E ratio
If you have some know-how about the stock market, you must have heard the term “price-to-earnings (P/E) ratio.” It is one of the most popular metrics for estimating the valuation of a company's shares and its future prospects.
In this article, we cover what exactly the P/E ratio is, its significance, and how it can be used to evaluate stocks.
What is Price-to-Earnings (P/E) ratio?
The price-to-earnings ratio defines the relationship between a company's share price and its earnings per share (EPS). This valuation measure reflects investors' sentiment toward a company relative to its actual standing. To simplify further: the “P” of this ratio indicates how the general public feels about the firm, in the form of the stock's price, and the “E” denotes how well the company is actually performing, in the form of EPS.
It can be simply calculated by dividing the share price by the company’s net profit per share (EPS). For instance, suppose a company has earnings of $50 million and 5 million shares outstanding. This
translates to an EPS of 10. If the current stock price is $80, the P/E ratio would be calculated as follows: 80/10 = 8.
It means that this stock is trading at a multiple of 8 times the company's actual earnings; in other words, investors are willing to pay 8 times the company's earnings per issued share.
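The calculation just described can be written out in a couple of lines, using the hypothetical figures from the example:

```python
# Minimal sketch of the P/E calculation: EPS first, then price / EPS.
def pe_ratio(price, earnings, shares_outstanding):
    eps = earnings / shares_outstanding  # earnings per share
    return price / eps

# $50M earnings over 5M shares gives an EPS of $10; at $80 a share:
print(pe_ratio(80, 50_000_000, 5_000_000))  # 8.0
```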
Understanding the P/E ratio
The P/E ratio helps investors understand if a stock is fairly valued or overpriced. Moreover, this metric works best as a “comparison tool” as examining it in isolation may not provide enough
information to understand the company’s situation and performance.
Also, when comparing the P/E ratios of different stocks, it’s crucial to consider the context in which the ratios are being used.
For instance, comparing the P/E ratios of two companies in the same industry can be useful in determining which company is overvalued or undervalued relative to its peers. In contrast, comparing the
P/E ratios of two companies in different sectors may not be as useful due to differences in circumstances and earnings or growth potential.
It is also important to consider the stage of a company’s lifecycle, as a younger, growing startup may have a higher P/E ratio than a mature company with a stable earnings history.
To better understand the P/E ratio, let’s consider a comparison between two companies in the same industry. Suppose, Company A has a share price of $150 and a P/E ratio of 20, while Company B has a
share price of $100 and a P/E ratio of 25. This means that investors in Company A are paying $20 for every $1 of the company’s earnings, while investors in Company B are paying $25 for every $1 of
the company’s earnings. Despite Company A having higher share price, its P/E ratio is lower than that of Company B.
This situation suggests that investors may be more optimistic about the earnings growth potential of Company B and are therefore willing to pay more for its stock. However, it’s important to note
that this may not always be the case, and we’ll explore the different possibilities of high and low P/E ratios in the next section.
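Another way to read the comparison above is to back out each company's implied EPS from its price and P/E ratio (the helper name is invented for illustration):

```python
# Implied earnings per share from a quoted price and P/E: EPS = price / (P/E).
def implied_eps(price, pe):
    return price / pe

print(implied_eps(150, 20))  # Company A: 7.5 dollars of earnings per share
print(implied_eps(100, 25))  # Company B: 4.0 dollars of earnings per share
```

So despite its lower share price, Company B earns less per share, which is exactly why its multiple is higher.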
Assessing stocks from the P/E ratio
While there is no one-size-fits-all P/E ratio that is good or bad, here is what high and low ratios can typically indicate.
What does a High P/E ratio indicate?
A high price-to-earnings ratio can mean two things:
1. Expectations of higher future earnings – A hallmark of “growth stocks”
A notably high P/E ratio can signify that investors are positive about a company’s future earnings, expecting them to be much higher in the coming period than at the present levels. This sentiment is
a characteristic of growth stocks where investors are willing to pay more for a company’s current earnings because they anticipate that its future earnings will increase remarkably.
2. Maybe over-valued
A very high P/E ratio of a stock compared to other similar companies or its own previous ratio may indicate that the stock is “overvalued.” The high P/E ratio of overvalued stocks stems from a
broader optimism in the investors’ community instead of solid growth prospects. It can also be a result of declining earnings; if you look at the Price-to-earnings ratio’s mathematical formula, it
suggests that the P/E ratio will surge if the EPS falls provided the stock price remains the same.
So how does one determine whether a stock is overvalued due to market hype or is a genuine growth stock? Aren't growth stocks also just overvalued entities? The higher valuation of growth stocks is somewhat justified, as it rests on a solid rationale for expecting higher future returns. Hence, investors need to dig deeper to understand whether a high P/E ratio is due to a
prevailing craze, falling earnings, or solid fundamentals.
What does a low P/E ratio indicate?
A low P/E ratio actually indicates two scenarios:
1. Undervalued compared to the actual stats – a hallmark of “value stocks”
A low P/E ratio may be a sign that a stock is undervalued, as it suggests that the share price is lower than the company's intrinsic value or earnings statistics would warrant. In purely mathematical terms, this situation (a low P/E ratio) can materialize if a company's earnings climb while the stock's price remains stationary.
Consequently, low P/E ratios provide value investors an opportunity to buy stocks with an expectation that the stock price will eventually increase to match the company’s true earnings.
Learn more about growth and value stocks: Growth Stocks vs Value Stocks: A Longstanding Debate
2. Due to the actual financial difficulties of the company
A low P/E ratio can also be a sign of actual financial trouble and low growth potential. A stock may be selling at a low P/E ratio because investors, looking at the company's real statistics, are not hopeful about its future earnings.
Closing thoughts
P/E ratios can be a useful tool for comparing the relative valuation of different stocks, but it is important to consider the context, underlying reasons, and other factors that may influence the
ratio. Moreover, while this metric cannot offer much on its own, it can be a great help if combined with other stock analysis techniques.
Filed under: News - @ December 26, 2022 10:21 am
isJust :: Maybe a -> Bool
Return True iff the argument is of the form Just _.
isNothing :: Maybe a -> Bool
Return True iff the argument is of the form Nothing.
fromJust :: Maybe a -> a
Extract the argument from the Just constructor and throw an error if the argument is Nothing.
fromMaybe :: a -> Maybe a -> a
Extract the argument from the Just constructor or return the provided default value if the argument is Nothing.
listToMaybe :: [a] -> Maybe a
Return Nothing on an empty list or Just x where x is the first list element.
maybeToList :: Maybe a -> [a]
Return an empty list for Nothing or a singleton list for Just x.
catMaybes :: [Maybe a] -> [a]
Return the list of all Just values.
mapMaybe :: (a -> Maybe b) -> [a] -> [b]
Apply a function that may discard elements (via the Nothing constructor) to a list of elements, keeping the values inside the Just results.
(>>-) :: Maybe a -> (a -> Maybe b) -> Maybe b
Monadic bind for Maybe.
sequenceMaybe :: [Maybe a] -> Maybe [a]
Monadic sequence for Maybe.
mapMMaybe :: (a -> Maybe b) -> [a] -> Maybe [b]
Monadic map for Maybe.
mplus :: Maybe a -> Maybe a -> Maybe a
Combine two Maybes, returning the first Just value, if any.
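For readers more comfortable with Python, the behaviour of several of these helpers can be sketched with `None` playing the role of Nothing. This is an illustrative analogy only: the function names are invented, and the encoding cannot represent `Just Nothing`, so it is not a faithful translation of the Curry types.

```python
# Hypothetical Python analogues of some of the Maybe helpers documented above,
# modelling Nothing as None.
def from_maybe(default, m):
    """fromMaybe: the value inside Just, or the default for Nothing."""
    return default if m is None else m

def list_to_maybe(xs):
    """listToMaybe: Nothing for an empty list, else the first element."""
    return xs[0] if xs else None

def cat_maybes(ms):
    """catMaybes: keep only the Just values."""
    return [m for m in ms if m is not None]

def map_maybe(f, xs):
    """mapMaybe: apply f, discarding elements where f returns Nothing."""
    return [y for y in map(f, xs) if y is not None]

print(from_maybe(0, None), from_maybe(0, 42))  # 0 42
print(cat_maybes([1, None, 2, None, 3]))       # [1, 2, 3]
print(map_maybe(lambda n: n * 2 if n % 2 else None, [1, 2, 3, 4]))  # [2, 6]
```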
Compound interest rate calculator uk
Calculate your daily interest for a fixed number of days. Inputs: initial purchase amount, daily interest rate (percentage), length of term (in days), daily reinvest rate, and whether to include weekends.
Example: a $10,000 initial investment compounding for 10 years can grow to $94,102.53 (the site's sample calculation). To see how compound interest differs from simple interest, use a simple-interest vs compound-interest comparison.
How does compound interest work? Compound interest has dramatic positive effects on savings and investments. It occurs when interest is added to the original deposit (the principal), which results in interest earning interest. Financial institutions often offer compound interest on deposits, compounding on a regular basis, usually monthly or annually.
Rule of 72: take the interest rate you expect to earn and divide it into 72; the answer is the approximate number of years it will take to double your money. So at 6 per cent a year it will take you 12 years to double your money. A 3 per cent return will take 24 years. At 12 per cent it will only take six years.
Calculator use: calculate compound interest on an investment or savings. Using the compound interest formula, you can solve for principal, interest rate, time, or final investment value, including continuous compounding, A = Pe^(rt).
Compound interest calculation: the amount after n years, A_n, is equal to the initial amount A_0 times one plus the annual interest rate r divided by the number of compounding periods in a year m, raised to the power of m times n:
A_n = A_0 (1 + r/m)^(m n)
Here A_n is the amount after n years (future value), A_0 is the initial amount (present value), and r is the nominal annual interest rate.
Compound interest is the most powerful concept in finance. It can work either for you or against you: compound interest is the foundational concept both for building wealth and for why it is so important to pay off debt as quickly as possible. The easiest way to take advantage of compound interest is to start saving!
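The formulas above can be sketched in a few lines (the function names are illustrative, not from any particular calculator):

```python
import math

def compound(principal, rate, years, m):
    """Periodic compounding: A_n = A_0 * (1 + r/m) ** (m * n)."""
    return principal * (1 + rate / m) ** (m * years)

def continuous(principal, rate, years):
    """Continuous compounding: A = P * e ** (r * t)."""
    return principal * math.exp(rate * years)

# $2,000 at 5% a year, compounded monthly, for two years:
print(round(compound(2000, 0.05, 2, 12), 2))  # 2209.88
print(round(continuous(2000, 0.05, 2), 2))    # 2210.34
# Rule of 72: approximate years to double at 6% a year
print(72 / 6)                                 # 12.0
```

Note how continuous compounding yields only slightly more than monthly compounding at this rate, and how the rule-of-72 estimate matches the exact calculation closely.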
How often interest is calculated can greatly affect your savings. (Note that it is not possible to invest directly in an index.) To work out daily compound interest, you can refer to the same formula banks use: daily closing balance x interest rate percentage / 365. As an exercise, you can calculate how much $2,000 will earn over two years at an interest rate of 5% per year, compounded monthly.
Compound growth (money growing at an increasing rate) is one of the most useful concepts in finance. Put simply, compound interest changes the amount of money in the bank each time, and a new calculation has to be worked out. How often is compound interest paid on a savings account? The more often your interest is calculated, the more you're likely to get. The Bank of England sets the 'official' interest rate for the UK eight times a year, and most UK adults can earn up to £1,000 interest on their savings.
Related calculators let you work out how much you will pay each month on loans of different sizes and interest rates, check the impact of savings vs mortgages (including offset mortgages and overpayments), see how credit card interest is charged, and find the time value of money: how much a lump sum and regular monthly payments may be worth in the future, given an expected rate of return. A regular investing calculator can also show how much your savings could be worth and how much you need to save for a given growth rate (for example, 5%).
Possible Worlds
First published Fri Oct 18, 2013; substantive revision Mon Feb 8, 2016
Anne is working at her desk. While she is directly aware only of her immediate situation — her being seated in front of her computer, the music playing in the background, the sound of her husband's
voice on the phone in the next room, and so on — she is quite certain that this situation is only part of a series of increasingly more inclusive, albeit less immediate, situations: the situation in
her house as a whole, the one in her neighborhood, the city she lives in, the state, the North American continent, the Earth, the solar system, the galaxy, and so on. On the face of it, anyway, it
seems quite reasonable to believe that this series has a limit, that is, that there is a maximally inclusive situation encompassing all others: things, as a whole or, more succinctly, the actual world.
Most of us also believe that things, as a whole, needn't have been just as they are. Rather, things might have been different in countless ways, both trivial and profound. History, from the very
beginning, could have unfolded quite other than it did in fact: The matter constituting a distant star might never have organized well enough to give light; species that survived might just as well
have died off; battles won might have been lost; children born might never have been conceived and children never conceived might otherwise have been born. In any case, no matter how things had gone
they would still have been part of a single, maximally inclusive, all-encompassing situation, a single world. Intuitively, then, the actual world of which Anne's immediate situation is a part is only
one among many possible worlds.
The idea of possible worlds is evocative and appealing. However, possible worlds failed to gain any real traction among philosophers until the 1960s when they were invoked to provide the conceptual
underpinnings of some powerful developments in modal logic. Only then did questions of their nature become a matter of the highest philosophical importance. Accordingly, Part 1 of this article will
provide an overview of the role of possible worlds in the development of modal logic. Part 2 explores three prominent philosophical approaches to the nature of possible worlds.^[1] Although many of
the finer philosophical points of Part 2 do presuppose the technical background of Part 1, the general philosophical landscape laid out in Part 2 can be appreciated independently of Part 1.
Although ‘possible world’ has been part of the philosophical lexicon at least since Leibniz, the notion became firmly entrenched in contemporary philosophy with the development of possible world
semantics for the languages of propositional and first-order modal logic. In addition to the usual sentence operators of classical logic such as ‘and’ (‘∧’), ‘or’ (‘∨’), ‘not’ (‘¬’), ‘if...then’
(‘→’), and, in the first-order case, the quantifiers ‘all’ (‘∀’) and ‘some’ (‘∃’), these languages contain operators intended to represent the modal adverbs ‘necessarily’ (‘□’) and ‘possibly’ (‘◇’).
Although a prominent aspect of logic in both Aristotle's work and the work of many medieval philosophers, modal logic was largely ignored from the modern period to the mid-20th century. And even
though a variety of modal deductive systems had in fact been rigorously developed in the early 20th century, notably by Lewis and Langford (1932), there was for the languages of those systems nothing
comparable to the elegant semantics that Tarski had provided for the languages of classical first-order logic. Consequently, there was no rigorous account of what it means for a sentence in those
languages to be true and, hence, no account of the critical semantic notions of validity and logical consequence to underwrite the corresponding deductive notions of theoremhood and provability. A
concomitant philosophical casualty of this void in modal logic was a deep skepticism, voiced most prominently by Quine, toward any appeal to modal notions in metaphysics generally, notably, the
notion of an essential property. (See Quine 1953 and 1956, and the appendix to Plantinga 1974.) The purpose of the following two subsections is to provide a simple and largely ahistorical overview of
how possible world semantics fills this void; the final subsection presents two important applications of the semantics. (Readers familiar with basic possible world semantics can skip to §2 with no
significant loss of continuity.)
Since the middle ages at least, philosophers have recognized a semantical distinction between extension and intension. The extension of a denoting expression, or term, such as a name or a definite
description is its referent, the thing that it refers to; the extension of a predicate is the set of things it applies to; and the extension of a sentence is its truth value. By contrast, the
intension of an expression is something rather less definite — its sense, or meaning, the semantical aspect of the expression that determines its extension. For purposes here, let us say that a logic
is a formal language together with a semantic theory for the language, that is, a theory that provides rigorous definitions of truth, validity, and logical consequence for the language.^[2] A logic
is extensional if the truth value of every sentence of the logic is determined entirely by its form and the extensions of its component sentences, predicates, and terms. An extensional logic will
thus typically feature a variety of valid substitutivity principles. A substitutivity principle says that, if two expressions are coextensional, that is, if they have the same extension, then
(subject perhaps to some reasonable conditions) either can be substituted for the other in any sentence salva veritate, that is, without altering the original sentence's truth value. In an
intensional logic, the truth values of some sentences are determined by something over and above their forms and the extensions of their components and, as a consequence, at least one classical
substitutivity principle is typically rendered invalid.
Extensionality is a well known and generally cherished feature of classical propositional and predicate logic. Modal logic, by contrast, is intensional. To illustrate: the substitutivity principle
for sentences tells us that sentences with the same truth value can be substituted for one another salva veritate. So suppose that John's only pets are two dogs, Algol and BASIC, say, and consider
two simple sentences and their formalizations (the predicates in question indicating the obvious English counterparts):
(1) All John's dogs are mammals: ∀x(Dx → Mx).
(2) All John's pets are mammals: ∀x(Px → Mx).
As both sentences are true, they have the same extension. Hence, in accordance with the classical substitutivity principle for sentences, we can replace the occurrence of (1) with (2) in the false sentence
(3) Not all John's dogs are mammals: ¬∀x(Dx → Mx)
and the result is the equally false sentence
(4) Not all John's pets are mammals: ¬∀x(Px → Mx).
However, when we make the same substitution in the true sentence
(5) Necessarily, all John's dogs are mammals: □∀x(Dx → Mx),
the result is the sentence
(6) Necessarily, all John's pets are mammals: □∀x(Px → Mx),
which is intuitively false, as John surely could have had a non-mammalian pet. In a modal logic that accurately represents the logic of the necessity operator, therefore, the substitutivity principle
for sentences will have to fail.
The same example illustrates that the substitutivity principle for predicates will have to fail in modal logic as well. For, according to our example, the predicates ‘D’ and ‘P’ that are true of
John's dogs and of John's pets, respectively, are coextensional, i.e., ∀x(Dx ↔ Px). However, while substituting the latter predicate for the former in (3) results in a sentence with the same truth
value, the same substitution in (5) does not.
Modal logic, therefore, is intensional: in general, the truth value of a sentence is determined by something over and above its form and the extensions of its components. Absent a rigorous semantic
theory to identify the source of its intensionality and to systematize intuitions about modal truth, validity, and logical consequence, there was little hope for the widespread acceptance of modal logic.
The idea of possible worlds raised the prospect of extensional respectability for modal logic, not by rendering modal logic itself extensional, but by endowing it with an extensional semantic theory
— one whose own logical foundation is that of classical predicate logic and, hence, one on which possibility and necessity can ultimately be understood along classical Tarskian lines. Specifically,
in possible world semantics, the modal operators are interpreted as quantifiers over possible worlds, as expressed informally in the following two general principles:
Nec A sentence of the form ^⌈Necessarily, φ^⌉ (^⌈□φ^⌉) is true if and only if φ is true in every possible world.^[3]
Poss A sentence of the form ^⌈Possibly, φ^⌉ (^⌈◇φ^⌉) is true if and only if φ is true in some possible world.
Given this, the failures of the classical substitutivity principles can be traced to the fact that modal operators, so interpreted, introduce contexts that require subtler notions of meaning for
sentences and their component parts than are provided in classical logic; in particular, a subtler notion (to be clarified shortly) is required for predicates than that of the set of things they
happen to apply to.
Tarskian Semantics. Standard model theoretic semantics for the languages of predicate logic deriving from the work of Tarski (1933, 1944) is the paradigmatic semantic theory for extensional logics.
Given a standard first-order language ℒ, a Tarskian interpretation I for ℒ specifies a set D for the quantifiers of ℒ to range over (typically, some set of things that ℒ has been designed to
describe) and assigns, to each term (constant or variable) τ of ℒ, a referent a[τ] ∈ D and, to each n-place predicate π of ℒ, an appropriate extension E[π] — a truth value (TRUE or FALSE) if n = 0, a
subset of D if n = 1, and a set of n-tuples of members of D if n > 1. Given these assignments, sentences are evaluated as true under the interpretation I — true[I], for short — according to a more or
less familiar set of clauses. To facilitate the definition, let I[ν/a] be the interpretation that assigns the individual a to the variable ν and is otherwise exactly like I. Then we have:
• An atomic sentence ^⌈πτ[1]...τ[n]^⌉ (of ℒ) is true[I] if and only if
□ n = 0 (i.e., π is a sentence letter) and the extension of π is the truth value TRUE; or
□ n = 1 and a[τ[1]] is in the extension of π; or
□ n > 1 and 〈a[τ[1]], ..., a[τ[n]]〉 is in the extension of π.
• A negation ^⌈¬ψ^⌉ is true[I] if and only if ψ is not true[I].
• A material conditional ^⌈ψ → θ^⌉ is true[I] iff, if ψ is true[I], then θ is true[I].
• A universally quantified sentence ^⌈∀νψ^⌉ is true[I] if and only if, for all individuals a ∈ D, ψ is true[I[ν/a]].^[4]
Clauses for the other standard Boolean operators and the existential quantifier under their usual definitions follow straightaway from these clauses. In particular, where
(7) ^⌈∃νψ^⌉ =[def] ^⌈¬∀ν¬ψ^⌉,
it follows that:
• An existentially quantified sentence ^⌈∃νψ^⌉ is true[I] if and only if, for some individual a ∈ D, ψ is true[I[ν/a]].
It is easy to verify that, in each of the above cases, replacing one coextensional term, predicate, or sentence for another has no effect on the truth values rendered by the above clauses, thus
guaranteeing the validity of the classical substitutivity principles and, hence, the extensionality of first-order logic with a Tarskian semantics.
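The Tarskian clauses above can be modeled directly in code. The following Python sketch (the toy language, domain, and extensions are illustrative inventions, not part of the source) implements a Tarskian interpretation and its truth definition for a tiny first-order fragment:

```python
# A minimal Tarskian interpretation: a domain D and an extension for each
# 1-place predicate. Formulas are nested tuples, e.g. ("atom", "D", "x"),
# ("not", f), ("imp", f, g), ("all", "x", f).

D = {"algol", "basic", "rock"}            # domain of quantification
EXT = {"D": {"algol", "basic"},           # extension of 'is a dog'
       "M": {"algol", "basic"}}           # extension of 'is a mammal'

def true_I(formula, assignment):
    """Evaluate a formula under a variable assignment (a dict var -> individual)."""
    op = formula[0]
    if op == "atom":                      # 1-place predicate applied to a term
        _, pred, term = formula
        return assignment[term] in EXT[pred]
    if op == "not":
        return not true_I(formula[1], assignment)
    if op == "imp":                       # material conditional
        return (not true_I(formula[1], assignment)) or true_I(formula[2], assignment)
    if op == "all":                       # universal quantifier over D
        _, var, body = formula
        return all(true_I(body, {**assignment, var: a}) for a in D)
    raise ValueError(op)

# 'All dogs are mammals': ∀x(Dx → Mx)
print(true_I(("all", "x", ("imp", ("atom", "D", "x"), ("atom", "M", "x"))), {}))
# True
```

Swapping one coextensional predicate for another here merely swaps identical sets in `EXT`, which is precisely why substitutivity holds in the Tarskian setting.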
From Tarskian to Possible World Semantics. The truth conditional clauses for the three logical operators directly reflect the meanings of the natural language expressions they symbolize: ‘¬’ means
not; ‘→’ means if...then; ‘∀’ means all. It is easy to see, however, that we cannot expect to add an equally simple clause for sentences containing an operator that symbolizes necessity. For a
Tarskian interpretation fixes the domain of quantification and the extensions of all the predicates. Pretty clearly, however, to capture necessity and possibility, one must be able to consider
alternative “possible” domains of quantification and alternative “possible” extensions for predicates as well. For, intuitively, under different circumstances, fewer, more, or other things might have
existed and things that actually exist might, in those circumstances, have had very different properties. (6), for example, is false because John could have had non-mammalian pets: a canary, say, or
a turtle, or, under very different circumstances, a dragon. A bit more formally put: Both the domain of quantification and the extension of the predicate ‘P’ could, in some sense or other, have been different.
Possible world semantics, of course, uses the concept of a possible world to give substance to the idea of alternative extensions and alternative domains of quantification. (Possible world semantics
can be traced most clearly back to the work of Carnap (1947), its basic development culminating in the work of Hintikka (1957, 1961), Bayart (1958, 1959), and Kripke (1959, 1963a, 1963b).^[5])
Similar to Tarskian semantics, a possible world interpretation M of a modal language ℒ specifies a nonempty set D, although thought of now as the set of “possible individuals” of M. Also as in
Tarskian semantics, M assigns each term τ of ℒ a referent a[τ] in D.^[6] Additionally however, M specifies a set W, the set of “possible worlds” of M, one of which is designated its “actual world”,
and each world w in W is assigned its own domain of quantification, d(w) ⊆ D, intuitively, the set of individuals that exist in w.^[7] To capture the idea of both the actual and possible extensions
of a predicate, M assigns to each n-place predicate π a function M[π] — the intension of π — that, for each possible world w, returns the extension M[π](w) of π at w: a truth value, if n = 0; a set
of individuals, if n = 1; and a set of n-tuples of individuals, if n > 1.^[8] We can thus rigorously define a “possible extension” of a predicate π to be any of its w-extensions M[π](w), for any
world w.
The Tarskian truth conditions above are now generalized by relativizing them to worlds as follows: for any possible world w (the world of evaluation):
• An atomic sentence ^⌈πτ[1]...τ[n]^⌉ (of ℒ) is true[M] at w if and only if:
□ n = 0 and the w-extension of π is the truth value TRUE; or
□ n = 1 and a[τ[1]] is in the w-extension of π; or
□ n > 1 and 〈a[τ[1]],..., a[τ[n]]〉 is in the w-extension of π.
• A negation ^⌈¬ψ^⌉ is true[M] at w if and only if ψ is not true[M] at w.
• A material conditional ^⌈ψ→θ^⌉ is true[M] at w iff, if ψ is true[M] at w, then θ is true[M] at w.
And to these, of course, is added the critical modal case that explicitly interprets the modal operator to be a quantifier over worlds, as we'd initially anticipated informally in our principle Nec:
• A necessitation ^⌈□ψ^⌉ is true[M] at w if and only if, for all possible worlds u of M, ψ is true[M] at u.^[9]
A sentence φ is false[M] at w just in case it is not true[M] at w, and φ is said to be true[M] just in case φ is true[M] at the actual world of M.
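The generalization from Tarskian to possible world semantics can likewise be sketched in code. In the following Python fragment (the worlds, domains, and intensions are invented toy data for the John's-pets example, not the article's), predicates receive intensions, functions from worlds to extensions, and ◻ is evaluated as a quantifier over worlds:

```python
# Basic possible world semantics for a toy modal language. "@" is the
# designated actual world; "w1" is a world where John also has a pet
# canary, COBOL.

WORLDS = {"@", "w1"}
DOM = {"@": {"algol", "basic"},               # world-relative domains d(w)
       "w1": {"algol", "basic", "cobol"}}

INT = {  # intensions: for each predicate, a map world -> w-extension
    "P": {"@": {"algol", "basic"}, "w1": {"algol", "basic", "cobol"}},  # pet
    "M": {"@": {"algol", "basic"}, "w1": {"algol", "basic"}},           # mammal
}

def true_M(formula, w, g):
    """Truth at world w under variable assignment g."""
    op = formula[0]
    if op == "atom":
        _, pred, term = formula
        return g[term] in INT[pred][w]        # membership in the w-extension
    if op == "not":
        return not true_M(formula[1], w, g)
    if op == "imp":
        return (not true_M(formula[1], w, g)) or true_M(formula[2], w, g)
    if op == "all":                           # quantifier restricted to d(w)
        _, var, body = formula
        return all(true_M(body, w, {**g, var: a}) for a in DOM[w])
    if op == "box":                           # ◻: truth at every world
        return all(true_M(formula[1], u, g) for u in WORLDS)
    raise ValueError(op)

all_pets_mammals = ("all", "x", ("imp", ("atom", "P", "x"), ("atom", "M", "x")))
print(true_M(all_pets_mammals, "@", {}))           # True at the actual world
print(true_M(("box", all_pets_mammals), "@", {}))  # False: COBOL is no mammal
```

Note that the unboxed sentence is true while its necessitation is false, exactly the substitutivity-breaking behavior described above.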
On the assumption that there is a (nonempty) set of all possible worlds and a set of all possible individuals, we can define “objective” notions of truth at a world and of truth simpliciter, that is,
notions that are not simply relative to formal, mathematical interpretations but, rather, correspond to objective reality in all its modal glory. Let ℒ be a modal language whose names and predicates
represent those in some fragment of ordinary language (as in our examples (5) and (6) above). Say that M is the “intended” interpretation of ℒ if (i) its set W of “possible worlds” is in fact the set
of all possible worlds, (ii) its designated “actual world” is in fact the actual world, (iii) its set D of “possible individuals” is in fact the set of all possible individuals, and (iv) the
referents assigned to the names of ℒ and the intensions assigned to the predicates of ℒ are the ones they in fact have. Then, where M is the intended interpretation of ℒ, we can say that a sentence φ
of ℒ is true at a possible world w just in case φ is true[M] at w, and that φ is true just in case it is true[M] at the actual world. (Falsity at w and falsity, simpliciter, are defined accordingly.)
Under the assumption in question, then, the modal clause above takes on pretty much the exact form of our informal principle Nec.
Call the above basic possible world semantics. Spelling out the truth conditions for (6) (relative to the intended interpretation of its language), basic possible world semantics tells us that (6) is
true if and only if
(8) For all possible worlds w, ‘∀x(Px → Mx)’ is true at w.
And by unpacking (8) in terms of the quantificational, material conditional, and atomic clauses above we have that (6) is true if and only if
(9) For all possible worlds w, and for all possible individuals a that exist in w, if a is in the w-extension of ‘P’ then a is in the w-extension of ‘M’.
Since we are evaluating (6) with regard to the intended interpretation of its language, the w-extension of ‘P’ that is returned by its intension, for any world w, is the (perhaps empty) set of John's
pets in w and that of ‘M’ is the set of mammals in w. Hence, if w is a world where John has a pet canary — COBOL, say — COBOL is in the w-extension of ‘P’ but not that of ‘M’, i.e., ‘∀x(Px → Mx)’ is
false at w and, hence, by the truth condition (9), (6) is false at the actual world — that is, (6) is false simpliciter, as it should be.
Note that interpreting modal operators as quantifiers over possible worlds provides a nice theoretical justification for the usual definition of the possibility operator in terms of necessity:
(10) ^⌈◇φ^⌉ =[def] ^⌈¬□¬φ^⌉.
That is, a sentence is possible just in case its negation isn't necessary. Since, semantically speaking, the necessity operator is literally a universal quantifier, the definition corresponds exactly
to the definition (7) of the existential quantifier. For, unpacking the right side of definition (10) according to the negation and necessitation clauses above (and invoking the definitions of truth
and truth at a world simpliciter), we have:
(11) ^⌈◇φ^⌉ is true iff it is not the case that, for all possible worlds w, φ is not true at w.
Clearly, however, if it is not the case that φ fails to be true at all possible worlds, then it must be true at some world; hence:
(12) ^⌈◇φ^⌉ is true iff, for some possible world w, φ is true at w.
And that corresponds exactly to our intuitive truth condition Poss above. Thus, spelling out the negation ‘¬□∀x(Px → Mx)’ of our false sentence (6) above in accordance with definition (10) (and the
standard definition of conjunction ∧), we have:
(13) Possibly, one of John's pets is not a mammal: ◇∃x(Px ∧ ¬Mx),
for which (12) and the possible world truth conditions for quantified, Boolean, and atomic sentences yield the correct truth condition:
(14) There is a possible world w and an individual a existing in w that is in the w-extension of ‘P’ but not that of ‘M’,
that is, less stuffily, there is a possible world in which, among John's pets, at least one is not a mammal.
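The derivation that defining ◇ as ¬◻¬ collapses into "true at some world" can be checked mechanically. The following Python sketch (the world set and truth data are invented for illustration) compares the defined operator against the direct Poss-style clause:

```python
# Checking that ◇φ, defined as ¬◻¬φ, coincides with "φ is true at some world".

WORLDS = {"w1", "w2", "w3"}
TRUE_AT = {"w2"}                          # the set of worlds where φ is true

def box(phi_worlds):                      # ◻φ: φ true at every world
    return all(w in phi_worlds for w in WORLDS)

def diamond_defined(phi_worlds):          # ◇φ as ¬◻¬φ
    not_phi = WORLDS - phi_worlds         # worlds where ¬φ is true
    return not box(not_phi)

def diamond_direct(phi_worlds):           # Poss: φ true at some world
    return any(w in phi_worlds for w in WORLDS)

print(diamond_defined(TRUE_AT), diamond_direct(TRUE_AT))   # True True
print(diamond_defined(set()), diamond_direct(set()))       # False False
```

The two functions agree on every input, mirroring the duality of ∀ and ∃ that the article points to.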
Summary: Intensionality and Possible Worlds. Analyzed in terms of possible world semantics, then, the general failure of classical substitutivity principles in modal logic is due, not to an
irreducibly intensional element in the meanings of the modal operators, but rather to a sort of mismatch between the surface syntax of those operators and their semantics: syntactically, they are
unary sentence operators like negation; but semantically, they are, quite literally, quantifiers. Their syntactic similarity to negation suggests that, like negation, the truth values of ^⌈□φ^⌉ and ^⌈◇φ^⌉, insofar as they are determinable at all, must be determined by the truth value of φ. That they are not (in general) so determined leads to the distinctive substitutivity failures noted above.
The possible worlds analysis of the modal operators as quantifiers over worlds reveals that the unary syntactic form of the modal operators obscures a semantically relevant parameter. When the modal
operators are interpreted as quantifiers, this parameter becomes explicit and the reason underlying the failure of extensionality in modal logic becomes clear: That the truth values of ^⌈□φ^⌉ and ^⌈◇φ^⌉ are not in general determined by the truth value of φ at the world of evaluation is, semantically speaking, nothing more than the fact that the truth values of ‘∀xFx’ and ‘∃xFx’ are not in
general determined by the truth value of ‘Fx’, for any particular value of ‘x’. Possible world semantics, therefore, explains the intensionality of modal logic by revealing that the syntax of the
modal operators prevents an adequate expression of the meanings of the sentences in which they occur. Spelled out as possible world truth conditions, those meanings can be expressed in a wholly
extensional fashion. (For a more formal exposition of this point, see the supplemental article The Extensionality of Possible World Semantics.)
As noted, the focus of the present article is on the metaphysics of possible worlds rather than applications. Of course, the semantics of modal languages is itself an application, but one that is of
singular importance, both for historical reasons and because most applications are in fact themselves applications of (often extended or modified versions of) the semantical apparatus. Two
particularly important examples are the analysis of intensions and a concomitant explication of the de re/de dicto distinction.^[10]
The Analysis of Intensions. As much a barrier to the acceptance of modal logic as intensionality itself was the need to appeal to intensions per se — properties, relations, propositions, and the like
— in semantical explanations. Intensional entities have of course featured prominently in the history of philosophy since Plato and, in particular, have played natural explanatory roles in the
analysis of intentional attitudes like belief and mental content. For all their prominence and importance, however, the nature of these entities has often been obscure and controversial and, indeed,
as a consequence, they were easily dismissed as ill-understood and metaphysically suspect “creatures of darkness” (Quine 1956, 180) by the naturalistically oriented philosophers of the early- to
mid-20th century. It is a virtue of possible world semantics that it yields rigorous definitions for intensional entities. More specifically, as described above, possible world semantics assigns to
each n-place predicate π a certain function I[π] — π's intension — that, for each possible world w, returns the extension I[π](w) of π at w. We can define an intension per se, independent of any
language, to be any such function on worlds. More specifically:
• A proposition is any function from worlds to truth values.
• A property is any function from worlds to sets of individuals.
• An n-place relation (n > 1) is any function from worlds to sets of n-tuples of individuals.
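The analysis of intensions as functions on worlds is easy to render concretely. In the Python sketch below (the worlds and extensions are invented toy data), a property is a function from worlds to sets of individuals and a proposition a function from worlds to truth values:

```python
# Intensions as functions on worlds: a property maps each world to its
# extension there; a proposition maps each world to a truth value.

worlds = ["@", "w1"]                      # "@" actual; "w1" a world where
                                          # Algol is a stray, not a pet

def pet(w):                               # property: world -> set of individuals
    return {"@": {"algol", "basic"}, "w1": {"basic"}}[w]

def algol_is_a_pet(w):                    # proposition: world -> truth value
    return "algol" in pet(w)

print([algol_is_a_pet(w) for w in worlds])  # [True, False]
```

That the proposition varies in truth value across worlds is just what distinguishes it from the mere extension (a single truth value) of classical semantics.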
The adequacy of this analysis is a matter of lively debate that focuses chiefly upon whether or not intensions, so defined, are too “coarse-grained” to serve their intended purposes. (See, e.g.,
Stalnaker 1987 and 2012 for a strong defense of the analysis.) However, Lewis (1986, §1.5) argues that, even if the above analysis fails for certain purposes, it does not follow that intensions
cannot be analyzed in terms of possible worlds, but only that more subtle constructions might be required. This reply appears to side-step the objections from granularity while preserving the great
advantage of the possible worlds analysis of intensions, viz., the rigorous definability of these philosophically significant notions.
The De Re / De Dicto Distinction. A particularly rich application of the possible world analysis of intensions concerns the analysis of the venerable distinction between de re and de dicto modality.^[11] Among the strongest modal intuitions is that the possession of a property has a modal character — that things exemplify, or fail to exemplify, some properties necessarily, or essentially, and
others only accidentally. Thus, for example, intuitively, John's dog Algol is a pet accidentally; under less fortunate circumstances, she might have been, say, a stray that no one ever adopted. But
she is a dog essentially; she couldn't have been a flower, a musical performance, a crocodile or any other kind of thing.
Spelling out this understanding in terms of worlds and the preceding analysis of intensions, we can say that an individual a has a property F essentially if a has F in every world in which it exists,
that is, if, for all worlds w in which a exists, a ∈ F(w). Likewise, a has F accidentally if a has F in the actual world @ but lacks it in some other world, that is, if a ∈ F(@) but, for some world w
in which a exists, a ∉ F(w). Thus, let ‘G’ and ‘T’ symbolize ‘is a dog’ and ‘is someone's pet’, respectively; then, where ‘E!x’ is short for ‘∃y(x=y)’ (and, hence, expresses that x exists), we have:
(15) Algol is a dog essentially: □(E!a → Ga)
(16) Algol is a pet accidentally: Ta ∧ ◇(E!a ∧ ¬Ta)
More generally, sentences like (15) and (16) in which properties are ascribed to a specific individual in a modal context — signaled formally by the occurrence of a name or the free occurrence of a
variable in the scope of a modal operator — are said to exhibit modality de re^[12] (modality of the thing). Modal sentences that do not, like
(17) Necessarily, all dogs are mammals: □∀x(Gx → Mx)
are said to exhibit modality de dicto (roughly, modality of the proposition). Possible world semantics provides an illuminating analysis of the key difference between the two: The truth conditions
for both modalities involve a commitment to possible worlds; however, the truth conditions for sentences exhibiting modality de re involve in addition a commitment to the meaningfulness of transworld
identity, the thesis that, necessarily, every individual (typically, at any rate) exists and exemplifies (often very different) properties in many different possible worlds. More specifically, basic
possible world semantics yields intuitively correct truth values for sentences of the latter sort by (i) permitting world domains to overlap and (ii) assigning intensions to predicates, thereby, in
effect, relativizing predicate extensions to worlds. In this way, one and the same individual can be in the extension of a given predicate at all worlds in which they exist, at some such worlds only,
or at none at all. (For further discussion, see the entry on essential vs. accidental properties.)
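The de re definitions just given (essential possession as membership in F(w) at every world of existence, accidental possession as membership at the actual world but not at some world of existence) can be stated in a few lines of Python. The world domains and intensions below are invented illustrations:

```python
# De re modality with overlapping world domains: one and the same
# individual, Algol, exists in both worlds.

DOM = {"@": {"algol"}, "w1": {"algol"}}   # d(w): who exists in each world
DOG = {"@": {"algol"}, "w1": {"algol"}}   # intension of 'is a dog'
PET = {"@": {"algol"}, "w1": set()}       # in w1 Algol is a stray

def essential(a, F):
    """a has F essentially: a is in F(w) at every world where a exists."""
    return all(a in F[w] for w in DOM if a in DOM[w])

def accidental(a, F, actual="@"):
    """a has F accidentally: a has F actually but lacks it at some world
    in which a exists."""
    return a in F[actual] and any(a not in F[w] for w in DOM if a in DOM[w])

print(essential("algol", DOG), accidental("algol", PET))  # True True
```

The overlap of world domains (clause (i) in the text) is what lets `essential` and `accidental` quantify over worlds in which the very same individual exists.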
The power and appeal of basic possible world semantics is undeniable. In addition to providing a clear, extensional formal semantics for a formerly somewhat opaque, intensional notion, cashing
possibility as truth in some possible world and necessity as truth in every such world seems to tap into very deep intuitions about the nature of modality and the meaning of our modal discourse.
Unfortunately, the semantics leaves the most interesting — and difficult — philosophical questions largely unanswered. Two arise with particular force:
QW What, exactly, is a possible world?
And, given QW:
QE What is it for something to exist in a possible world?
In this section we will concern ourselves with, broadly speaking, the three most prominent philosophical approaches to these questions.^[13]
Recall the informal picture that we began with: a world is, so to say, the “limit” of a series of increasingly more inclusive situations. Fleshed out philosophical accounts of this informal idea
generally spring from rather different intuitions about what one takes the “situations” in the informal picture to be. A particularly powerful intuition is that situations are simply structured
collections of physical objects: the immediate situation of our initial example above, for instance, consists of, among other things, the objects in Anne's office — notably Anne herself, her desk and
her computer, with her seated at the former and typing on the latter — and at least some of the things in the next room — notably, her husband and the phone he is talking on. On this view, for one
situation s to include another r is simply for r to be a (perhaps rather complex and distributed) physical part of s. The actual world, then, as the limit of a series of increasingly more inclusive
situations in this sense, is simply the entire physical universe: all the things that are some spatiotemporal distance from the objects in some arbitrary initial situation, structured as they in fact
are; and other possible worlds are things of exactly the same sort. Call this the concretist intuition, as possible worlds are understood to be concrete physical situations of a special sort.
The originator and, by far, the best known proponent of concretism is David Lewis. For Lewis and, as noted, concretists generally, the actual world is the concrete physical universe as it is,
stretched out in space-time. As he rather poetically expresses it (1986, 1):
The world we live in is a very inclusive thing....There is nothing so far away from us as not to be part of our world. Anything at any distance is to be included. Likewise the world is inclusive
in time. No long-gone ancient Romans, no long-gone pterodactyls, no long-gone primordial clouds of plasma are too far in the past, nor are the dead dark stars too far in the future, to be part of
this same world....[N]othing is so alien in kind as not to be part of our world, provided only that it does exist at some distance and direction from here, or at some time before or after or
simultaneous with now.
The actual world provides us with our most salient example of what a possible world is. But, for the concretist, other possible worlds are no different in kind from the actual world (ibid., 2):
There are countless other worlds, other very inclusive things. Our world consists of us and all our surroundings, however remote in time and space; just as it is one big thing having lesser
things as parts, so likewise do other worlds have lesser other-worldly things as parts.
It is clear that spatiotemporal relations play a critical role in Lewis's conception. However, it is important to note that Lewis understands such relations in a very broad and flexible way so as to
allow, in particular, for the possibility of spirits and other entities that are typically thought of as non-spatial; so long as they are located in time, Lewis writes, “that is good enough” (ibid.,
73). So with this caveat, let us say that an object a is connected if any two of its parts bear some spatiotemporal relation to each other,^[14] and that a is maximal if none of its parts is
spatiotemporally related to anything that is not also one of its parts. Then we have the following concretist answers to our questions:
AW1 w is a possible world =[def] w is a maximal connected object.^[15]
And, hence, to exist in a world is simply to be a part of it:
AE1 Individual a exists in world w =[def] a is a part of w.
It follows from AW1 (and reasonable assumptions) that distinct worlds do not overlap spatiotemporally: no spatiotemporal part of one world is part of another.^[16] Moreover, given Lewis's
counterfactual analysis of causation, it follows from this that objects in distinct worlds bear no causal relations to one another; nothing that occurs in one world has any causal impact on anything
that occurs in any other world.
Critically, for Lewis, worlds and their denizens do not differ in the manner in which they exist. The actual world does not enjoy a kind of privileged existence that sets it apart from other worlds.
Rather, what makes the actual world actual is simply that it is our world, the world that we happen to inhabit. Other worlds and their inhabitants exist just as robustly as we do, and in precisely
the same sense; all worlds and all of their denizens are equally real.^[17] A significant semantic corollary of this thesis for Lewis is that the word ‘actual’ in the phrase ‘the actual world’ does
not indicate any special property of the actual world that distinguishes it from all other worlds; likewise, an assertion of the form ‘a is actual’ does not indicate any special property of the
individual a that distinguishes it from the objects existing in other worlds. Rather, ‘actual’ is simply an indexical whose extension is determined by the context of utterance. Thus, the referent of
‘the actual world’ in a given utterance is simply the world of the speaker, just as the referent of an utterance of ‘the present moment’ is the moment of the utterance; likewise, an utterance of the
form ‘a is actual’ indicates only that a shares the same world as the speaker. The speaker thereby ascribes no special property to a but, essentially, expresses no more than when she utters ‘a is
here’, understood in the broadest possible sense. By the same token, when we speak of non-actual possibilia — Lewis's preferred label for the denizens of possible worlds — we simply pick out those
objects that are not here in the broadest sense. In the mouth of an other-worldly metaphysician, we here are all among the non-actual possibilia of which she speaks in her lectures on de re modality.
Modal Reductionism and Counterparts. Lewis parted ways dramatically with his mentor W. V. O. Quine on modality. Quine (1960, §41) stands in a long line of philosophers dating back at least to David
Hume who are skeptical, at best, of the idea that modality is an objective feature of reality and, consequently, who question whether modal assertions in general can be objectively true or false, or
even coherent. Lewis, by contrast, wholly embraces the objectivity of modality and the coherence of our modal discourse. What he denies, however, is that modality is a fundamentally irreducible
feature of the world. Lewis, that is, is a modal reductionist. For Lewis, modal notions are not primitive. Rather, truth conditions for modal sentences can be given in terms of worlds and their
parts; and worlds themselves, Lewis claims, are defined entirely in non-modal terms. The earliest presentation of Lewis's theory of modality (Lewis 1968) — reflecting Quine's method of regimentation
— offers, rather than a possible world semantics, a scheme for translating sentences in the language of modal predicate logic into sentences of ordinary first-order logic in which the modal operators
are replaced by explicit quantifiers over worlds.^[18] The mature account of Lewis 1986 is much more semantic in orientation: it avoids any talk of translation and offers instead a (somewhat
informal) account of concretist possible world truth conditions for a variety of modal assertions. Nonetheless, it is useful to express the logical forms of these truth conditions explicitly in terms
of worlds, existence in a world (in the sense of AE1, of course), and the counterpart relation, which will be discussed shortly:
Wx: x is a world
Ixy: x exists in world y
Cxy: x is a counterpart of y
For sentences like (17) involving only de dicto modalities, Lewis's truth conditions are similar in form to the truth conditions generated by the modal clauses of basic possible world semantics;
specifically, for (17):
(18) For every world w, every individual x in w that is a dog is a mammal: ∀w(Ww → ∀x(Ixw → (Gx → Mx))).
As in possible world semantics, the modal operators ‘□’ and ‘◇’ “turn into” quantifiers over worlds in concretist truth conditions (1986, 5). Also as in possible world semantics, a quantifier (in
effect) ranging over individuals that occurs in the scope of a quantifier (in effect) ranging over worlds — ‘∀x’ and ‘∀w’, respectively, in (18) — is, for each value w of the bound world variable,
restricted to the objects existing in w. However, unlike possible world semantics, predicates are not to be thought of as having different extensions at different worlds. Rather, for Lewis, each (n-place) predicate has a single extension that can contain (n-tuples of) objects across many different worlds — intuitively, all of the objects that have the property (or n-tuples of objects that
stand in the relation) expressed by the predicate across all possible worlds. Thus, in particular, the predicate ‘G’ picks out, not just this-worldly dogs but other-worldly canines as well. Likewise,
the pet predicate ‘T’ picks out both actual and other-worldly pets. Such a move is not feasible in basic possible world semantics, which is designed for a metaphysics in which one and the same
individual can exemplify a given property in some worlds in which they exist but not others. Hence, a typical predicate will be true of an individual with respect to some worlds and false of it with
respect to others. But, for Lewis, as we've seen, distinct possible worlds do not overlap and, hence, objects are worldbound, thereby eliminating the need to relativize predicate extensions to worlds.
However, this very feature of Lewis's account — worldboundedness — might appear to threaten its coherence. For example, since Algol is in fact a pet, given worldboundedness and the definition AE1 of
existence in a world w, we have:
(19) There is no world w such that Algol exists in w and fails to be someone's pet: ¬∃w(Iaw ∧ ¬Ta).
But, according to Lewis's analysis, the modal operators ‘□’ and ‘◇’, semantically, are quantifiers over worlds. Hence, (19) might appear to be exactly the concretist truth condition for the denial of
(the right conjunct of) (16), i.e., it might appear that, on Lewis's analysis, Algol is not a pet accidentally but essentially; likewise, more generally, any individual and any intuitively accidental
property of that individual.
In fact, Lewis whole-heartedly accepts that things have accidental properties and, indeed, would accept that (16) is robustly true. His explanation involves one of the most interesting and
provocative elements of his theory: the doctrine of counterparts. Roughly, an object y in a world w[2] is a counterpart of an object x in w[1] if y resembles x and nothing else in w[2] resembles x
more than y.^[19] Each object is thus its own (not necessarily unique) counterpart in the world it inhabits but will typically differ in important ways from its other-worldly counterparts. A typical
other-worldly counterpart of Algol, for example, might resemble her very closely up to some point in her history — a point, say, after which she continued to live out her life as a stray instead of
being brought home by our kindly dog-lover John. Hence, sentences making de re assertions about what Algol might have done or what she could or could not have been are unpacked, semantically, as
sentences about her counterparts in other possible worlds. Thus, when we analyze (16) accordingly, we have the entirely unproblematic concretist truth condition:
Algol is a pet, but there is a world in which exists a counterpart of hers that is not:
Ta ∧ ∃w(Ww ∧ ∃x(Ixw ∧ Cxa ∧ ¬Tx)).
Ascriptions of essential properties, as in (15), are likewise unpacked in terms of counterparts: to say that Algol is a dog essentially is to say that
All of Algol's counterparts in any world are dogs:
∀w(Ww → ∀x((Ixw ∧ Cxa) → Gx)).
The Analysis of Intensions. Lewis's possible world truth conditions are expressed in classical non-modal logic and, hence, they are to be interpreted by means of standard Tarskian semantics. Thus, n
-place predicates π are assigned extensions E[π] — in particular, for 1-place predicates, sets of individuals — as their semantic values, as described in the exposition in §1.2 above. However, given
worldboundedness and the fact that predicate extensions are drawn not simply from the actual world but from all possible worlds, these extensions are able to serve as intensions in Lewis's theory. As
in basic possible world semantics, intensional entities in general can be defined in terms of the basic ontology of the theory independent of the linguistic roles they can play as the intensions of
predicates. And because individuals are worldbound, Lewis is able to simplify the definitions given in §1.3 by defining intensions as sets rather than functions:
• A proposition is any set of worlds.
• A property is any set of individuals.
• An n-place relation (n > 1) is any set of n-tuples of individuals.^[20]
Thus, on this analysis, a proposition p is true in a world w just in case w ∈ p and an individual a has a property P just in case a ∈ P. (Note that propositions are thus simply properties of worlds
on these definitions.) a has P accidentally just in case a ∈ P but b ∉ P for some other-worldly counterpart b of a; and a has P essentially if b ∈ P for every counterpart b of a.
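The set-theoretic definitions above can be sketched in a toy model. Everything here is invented for illustration (the worlds, the individuals, and the stipulated counterpart table standing in for Lewis's similarity relation); only the membership-based definitions of truth, property possession, and essential/accidental possession come from the text.

```python
# Toy Lewisian model: worlds are disjoint sets of worldbound individuals.
w1 = frozenset({"algol", "john"})        # the "actual" world
w2 = frozenset({"algol2", "john2"})      # another world, with counterparts

# A property is a set of individuals drawn from all worlds;
# a proposition is a set of worlds.
PET = frozenset({"algol"})               # algol2, a stray, is no one's pet
DOG = frozenset({"algol", "algol2"})
THERE_ARE_PETS = frozenset({w1})         # proposition true exactly at w1

# Stipulated counterpart relation (each thing is its own counterpart,
# plus an invented cross-world pairing standing in for similarity).
COUNTERPARTS = {
    "algol": {"algol", "algol2"},
    "john": {"john", "john2"},
}

def true_in(p, w):
    """A proposition p is true in w just in case w is a member of p."""
    return w in p

def has(a, P):
    """a has property P just in case a is a member of P."""
    return a in P

def has_essentially(a, P):
    """a has P essentially: every counterpart of a has P."""
    return all(has(b, P) for b in COUNTERPARTS[a])

def has_accidentally(a, P):
    """a has P accidentally: a has P, but some counterpart lacks it."""
    return has(a, P) and any(not has(b, P) for b in COUNTERPARTS[a])

print(true_in(THERE_ARE_PETS, w1))       # True
print(has_accidentally("algol", PET))    # True: algol2 is not a pet
print(has_essentially("algol", DOG))     # True: every counterpart is a dog
```

The sketch makes vivid how, on this picture, being a pet accidentally and being a dog essentially are both ordinary quantifications over counterparts, with no modal primitive anywhere in the evaluation.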
In Lewis's theory of modality, then, modal operators are understood semantically to be quantifiers over concrete worlds, predicates denote intensions understood as sets of (n-tuples of) parts of
those worlds, and sentences involving de re modalities are understood in terms of counterparts. To the extent that these notions are free of modality, Lewis has arguably reduced modal notions to non-modal ones.
That Lewis's truth conditions for modal statements are themselves free of modality and, hence, that his theory counts as a genuine reduction of modal notions to non-modal is not terribly
controversial (albeit not undisputed — see Lycan 1991, 224–27; Divers and Melia 2002, 22–24). Significantly more controversial, and perhaps far more critical to the project, is whether or not his
account is complete, that is, whether or not, for all modal statements φ, (i) if φ is intuitively true, then its Lewisian truth condition holds, and (ii) if φ is intuitively false, then its Lewisian truth
condition fails.^[21] The challenge to Lewis, then, is that his account can be considered successful only if it is complete in this sense.
The chief question Lewis faces in this regard is whether there are enough worlds to do the job. The truth condition (20) for the intuitively true (16) says that there exists a possible world in which
a counterpart of Algol is no one's pet. By virtue of what in Lewis's theory does such a world exist? The ideal answer for Lewis would be that some principle in his theory guarantees a plenitude of
worlds, a maximally abundant array of worlds that leaves “no gaps in logical space; no vacancies where a world might have been, but isn't” (Lewis 1986, 86). From this it would follow that the worlds
required by the concretist truth condition for any intuitive modal truth exist. Toward this end, Lewis initially considers the evocative principle:
Ways Absolutely every way that a world could be is a way that some world is.
Since, in particular, a world satisfying (20) seems quite obviously to be a way a world could be, by Ways such a world exists. But there is a fatal flaw here: Lewis himself (1973, 84) identifies ways
that a world could be with worlds themselves. So understood, Ways collapses into the triviality that every world is identical to some world.^[22]
Lewis finds a replacement for Ways in a principle of recombination whereby “patching together parts of different possible worlds yields another possible world” (1986, 87–88). The principle has two
aspects. The first is the principle that “anything can coexist with anything”. For “if there could be a dragon, and there could be a unicorn,” Lewis writes, “but there couldn't be a dragon and a
unicorn side by side, that would be ... a failure of plenitude” (ibid., 88). Given that individuals are worldbound, however, the principle is expressed more rigorously (and more generally) in terms
of other-worldly duplicates:
R1 For any (finite or infinite) number of objects a[1], a[2], ..., there is a world containing any number of duplicates of each of those objects in any spatiotemporal arrangement (size and shape permitting).
The second aspect of the principle expresses “the Humean denial of necessary connections” (ibid., 87), that is, the idea that anything can fail to coexist with anything else. For “if there could be a
talking head contiguous to the rest of a living human body, but there couldn't be a talking head separate from the rest of a human body, that too would be a failure of plenitude” (ibid). To express
this a bit more rigorously, say that objects a[1], a[2], ..., are independent of objects b[1], b[2], ..., if no sum of any parts of the former is a part or duplicate of any sum of any parts of the
latter and vice versa; then we have:
R2 For any world w, any (finite or infinite number of) objects a[1], a[2], ..., in w, and any objects b[1], b[2], ..., in w that are independent of a[1], a[2], ..., there is a world containing
duplicates of a[1], a[2], ..., and no duplicates of b[1], b[2], ... .
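The two recombination principles can be sketched very crudely, abstracting away from spatiotemporal arrangement altogether. All the names here are invented; duplicates are modeled simply as (object, copy-index) pairs, which is an assumption of the sketch, not part of Lewis's formulation.

```python
def recombine(counts):
    """R1 (sketch): given a map from objects to how many duplicates of
    each are wanted, there is a world containing exactly those duplicates.
    A duplicate is modeled as an (object, copy-index) pair."""
    return frozenset((obj, i) for obj, n in counts.items() for i in range(n))

def without_duplicates_of(world, unwanted):
    """R2 (sketch): from a world, obtain one containing duplicates of
    everything except the independent objects in `unwanted`."""
    return frozenset((obj, i) for obj, i in world if obj not in unwanted)

# R1: a dragon and a unicorn side by side.
w = recombine({"dragon": 1, "unicorn": 1})
print(("dragon", 0) in w and ("unicorn", 0) in w)            # True

# R2: a talking head without the rest of the body.
w2 = recombine({"talking_head": 1, "rest_of_body": 1})
w3 = without_duplicates_of(w2, {"rest_of_body"})
print(("talking_head", 0) in w3, ("rest_of_body", 0) in w3)  # True False
```

The point of the sketch is only structural: recombination generates new worlds by freely copying and omitting duplicates, with no modal notion doing any work.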
Worlds that satisfy the concretist truth conditions for workaday possibilities like (16) are easily conceived as consisting of duplicates of relevant parts of the actual world — suitably organized to
retain their actual properties, or not, as needed. Hence, the existence of such worlds does indeed appear to follow from the existence of the actual world by recombination. Worlds containing talking
donkeys, exotic species resulting from a wholly different evolutionary history, worlds with silicon-based life forms, and so on present a bigger challenge to the view. Nonetheless, it is not entirely
implausible to think such worlds exist given suitable duplication and reorganization of microphysical objects.^[23]
Whether recombination completely captures our modal intuitions regarding plenitude is still a matter of some dispute.^[24] However, even if it doesn't, it is less than clear whether this counts
against the success of Lewis's reductionist project. For, as a realist about worlds, Lewis does not seem to be under any obligation to “derive” plenitude from more fundamental principles. Hence,
there is no obvious reason why he cannot respond to charges of incompleteness by saying that it is simply a presupposition of his theory that logical space has no gaps, that there are always enough
worlds to satisfy the concretist truth condition for any intuitive modal truth.^[25] So understood, the role of recombination for a realist about worlds like Lewis is something like the role of such
axioms as powerset and replacement for a realist about sets: given some sets, these principles provide us with a detailed — but always less than complete — characterization of what further sets there
are. Their role, therefore, is to give us insight into the richness and diversity of set theoretic space, not a complete mechanism for proving which particular sets do or do not exist. Likewise
recombination vis-à-vis worlds and logical space.
Lewis's theory is particularly commendable for its striking originality and ingenuity and for the simple and straightforward answers AW1 and AE1 that it provides to our two questions QW and QE above.
Furthermore, because worlds are (plausibly) defined entirely in nonmodal terms, the truth conditions provided by Lewis's translation scheme themselves appear to be free of any implicit modality.
Hence, unlike many other popular accounts of possible worlds (notably, the abstractionist accounts discussed in the following section), Lewis's promises to provide a genuine analysis of the modal operators.
Perhaps the biggest — if not the most philosophically sophisticated — challenge to Lewis's theory is “the incredulous stare”, i.e., less colorfully put, the fact that its ontology is wildly at
variance with common sense. Lewis faces this objection head on: His theory of worlds, he acknowledges, “does disagree, to an extreme extent, with firm common sense opinion about what there is” (1986,
133). However, Lewis argues that no other theory explains so much so economically. With worlds in one's philosophical toolkit, one is able to provide elegant explanations of a wide variety of
metaphysical, semantical, and intentional phenomena. As high as the intuitive cost is, Lewis (135) concludes, the existence of worlds “ought to be accepted as true. The theoretical benefits are worth the ontological cost.”
Additional discussion of, and objections to, concretism can be found in the supplemental document Further Problems for Concretism.
A rather different set of intuitions about situations is that they are abstract entities of a certain sort: They are states or conditions, of varying detail and complexity, that a concrete world
could be in — they are ways that things, as a whole, could be.^[26] Thus, returning to our original example, one very simple way things could be is for our philosopher Anne to be in her office. We
can now imagine, as in our example, further detail being successively added to that description to yield more complex ways things could be: Anne working at her desk in her office; music being in the
background; her husband being on the phone in the next room; her neighbor mowing the lawn next door; and so on. Roughly speaking, then, a possible world for an abstractionist is the limit of such a
“process” of consistently extending and adding detail to some initial state of the world; it is a total way things could be, a consistent state of the world that settles every possibility; a
consistent state to which no further detail could be added without rendering it inconsistent.
To give the notion of a state, or condition, of the world a little more metaphysical substance, abstractionists typically appeal to more traditional ontological categories. Thus, for example, that
things could be in the simple state described above might be spelled out in one of the following ways:
• The proposition that Anne is in her office and at her desk is possibly true.
• The set of propositions {that Anne is in her office, that Anne is at her desk} is such that, possibly, all of its members are true.
• The property being such that Anne is in her office and at her desk is possibly exemplified (by “things as a whole”).
Possible worlds are then defined as special cases of the type of entity in question that are in some relevant sense total. Adams (1974), for example, defines possible worlds to be consistent sets of
propositions that are total in the sense of containing, for every proposition p, either p or its negation; Fine (1977), fleshing out ideas of Prior, defines a possible world to be a consistent
proposition w that is total in the sense that, for every proposition p, w entails either p or its negation. For purposes here, however, we will sketch the fundamentals of the abstractionist view in
terms of states of affairs, following the basic features of the account developed by Plantinga (1974, 1976), an account that, in the literature, frequently serves as a particularly trenchant
abstractionist counterpoint to Lewis's concretism.^[27]
States of affairs (SOAs) are abstract, intensional entities typically signified by sentential gerundives like “Algol's being John's pet” and “There being more than ten solar planets”. Importantly,
SOAs constitute a primitive ontological category for the abstractionist; they are not defined in terms of possible worlds in the manner that propositions are in §1.3. Just as some propositions are
true and others are not, some SOAs are actual and others are not.^[28] Note, then, that to say an SOA is non-actual is not to say that it does not actually exist. It is simply to say that it is not,
in fact, a condition, or state, that the concrete world is actually in. However, because ‘____ is actual’ is often used simply to mean ‘____ exists’, there is considerable potential for confusion
here. So, henceforth, to express that an SOA is actual we will usually say that it obtains.
An SOA is said to be possible (necessary, impossible) insofar as it is possible (necessary, impossible) that it obtain. One SOA s is said to include another t if, necessarily, s obtains only if t
does; s precludes t if, necessarily, s obtains only if t doesn't. So, for example, Algol's being John's pet includes Algol's being someone's pet and precludes there being no pets. Thus, on the
abstractionist's understanding of a situation as a state or condition of the physical world rather than a concrete, structured piece of it, the inclusion of one situation in another is a purely
logical relation, not a mereological one. Finally, say that an SOA s is total if, for every SOA t, s either includes or precludes t. (Abstractionists often use ‘maximal’ instead of ‘total’, but we
have already introduced this term in the context of concretism.) Abstractionist possible worlds are now definable straightaway:
AW2 w is a possible world =[def] w is an SOA that is both possible and total.^[29]
It is easy to see that this definition covers the more intuitive characterizations of abstract possible worlds above: they are consistent — i.e., possible — states of the world that settle every
possibility, consistent states to which no further detail could be added without rendering them inconsistent. Note also that, for the abstractionist, as for the concretist, the actual world is no
different in kind from any other possible world; all possible worlds exist, and in precisely the same sense as the actual world. The actual world is simply the total possible SOA that, in fact,
obtains. And non-actual worlds are simply those total possible SOAs that do not.
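The definition AW2 can be given a crude finite sketch. Here an SOA is modeled as a partial assignment of truth values to a handful of invented atomic states; this is purely illustrative, since for the abstractionist SOAs are primitive, not set-theoretic constructions.

```python
from itertools import product

# Invented atomic states for the toy model.
ATOMS = ("anne_in_office", "anne_at_desk", "music_playing")

def total_worlds():
    """Possible worlds (AW2, sketched): total states, i.e. assignments
    that settle every atom one way or the other. With independent atoms,
    every total assignment counts as consistent."""
    return [dict(zip(ATOMS, vals))
            for vals in product([True, False], repeat=len(ATOMS))]

def includes(s, t):
    """s includes t (sketch): s settles every atom t settles, the same way."""
    return all(atom in s and s[atom] == val for atom, val in t.items())

def precludes(s, t):
    """s precludes t (sketch): s settles some atom the opposite way from t.
    For independent atoms this is just incompatibility."""
    return any(atom in s and s[atom] != val for atom, val in t.items())

worlds = total_worlds()
simple_state = {"anne_in_office": True}   # a partial, non-total SOA
print(len(worlds))                        # 8 total states for 3 atoms
print(all(includes(w, simple_state) or precludes(w, simple_state)
          for w in worlds))               # True: each world settles it
```

Note how totality does the work: every world either includes or precludes the simple state, so no further detail could be added to a world without inconsistency.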
What of existence in such worlds? As we've seen, on Lewis's account, to exist in a concrete world w is literally to exist in w, that is, within the spatiotemporal boundaries of w. Clearly, because
SOAs are abstract, individuals cannot exist in abstractionist worlds in anything like the same literal, mereological sense. Accordingly, the abstractionist defines existence in a world simply to be a
special case of the inclusion relation:
AE2 Individual a exists in possible world w =[def] w includes a's existing.
Unlike concretism, then, abstractionism does not entail that individuals are worldbound; there is no inconsistency whatever in the idea that many distinct worlds can include the existence of one and
the same individual. Indeed, typically, abstractionists are staunchly committed to transworld identity and hold that most any given individual exists in many possible worlds and, moreover, that
contingent individuals, at least, can exemplify very different properties from world to world. Abstractionists, therefore, have no need to appeal to counterparts to understand de re modalities and
can therefore accept the truth conditions for such modalities given by basic possible world semantics (spelled out, of course, in terms of their definitions AW2 and AE2). In particular, they can take
the standard possible world truth condition for, e.g., the right conjunct of (16) at face value: ‘◇(E!a ∧ ¬Ta)’ is true on the abstractionist's approach if and only if there is a world in which
Algol herself, rather than some counterpart of hers, exists but fails to be anyone's pet.
It is important to note that the possible worlds of abstractionism do not yield a reductive analysis of modality. The reason for this is clear: abstract possible worlds are defined in irreducibly
modal terms — a possible world is an SOA that (among other things) possibly obtains; or a set of propositions such that it is possible that all of its members are true; or a property that is possibly
exemplified; and so on. Hence, unpacked in terms of the abstractionist's definitions, the possible world truth conditions for modal propositions are themselves irreducibly modal. For example, when we
unpack Plantinga's definition of a possible world in the semantic clause for sentences of the form ⌈□ψ⌉ in order to derive the truth condition for (17), ‘□∀x(Gx → Mx)’, we end up with this:
For all SOAs w, if (i) possibly, w obtains and (ii) for all SOAs s, either (a) necessarily, w obtains only if s does or (b) necessarily, w obtains only if s doesn't, then, ‘∀x(Gx → Mx)’ is true at w.
If we now unpack the modal operators in (22) using the corresponding truth conditional clauses of standard possible world semantics, the result will contain further world quantifiers. And spelling
out those world quantifiers in turn using Plantinga’s definition will re-introduce those same modal operators yet again.
More generally, and a bit more exactly, put: As noted above, the logical framework of basic possible world semantics is classical predicate logic. The logical framework of abstractionism is modal
predicate logic. Hence, if possible world semantics is supplemented with abstractionist definitions of possible worlds, then the logical framework of possible world semantics becomes modal predicate
logic as well and, as a consequence, the extensionality of the semantics is lost once again. (This point is expressed somewhat more formally in the supplemental document The Intensionality of
Abstractionist Possible World Semantics.) Since, as noted above, the central motivation for possible world semantics was to deliver an extensional semantics for modal languages, any motivation for
abstractionism as a semantic theory is arguably undermined.^[30]
However, it is not entirely clear that this observation constitutes an objection to abstractionism. For abstractionists can argue that the goal of their analysis is the converse of the reductionist's
goal: The reductionist wants to understand modality in terms of worlds; the abstractionist, by contrast, wants to understand worlds in terms of modality. That is, abstractionists can argue that we
begin with a primitive notion of modality and, typically upon a certain amount of philosophical reflection, we subsequently discover an intimate connection to the notion of a possible world, as
revealed in the principles Nec and Poss. The analysis that abstractionists provide is designed to make this connection explicit, ideally, in such a way that Nec and Poss fall out as theorems of their
theory (see, e.g., Plantinga 1985 and Menzel and Zalta 2014).
Hand in glove with the irreducible nature of modality is the nature of intensional entities. Concretists define intensional entities in terms of worlds, as described in §2.1.3. Abstractionists, by
contrast, define worlds in terms of intensional entities. This divergence in their choice of ontological primitives reflects, not only their differing stances toward modality, but also an important
methodological difference with regard to metaphysical inquiry. The concretist is far more pragmatic; notions of property, relation, proposition, and the like play certain roles in our theorizing and
are subject to a “jumble of conflicting desiderata” (Lewis 1986, 54). Within a given theory, any entities that can play those roles fruitfully for the purposes at hand are justifiably identified with
those notions — regardless of how well they comport with pre-theoretic intuitions. Thus, Lewis finds it to be a strength of his position that he is able to adopt the set theoretic definitions in
§2.1.3. By contrast, at least some abstractionists — Plantinga (1987) perhaps most notably — believe that we have intuitive, pre-theoretic knowledge of intensional entities that precludes their being
identified with set theoretic constructions of any sort.^[31] (See Stalnaker 1976 for a particularly illuminating discussion of the contrast between concretism and abstractionism with respect to the
treatment of intensional entities.)
As was noted in §2.1.2, for the concretist, there is no special property of the actual world — actuality — that distinguishes it, in any absolute sense, from all of the others; it is simply the world
that we inhabit. For abstractionists, however, actuality is a special property that distinguishes exactly one possible world from all others — the actual world is the only world that happens to
obtain; it is the one and only way things could be that is the way things as a whole, in fact, are. However, for most abstractionists, the distinctiveness of the actual world does not lie simply in
its actuality but in its ontological comprehensiveness: the actual world encompasses all that there is. In a word: most abstractionists are actualists.
Actualism is the thesis that everything that there is, everything that has being in any sense, is actual. In terms of possible worlds: Everything that exists in any world exists in the actual world.^
[32] Possibilism, by contrast, is the denial of actualism; it is the thesis that there are mere possibilia, i.e., things that are not actual, things that exist in other possible worlds but fail to
exist in the actual world. Concretists are obviously not actualists (on their understanding of ‘actual’, at any rate).^[33] Indeed, for the concretist, since individuals are worldbound, everything
that exists in any nonactual possible world is distinct from everything in the actual world. However, although possibilism and abstractionism are entirely compatible — Zalta (1983), for example,
embraces both positions — abstractionists tend to be actualists. The reason for this is clear: Basic possible world semantics appears to be committed to possibilism and abstractionism promises a way
of avoiding that commitment.
The specter of possibilism first arises with regard to non-actual possible worlds, which would seem by definition to be prime examples of mere possibilia. However, we have just seen that the
abstractionist can avoid this apparent commitment to possibilism by defining possible worlds to be SOAs of a certain sort. So defined, non-actual worlds, i.e., worlds that fail to obtain, can still
actually exist. Hence, the commitment of basic possible world semantics to non-actual worlds does not in itself threaten the actualist's ontological scruples.
However, the specter of possibilism is not so easily exorcised. For non-actual worlds are not the only, or even the most compelling, examples of mere possibilia that seem to emerge out of basic
possible world semantics. For instance, it is quite reasonable to think that evolution could have taken a very different course (or, if you like, that God could have made very different creative
choices) and that there could have been individuals — call them Exotics — that are biologically very different from all actually existing individuals; so different, in fact, that no actually existing
thing could possibly have been an Exotic. According to basic possible world semantics, the sentence ‘There could have been Exotics’ or, more formally,
◇∃xEx,
is true just in case there is a world in which ‘∃xEx’ is true, i.e., when all is said and done, just in case:
There is a possible world w and an individual a in w such that a is an Exotic in w,
which, a bit less formally, is simply to say that
Some individual is an Exotic in some possible world.
However, since no actually existing thing could have been an Exotic, anything that is an Exotic in some possible world cannot be among the things that exist in the actual world. Thus, the truth
conditions that basic possible world semantics assigns to some of our intuitive modal beliefs appear to entail that there are non-actual individuals as well as non-actual possible worlds. Defining
possible worlds as SOAs provided a way for the actualist to embrace non-actual worlds without compromising her actualism. But how is the actualist to understand the apparent commitment to non-actual
individuals in such truth conditions as (25)?
Answers that have been given to this question represent a rather deep divide between actualist abstractionists. On the one hand, “trace” actualists introduce actually existing entities into their
ontologies that can play the role of mere possibilia in (25) and its like. Trace actualists come in two varieties: new actualists and haecceitists. New actualists like Linsky and Zalta (1996) and
Williamson (1998, 2000, 2013) argue that, in fact, all individuals are actually existing, necessary beings but not all of them are necessarily concrete. Some concrete individuals — those
traditionally (mis-)categorized as contingent beings — are only contingently concrete. Likewise, some non-concrete individuals — those, like possible Exotics, traditionally (mis-)categorized as
contingently non-actual mere possibilia — are only contingently non-concrete.^[34]
This novel take on modal metaphysics allows the new actualist to reinterpret possible world semantics so as to avoid possibilism. Notably, the domain d(w) of a world w is understood not as the set of
things that exist in w — for all individuals exist in all worlds — but the set of things that are concrete in w.^[35] Hence, for the new actualist, the correct truth condition for (23) is:
There is a possible world w and an individual a that is (i) concrete in w and (ii) an Exotic in w.
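The new actualist's reinterpretation can be sketched as a small model. All names here (the two worlds, the individuals, the extensions) are invented; the one structural point taken from the text is that the domain is fixed across worlds while concreteness, not existence, varies from world to world.

```python
# One fixed domain: every individual exists in every world.
INDIVIDUALS = {"socrates", "exotic1"}

# d(w), reinterpreted: what is *concrete* in w, not what exists in w.
CONCRETE = {
    "w_actual": {"socrates"},                 # exotic1 is non-concrete here
    "w_other":  {"socrates", "exotic1"},
}

# World-relative extension of the invented predicate 'is an Exotic'.
EXOTIC_IN = {
    "w_actual": set(),
    "w_other":  {"exotic1"},
}

def possibly_exotic():
    """New-actualist truth condition for 'There could have been Exotics':
    some world w and individual a such that a is concrete in w and an
    Exotic in w."""
    return any(a in CONCRETE[w] and a in EXOTIC_IN[w]
               for w in CONCRETE for a in INDIVIDUALS)

print(possibly_exotic())   # True: exotic1 is concrete and Exotic in w_other
```

The actualist payoff is visible in the data: `exotic1` is in the single domain throughout, so the quantifier never ranges over anything non-actual; it is merely contingently non-concrete in the actual world.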
On the other hand, haecceitists like Plantinga introduce special properties — haecceities — to similar ends. The haecceity of an individual a is the property of being that very individual, the
property being a. A property is a haecceity, then, just in case it is possible that it is the haecceity of some individual.^[36] It is a necessary truth that everything has a haecceity. More
importantly, for haecceitists, haecceities are necessary beings. Thus, not only is it the case that, had any particular individual a failed to exist, its haecceity h[a] would still have existed, it
is also the case that, for any “merely possible” individual a, there is an actually existing haecceity that would have been a's haecceity had a existed. More generally (and more carefully) put:
Necessarily, for any individual a, (i) a has a haecceity h and (ii) necessarily, h exists.
Like the new actualists, then, the metaphysics of the haecceitists enables them to systematically reinterpret possible world semantics in such a way that the truth conditions of modal discourse are
expressed solely in term of actually existing entities of some sort rather than actual and non-actual individuals. More specifically, for the haecceitist, the domain d(w) of a world w is taken to be
the set of haecceities that are exemplified in w, that is, the set of haecceities h such that w includes h's being exemplified. Likewise, the w-extension of a (1-place) predicate π is taken to be a
set of haecceities — intuitively, those haecceities that are coexemplified in w with the property expressed by π. So reinterpreted, the truth condition for (23) is:
There is a possible world w and a haecceity h that is (i) exemplified in w and (ii) coexemplified with the property being an Exotic in w.
By contrast, “no-trace”, or strict, actualists like Prior (1957), Adams (1981), and Fitch (1996) hew closely to the intuition that, had a contingent individual a failed to exist, there would have
been absolutely no trace, no metaphysical vestige, of a — neither a itself in some non-concrete state nor any abstract proxy for a. Hence, unlike trace actualism, there are no such vestiges in the
actual world of objects that are not actual but only could have been. The logical consequences for no-trace actualists, however, appear to be severe; at the least they cannot provide a standard
compositional semantics for modal languages, according to which (roughly) the meaning of a sentence is determined by its logical form and the meanings of its semantically significant constituents. In
particular, if there is nothing to play the role of a “possible Exotic”, nothing that is, or represents, an Exotic in some other possible world — a mere possibile, a contingently non-concrete
individual, an unexemplified haecceity — then the strict actualist cannot provide standard, compositional truth conditions for quantified propositions like (23) that yield the intuitively correct
truth value. For, understood compositionally, (23) is true if and only if ‘∃xEx’ is true at some world w. And that, in turn, is true at w if and only if ‘Ex’ is true at w for some value of ‘x’. But,
as just noted, for the strict actualist, there is no such value of ‘x’. Hence, for the strict actualist, ‘Ex’ is false at w for all values of ‘x’ and, hence, (23) is false as well. (These issues are
explored in much greater detail in §4 of the entry Actualism.)
Like concretism, abstractionism provides a reasonably clear and intuitive account of what worlds are and what it is to exist in them, albeit from a decidedly different perspective. Although, as noted
in §2.2.2, the fact that modality is a primitive in abstractionist definitions of possible worlds arguably compromises its ability to provide semantically illuminating truth conditions for the modal
operators, those definitions can be taken to illuminate the connection between our basic modality concepts and the evocative notion of a possible world that serves as such a powerful conceptual tool
for constructing philosophical arguments and for analyzing and developing solutions to philosophical problems. In this regard, particularly noteworthy are: Plantinga's (1974) influential work on the
ontological argument and the free will defense against the problem of evil; Adams' (1974, 1981) work on actualism and actuality; and Stalnaker's (1968, 1987) work on counterfactual conditionals and
mental content.
A number of important objections have been voiced in regard to abstractionism. Some of these are addressed in the document Problems with Abstractionism.
As its name might suggest, our third approach — combinatorialism — takes possible worlds to be recombinations, or rearrangements, of certain metaphysical simples. Both the nature of simples and the
nature of recombination vary from theory to theory. Quine (1968) and Cresswell (1972), for example, suggest taking simples to be space-time points (modeled, perhaps, as triples of real numbers) and
worlds themselves to be arbitrary sets of such points, each set thought of intuitively as a way that matter could be distributed throughout space-time. (A world w, so construed, then, is actual just
in case a space-time point p is a member of w if and only if p is occupied by matter.) Alternatively, some philosophers define states a world could be in, and possible worlds themselves, simply to be
maximally consistent sets of sentences^[37] in an expressively rich language — “recombinations”, certainly, of the sentences of the language. (Lewis refers to this view as linguistic ersatzism.^[38])
However, the predominant version of combinatorialism finds its origins in Russell's (1918/1919) theory of logical atomism and Wittgenstein's (1921, 1922, 1974) short but enormously influential
Tractatus Logico-Philosophicus. A suggestive paper by Skyrms (1981) spelling out some of the ideas in the Tractatus, in turn, inspired a rich and sophisticated account that is developed and defended
in great detail in an important series of books and articles by D. M. Armstrong (1978a, 1978b, 1986a, 1989, 1997, 2004b, 2004c). In this section, we present a somewhat simplified version of
combinatorialism that draws primarily upon Armstrong's work. Unless otherwise noted, this is what we shall mean by ‘combinatorialism’ for the remainder.
Wittgenstein famously asserted that the world is the totality of facts, not of things (ibid., §1.1). The combinatorialist spells out Wittgenstein's aphorism explicitly in terms of an ontology of
objects (a.k.a., particulars), universals (a.k.a., properties and relations), and facts. Facts are either atomic or molecular. Every atomic fact — Sachverhalt, in the language of the Tractatus — is
“constituted” by an n-place relation (= property, for n=1) and n objects that stand in, or exemplify, that relation. Thus, for example, suppose that John is 1.8 meters tall. Then, in addition to John
and the property being 1.8 meters tall, there is for the combinatorialist the atomic fact of John's exemplifying that property. More generally, atomic facts exist according to the following principle:
AF Objects a[1], ..., a[n] exemplify n-place relation R iff there is the fact a[1], ..., a[n]'s exemplifying R ([R,a[1],...,a[n]], for short).
Say that the a[i] are the constituent objects of the fact in question and R its constituent universal, and that R and the a[i] all exist in [R,a[1],...,a[n]].
A fact is monadic if its constituent universal is a property. A molecular fact f is a conjunction of atomic facts. Its constituent objects and universals are exactly those of its conjuncts and an
entity exists in f just in case it exists in one of its conjuncts. (For simplicity, we stipulate that an atomic fact has (only) itself as a conjunct and, hence, is “trivially” molecular.) One fact f
includes another g if every conjunct of g is a conjunct of f. (Note, importantly, that inclusion, so defined, is quite different from the homonymous notion defined in the discussion of abstractionism
above — most notably, combinatorial inclusion is not a modal notion.) For purposes below, say that an object a is a bare particular in a molecular fact f if there is no monadic conjunct of f of which
a is the constituent object, no conjunct of the form a exemplifies F, for some property F. a is a bare particular if it is bare in every molecular fact. Intuitively, of course, a bare particular is
an unpropertied object.
There is no upper bound on the “size” of a molecular fact and no restriction on which atomic facts can form a conjunction; for any atomic facts at all, there is a molecular fact whose conjuncts are
exactly those facts. As a first cut, then, we can spell out Wittgenstein's characterization of the (actual) world as the totality of facts by defining the world to be the largest molecular fact, the
molecular fact that includes all of the atomic facts.^[39]
Although objects and universals are typically included along with facts in the basic ontology of combinatorialism, facts are typically considered more fundamental. Indeed, taking his cue from the
Tractarian thesis that the world consists of facts, not things, Armstrong (1986a, 577) argues that facts alone are ontologically basic and that objects and universals are simply “aspects of,
abstractions from” facts. Accordingly, he calls the object constituent of a fact of the form [P,a] a “thin” particular, an object “considered in abstraction from all its [intrinsic] properties”
(1993, 433); and where N is the conjunction of “all the non-relational properties” of that particular (which would presumably include P), the atomic fact a's exemplifying N is itself the corresponding
“thick” particular (ibid., 434 — we will occasionally use italics to distinguish a thin particular a from the corresponding thick particular a). Though not all combinatorialists buy
into Armstrong's “factualist” metaphysics (Bricker 2006), they do generally agree that facts are more fundamental, at least to the extent that both the notion of a bare particular, i.e., an object
exemplifying no properties, and that of an unexemplified property are considered incoherent; insofar as they exist at all, the existence of both particulars and universals depends on their
“occurring” in some fact or other. Whatever their exact ontological status, it is an important combinatorialist thesis that exactly what objects and universals exist is ultimately a matter for
natural science, not metaphysics, to decide.
Objects can be either simple or complex. An object is simple if it has no proper parts, and complex otherwise. Like objects, universals too divide into simple and complex. A universal is simple if it
has no other universal as a constituent, and complex otherwise. Complex universals accordingly come in two varieties: conjunctive — the constituents of which are simply its conjuncts — and
structural. A structural universal U is one that is exemplified by a complex object O, and its constituents are universals (distinct from U) exemplified by simple parts of O that are relevant to O's
being an instance of U.^[40] It is important to note that, for Armstrong, the constituency relation is not the mereological parthood relation. Rather, complex universals (hence also complex facts of
which they are constituents) enjoy a “non-mereological mode of composition” (1997, 119–123) that, in particular, allows for a richer conception of their structure.^[41] (An assumption of our
simplified account here will be that both the proper parthood relation and the constituency relation are well-founded. It follows that (i) there is no gunk, i.e., that every complex object is
composed, ultimately, entirely of simples and (ii) complex universals — hence the complex facts in which they are exemplified — are ultimately “grounded” in simple facts, i.e., that they cannot be
infinitely decomposed into further complex universals/facts.^[42])
To illustrate the basic idea: in Figure 1, the left-hand diagram depicts a water molecule W comprising an oxygen atom o and two hydrogen atoms h[1] and h[2]. For the combinatorialist, “thick”
particulars like the molecule itself as well as its constituent atoms are themselves facts: o is the fact [O,o] in which the universal oxygen (O) is exemplified by a thin particular o;^[43] likewise
h[1] and h[2]. W in turn comprises those monadic facts and the relational facts [B,o,h[1]], [B,o,h[2]] wherein the covalent bonding relation B holds between the oxygen atom and the two hydrogen
atoms. The structural universal Water itself, then, shares this structure — it is, so to say, an isomorph consisting of the monadic universals O and H and the binary relation B, structured as
indicated in the right-hand diagram of Figure 1.^[44]
Figure 1: A Water Molecule W and the Structural Universal Water
It should be clear from principle AF that all atomic facts hold; that is, all of them reflect actual exemplification relations. Obviously, however, possibility encompasses more than what is actual,
that is, there are possible facts as well as actual facts; the world's universals might have been exemplified by its objects very differently. If they had — if the world's objects and universals had
combined in a very different way — there would have been a very different set of atomic facts and, hence, a very different world.
To spell out the idea of a possible fact, the combinatorialist introduces the more general notion of an atomic (combinatorial) state of affairs, that is, an entity that simply has the form of an
atomic fact — n objects exemplifying an n-place relation — but without any requirement that the exemplification relation in question actually holds between them. More exactly:
AS For any objects a[1], ..., a[n] and any n-place relation R, there is an atomic (combinatorial) state of affairs a[1], ..., a[n]'s exemplifying R (again, [R,a[1],...,a[n]], for short).
Thus, even if the two hydrogen atoms h[1] and h[2] in a water molecule do not in fact stand in the covalent bonding relation B, there is nonetheless the (non-factual) state of affairs [B,h[1],h[2]].
Combinatorialism takes facts to be literal, structured parts of the physical world. This suggests that a non-factual state of affairs — a merely possible fact — must be part of a merely possible
physical world. This idea is at odds with the strong, scientifically-grounded form of actualism that typically motivates combinatorialism. Two options are available: The combinatorialist can follow
the (actualist) abstractionists and define states of affairs to be philosophical or mathematical constructs consisting only of actual objects, properties, relations, and facts. For example, the state
of affairs [R,a[1],...,a[n]] can simply be identified with the ordered n-tuple 〈R,a[1],...,a[n]〉. So long as the combinatorialist is willing to adopt the additional metaphysical or set theoretic
machinery, this sort of approach offers a way of introducing non-factual states of affairs that does not involve any untoward ontological commitments to merely possible entities. Alternatively,
following Armstrong (1989, 46–51; 1997, 172–4), the combinatorialist can refuse to grant non-factual states of affairs any genuine ontological status and adopt a form of modal fictionalism that
nonetheless permits one to speak as if such states of affairs exist. The exposition to follow will remain largely neutral between these options.
Constituency for states of affairs is understood as for facts. Additionally, analogous to molecular facts, there are molecular states of affairs — conjunctions of atomic states of affairs. Inclusion
between states of affairs is understood exactly as it is between facts and being a bare particular in a molecular state of affairs s is understood as for facts: a is a bare particular in s if there
is no monadic conjunct of s of the form a exemplifies F. The notion of recombination is now definable straightaway:
s is a recombination of a molecular state of affairs f =[def] s is a molecular state of affairs whose constituent objects and constituent universals are exactly those of f. s is a non-trivial
recombination of f if it does not include the same states of affairs as f.
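The definitions of constituency and recombination are purely combinatorial, so they can be made vivid computationally. The following Python toy model is only an illustrative sketch under an assumed encoding — an atomic state of affairs [R,a[1],...,a[n]] is represented as a tuple, a molecular state of affairs as a frozenset of such tuples, and the names B, o, h1, h2 are the informal ones from the water example, not anything canonical in the literature:

```python
# Toy encoding (an assumption of this sketch, not canonical): an atomic
# state of affairs [R, a1, ..., an] is the tuple (R, a1, ..., an); a
# molecular state of affairs is a frozenset of atomic states -- its conjuncts.

def constituents(molecular):
    """The constituent objects and universals of a molecular state of affairs."""
    universals = {atom[0] for atom in molecular}
    objects = {obj for atom in molecular for obj in atom[1:]}
    return objects, universals

def is_recombination(s, f):
    """s is a recombination of f iff s's constituent objects and
    universals are exactly those of f."""
    return constituents(s) == constituents(f)

def is_nontrivial_recombination(s, f):
    """...and s does not include exactly the states of affairs f includes."""
    return is_recombination(s, f) and s != f

# The fact that o bonds (B) to h1 and to h2, as in the water molecule W:
f = frozenset({('B', 'o', 'h1'), ('B', 'o', 'h2')})
# A rearrangement of the very same simples:
s = frozenset({('B', 'h1', 'h2'), ('B', 'o', 'h1')})

print(is_nontrivial_recombination(s, f))                    # True
print(is_recombination(frozenset({('B', 'o', 'h1')}), f))   # False: h2 is missing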
Very roughly then, a possible world will be a certain sort of recombination of (some portion of) the actual world, the molecular fact that includes all of the atomic facts. This idea will be refined
in the following sections.
Say that a state of affairs is structural if it is atomic and its constituent universal is structural or it is molecular and includes a structural state of affairs; and say that it is simple
otherwise. The difference between structural and simple universals and states of affairs is particularly significant with regard to the important concept of supervenience (Armstrong 1989, Ch 8).^[45]
Entity or entities S supervene on entity or entities R if and only if the existence of R necessitates that of S (ibid., 103). (Necessitation here is, of course, ultimately to be spelled out in terms
of combinatorial possible worlds.) Non-structural states of affairs supervene directly on their atomic conjuncts.^[46] However, things are not in general quite so straightforward for structural
states of affairs. For, although structural states of affairs are ultimately constituted entirely by simple states of affairs, unlike non-structural states of affairs, structural states of affairs
typically supervene on more than the totality of their constituents. For, in many cases, whether or not a structural fact exists depends not only on the existence of its constituent facts but also on
the absence of certain others (Armstrong 1997, 34ff). For example, as noted above, our water molecule W comprises two relational facts in which the two hydrogen atoms h[1] and h[2] both stand
in the covalent bonding relation with an oxygen atom o. However, if o were to bond with a further hydrogen atom h[3], then, despite the fact that the constituent facts of W would still hold, W would
not be water; there would be no such fact as W's being water.^[47] Rather, W would exist only as a complex part of a hydronium ion; the new binding [B,o,h[3]] would, so to say, “spoil” the
instantiation of Water. Thus, more generally, whether or not a structural state of affairs S exists in a possible world typically requires something over and above its constituent states of affairs
being “welded together” in the right sort of way (Armstrong, 1997, 36); it requires also that there be no relevant “spoilers” for S.^[48] Armstrong draws directly on the initial passages of the
Tractatus^[49] for the necessary apparatus: a structural state of affairs S in any possible world w, supervenes, not simply on its constituent atomic states of affairs but on a certain higher-order
state of affairs T[w], namely, the state of affairs that the (first-order) atomic states of affairs of w are all the (first-order) atomic states of affairs and, hence, that w includes no spoilers for
S. Armstrong (ibid., 35, 134–5, 196–201) calls T[w] the totality state of affairs for the atomic states of affairs of w.^[50]
The idea of possibility being rooted in arbitrary recombinations of the actual world, rearrangements of its objects and universals, is intuitively appealing. Clearly, however, not just any such
recombination can count as a possible world. Some states of affairs are intuitively impossible — [being an elephant, e], where e is an individual electron, say — and some pairs of states of affairs,
while individually possible, are not compossible — the states of affairs [having 1kg mass, a] and [having 2kg mass, a] for a given object a, or, for a given mereological sum m of simples, the states
of affairs [being a baboon, m] and [being a hoolock, m]. But nothing that has been said rules out the existence of recombinations of the actual world — rearrangements of its objects and universals —
that include such states of affairs. Obviously, however, such recombinations cannot be thought to represent genuinely possible worlds. Of course, like the abstractionist, the combinatorialist could
simply stipulate as part of the definition that all legitimate recombinations must be genuinely possible states of affairs of a certain sort, genuinely possible recombinations. But this will not do.
For, like concretism, combinatorialism purports to be a reductive account of modality, an account of possible worlds that does not depend ultimately on modal notions (see Armstrong 1989, 33).^[51]
Here the distinction between simple and structural states of affairs together with the combinatorialist's strong notion of supervenience come to the fore. For, given that structural facts supervene
on simple facts and the actual totality fact T[@], the actual world can be defined more parsimoniously as the molecular fact that includes all the simple atomic facts and the totality fact T[@]. And
at the level of simples, there are no limitations whatever on recombination (Wittgenstein 1921, 2.062-2.063); hence, any recombination of simple objects and universals is by definition considered
possible. Thus Armstrong (1986a, 579):
The simple individuals, properties, and relations may be combined in all ways to yield possible [simple] atomic states of affairs, provided only that the form of atomic facts is respected. That
is the combinatorial idea.
Worlds, in particular, can be defined as special cases of such recombinations, together with appropriate totality facts. To state this, we need a condition that ensures the existence of a unique
actual world:
States of affairs s and t are identical iff they include exactly the same states of affairs.
Given this, we have:
The (combinatorial) actual world =[def] the fact @ that includes exactly all the simple atomic facts and the totality state of affairs T[@] for the conjunction of those facts.
AW3 w is a (combinatorial) possible world =[def] w is a recombination of the simple atomic facts of the actual world conjoined with the totality fact T[w] for that recombination.^[52]
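Because recombination at the level of simples is entirely unconstrained, the space of combinatorial worlds per AW3 can, for a toy ontology, simply be enumerated. The sketch below is again a hedged Python illustration under the same assumed tuple encoding (the objects o, h1, h2 and the binary universal B are made up for the example); it generates every atomic state that respects the form of atomic facts and collects the recombinations of the actual simples, with each resulting set tacitly treated as exhaustive — that exhaustiveness playing the role of the totality fact T[w]:

```python
from itertools import chain, combinations, product

# A toy ontology: three simple objects and one simple binary universal.
objects = ['o', 'h1', 'h2']
universals = {'B': 2}   # name -> arity

# All possible simple atomic states of affairs: the only constraint is
# that an n-place universal be applied to exactly n objects (the "form"
# of atomic facts is respected; nothing else is ruled out).
atoms = [(R,) + args
         for R, arity in universals.items()
         for args in product(objects, repeat=arity)]

def is_world(molecular):
    """A recombination of the actual simples: every simple object and
    every simple universal must occur in it."""
    used_objects = {x for atom in molecular for x in atom[1:]}
    used_universals = {atom[0] for atom in molecular}
    return used_objects == set(objects) and used_universals == set(universals)

# Every nonempty subset of the atoms is a candidate molecular state of
# affairs; treating each subset as exhaustive stands in for the totality
# state of affairs T[w].
candidates = chain.from_iterable(combinations(atoms, k)
                                 for k in range(1, len(atoms) + 1))
worlds = [frozenset(c) for c in candidates if is_world(frozenset(c))]

print(len(atoms))   # 9: a binary relation over 3 objects yields 3*3 atoms
print(len(worlds))
```

Even this tiny ontology yields hundreds of worlds, which illustrates how quickly unrestricted recombination multiplies possibilities.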
Armstrong's ontological commitments are notoriously rather slippery but, given AW3, a reasonably complete notion of existence in a world is forthcoming. First, let us note that, for Armstrong, the
“combinatorial idea” yields a substantial metaphysical thesis, as well, viz., the ontological free lunch (1997, 12ff), i.e., the thesis that “[w]hat supervenes is no addition of being”; that
“whatever supervenes ... is not something ontologically additional to the subvenient entity or entities.” Hence, for Armstrong, it appears that simple states of affairs and their constituents exist
most fundamentally and that the existence of more complex entities is in a certain sense derivative. Thus:
Entity a exists fundamentally in (combinatorial) possible world w =[def] (i) a is a simple state of affairs that w includes or (ii) a is a constituent or conjunct of an entity that exists
fundamentally in w.
Given this, existence in a world generally — both fundamental and derivative — both for simples and (first-order^[53]) non-simples alike, is definable as follows:
AE3 Entity a exists in (combinatorial) possible world w =[def] either (i) a exists fundamentally in w or (ii) a supervenes on entities that exist in w.
Semantics receives rather short shrift in Armstrong's version of combinatorialism — at least, semantics in the model theoretic sense of §1.2 — but, as it has played an important role in our
discussion of concretism and abstractionism, we note briefly how the ontology of combinatorialism might be taken to populate a possible world interpretation of the language of modal predicate logic.
Specifically, we can take the range of the modal operators — understood, semantically, as quantifiers — to be all of the combinatorial possible worlds in the sense of AW3. The domain d(w) of each
world w is the set of all simple and complex objects that exist in w according to AE3, and the w-extension I[π](w) of a predicate π expressing a simple or complex universal R is the set of all n-tuples 〈a[1], ..., a[n]〉 such that the atomic fact [R,a[1],...,a[n]] exists in w.
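A minimal executable rendering of this semantics — again only a hedged Python sketch, with worlds, objects, and the bonding predicate B borrowed informally from the water example and an interpretation function that is an assumption of the toy model — takes the modal operators to quantify over a list of worlds and computes predicate extensions world by world:

```python
# Toy possible-worlds semantics (illustrative encoding): a world is a
# frozenset of atomic states (R, a1, ..., an); the extension of a
# predicate R at world w is the set of tuples standing in R there.

def extension(R, w):
    return {atom[1:] for atom in w if atom[0] == R}

def possibly(worlds, pred):
    """Diamond: true iff pred holds at some world in the range."""
    return any(pred(w) for w in worlds)

def necessarily(worlds, pred):
    """Box: true iff pred holds at every world in the range."""
    return all(pred(w) for w in worlds)

w1 = frozenset({('B', 'o', 'h1'), ('B', 'o', 'h2')})                    # water-like
w2 = frozenset({('B', 'o', 'h1'), ('B', 'o', 'h2'), ('B', 'o', 'h3')})  # "spoiled"

worlds = [w1, w2]
# "Possibly, o bonds to h3" is true; "Necessarily, o bonds to h3" is false.
print(possibly(worlds, lambda w: ('o', 'h3') in extension('B', w)))     # True
print(necessarily(worlds, lambda w: ('o', 'h3') in extension('B', w)))  # False
```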
There are, then, for the combinatorialist no intrinsically modal phenomena; there are just all of the various worlds that exist on unrestricted combinatorial grounds alone. Ultimately, all genuine
possibilities, simple or not, are just states of affairs that exist in these combinatorial worlds in the sense of AE3. However, it is not immediately as clear how to understand many intuitive
necessities/impossibilities involving complex structural universals, for example, the impossibilities noted in the previous section, viz., that something simultaneously have a mass of both 1kg and
2kg or simultaneously be both a baboon and a hoolock. Likewise, it is not entirely clear how combinatorialism accounts for intuitive facts about essential properties, such as that our water molecule
W is essentially water or that Algol is essentially a dog. Combinatorialists argue that such modal facts can nevertheless be explained in terms that require no appeal to primitive modal features of
the world (Armstrong 2004b, 15).
Analytic Modalities. Armstrong argues that many intuitive modal facts — notably, the impossibility of an object exemplifying more than one determinate of the same determinable — can be understood
ultimately as logical, or analytic, modalities that are grounded in meaning rather than any primitive modal features of reality. For example, intuitively it is impossible that an object
simultaneously exemplify the structural properties having 2kg mass and having 1kg mass. The combinatorial reason for this (cf. Armstrong 1989, 79) is that, for an object a to exemplify the former
property is simply for there to be a division of a into two wholly distinct parts, both of which exemplify the latter property. Moreover, this division into parts is entirely arbitrary, that is, for
any part a[1] of a exemplifying having 1kg mass, there is a (unique) part a[2] of a wholly distinct from a[1] that also exemplifies that property. It follows that, if our 2kg object a itself also
exemplifies having 1kg mass, then, as a is a part of itself, there must be a 1kg part of a that is wholly distinct from a. And that is analytically false, false “solely by virtue of the meaning we
attach to” the word ‘part’ (ibid., 80).^[54]
Emergent Modalities. Combinatorialism purports to explain a further class of intuitive modal facts as features that simply “emerge” from facts about structural properties.^[55] The discussion of
structural states of affairs and supervenience above provides an example. Let us suppose the actual world w[1] includes our water molecule W from Figure 1 plus a further hydrogen atom h[3]. In this
world, only h[1] and h[2] bind to o. Hence, this world includes the state of affairs W's being water but not the state of affairs I's being hydronium in which o, h[1], h[2], and h[3] are so bonded as
to constitute a hydronium ion I. Conversely, however, given the unrestricted nature of recombination, there is a world w[2] that includes W structured as it actually is in w[1] but which also
includes the spoiler [B,o,h[3]] — where o and h[3] bond — and, hence, the structural state of affairs I's being hydronium. Thus, the absence of [B,o,h[3]] in w[1] enables the emergence of W's being
water and precludes I's being hydronium whilst its presence in w[2] enables the emergence of the latter but precludes the former. As a consequence, it is impossible that the states of affairs W's
being water and I's being hydronium coexist.^[56]
Figure 2: W's being water and (given a bond between o and h[3]) I's being hydronium
Although more dramatic, large-scale examples of incompatible states of affairs — such as a thing's being simultaneously both a baboon and a hoolock — might be vastly more complex, there is no obvious
reason why their impossibility could not have the same sort of combinatorial explanation.
Essential Properties. It follows from the unrestricted nature of recombination that, for any simple object a and simple universal P, a recombines with P in some worlds and fails to recombine with P
in others. Generalizing from this fact, it follows that no simple or sum of simples has any simple universal or conjunction of simple universals essentially. It also follows that no such object has
any structural property essentially. For assume o is such an object and that it exemplifies a structural property P. Since P is structural, it supervenes on some set of simple states of affairs. But
by the nature of recombination, there are combinatorial worlds in which those states of affairs do not exist and, hence, in which P doesn't but o — being a simple or a sum of simples — does.
Thick particulars like our water molecule W don't fare much better because of the possibility of spoilers. For Armstrong (1997, 35), W is simply the conjunction of its constituent states of affairs.
As we've just seen, however, in the presence of spoilers, that conjunction would exist — hence, W would exist — without being Water. Hence, it would seem that at least some properties that,
intuitively, are essential to their bearers turn out not to be for the combinatorialist. The problem is compounded by the fact that some intuitively non-essential properties of some thick particulars
are arguably essential for the combinatorialist. The shape properties of a thick particular A, for example, would seem to be a function of its constituent states of affairs. Moreover, the
exemplification of such properties is not obviously subject to spoilers the way that natural kind properties like Water are. Hence, as A is identical to the conjunction of its constituent states of
affairs, it would seem that it will have the same shape in any world in which it exists, i.e., it will have that shape essentially.
That said, combinatorialism can arguably provide a reasonably robust analysis of intuitions about the essential properties of ordinary thick particulars like dogs or persons. Such objects can be
taken to be temporal successions of sums of simples, with each sum in the succession as one of its temporal parts. Sums in the same rough temporal neighborhood are composed of roughly the same simples and are
structured in roughly the same way. Similarities between such objects across worlds in turn determine counterpart relations. Following Lewis, the essential properties of such objects can then be
identified with those properties exemplified by (all of the temporal parts of) all of their counterparts in every world in which they exist (Armstrong 1997, 99–103, 169).^[57]
Since a possible world is a recombination of the actual world and every recombination includes states of affairs involving every simple individual and every simple universal, by AE3, every simple
entity exists in every world. Hence, there could not have been fewer of them; nor could there have been simples other than the ones there actually are. In this section, we address this issue and the
issue of contingent existence generally in combinatorialism.
Fewer things. Combinatorialism as it stands has no problem accounting for the general intuition that there could have been fewer things. We have already noted in §2.3.3 and again in §2.3.5 how our
water molecule W, as such, might not have existed. More generally, given the unrestricted nature of recombination, for any object a involving a structural fact S, there are recombinations of the actual
world wherein either (a) some of the relations among a's constituents that are critical to S's structure fail to be exemplified by those constituents, or (b) there are further states of affairs
included by those recombinations that act as spoilers for S. Consequently, the combinatorialist seems to have no difficulty explaining how there might have been fewer water molecules, humans, etc.
Intuitively, however, there isn't anything in the idea of a simple that suggests that simples are necessary beings — especially if, as combinatorialists generally agree, simples are physical things
of some sort and simple universals are properties of, and relations among, those things. For there is nothing in the nature of a simple object to suggest that any given simple had to have existed.
Likewise, there is nothing in the nature of a simple universal to suggest it had to have been exemplified and, hence, on the combinatorialist's own conception of universals, that it had to exist.
Otherwise put, as simples exist only insofar as they are constituents of facts, there seems no reason why there couldn't have been a very small number of facts, indeed, just a single simple, atomic,
monadic fact and, hence, a lone simple object and a lone simple universal.
In fact, however, AW3 can easily be modified to accommodate these intuitions without doing any serious violence to the combinatorialist picture. Specifically, the combinatorialist can admit
“contracted” worlds in which fewer simples exist by allowing any recombination of any simple fact — that is, equivalently, by allowing any state of affairs — to count as a possible world:
AW3′ w is a (combinatorial) possible world =[def] w is a recombination of some simple fact f conjoined with the totality state of affairs T[w] for that recombination.
AE3 requires no modification, as it was defined with sufficient generality above. Under AW3′, however, AE3 entails that all entities alike — objects and universals, simple and structural — are
contingent and, indeed, that every simple object is the sole constituent of some combinatorial possible world.
Other things. Intuitively, not only could there have been fewer things, there could have been more things or, more generally, things other than those that actually exist. As above, combinatorialism
as it stands seems able to account for many instances of this intuition: Figure 2 illustrates how a non-actual hydronium ion I might exist in another world. Likewise, there seems no reason to deny,
e.g., that there are rearrangements w of the actual world's simples wherein exist all of the human beings that actually exist (at, say, 0000GMT 1 January 2013) and more besides that are composed of
simples that, in fact, constitute things other than human beings (Armstrong 1997, 165).^[58] Combinatorialism also seems able to account for the possibility of conjunctive and structural universals
that are simply rearrangements of actual simples. It is not implausible to think that such recombinations can give rise to, say, exotic biological kinds that have no actual instances (Armstrong 1989,
55–56). Thus, in particular, combinatorialism seems quite able to provide the truth condition (24) for (23) and, hence, can account for some possibilities involving “missing” universals that,
intuitively, ought to be possible.
However, it is far from clear that such possibilities exhaust the modal intuition that other things could have existed. Notably, intuitively, there could have been different simple universals
distinct from any that actually exist — different fundamental properties of simples, for example. Likewise for simple objects. Either way, there seems to be nothing in the idea of a simple object or
simple universal that suggests there couldn't have been simples other than, or in addition to, the simples there are in fact. But AW3′ does not allow for this: the simples of every possible world are a subset of the actual simples, and there is no obvious way of modifying the principle to accommodate the intuition in question.^[59]
The combinatorialist could of course abandon actualism and admit merely possible simples into her ontology. Again, she could follow the new actualists and draw a division between actually concrete
and non-actual, possibly concrete simples; or she could introduce Plantinga-style haecceities to go proxy for merely possible simples. But all of these options would be badly out of step with the
strong, naturalist motivations for combinatorialism: There is but the one physical world comprising all of the facts; recombinations of (at least some of) those facts — arbitrary rearrangements of
their simple objects and universals — determine the possible worlds. Mere possibilia, merely possible non-concretia, and non-qualitative haecceities have no real place in that picture.
The “purest” option for the combinatorialist is simply to brazen it out and argue that the actual simples are, in fact, all the simples there could be (Armstrong 1989, 54ff; Driggers 2011, 56–61). A
more robust option suggested by Skyrms (1981) makes some headway against the problem by introducing an “outer”, or “second-grade” realm of possibility, but at the cost of moving beyond the basic
intuitions of combinatorialism (Armstrong 1989, 60; 1997, 165–167). Finally, Sider (2005, 681) suggests that combinatorialists who (like Armstrong) are modal fictionalists can deal with the problem
of missing entities simply by appealing to yet more fictionalism: As the combinatorialist fiction already includes non-actual states of affairs with actually existing constituents, there seems no
reason not to extend the fiction to include non-actual states of affairs whose constituents include non-actual particulars and universals. Fictionalism itself, however, leaves the combinatorialist
with the deep problems detailed by Kim (1986), Lycan (1993), and Rosen (1993).^[60]
As with concretism and abstractionism, combinatorialism provides reasonably clear definitions of possible worlds and existence in a world and is noteworthy for its attempt to avoid what might be
thought of as the metaphysical excesses of the two competing views. In contrast to concretism, combinatorialism is staunchly actualist: instead of an infinity of alternative physical universes, each
with its own unique inhabitants existing as robustly as the inhabitants of the actual world, the worlds of combinatorialism are simply rearrangements of the universals and particulars of the actual
world; and commitment even to them might be avoided if some version of fictionalism is tenable. Likewise, in contrast to abstractionism's rather rich and unrestrained ontology of SOAs,
combinatorialism's states of affairs are comparatively modest. Moreover, in contrast to nearly all versions of abstractionism, combinatorialism shares with concretism the virtue of a reductive theory
of modality: Modal statements, ultimately, are true or false in virtue of how things stand with respect to worlds that are themselves defined in non-modal terms.
Combinatorialism's ontological modesty, however, is also a weakness. For, unlike the two competing approaches, there are modal intuitions that the combinatorialist is not easily able to account for,
notably, the intuition that there could have been other things. Additional difficulties are discussed in the supplemental document Further Problems for Combinatorialism.
• Adams, R., 1974. ‘Theories of Actuality’, Noûs, 8: 211–31; reprinted in Loux (1979): 190–209
• –––, 1981. ‘Actualism and Thisness’, Synthese, 49: 3–41
• Armstrong, D. M., 1978a, Universals and Scientific Realism, Volume I: Nominalism and Realism, Cambridge: Cambridge University Press.
• –––, 1978b, Universals and Scientific Realism, Volume II: A Theory of Universals, Cambridge: Cambridge University Press.
• –––, 1986a, ‘The Nature of Possibility’, The Canadian Journal of Philosophy, 16(4): 575–594.
• –––, 1986b. ‘In Defense of Structural Universals’, Australasian Journal of Philosophy, 64(1): 85–88.
• –––, 1989. A Combinatorial Theory of Possibility, New York: Cambridge University Press.
• –––, 1993. ‘A World of States of Affairs’, in J. Tomberlin (ed.), Philosophical Perspectives, 7: 429–440.
• –––, 1997. A World of States of Affairs, New York: Cambridge University Press.
• –––, 2004a. ‘Theorie Combinatoire Revue et Corrigée’, in J-M. Monnoyer (Ed.) La Structure du Mond: Objets, Propriétés, États et Choses, Paris: Vrin, 185–198.
• –––, 2004b. ‘Combinatorialism Revisited’, translation (with minor corrections) of Armstrong 2004a, URL = http://eprints.nottingham.ac.uk/716/, ISBN: 2-7116-1627-4.
• –––, 2004c. Truth and Truthmakers, Cambridge: Cambridge University Press.
• –––, 2009. ‘Reply to Keller’, in Monnoyer 2007, 157–162.
• Barcan, R., 1946. ‘A Functional Calculus of First Order Based on Strict Implication’, Journal of Symbolic Logic, 11: 1–16.
• Bayart, A., 1958, ‘La correction de la logique modale du premier et second ordre S5’, Logique et Analyse, 1: 28–44. Translated in Cresswell forthcoming.
• –––, 1959, ‘Quasi-adéquation de la logique modale de second ordre S5 et adéquation de la logique modale de premier ordre S5’, Logique et Analyse, 2: 99–121.
• Beall, J., 2000. ‘A Neglected Response to the Grim result’, Analysis, 60(1): 38–41.
• Bigelow, J. and R. Pargetter, 1987. ‘Beyond the Blank Stare’, Theoria, 53: 97–114.
• Bradley, R., 1989. ‘Possibility and Combinatorialism: Wittgenstein Versus Armstrong’, Canadian Journal of Philosophy, 19(1): 15–41.
• Bricker, P., 1980. ‘Prudence’, Journal of Philosophy, 77(7): 381–401.
• –––, 1987. ‘Reducing Possible Worlds to Language’, Philosophical Studies, 52(3): 331–355.
• –––, 1996. ‘Isolation and Unification: The Realist Analysis of Possible Worlds’, Philosophical Studies, 84(2/3): 225–238.
• –––, 2001. ‘Island Universes and the Analysis of Modality’, in G. Preyer, F. Siebelt (eds.), Reality and Humean Supervenience: Essays on the Philosophy of David Lewis, Rowman and Littlefield.
• –––, 2006. ‘The Relation Between General and Particular: Supervenience vs. Entailment’, in D. Zimmerman (ed.), Oxford Papers in Metaphysics, vol. 3, Oxford: Oxford University Press, 251–287.
• –––, 2008. ‘Concrete Possible Worlds’, in T. Sider, J. Hawthorne, and D. Zimmerman (eds.), Contemporary Debates in Metaphysics Blackwell Publishing, 111–134.
• Bringsjord, S., 1985. ‘Are There Set Theoretic Possible Worlds?’, Analysis, 45(1): 64.
• Bueno, O., C. Menzel, and E. Zalta, 2014. ‘Worlds and Propositions Set Free’, Erkenntnis, 79: 797–820.
• Carnap, R., 1947. Meaning and Necessity. Chicago: The University of Chicago Press.
• Chihara, C., 1998. The Worlds of Possibility, Oxford: Clarendon Press
• Copeland, B. J., 2002. ‘The Genesis of Possible Worlds Semantics’, Journal of Philosophical Logic, 31: 99–137.
• –––, 2006. ‘Meredith, Prior, and the History of Possible Worlds Semantics’, Synthese, 150(3): 373–397.
• Cresswell, M. J., 1972. ‘The World is Everything that is the Case’, Journal of Philosophy, 50: 1–13. Reprinted in Loux (1979), 129–145.
• –––, 1973. Logics and Languages, London, Methuen.
• –––, 1985a. Structured Meanings: The Semantics of Propositional Attitudes, Bradford Books/MIT Press.
• –––, 1985b. Adverbial Modification. Dordrecht, Reidel.
• –––, 1988. Semantical Essays: Possible Worlds and Their Rivals. Dordrecht, Kluwer Academic Publishers.
• –––, 1990. Entities and Indices. Dordrecht, Kluwer.
• –––, 1994. Language in the World. Cambridge: Cambridge University Press.
• –––, 1996. Semantic Indexicality. Dordrecht: Kluwer.
• –––, 2004. ‘Adequacy Conditions for Counterpart Theory’, Australasian Journal of Philosophy, 82(1): 28–41.
• –––, 2015. ‘Arnould Bayart's Modal Completeness Theorems — Translated with an Introduction and Commentary’, Logique et Analyse, 229: 89–142.
• Darby G., and D. Watson, 2010. ‘Lewis's Principle of Recombination: Reply to Efird and Stoneham’, Dialectica, 64(3): 435–445.
• Davies, M., 1981. Meaning, Quantification, Necessity: Themes in Philosophical Logic, London: Routledge and Kegan Paul.
• DeRosset, L., 2009a. ‘Possible Worlds I: Modal Realism’, Philosophy Compass, 4(6): 998–1008.
• –––, 2009b. ‘Possible Worlds II: Non-reductive Theories of Possible Worlds’, Philosophy Compass, 4(6): 1009–1021.
• Divers, J., 2002. Possible Worlds, London: Routledge.
• Divers, J. and J. Melia, 2002. ‘The Analytic Limit of Genuine Modal Realism’, Mind, 111(441): 15–36.
• Driggers, R. K., 2011. ‘M-Combinatorialism and the Semantics of SQML’, M.A. Thesis, Texas A&M University. URL = http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9484
• Eddon, M., 2007. ‘Armstrong on Quantities and Resemblance’, Philosophical Studies, 136(3): 385–404.
• Efird, D. and T. Stoneham, 2008. ‘What is the Principle of Recombination?’, Dialectica, 62(4): 483–494.
• Egan, A., 2004. ‘Second-order predication and the Metaphysics of Properties’, Australasian Journal of Philosophy, 82(1): 48–66.
• Etchemendy, J., 1990. The Concept of Logical Consequence, Cambridge, MA: Harvard University Press. (Reissued by Stanford: CSLI Publications, 1999.)
• Feldman, F., 1971. ‘Counterparts’, Journal of Philosophy, 68: 406–9.
• Fara, D. G., 2009. ‘Dear Haecceitism’, Erkenntnis, 70: 285–297.
• Fara, M. and T. Williamson, 2005. ‘Counterparts and Actuality’, Mind, 114(453): 1–30.
• Fine, K., 1977. ‘Postscript’, in Prior 1977, 116–161.
• –––, 1978. ‘Model Theory for Modal Logics: Part I — The De Re/De Dicto Distinction’, Journal of Philosophical Logic, 7: 125–56.
• Forbes, G., 1985. The Metaphysics of Modality, Oxford: Clarendon Press.
• –––, 1982. ‘Canonical Counterpart Theory’, Analysis, 42(1): 33–37.
• Fitch, G. W., 1996, ‘In Defense of Aristotelian Actualism’, Philosophical Perspectives, 10: 53–71.
• Forrest, P. and D. M. Armstrong, 1984. ‘An Argument against David Lewis's Theory of Possible Worlds’, Australasian Journal of Philosophy, 62: 164–168.
• Garson, J., 2006. Modal Logic for Philosophers. New York: Cambridge University Press.
• Goldblatt, R., 2003. ‘Mathematical Modal Logic: A View of its Evolution’, in D. M. Gabbay and J. Woods (eds.), Handbook of the History of Logic, Vol. 7: Logic and the Modalities in the Twentieth
Century. Amsterdam: Elsevier, pp. 1–98.
• Grim, P., 1986. ‘On Sets and Worlds: A Reply to Menzel’, Analysis, 46(4): 186–191.
• Hazen, A., 1979. ‘Counterpart-Theoretic Semantics for Modal Logic’, Journal of Philosophy, 76: 319–338.
• Hunter, G. and W. Seager, 1981. ‘The Discreet Charm of Counterpart Theory’, Analysis, 41(2): 73–76.
• Heil, J., 2007. ‘The Legacy of Linguisticism’, Australasian Journal of Philosophy, 84(2): 233–244.
• Hintikka, J., 1957. ‘Modality as referential multiplicity’, Ajatus, 20: 49–64.
• –––, 1961. ‘Modality and Quantification’, Theoria, 27: 119–28.
• Hughes, G. and Cresswell, M., 1968. An Introduction to Modal Logic, London: Methuen.
• Jager, T., 1982. ‘An Actualist Semantics for Quantified Modal Logic’, Notre Dame Journal of Formal Logic, 23(3) (July): 335–49.
• Kanger, S., 1957. Provability in Logic, Stockholm: Almqvist and Wiksell.
• Kaplan, D., 1975. ‘How to Russell a Frege-Church’, Journal of Philosophy, 72: 716–29; reprinted in Loux (1979), pp. 210–24.
• Kaplan, D., 1979. ‘Transworld Heir Lines’, in Loux (1979), pp. 88–109.
• –––, 1995. ‘A Problem in Possible World Semantics’, in W. Sinnott-Armstrong, D. Raffman, and N. Asher (eds.), Modality, Morality and Belief: Essays in Honor of Ruth Barcan Marcus, Cambridge:
Cambridge University Press, 41–52.
• Keller, P., 2007. ‘A World of Truthmakers’, in Monnoyer 2007, 105–156.
• –––, 2009. ‘Review of D. M. Armstrong, Truth and Truthmakers’, Mind, 118(472): 1101–1105.
• Kemeny, J. G., 1956a. ‘A New Approach to Semantics — Part I’, The Journal of Symbolic Logic, 21(1): 1–27.
• –––, 1956b. ‘A New Approach to Semantics — Part II’, The Journal of Symbolic Logic, 21(2): 149–161.
• Kim, J., 1986. ‘Possible Worlds and Armstrong's Combinatorialism’, Canadian Journal of Philosophy, 16(4): 595–612.
• Kripke, S., 1959. ‘A Completeness Theorem in Modal Logic’, Journal of Symbolic Logic, 24(1): 1–14.
• –––, 1963a. ‘Semantical Analysis of Modal Logic I: Normal Modal Propositional Calculi’, Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 9: 67–96.
• –––, 1963b. ‘Semantical Considerations on Modal Logic’, Acta Philosophica Fennica, 16: 83–94.
• –––, 1972. ‘Naming and Necessity’, Cambridge, Massachusetts: Harvard, 1980.
• Lewis, D., 1968. ‘Counterpart Theory and Quantified Modal Logic’, Journal of Philosophy, 65: 113–126.
• –––, 1970. ‘General Semantics’, Synthese, 22: 18–67.
• –––, 1973. Counterfactuals, Cambridge, Massachusetts: Harvard University Press.
• –––, 1986. On The Plurality of Worlds, Oxford: Blackwell.
• –––, 1986a. ‘Against Structural Universals’, Australasian Journal of Philosophy, 64(1): 25–46.
• –––, 1991. Parts of Classes, Oxford: Basil Blackwell Ltd.
• –––, 1992, ‘Critical Notice: Armstrong, D. M., A Combinatorial Theory of Possibility’, Australasian Journal of Philosophy, 70(2): 211–224.
• Lewis, C. I. and C. H. Langford, 1932. Symbolic Logic, New York: The Appleton-Century Company; reprinted in paperback by New York: Dover Publications, 1951.
• Linsky, B. and E. Zalta, 1994. ‘In Defense of the Simplest Quantified Modal Logic’, in J. Tomberlin (ed.), Philosophical Perspectives, 8: 431–458.
• –––, 1996. ‘In Defense of the Contingently Concrete’, Philosophical Studies, 84: 283–294.
• Loux, M. (ed.), 1979. The Possible and the Actual, Ithaca: Cornell.
• Lycan, W., 1988. ‘On the Plurality of Worlds by David Lewis’, The Journal of Philosophy, 85(1): 42–47.
• –––, 1991. ‘Two — No, Three — Concepts of Possible Worlds’, Proceedings of the Aristotelian Society, New Series, 91: 215–227.
• –––, 1993. ‘Armstrong's New Combinatorial Theory of Modality’, in J. Bacon, K. Campbell, and L. Reinhardt (eds.), Ontology, Causality, and Mind: Essays in Honour of D. M. Armstrong, Cambridge:
Cambridge University Press, 3–22.
• Maddy, P., 1980. ‘Perception and Mathematical Intuition’, The Philosophical Review, 89(2): 163–196.
• Mates, B., 1968. ‘Leibniz on Possible Worlds’, in B. van Rootselaar and J. F. Staal (eds.), Logic, Methodology and Philosophy of Science, vol. 3, Amsterdam: North-Holland Publishing Company,
507–29; reprinted in H. Frankfurt (ed.), Leibniz, Notre Dame: University of Notre Dame Press (1972), 335–364.
• Marcus, R., 1986. ‘Possibilia and Possible Worlds’, Grazer Philosophische Studien, 25/26 (1985/1986): 107–33.
• McDaniel, K., 2006. ‘Modal Realisms’, Philosophical Perspectives, 20: 303–331.
• McNamara, P., 1993. ‘Does the Actual World Actually Exist?’, Philosophical Studies, 69: 59–81.
• Menzel, C., 1986. ‘On Set Theoretic Possible Worlds’, Analysis, 46(2): 68–72.
• –––, 1989. ‘On an Unsound Proof of the Existence of Possible Worlds’, Notre Dame Journal of Formal Logic, 30(4): 598–603.
• –––, 1990. ‘Actualism, Ontological Commitment, and Possible Worlds Semantics’, Synthese, 85: 355–89.
• –––, 1991. ‘The True Modal Logic’, Journal of Philosophical Logic, 20: 331–374.
• –––, 2012. ‘Sets and Worlds Again’, Analysis, 72(2): 304–309.
• Menzel, C., and E. Zalta, 2014, ‘The Fundamental Theorem of World Theory’, Journal of Philosophical Logic, 43(2): 333–363.
• Meredith, C. and A. Prior, 1956. ‘Interpretations of Different Modal Logics in the “Property Calculus”’, in Copeland, B. (ed.) 1996, Logic and Reality: Essays on the Legacy of Arthur Prior,
Oxford: Clarendon Press.
• Merricks, T., 2003. ‘The End of Counterpart Theory ’, The Journal of Philosophy, 100(10): 521–549.
• Merrill, G., 1978. ‘Formalization, Possible Worlds and the Foundations of Modal Logic’, Erkenntnis, 12: 305–327.
• Monnoyer, J. (ed.), 2007. Metaphysics and Truthmakers, Frankfurt am Main: Ontos Verlag.
• Montague, R., 1974. Formal Philosophy, New Haven, CT: Yale University Press
• Moreland, J. P., 2011. ‘Exemplification and Constituent Realism: A Clarification and Modest Defense’, Axiomathes, online. doi:10.1007/s10516-011-9148-x
• Nolan, D., 1996. ‘Recombination Unbound’, Philosophical Studies, 84(2/3): 239–262.
• Nortmann, U., 2002. ‘The Logic of Necessity in Aristotle: An Outline of Approaches to the Modal Syllogistic, Together with a General Account of de dicto- and de re-Necessity’, History and
Philosophy of Logic, 23: 253–265.
• Novakofski, J., M. S. Brewer, N. Mateus-Pinilla, J. Killefer, and R. H. McCusker, 2005. ‘Prion Biology Relevant to Bovine Spongiform Encephalopathy’, Journal of Animal Science, 83(6): 1455–1476 [available online].
• Pickavance, T., 2014. ‘Bare Particulars and Exemplification’, American Philosophical Quarterly, 51(2): 95–108.
• Plantinga, A., 1974. The Nature of Necessity, Oxford: Oxford University Press.
• –––, 1976. ‘Actualism and Possible Worlds’, Theoria, 42: 139–60; reprinted in Loux (1979): 253–73.
• –––, 1985. ‘Replies’, in J. Tomberlin and P. van Inwagen (eds.), Alvin Plantinga, Dordrecht: D. Reidel, 313–96.
• –––, 1987. ‘Two Concepts of Modality: Realism and Modal Reductionism’, in J. Tomberlin (ed.), Philosophical Perspectives, 1: 189–231.
• Pollock, J., 1985. ‘Plantinga on Possible Worlds’, in J. Tomberlin and P. van Inwagen (eds.), Alvin Plantinga, Dordrecht: D. Reidel, pp. 121–44.
• Prior, A. N., 1952. ‘Modality de dicto and modality de re’, Theoria, 18(3): 174–180.
• –––, 1977. Worlds, Times, and Selves, Amherst: University of Massachusetts Press.
• –––, 1956. ‘Modality and Quantification in S5’, Journal of Symbolic Logic, 21: 60–2.
• Pruss, A., 2001. ‘The Cardinality Objection to David Lewis's Modal Realism’, Philosophical Studies, 104: 169–178.
• Quine, W. V. O., 1948. ‘On What There Is’, in From a Logical Point of View, New York: Harper, 1953, 1–19.
• –––, 1953. ‘Three Grades of Modal Involvement’, Proceedings of the XIth International Congress of Philosophy, 14: 65–81.
• –––, 1956. ‘Quantifiers and Propositional Attitudes’, The Journal of Philosophy, 53: 177–187.
• –––, 1960. Word and Object, Cambridge, MA: MIT Press.
• –––, 1968. ‘Propositional Objects’, Critica: Revista Hispanoamericana de Filosofia, 2(5): 3–29. Reprinted in W. V. Quine, 1977, Ontological Relativity, Columbia University Press: 139–160.
• Rosen, G., 1993. ‘A Problem for Fictionalism about Possible Worlds’, Analysis, 53(2): 71–81.
• Roy, T., 1995. ‘In Defense of Linguistic Ersatzism’, Philosophical Studies, 80(3): 217–242.
• Russell, B., 1918/1919. ‘The Philosophy of Logical Atomism’, The Monist, 28: 495–527; 29: 32–63, 190–222, 345–380. Reprinted in Russell 1956, 177–281 and published in book form as Russell 1985.
• –––, 1956. Logic and Knowledge, London: Allen and Unwin.
• –––, 1985. The Philosophy of Logical Atomism, with an introduction by D. Pears, London: Routledge.
• Salmon, N., 1988. ‘Review of On the Plurality of Worlds’, Philosophical Review, 97(2): 237–244.
• Shalkowski, S., 1994. ‘The Ontological Ground of the Alethic Modality’, The Philosophical Review, 103(4): 669–688.
• Schneider, S., 2001. ‘Alien Individuals, Alien Universals, and Armstrong's Combinatorial Theory of Possibility’, The Southern Journal of Philosophy, 39: 575–593.
• Sider, T., 2002. ‘The Ersatz Pluriverse’, Journal of Philosophy, 99(6): 279–315.
• –––, 2005. ‘Another Look at Armstrong's Combinatorialism’, Noûs, 39(4): 679–695.
• Skyrms, B., 1981. ‘Tractarian Nominalism’, Philosophical Studies, 40(2): 199–206.
• Stalnaker, R., 1968. ‘A Theory of Conditionals’ in Studies in Logical Theory, American Philosophical Quarterly Monograph Series, 2. Oxford: Blackwell, pp. 98–112.
• –––, 1976. ‘Possible Worlds’, Noûs, 10(1): 65–75.
• –––, 1987. Inquiry, Boston: Bradford Books, MIT Press.
• –––, 2012. Mere Possibilities: Metaphysical Foundations of Modal Semantics, Princeton: Princeton University Press.
• Tarski, A., 1933. ‘The concept of truth in the languages of the deductive sciences’ (Polish), Prace Towarzystwa Naukowego Warszawskiego, Wydzial III Nauk Matematyczno-Fizycznych, 34, Warsaw;
expanded English translation in Tarski (1983), 152–278.
• –––, 1944. ‘The semantic conception of truth’, Philosophy and Phenomenological Research, 4: 13–47.
• –––, 1983. Logic, Semantics, Metamathematics, papers from 1923 to 1938, edited by John Corcoran. Indianapolis: Hackett Publishing Company.
• Tomberlin, J., and van Inwagen, P. (eds.), 1985. Profiles: Alvin Plantinga, Dordrecht: D. Reidel.
• van Inwagen, P., 1986. ‘Two Concepts of Possible Worlds’, in Midwest Studies in Philosophy, XI, P. French, T. Uehling, and H. Wettstein (eds.), Minneapolis: University of Minnesota Press.
• –––, 2008. ‘McGinn on Existence’, The Philosophical Quarterly, 58(20): 36–58.
• Washington, C., 1998. ‘Use/Mention Distinction and Quotation’, in Routledge Encyclopedia of Philosophy, vol. 9, W. Craig (ed.), London and New York: Routledge.
• Williamson, T., 1998. ‘Bare Possibilia’, Erkenntnis, 48: 257–273.
• –––, 2000. ‘The Necessary Framework of Objects’, Topoi, 19: 201–208.
• –––, 2013. Modal Logic as Metaphysics. Oxford: Oxford University Press.
• Wittgenstein, L., 1921. ‘Logisch-Philosophische Abhandlung’, with a forward by Bertrand Russell, Annalen der Naturphilosophie, 14, published by Wilhelm Ostwald, Leipzig: Verlag Unesma: 185–262.
Also available online in HTML, PDF, and ePub formats in a side-by-side presentation with the translations Wittgenstein (1922) and Wittgenstein (1974) at http://people.umass.edu/klement/tlp/.
• –––, 1922. Tractatus Logico-Philosophicus, trans. of Wittgenstein (1921) by C. K. Ogden, Routledge & Kegan Paul.
• –––, 1974. Tractatus Logico-Philosophicus, revised edition, trans. of Wittgenstein (1921) by D. F. Pears and B. F. McGuinness, New York and London: Routledge & Kegan Paul.
• Yagisawa, T., 2010. Worlds and Individuals, Possible and Otherwise, Oxford: Oxford University Press.
• Zalta, E., 1983. Abstract Objects: An Introduction to Axiomatic Metaphysics, Dordrecht: D. Reidel.
• –––, 1993. ‘Twenty-Five Basic Theorems in Situation and World Theory’, Journal of Philosophical Logic, 22(4): 385–428.
How to cite this entry.
Preview the PDF version of this entry at the Friends of the SEP Society.
Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO).
Enhanced bibliography for this entry at PhilPapers, with links to its database.
• Browse papers by related topics at PhilPapers.
The author wishes to express his deep gratitude to Phillip Bricker and Max Cresswell for extensive comments on several drafts of this entry and for numerous illuminating discussions of its content
and related topics. The entry is vastly better for their generous input. Errors and other infelicities that remain are of course the sole responsibility of the author. A great deal of this entry was
written with the support of the Alexander von Humboldt Foundation while the author was a Visiting Fellow at the Munich Center for Mathematical Philosophy in 2011–12. Thanks are due to the Center's
director, Professor Hannes Leitgeb, for making the author's stay at this remarkable venue possible. Finally, the author would like to express his thanks to the SEP Editors for their extraordinary
patience in dealing with the very tardy author of a badly-needed entry.
How to Run A Matrix Column By Column In Matlab?
To run a matrix column by column in MATLAB, you can use a loop or vectorized operations. Here are two approaches:
1. Looping through columns: You can use a for loop to iterate over each column of the matrix and perform operations on it. Here's an example:
matrix = [1 2 3; 4 5 6; 7 8 9]; % Example matrix
[numRows, numCols] = size(matrix);

for col = 1:numCols
    columnData = matrix(:, col); % Accessing the column data
    % Perform operations on the column here
    disp(columnData); % Example operation: display the column data
end
This will display each column of the matrix individually.
2. Vectorized operations: MATLAB is optimized for vectorized operations, which are often faster than explicit loops. Instead of visiting columns one at a time, many per-column computations can be applied to every column at once by operating along the first dimension:

matrix = [1 2 3; 4 5 6; 7 8 9]; % Example matrix

columnSums = sum(matrix, 1);   % Sum of each column: [12 15 18]
columnMeans = mean(matrix, 1); % Mean of each column: [4 5 6]

To apply an arbitrary function to each column without writing the loop yourself, you can use arrayfun over the column indices:

results = arrayfun(@(col) sum(matrix(:, col)), 1:size(matrix, 2));

Using vectorized operations, you can process every column of the matrix without an explicit loop.
Feel free to adapt these approaches to your specific use case or operations you need to perform on the matrix columns.
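For readers coming from Python, the same two approaches can be sketched there as well. This is only a rough comparison, not part of the MATLAB answer; plain lists of rows are used so the sketch needs no extra packages, and Python indexes from 0 rather than 1:

```python
# A 3x3 matrix as a list of rows, mirroring the MATLAB example.
matrix = [[1, 2, 3],
          [4, 5, 6],
          [7, 8, 9]]
num_cols = len(matrix[0])

# 1. Loop approach: visit each column in turn.
for col in range(num_cols):
    column_data = [row[col] for row in matrix]
    print(column_data)

# 2. Vectorized-style approach: build all column sums in one expression.
column_sums = [sum(row[col] for row in matrix) for col in range(num_cols)]
print(column_sums)  # [12, 15, 18]
```

In NumPy, the closest Python analogue to MATLAB matrices, the same results come from `matrix[:, col]` and `matrix.sum(axis=0)`.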
How to initialize a matrix with zeros in Matlab?
To initialize a matrix with zeros in MATLAB, you can use the zeros() function.

For example, A = zeros(3, 4) creates a matrix with 3 rows and 4 columns filled with zeros:

A =
     0     0     0     0
     0     0     0     0
     0     0     0     0

You can replace the values inside the parentheses with your desired number of rows and columns; a single argument, as in zeros(5), produces a square 5-by-5 matrix.
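As a side-by-side sketch (again outside the MATLAB answer itself), the same initialization in plain Python looks like this:

```python
# Rough analogue of MATLAB's zeros(3, 4): a 3x4 matrix of zeros,
# represented as a list of row lists.
num_rows, num_cols = 3, 4
zeros_matrix = [[0] * num_cols for _ in range(num_rows)]
print(zeros_matrix)
```

Note the list comprehension: writing `[[0] * num_cols] * num_rows` instead would make every row the same list object, so assigning to one row would silently change all of them.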
What is the size of a matrix in Matlab?
The size of a matrix in MATLAB is given by the number of rows and columns it contains. The size function in MATLAB can be used to determine the dimensions of a matrix. For example, if matrix A has 4
rows and 3 columns, its size would be represented as [4, 3].
What is the difference between a row matrix and a column matrix?
A row matrix is a matrix with one row and multiple columns. It can be denoted as [a₁, a₂, a₃, ..., aₙ], where a₁, a₂, ..., aₙ are the elements of the row matrix.
On the other hand, a column matrix is a matrix with one column and multiple rows. It can be denoted as:

[a₁]
[a₂]
[a₃]
[...]
[aₙ]
The main difference between a row matrix and a column matrix is the orientation of the matrix. In a row matrix, the elements are arranged horizontally in a single row, while in a column matrix, the
elements are arranged vertically in a single column.
Additionally, the number of elements in a row matrix is equal to the number of columns, whereas the number of elements in a column matrix is equal to the number of rows.
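The distinction can be made concrete with a small sketch (Python is used here purely for illustration; a 1×n row and an n×1 column are each represented as a list of rows):

```python
row = [[1, 2, 3]]          # 1 x 3: one row, three columns
column = [[1], [2], [3]]   # 3 x 1: three rows, one column

def transpose(m):
    """Swap rows and columns of a matrix given as a list of rows."""
    return [list(entry) for entry in zip(*m)]

# Transposing turns a row matrix into a column matrix and vice versa.
print(transpose(row))     # [[1], [2], [3]]
print(transpose(column))  # [[1, 2, 3]]
```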
M02: Functions
Hint: If possible, make your browser wide enough to place the slides and commentary side-by-side.
The slides on this page are not accessible (for screen readers, etc). The 1up and 3up slides are better.
Module Topics
This lecture module is the start of the actual course content. It has more video content than will be typical because there is some stuff that’s easier to just show, some that benefits from body
language, and some that is simply important to reinforce for multiple learning styles.
• The green “exercise box” signals something to do. Take a moment to solve the problem. We think it will help you understand the course content more thoroughly. In this case, working out how many
natural numbers divide 12 evenly will help you understand the next point.
• The box with an ! on the left is a “don’t box”. It is used in a couple of circumstances:
□ To pose a question we want you to think about but not solve (like this case).
□ To highlight something you should not do, for example on assignments.
Finding the number of divisors of 12 is reasonable to do by hand. But doing the same thing for 5,218,303 is too difficult to solve by hand in a reasonable amount of time. Our task in this course will
be to write instructions to allow the computer to solve tasks such as these.
Instead of solving this particular problem, we will write a function that solves the problem in general, for any number.
We can test our function using small numbers like 12, 31, and 63. Once we are confident the function is correct, we can use it to answer the “big” question.
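The course will write this function in Racket; just to make the task concrete, here is a throwaway sketch in Python (my own illustration, not course code) of the general divisor-counting function being described:

```python
def count_divisors(n):
    """Count the natural numbers that divide n evenly."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# Test on small numbers first, as the commentary suggests.
print(count_divisors(12))  # 6  (divisors: 1, 2, 3, 4, 6, 12)
print(count_divisors(31))  # 2  (31 is prime)
```

Once the small cases check out, the same function answers the question for a number like 5,218,303 without any extra work by hand.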
This video discusses programming language design and why we’ve chosen a functional programming language (Racket) rather than a more traditional choice. We then review some of the core concepts that
functional programming languages borrow from mathematics.
There are three fundamental problems that programming language designers must solve for their programming language:
• How programs are expressed (the syntax)
• What programs mean (the semantics)
• Ensure that each valid program has exactly one meaning (avoid ambiguity)
Syntax has to do with the form or structure of something. The sample sentence does not have the expected structure – first word begins with a capital letter, punctuation at the end, etc.
“Trombones fly hungrily.” is a syntactically correct sentence. It has the expected form/structure. But semantically it is meaningless. Trombones don’t fly nor do they get hungry. Note that we are
referring in this example to the real world rather than a constructed world in a fantasy novel or poetry where there might be flying hungry trombones.
That said, the semantics of our programs will be with respect to the constructed world of our programming language.
English can be ambiguous. In “Sally was given a book by Joyce” it’s unclear whether the person giving the book was named Joyce or the person who wrote the book was named Joyce.
Another example is “Time flies like an arrow”. One possible meaning is that time passes quickly. But another meaning parallels the sentence “Fruit flies like a banana.” There are several other
possible meanings as well.
It’s very helpful to be able to predict what your program means (what it will do) while you are writing it. If you can’t, you’re no better than one of those proverbial monkeys banging on a typewriter
trying to come up with Shakespeare.
Fortunately, Racket allows us to develop a simple semantic model that we can use to predict what a program does.
There are other reasons we chose Racket for CS135. Watch the video for more about that.
The following video introduces the DrRacket programming environment and the core features to get you started programming.
You will be changing language levels periodically as you become a more experienced programmer. Changing these settings, in this manner, will be required each time.
Rational numbers are those that can be represented as a ratio between two integers. For example, 22/7. Irrational (“inexact”) numbers cannot be written as a fraction. Examples include π and the
square root of 2.
A calculator doesn’t make much distinction between the rational and irrational numbers. A calculator displays π as 3.141592654 and 22/7 as 3.142857143. There is nothing to distinguish the rational
number from the irrational number.
Racket does make the distinction. Use rational (“exact”) numbers whenever you can.
The n^th argument is associated with the function’s n^th parameter. For example, in g(1,3), the first argument (1) is associated with the first parameter (x). Each place that x is used, substitute 1.
Similarly, each place y is used, substitute 3.
“Evaluation by substitution” means to take a subexpression, find its value, and then replace the subexpression by the value (that’s the “substitution” part). Repeat until you’re left with just a value.
If you choose a complicated subexpression you may need to use “evaluation by substitution” to find its value.
This notion of evaluation by substitution is an important one. We’ll be defining rules to make this rigorous and then adding to those rules as we introduce new language constructs. You’ll get the
first taste of them in just a few slides.
Noting the two conventions as the end of the slide, consider g(g(1,3), f(3)) again. We can’t apply g to g(1,3) and f(3) because of the first rule – g(1,3) and f(3) are not values.
So we have a choice to make. Should we evaluate the subexpression g(1,3) or f(3) first? The second rule says to do the leftmost one, g(1,3).
Stated another way: read the expression from left to right, top to bottom.
Substitute the first function application that has fully evaluated arguments.
Review the derivation on the previous slide. Verify that these two conventions were obeyed at each step.
Now, for any expression:
• there is at most one choice of substitution;
• the computed final result is the same as for other choices.
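The two conventions can be made mechanical. The sketch below is my own illustration in Python (not the course's Racket), and the bodies chosen for g and f are invented for the example, since the slides leave them abstract. Expressions are nested tuples, and step performs exactly one substitution, always on the leftmost subexpression whose arguments are already values:

```python
# A one-step substitution evaluator, illustrating the two conventions.
# An expression is an int (a value) or a tuple (operator, arg, ...).
DEFS = {
    'g': (('x', 'y'), ('+', 'x', 'y')),  # pretend g(x, y) = x + y
    'f': (('x',),     ('*', 'x', 'x')),  # pretend f(x)   = x * x
}

def subst(body, env):
    """Replace each parameter name in body with its argument value."""
    if isinstance(body, str):
        return env[body]
    if isinstance(body, tuple):
        return (body[0],) + tuple(subst(part, env) for part in body[1:])
    return body

def step(expr):
    """Do exactly one substitution, on the leftmost eligible subexpression."""
    op, *args = expr
    for i, arg in enumerate(args):
        if not isinstance(arg, int):          # convention 2: leftmost first
            return (op, *args[:i], step(arg), *args[i + 1:])
    # Convention 1: apply only once all arguments are values.
    if op == '+':
        return args[0] + args[1]
    if op == '*':
        return args[0] * args[1]
    params, body = DEFS[op]
    return subst(body, dict(zip(params, args)))

expr = ('g', ('g', 1, 3), ('f', 3))   # g(g(1,3), f(3)), as in the slides
while not isinstance(expr, int):
    print(expr)
    expr = step(expr)
print(expr)  # the final value
```

Each pass through the loop prints one line of the derivation, in exactly the order the conventions dictate.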
This video is the transition from mathematical notation to Racket notation. The slides give a good overview of the changes. The video can help solidify your understanding of how to transform the
traditional mathematical notation to Racket notation.
To transform from traditional mathematical notation to Racket notation, proceed as if you were evaluating the expression via substitution. But instead of substituting back the value, substitute the
expression re-written as Racket.
For example:
(6 - 4) / (5 + 7) ⇒
(- 6 4) / (5 + 7) ⇒
(- 6 4) / (+ 5 7) ⇒
(/ (- 6 4) (+ 5 7))
Note that the first line is valid math; the last line is valid Racket; the middle lines are a mixture.
You can test your understanding by transforming mathematical expressions to equivalent Racket expressions. Then enter them into DrRacket to verify that they give the correct answer.
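To see that the Racket form is just ordinary prefix application, here is a small Python sketch (the nested-tuple encoding is my own, not part of the course) that evaluates the example from above, using Fraction to echo Racket's exact rational arithmetic:

```python
from fractions import Fraction
import operator

# Racket-style prefix expressions as nested tuples: (op, arg1, arg2).
OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': lambda a, b: Fraction(a) / Fraction(b)}

def evaluate(expr):
    """Evaluate a prefix expression bottom-up, the way Racket applies functions."""
    if not isinstance(expr, tuple):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left), evaluate(right))

# (/ (- 6 4) (+ 5 7)), i.e. (6 - 4) / (5 + 7)
result = evaluate(('/', ('-', 6, 4), ('+', 5, 7)))
print(result)  # 1/6, kept as an exact rational, like Racket's 1/6
```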
An example of “extra” parentheses in math: (2 + 3). The parentheses add nothing to simply writing 2 + 3. If you translate this literally you would get the Racket expression ((+ 2 3)). The outside set
of parentheses will cause Racket to “think” it needs to apply a function, but there is no function specified to apply.
We also have a tool to show the substitution process interactively. Play around with the following to see how it corresponds to what is on the slide and how it implements the rules we’ve seen so far.
Here’s our first substitution rule.
Essentially, the rule says that we just “know” what the built-in functions do. It might be because we’ve studied them since early grade school (+, *, etc) or because we’ve read the DrRacket
documentation (string<?). Use this knowledge to compute the result for the given arguments (which will be values) and then substitute that result into the expression in place of the function application.
In the stepper, this rule has the name “as-if-by-magic” because we don’t try to explain how the built-in function works.
It’s a great idea to fire up DrRacket and enter these into the interactions window. Learning what the various error messages mean on these small examples will help you debug your assignment problems
more efficiently.
The third one is “interesting”. Watch the video to understand what’s going on.
Here’s a video showing how to find the list of built-in numerical functions available in our language level.
Here’s a direct link to the Help Desk documentation .
Bookmark it! For technical documentation that’s a better strategy than Googling it every time. There is less risk of getting out-of-date material.
Defining functions is the core activity in programming in Racket. This video covers the syntax, reviews applications of functions to arguments, defining constants, etc.
Special forms are typeset just a little differently in the slides. A special form, like define is in a bolder and slightly darker font than a normal function. If you skip ahead to M04 you’ll notice
that and, or, and cond are typeset the same way. They’re also special forms. Unfortunately, we’re not completely consistent. else is also typeset this way but is not a special form.
Verify that the two rules given are used to choose the next subexpression to evaluate.
A Racket program is read top-to-bottom, left-to-right. That is,
(define (foo x y) (+ x x y))
(foo 1 2)
is read as
(define (foo x y) (+ x x y)) (foo 1 2)
The function definition is already as simple as it gets, so (foo 1 2) is the left-most subexpression that we can simplify. Furthermore, (define (foo ... occurs to its left, as required by the
substitution rule.
Evaluating (foo 1 2) means substituting the first argument (1) wherever the first parameter (x) occurs in the body expression and doing similarly for the second argument/parameter. That gives us (+ 1
1 2). That expression is substituted back in place of (foo 1 2).
That is,
(define (foo x y) (+ x x y)) (foo 1 2)
⇒ (define (foo x y) (+ x x y)) (+ 1 1 2)
Recall that => means “yields” and separates one substitution step from another.
Or, with the stepper. You’ll notice in the “Rule” section that it calls this rule “Big-sub”. We’ll have another rule named “Small-sub” soon.
Rules so far:
1. (f v1...vn) => v when f is built-in…
2. (f v1...vn) => exp' when (define (f x1..xn) exp) occurs to the left…
An in-depth explanation of the trace shown in the slide:
• (foo (- 3 1) (+ 1 2)): foo is not built in, so the first rule does not apply; the second rule does. But the arguments to foo are not yet values. So move on to (- 3 1). - is a built-in function and both arguments are values, so the first rule applies. Substitute the value 2 for (- 3 1).
• (foo 2 (+ 1 2)): This is very similar to the first step. We can’t use the second rule on foo because some arguments are not yet values. So move right to 2. It’s already a value; nothing to do. Moving right to (+ 1 2), the first rule applies. Use it to reduce the expression to 3 and substitute it for (+ 1 2).
• (foo 2 3): The second rule now applies (foo is user-defined and all the arguments are values). Substitute 2 (the first argument) everywhere x occurs in foo’s body expression. Similarly for 3 and y. Substitute that rewritten body expression in place of (foo 2 3).
• (* 2 (sqr 3)): Use the first rule for built-in functions to evaluate (sqr 3) and substitute it back.
• (* 2 9): Use the first rule for built-in functions.
• 18: 18 is a value, so we’re done.
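Putting the whole trace together: judging from the steps above, the program being stepped would look like the following (a reconstruction from the trace; the actual slide may differ):

```racket
(define (foo x y) (* x (sqr y)))

(foo (- 3 1) (+ 1 2))
;; ⇒ (foo 2 (+ 1 2))
;; ⇒ (foo 2 3)
;; ⇒ (* 2 (sqr 3))
;; ⇒ (* 2 9)
;; ⇒ 18
```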
The fact that parameter names are local to the function is really handy. It means that you can choose their names without regard for the rest of the program. You can focus on deciding which names
make sense for the function, given its purpose.
The parameter names as used in the slides so far won’t be acceptable once we move beyond toy examples to functions that do something meaningful. Then we’ll want parameter names that convey meaning
about their purpose in the function.
We can see this example in action in the following stepper. Note that completely processed definitions that will not change are at the top of the listing followed by a blank line. k is already
completely defined and starts out above that blank line.
p will move up after it’s completely defined in the fourth step. foo is also completely defined but doesn’t get to move up until the definitions preceding it are completely defined.
Function definitions are always in simplest form and aren’t further reduced. That’s not necessarily the case with constant definitions. Notice that the right hand side of id => val is a value. If the
expression starts as (define p (* 3 3)) the (* 3 3) must be simplified to 9 first.
This is illustrated in the stepper with the next slide. Note that the stepper gives this rule the name “small-sub”. For both a constant and a user-defined function we’re making a substitution. For
the constant, it’s a pretty small substitution; for a function, it’s always much bigger.
The convention that we stop repeating a definition is represented in the stepper by moving the definition above the blank line. Without this convention we would need to copy the entire program for
every step. This convention will benefit you big time when it comes to writing exams!
Comments are for the benefit of humans. The computer ignores them.
Repeating code leads to several problems:
• If there is a bug in the repeated code, it needs to be fixed multiple times. Some places may be missed.
• The programmer has to spend time typing the same thing multiple times.
• The repeated code may be useful elsewhere but can’t be easily reused.
• The repeated code is difficult to test. In a rigorous testing environment, it may force many additional tests that are essentially the same.
• Someone reading the code needs to stop and figure out what the code does, interrupting the flow of understanding the main function.
Using constants and helper functions are both examples of the “DRY principle” – “Don’t Repeat Yourself”.
Being DRY reduces the amount of code you need to write, debug, and maintain. If there is a bug, there’s only one place to go to fix it.
Factoring out complex calculations gives a separate function that is usually easier to test.
Humans can’t keep too much in our heads at once. We need to chunk things. Factoring out calculations into a helper function is one way to do that. It’s often worth it even if the helper function is
only used once.
Having meaningful names in the code to identify an operation also helps with that chunking. “Oh yeah, I know what that does.”
Programmers can use the interactions pane to do one-time testing, or they write their tests using check-expect. Using check-expect is preferable because:
• All you need to do is click Run to re-run all of the tests you’ve developed.
• The interpretation of the test (did it pass or fail?) is built into the test. That isn’t true if you just say (distance 0 0 3 4).
• It’s less error-prone. Even tests sometimes need to be debugged!
You are welcome to use check-expect on Assignment 01 but it is not required. Starting with A02, testing with check-expect will be checked and will be expected.
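For example, the one-time test (distance 0 0 3 4) mentioned above becomes a repeatable test like this (the distance definition here is a hypothetical illustration, not the assignment’s):

```racket
;; (distance x1 y1 x2 y2) produces the Euclidean distance
;; between the points (x1, y1) and (x2, y2).
(define (distance x1 y1 x2 y2)
  (sqrt (+ (sqr (- x2 x1)) (sqr (- y2 y1)))))

;; Clicking Run re-runs every check-expect and reports passes and failures.
(check-expect (distance 0 0 3 4) 5)
(check-expect (distance 1 1 1 1) 0)
```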
The next video shows how DrRacket can help you determine the scope of an identifier.
The x parameter in function f is said to “shadow” the global constant x.
Another way to say it is that x “hides” the global constant.
This video is almost entirely in DrRacket to show you the definitions window and how to use it to define functions. There are a number of tools built into DrRacket that are covered:
• a stepping tool that may be helpful for debugging your functions
• a tool to help you with scoping issues
• a feature that helps you identify code that hasn’t been tested (making sure everything is tested is a core skill for earning a high mark in CS135!)
• a feature that automatically formats/indents your code for you.
Section: Research Program
The concept of observer originates in control theory, and it is particularly pertinent for epidemiological systems. Associated with an input-output system is the problem of reconstructing its state.
Indeed, for a given system not all states are known or measured; this is particularly true for biological systems. There are many reasons for this: measurement is not feasible without destroying
the system, it is too expensive, no sensors are available, or the measurements are too noisy. This poses the problem of knowing the state at the present time. An observer is another system, whose
inputs are the inputs and outputs of the original system and whose output gives an estimate of the state of the original system at the present time. Usually the estimate is required to be
exponential. In other words, an observer uses the signal information of the original system to reconstruct the state dynamically. More precisely, consider an input-output nonlinear system described by
$$\left\{\begin{array}{l} \dot{x} = f(x,u) \\ y = h(x), \end{array}\right. \qquad (1)$$
where $x(t) \in \mathbb{R}^n$ is the state of the system at time $t$, $u(t) \in U \subset \mathbb{R}^m$ is the input and $y(t) \in \mathbb{R}^q$ is the measurable output of the system.
An observer for the system (1) is a dynamical system
$$\dot{\hat{x}}(t) = g\big(\hat{x}(t),\, y(t),\, u(t)\big), \qquad (2)$$
where the map $g$ has to be constructed such that the solutions $x(t)$ and $\hat{x}(t)$ of (1) and (2) satisfy, for any initial conditions $x(0)$ and $\hat{x}(0)$,
$$\| x(t) - \hat{x}(t) \| \le c \, \| x(0) - \hat{x}(0) \| \, e^{-a t}, \qquad \forall t > 0,$$
or at least $\| x(t) - \hat{x}(t) \|$ converges to zero as time goes to infinity.
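For intuition, here is a minimal numerical sketch of this definition for a discrete-time linear stand-in for system (1); the matrices and the observer gain are invented for illustration, and the exponential bound above becomes a geometric decay of the estimation error:

```python
import numpy as np

# Plant: discrete-time LTI system  x_{k+1} = A x_k,  y_k = C x_k
# (an LTI stand-in for system (1); all numbers are invented).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
C = np.array([[1.0, 0.0]])

# Observer gain L, chosen by hand so that A - L C has eigenvalues 0.8 and 0.6.
L = np.array([[0.5],
              [0.3]])

def simulate(x0, xhat0, steps=50):
    """Run plant and observer side by side; return estimation-error norms."""
    x, xhat = x0.copy(), xhat0.copy()
    errors = []
    for _ in range(steps):
        y = C @ x                               # measured output of the plant
        x = A @ x                               # true state update (unknown to us)
        xhat = A @ xhat + L @ (y - C @ xhat)    # observer: model copy + output correction
        errors.append(float(np.linalg.norm(x - xhat)))
    return errors

errors = simulate(np.array([[1.0], [1.0]]), np.array([[0.0], [0.0]]))
```

The error recursion is e_{k+1} = (A - L C) e_k, so as long as A - L C is stable the estimate converges regardless of the initial mismatch.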
The problem of observers is completely solved for linear time-invariant (LTI) systems. It is a difficult problem for nonlinear systems and is currently an active subject of research. The problem of
observation and observers (software sensors) is central in nonlinear control theory. Considerable progress has been made in the last decade, especially by the “French school”, which has made
important contributions (J.P. Gauthier, H. Hammouri, E. Busvelle, M. Fliess, L. Praly, J.L. Gouze, O. Bernard, G. Sallet) and is still very active in this area. The problem now is to identify
relevant classes of systems for which reasonable and computable observers can be designed. The concept of observer has been ignored by the modeling community in epidemiology, immunology and virology. To
our knowledge there is only one use of an observer in virology (Velasco-Hernandez J., Garcia J. and Kirschner D. [22]), in modeling the chemotherapy of HIV, but that observer, based on
classical linear theory, is a local observer and cannot handle the nonlinearities.
Understanding Mathematical Functions: What Alternatives to the Copy and Paste Functionality Are Available
Understanding mathematical functions is crucial in various fields, from engineering to data analysis. These functions represent the relationship between a set of inputs and their corresponding
outputs, playing a vital role in solving equations, making predictions, and understanding complex systems.
• Define mathematical functions: Mathematical functions are relationships between a set of inputs and their corresponding outputs, often represented by an equation or a graph.
• Importance of understanding mathematical functions in various fields: In engineering, understanding functions is essential for designing systems and analyzing data. In finance, they help in
modeling and predicting market trends. In computer science, they are fundamental to creating algorithms and solving complex problems.
Key Takeaways
• Mathematical functions play a crucial role in various fields, from engineering to data analysis.
• Understanding the types and notations of mathematical functions is essential for effectively utilizing them.
• Copy and paste functionality has limitations such as inaccuracy and time-consuming nature, especially for large datasets.
• Alternatives like using software for data manipulation, built-in functions in spreadsheet software, and custom scripts offer advantages in accuracy, efficiency, and scalability.
• Practical examples demonstrate the effectiveness of alternative methods in data manipulation and automation of tasks.
Understanding Mathematical Functions
Mathematical functions are a fundamental concept in mathematics and are essential for various applications in fields such as physics, engineering, economics, and computer science. In this blog post,
we will delve into the definition of mathematical functions, the different types of functions, and the common notations used to represent them.
A. Definition of mathematical functions
A mathematical function is a relationship between a set of inputs and a set of possible outputs, where each input is related to exactly one output. In other words, a function assigns each input value
to a unique output value. This relationship is typically represented by a rule or equation.
B. Types of mathematical functions
There are several types of mathematical functions, each with its own unique properties and characteristics. Some of the most common types of functions include:
• Linear functions: These functions have a constant rate of change and can be represented by a straight line on a graph.
• Quadratic functions: These functions have a squared term and can be represented by a parabolic curve on a graph.
• Exponential functions: These functions involve a constant base raised to a variable exponent and often grow or decay at an increasing rate.
• Trigonometric functions: These functions are based on the trigonometric ratios of angles in a right-angled triangle, such as sine, cosine, and tangent.
C. Common notations used in mathematical functions
Mathematical functions are often represented using various notations that help to convey the relationship between the input and output variables. Some of the common notations include:
• f(x): This notation represents a function of the variable x, where f is the name of the function.
• y=mx+b: This is the standard form of a linear equation, where m is the slope of the line and b is the y-intercept.
• g(x), h(x), etc.: Functions are often denoted by different letters, such as g(x) or h(x), to distinguish between multiple functions within the same context.
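These notations map directly onto code; a hypothetical sketch (the particular functions and numbers here are invented for illustration):

```python
def f(x):
    """f(x): a function of one variable x; here f(x) = x**2 - 1."""
    return x ** 2 - 1

def linear(x, m=2.0, b=3.0):
    """y = m*x + b: a linear function with slope m and y-intercept b."""
    return m * x + b

def g(x):
    """g(x): a second function, named differently to distinguish it from f."""
    return 3 * x
```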
The Limitations of Copy and Paste Functionality
When it comes to working with mathematical functions, the copy and paste functionality may seem like a convenient option for transferring data from one place to another. However, it comes with
several limitations that can hinder accuracy and efficiency.
Inaccuracy and potential for errors
• One of the main drawbacks of using copy and paste for mathematical functions is the potential for errors and inaccuracies. When manually moving data from one location to another, there is a high
risk of unintentional mistakes, such as misplacing numbers or formulas.
• Furthermore, copy and paste does not account for any changes that need to be made to the data during the transfer process, leading to potential discrepancies in the final results.
Time-consuming for large datasets
• Another limitation of the copy and paste functionality is its time-consuming nature, especially when dealing with large datasets. Manually copying and pasting extensive amounts of data can be a
tedious and laborious task.
• Moreover, the time spent on copying and pasting data could be better utilized for more critical tasks, such as analyzing and interpreting the data.
Alternatives to Copy and Paste Functionality
When it comes to mathematical functions, there are several alternatives to the traditional copy and paste functionality that can greatly improve efficiency and accuracy in data manipulation and analysis.
A. Utilizing software for data manipulation and analysis (Python, R, MATLAB, etc.)
One of the most powerful alternatives to copy and paste functionality is leveraging specialized software for data manipulation and analysis. These tools offer a wide range of mathematical functions
and capabilities that can streamline complex tasks and provide more accurate results.
1. Python
Python is a versatile programming language that is widely used in the field of data analysis. Its extensive library of mathematical functions and packages such as NumPy and Pandas make it a valuable
tool for handling complex mathematical operations.
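As a small illustration of the point (the measurement values are made up), a vectorized NumPy expression applies a formula to a whole dataset at once instead of copying it entry by entry:

```python
import numpy as np

# Five hypothetical measurements.
measurements = np.array([12.0, 15.5, 9.8, 20.1, 14.2])

# One vectorized expression replaces pasting the same formula per entry.
mean = measurements.mean()
std = measurements.std(ddof=1)            # sample standard deviation
z_scores = (measurements - mean) / std    # standardize every entry at once
```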
2. R
R is another popular programming language specifically designed for statistical computing and graphics. It offers a wide array of built-in functions for mathematical analysis, making it a powerful
alternative to traditional copy and paste methods.
3. MATLAB
MATLAB is a high-level programming language and interactive environment for numerical computation, visualization, and programming. It is widely used in engineering and scientific research for its
extensive mathematical functions and toolboxes.
B. Using built-in functions in spreadsheet software (Excel, Google Sheets, etc.)
Most spreadsheet software such as Excel and Google Sheets offer built-in functions that can perform a variety of mathematical operations without the need for manual copying and pasting.
1. Excel
Excel provides a wide range of built-in functions for mathematical calculations, from simple arithmetic to complex statistical analysis. These functions can be used to automate repetitive tasks and
ensure accuracy in data manipulation.
2. Google Sheets
Google Sheets also offers a variety of built-in functions for mathematical operations, making it a convenient alternative to traditional copy and paste methods. These functions can be used to
streamline data analysis and improve efficiency.
C. Writing custom scripts for repetitive tasks
For more specialized tasks, writing custom scripts or programs can provide a powerful alternative to copy and paste functionality. By automating repetitive tasks through scripting, mathematicians and
data analysts can save time and reduce the risk of errors in their work.
Advantages of Alternative Methods
When it comes to understanding mathematical functions, exploring alternatives to the traditional copy and paste functionality can offer various advantages. These alternative methods can enhance
accuracy in data manipulation and analysis, improve efficiency in handling large datasets, and provide scalability for more complex functions and tasks.
A. Accuracy in data manipulation and analysis
• Utilizing built-in functions:
Built-in functions in mathematical software can provide more precise calculations compared to manual copy and paste operations. These functions are designed to perform specific mathematical
operations with high accuracy, reducing the potential for human error.
• Automated data processing:
Alternative methods, such as scripting and automation tools, can streamline data manipulation and analysis tasks. By automating repetitive processes, the likelihood of errors introduced through
manual copy and paste actions can be minimized, leading to more accurate results.
B. Efficiency in handling large datasets
• Batch processing:
Alternative methods enable batch processing of large datasets, allowing for faster and more efficient data handling compared to manual copy and paste operations. This can result in significant
time savings and improved productivity in mathematical functions and analysis tasks.
• Parallel processing:
Some alternative methods support parallel processing, which can speed up the execution of mathematical functions on large datasets by distributing the workload across multiple processors or
computing resources.
C. Scalability for more complex functions and tasks
• Custom functions and libraries:
Alternative methods offer the flexibility to create custom functions and leverage existing libraries tailored to specific mathematical tasks. This scalability allows for the implementation of
more advanced mathematical functions and analysis techniques beyond the limitations of traditional copy and paste operations.
• Integration with advanced tools:
By utilizing alternative methods, such as programming languages and advanced mathematical software, users can integrate with sophisticated tools and frameworks to address complex functions and
tasks that may not be feasible with basic copy and paste functionality.
Practical Examples
When it comes to understanding mathematical functions, it’s important to explore practical examples of how they are utilized in real-world scenarios. Let’s take a look at some practical examples that
demonstrate alternatives to the copy and paste functionality.
A. Using Python for data manipulation and analysis
Python is a powerful programming language that offers a wide range of mathematical functions for data manipulation and analysis. With libraries such as NumPy and Pandas, Python provides a robust set
of tools for performing complex mathematical operations on large datasets. For example, you can use Python to calculate statistical measures, apply mathematical transformations, and perform advanced
data analysis without the need for manual copy and paste.
B. Implementing built-in functions in Excel for mathematical operations
Excel is a popular tool for performing mathematical operations, and it offers a variety of built-in functions that can be used to automate calculations and analyses. Functions such as SUM, AVERAGE,
MAX, and MIN allow users to quickly perform mathematical operations on large sets of data, saving time and reducing the need for manual copy and paste. Additionally, Excel’s ability to create custom
functions using VBA (Visual Basic for Applications) provides even more flexibility for automating mathematical tasks.
C. Custom scripting for automating repetitive tasks
For more complex and repetitive mathematical tasks, custom scripting can be utilized to automate processes and reduce the reliance on copy and paste functionality. Whether it’s using scripting
languages such as JavaScript, Perl, or PowerShell, custom scripts can be developed to handle specific mathematical functions and operations, allowing for greater efficiency and accuracy in day-to-day work.
Understanding mathematical functions is crucial for solving complex problems and making informed decisions in various fields. It allows us to analyze data, predict outcomes, and optimize processes,
ultimately leading to more efficient and effective solutions.
When it comes to utilizing mathematical functions, alternative methods such as using coding languages or specialized software can offer significant advantages over the copy and paste functionality.
These alternatives allow for greater flexibility, automation, and scalability, enabling users to handle large datasets and perform complex calculations with ease.
Higher homotopy operations and André-Quillen cohomology
There are two main approaches to the problem of realizing a Π-algebra (a graded group Λ equipped with an action of the primary homotopy operations) as the homotopy groups of a space X. Both involve
trying to realize an algebraic free simplicial resolution G• of Λ by a simplicial space W•, and proceed by induction on the simplicial dimension. The first provides a sequence of André-Quillen
cohomology classes in H^{n+2}(Λ; Ω^n Λ) (n ≥ 1) as obstructions to the existence of successive Postnikov sections for W• (cf. Dwyer et al. (1995) [27]). The second gives a sequence of geometrically
defined higher homotopy operations as the obstructions (cf. Blanc (1995) [8]); these were identified in Blanc et al. (2010) [16] with the obstruction theory of Dwyer et al. (1989) [25]. There are
also (algebraic and geometric) obstructions for distinguishing between different realizations of Λ. In this paper we: (a) provide an explicit construction of the cocycles representing the cohomology
obstructions; (b) provide a similar explicit construction of certain minimal values of the higher homotopy operations (which reduce to “long Toda brackets”); and (c) show that these two constructions
correspond under an evident map.
Bibliographical note
Funding Information:
We would like to thank the referee for his or her careful reading of the paper. This research was supported by BSF grant 2006039, and the third author was supported by a Calvin Research Fellowship.
• André-Quillen cohomology
• Higher homotopy operations
• Homotopy-commutative diagram
• K-Invariants
• Obstruction theory
(575a) Sampling Domain Reduction for Surrogate Model Generation – Applied to Hydrogen Production with Carbon Capture
AIChE Annual Meeting
Wednesday, November 13, 2019 - 3:30pm to 3:49pm
The definition of the sampling domain impacts both the performance of a surrogate model and the number of sampling points required to obtain a satisfactory fit. The most commonly applied approach to limiting the sampling domain is the use of simple box constraints for each of the independent variables. This leads to a choice between two problems: 1. the chosen box constraints are tight, which limits the applicability of the surrogate model, or 2. the chosen box constraints are large, causing weak bounds on the relevant sampling domain and thereby sampling in operating regions never encountered in the application of the surrogate model. The latter is particularly a problem in chemical engineering, in which the component flow rates normally depend on each other. Here, the application of box constraints and an adaptive sampling algorithm may result in extensive sampling in regions far outside the nominal operating conditions. This, in turn, may cause sampling in regions that exhibit highly nonlinear characteristics that are not relevant or prevailing in the regions of nominal operating conditions [1].
Structured sampling domain reduction, through incorporation of constraints from known physical relations between the chosen independent variables, may significantly improve the numerical efficiency of adaptive surrogate model generation.

If an inlet stream to the surrogate model is the overall feed to the system, it is straightforward to implement proportional or inverse proportional dependencies between the component flow rates [2]. This already limits the size of the sampling domain to relevant regions, as compositions far away from the nominal operating conditions are rarely encountered. However, this approach fails if the feed stream to the surrogate model is the product of a chemical reaction or, to a lesser extent, the product of a separation. As a motivating example, consider the product stream of a steam methane reformer (SMR) which is fed to the water-gas shift reactors (WGS). Two contradictory conclusions may be drawn for the dependency between methane and hydrogen:

1. the more methane is in the feed to the water-gas shift reactors, the more hydrogen is in the feed, due to a larger inlet flow rate of methane to the steam methane reformer while maintaining a similar extent of reaction (proportional dependency);

2. the more methane is in the feed to the water-gas shift reactors, the less hydrogen is in the feed, due to a reduced extent of reaction in the steam methane reformer (inverse proportional dependency).
However, it is not possible to draw a conclusion on the exact nature of the dependency. Hence, we propose to use a data-driven approach to solve this problem and identify a constrained sampling domain of relevance, that is, regions which can be achieved in practice based on the outlet conditions of the previous unit operations. In the first step of the approach, it is therefore necessary to sample points for the feed composition of the surrogate model with components. Here, the previous unit operations are used for creating outlet points. Based on the already sampled points (denoted by superscript cal), it is possible to calculate in total inequality constraints given by

The first set of inequality constraints corresponds to box constraints and defines upper and lower bounds of each component flow rate. The second set of constraints limits the ratios of two component flow rates, whereas the last set limits the sum of two component flow rates. In the context of the issues outlined above, the second set of constraints provides bounds on proportional dependencies between the component molar flow rates, whereas the third set provides bounds on inverse proportional dependencies. Note that these inequalities always define a convex set and thus polytopic constraints in the sampling space.
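The verbal description of the three constraint sets suggests a plausible form for the missing inequalities; the bound symbols below ($\underline{F}_i$, $\overline{r}_{ij}$, $\overline{s}_{ij}$, and the component count $N_c$) are assumptions for illustration, not the authors' notation:

```latex
\underline{F}_i \le F_i \le \overline{F}_i, \quad i = 1,\dots,N_c
  \quad \text{(box constraints)}

\underline{r}_{ij}\, F_j \le F_i \le \overline{r}_{ij}\, F_j
  \quad \text{(ratio bounds, written linearly for } F_j > 0\text{)}

\underline{s}_{ij} \le F_i + F_j \le \overline{s}_{ij}
  \quad \text{(sum bounds)}
```

Writing the ratio bounds in the linear form above makes it explicit why all three sets together define a convex polytope in the flow rates.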
The sampling of
the points requires evaluations of the previous sections in the detailed model,
e.g. the SMR in the case of sampling for a WGS. This can act as
showstopper for the reduction of the sampling domain. However, if a surrogate
model has been fitted to the previous section, then this surrogate model can be
used for the calculation of the inequality constraints at limited computational
expenses. This is for example the case in the procedure outlined in [3]
and as well a part of the philosophy of the ALAMO approach [4].
While implementation of inequality
constraints in the sampling is in general relatively straight forward, the
complexity depends on the chosen sampling approach. Adaptive sampling
algorithms frequently utilize a black-box solver for finding regions for optimal
sampling of the simulator model due to a lack of access to the code of the
simulator. This approach enables addition of inequality constraints for sampling
domain reduction, provided that the black-box solver admits general constraints.
Static (predefined) sampling approaches require, however, that points are
placed within the bounds directly. One possibility is to use only the box
constraints for defining a set of sampling points, discard all points which are
infeasible, and then select an optimal subset of the feasible points. This
approach is utilized in the ARGONAUT algorithm [5],
but may result in a large fraction of discarded points. We implement an
iterative surrogate-model generation approach, using a LASSO-based approach [6]
with polynomial basis functions for surrogate fit and complexity reduction of
the surrogate model, together with an adaptive sampling technique with linear
constraints for reducing the sampling domain as described above.
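A minimal version of the static variant described above (sample inside the box, then keep only the points satisfying the ratio and sum constraints) can be sketched as follows; the bounds below are hypothetical placeholders, not those of the SMR/WGS case study.

```python
import random

def feasible(x, lo, hi, ratio_bounds, sum_bounds):
    """Check box, ratio, and pairwise-sum constraints for one sample x."""
    if any(not (l <= xi <= h) for xi, l, h in zip(x, lo, hi)):
        return False
    for (i, j), (rmin, rmax) in ratio_bounds.items():
        if not (rmin <= x[i] / x[j] <= rmax):   # proportional dependency
            return False
    for (i, j), (smin, smax) in sum_bounds.items():
        if not (smin <= x[i] + x[j] <= smax):   # inversely proportional dependency
            return False
    return True

def rejection_sample(n, lo, hi, ratio_bounds, sum_bounds, seed=0):
    """Uniform samples in the box, filtered to the constrained polytope."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
        if feasible(x, lo, hi, ratio_bounds, sum_bounds):
            pts.append(x)
    return pts

# Two hypothetical component flow rates with a ratio bound x0/x1 in [0.8, 1.2].
pts = rejection_sample(50, lo=[1, 1], hi=[10, 10],
                       ratio_bounds={(0, 1): (0.8, 1.2)}, sum_bounds={})
print(len(pts))  # 50
```

As the text notes, this discard-and-keep strategy (as in ARGONAUT) can throw away a large fraction of points; the adaptive approach instead hands the linear constraints to the sampler directly.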
The sampling domain reduction is applied
to a model of a SMR in Aspen HYSYS. A WGS reactor is located after the SMR and
shall be modelled as a new surrogate model. Five chemical components in the outlet
of the SMR are identified to have dependencies: CH₄, H₂O,
CO, CO₂, and H₂. If proportional dependencies are
incorporated in the feed to the SMR, it is possible to reduce the sampling
domain to 1 % of the size of the box constraints, whereas if we do not
incorporate proportional dependencies in the feed to the SMR, we can reduce the
sampling domain to 14 % of the total sampling domain. Figure 1
illustrates the dependencies between steam, methane, and hydrogen based on the
sampled points and the inequality constraints when proportional dependencies
are incorporated in the feed to the SMR. These dependencies are especially
pronounced between hydrogen and steam, showing the necessity of incorporating
structured sampling domain reduction to improve the sampling for surrogate
model generation.
Figure 1:
Illustration of the dependencies in-between a)
methane and steam, b) methane and hydrogen, and c) steam and hydrogen including
the bounds illustrated in Eqs. (1)-(3).
[1] J. Straus and S. Skogestad, "Surrogate model generation using self-optimizing variables," Comput. Chem. Eng., vol. 119, pp. 143-151, Nov. 2018.
[2] J. Straus and S. Skogestad, "Use of Latent Variables to Reduce the Dimension of Surrogate Models," in Computer Aided Chemical Engineering, 2017, vol. 40, pp. 445-450.
[3] J. Straus and S. Skogestad, "Minimizing the complexity of surrogate models for optimization," in Computer Aided Chemical Engineering, 2016, vol. 38, pp. 289-294.
[4] A. Cozad, N. V. Sahinidis, and D. C. Miller, "Learning surrogate models for simulation-based optimization," AIChE J., vol. 60, no. 6, pp. 2211-2227, Jun. 2014.
[5] F. Boukouvala and C. A. Floudas, "ARGONAUT: AlgoRithms for Global Optimization of coNstrAined grey-box compUTational problems," Optim. Lett., vol. 11, no. 5, pp. 895-913, Jun. 2017.
[6] H. Zou, "The Adaptive Lasso and Its Oracle Properties," J. Am. Stat. Assoc., vol. 101, no. 476, pp. 1418-1429, Dec. 2006.
Don't let ESR waste power and cook capacitors
Source: EDN article
Bill Schweber explains some basic understanding of capacitor’s ESR and why it is of interest.
Back in school when being introduced to the basics of electrical engineering, we learned that the ideal capacitor was a simple, basic reactive element. It was easily modeled with capacitive reactance
XC = 1/(2πfC)
where f is the frequency and C is the capacitance value. Then, in some (but not all) courses, that idealistic façade was stripped away and we learned that reality is not so simple. There’s an
important real-world aspect to the ideal capacitor, called its equivalent series resistance (ESR), which quantifies the capacitor’s effective resistance RS to RF currents.
This ESR actually has multiple constituent elements, including the part contributed by the electrodes and terminal leads, as well as that due to the dielectric, plate material, electrolytic solution,
all as measured at a particular frequency. If you look at ESR in terms of the actual series resistance, the leakage resistance, and the dielectric loss, ESR goes from being just a resistor in series
with an ideal capacitor to something more complicated, Figure 1. (Note that real capacitors also have a complementary parasitic self-inductance called equivalent series inductance or ESL, but that’s
another story for another time.)
Figure 1 The theoretical capacitor is a simple reactive element, but a real one has equivalent series resistance due to ohmic series resistance, leakage resistance, and dielectric loss. (Image
courtesy of QuadTech, Incorporated)
Why should we worry about ESR? For basic DC-only blocking circuits, ESR may have little impact. However, when you are designing a switching power supply or an RF circuit, ESR obviously affects your
modeling, and real-world performance of the circuit. ESR shifts and degrades the resonance of the circuit in which the capacitor is functioning, as well as the Q (quality factor) of the circuit. ESR
is a function of frequency, obviously, as well as capacitor type, materials, construction, capacitor value, and many other factors, Figure 2.
Figure 2 ESR is a function of many factors including operating frequency and capacitor material and type (Image courtesy of Murata)
The implications of ESR go beyond performance. As a "resistor" it also creates thermal power dissipation P as a function of the current through the capacitor, with P = I²R_S. Not only do we
dislike wasting power in most cases, in terms of energy use (cost) and run time, but this dissipation also adds to the thermal load of the system. Even if it doesn't burden the system, it can soon exceed the thermal
rating of the capacitor itself. If you go through the numbers, a basic 0.47 μF capacitor with a modest ESR of about 0.1 Ω at 1 GHz will dissipate around 75 mW – which is either not
much or a lot, depending on the circuit and system details and the capacitor rating.
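The arithmetic behind that ballpark figure is easy to reproduce; the RF current of roughly 0.87 A rms below is an assumed value chosen to match the article's 75 mW example, not a number from the article itself.

```python
import math

def esr_dissipation(i_rms_a: float, esr_ohm: float) -> float:
    """Power dissipated in a capacitor's ESR (watts), P = I^2 * R_s."""
    return i_rms_a ** 2 * esr_ohm

def capacitive_reactance(f_hz: float, c_farad: float) -> float:
    """Ideal capacitive reactance Xc = 1 / (2*pi*f*C), in ohms."""
    return 1.0 / (2.0 * math.pi * f_hz * c_farad)

# 0.47 uF at 1 GHz: the ideal reactance is tiny, so the ESR dominates the impedance.
xc = capacitive_reactance(1e9, 0.47e-6)
# An assumed RF current of ~0.87 A rms through a 0.1-ohm ESR
# reproduces the ballpark 75 mW figure.
p = esr_dissipation(0.87, 0.1)
print(f"Xc = {xc:.2e} ohm, P = {p * 1e3:.1f} mW")
```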
The obvious question is how do you determine ESR? For most engineers, the answer is clear: you look at the vendor data sheet numbers and the graph of ESR versus frequency. Reputable vendors provide
detailed ESR specifications which define not only the value but also how they determine it.
If you want to measure ESR yourself, it’s not an easy task. An article in Microwave Journal, “The Methods and Problems of Capacitor ESR Measurement,” (free, but registration required) went into
considerable detail on a long-standing and accepted way of doing so along with its limitations, as well as a more-advanced technique; the various vendors may use other approaches. Regardless of which
one you try, there are many test and instrumentation subtleties, as there always are when dealing with signals and components at GHz and higher frequencies.
Has one of your designs ever been compromised by excessive ESR that you did not expect? Have you ever tried to dig into the details of the ESR of a specific capacitor you were using, or tried to
measure ESR yourself?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
1. QuadTech, “Equivalent Series Resistance (ESR) of Capacitors“
featured image credit: Murata
Prediction of Radiative cloud fraction
The radiative cloud fraction characterizes the fraction of the incoming radiation that is scattered by clouds. It is an essential part of the complex atmospheric system, so it is worth building a
predictive model for it.
Again, this model was developed in a self-organizing way by extracting knowledge about the system‘s behavior from observational data, objectively. The same data set as for ozone concentration
modeling has been used for this model:
1. ‣Global Ozone concentration [DU] (Dobson Units) (x1),
2. ‣Global Radiative Cloud Fraction (x2),
3. ‣Global Aerosol Index (x3),
4. ‣Global CO2 concentration [ppm] (x4),
5. ‣Sunspot Numbers (x5).
The model shown below was developed from data of the period Nov 1978 to Oct 2008 using a maximum time lag of 36 months. The data till Dec 2010 has been used ex post (out-of-sample) for model
evaluation. The model represents a non-linear difference equation of these self-selected input variables:
x2(t) = f(x1(t-i), x3(t-j)),
with i = {1, 5, 11, 17, 19, 23, 25, 29}, j = {7, 32}. In other words, the global radiative cloud fraction at a time t is described by the ozone concentration and aerosol index at certain previous points in time.
The accuracy of this best model is 81% (R², coefficient of determination, using leave-one-out cross-validation) at a Descriptive Power of 41%, with very high model robustness within the forecast
horizon of Jan 2011 to Oct 2017.
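The structure of such a difference-equation model is easy to sketch: each prediction row consists of the self-selected lagged values x1(t-i) and x3(t-j). The code below only builds this lagged feature matrix on toy data; the actual functional form f and the real observational series are not reproduced here.

```python
# Lags from the model x2(t) = f(x1(t-i), x3(t-j)).
I_LAGS = [1, 5, 11, 17, 19, 23, 25, 29]   # lags of x1 (months)
J_LAGS = [7, 32]                           # lags of x3 (months)
MAX_LAG = max(I_LAGS + J_LAGS)

def lagged_features(x1, x3):
    """Build one feature row per time step t: the lagged values
    x1(t-i) and x3(t-j) that feed the difference-equation model."""
    rows = []
    for t in range(MAX_LAG, len(x1)):
        rows.append([x1[t - i] for i in I_LAGS] + [x3[t - j] for j in J_LAGS])
    return rows

# Toy monthly data, 40 points, just to show the shapes.
x1 = [float(t) for t in range(40)]
x3 = [float(t) * 0.1 for t in range(40)]
X = lagged_features(x1, x3)
print(len(X), len(X[0]))  # 8 usable rows, 10 features each
```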
The data are available on request.
Seismic Wavelet Analysis Based on Finite Element Numerical Simulation
Seismic Wavelet Analysis Based on Finite Element Numerical Simulation ()
1. Introduction
In the process of oil and gas exploration and construction, explosive sources are widely used to generate seismic wave fields. In order to improve the quality of the collected data, continuous
theoretical research and field tests have been carried out (Men, Jiang, & Wang, 2015). Research and field tests on explosive shock theory are, however, expensive and time-consuming. Using nonlinear finite
element analysis software to construct digital models with rock and explosive parameters, numerical simulations can provide the motion and amplitude characteristics of the elements and nodes
(Zhang et al., 2018). Analysis of the amplitude and frequency spectrum of seismic wavelets is an important means of optimizing the excitation parameters and improving the quality of seismic data (Li, 2018).
In this study, a borehole excitation model for land-based seismic exploration was established. By analyzing the amplitude and frequency characteristics of the seismic wavelets excited in mudstone and
sandstone regions, a new idea for rationally selecting excitation points was put forward.
2. Numerical Model Establishment
2.1. Model Geometric Parameters
This study simulates a single well excitation, the explosion point depth is 10 m, and the source grain weight is 4 kg. In order to reduce the amount of calculation, a two-dimensional axisymmetric
method was used to build the model, with a depth of 60 m and a width of 30 m. A block-structured grid model (indexed by node number) is adopted, with a grid size of 0.25 m × 0.25 m × 0.25 m.
Gaussian observation points are deployed along the axis of the borehole, with an interval of 2 m, as shown in Figure 1.
In order to simulate infinite space and reduce boundary reflection effects, the top, bottom, and sides of the model are set as transmissive boundaries, allowing outward waves to pass on the grid
without bringing the reflected energy into the calculation grid.
2.2. Model Material Parameters
The numerical simulation in this paper involves three kinds of materials: sand (mud) rock and explosives. Among them, explosives are used as energy supply during the blasting process of surrounding
rock, which mainly acts on the rock mass through expansion and stress waves.
Explosive parameters
Emulsion high-energy explosives are used, and JWL (Jones-Wilkins-Lee) equation of state is used to describe the pressure, volume and energy characteristics of gas products in the detonation process.
Its expression is
$p=A\left(1-\frac{\omega }{{R}_{1}V}\right){e}^{-{R}_{1}V}+B\left(1-\frac{\omega }{{R}_{2}V}\right){e}^{-{R}_{2}V}+\frac{\omega E}{V}$(1)
Figure 1. Numerical model of seismic excitation.
where p is the pressure; E is the internal energy of the detonation product per unit volume; V is the relative volume of the detonation
product, that is, the ratio of the volume of the detonation product after the explosion to its initial volume; and A, B, R_1, R_2, and ω are constants determined by experiment.
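As a sanity check on Eq. (1), the JWL pressure can be evaluated directly. The constants below are illustrative placeholders roughly in the range published for emulsion-type explosives, not the values used in this study; the check only confirms that pressure decays as the products expand.

```python
import math

def jwl_pressure(V, E, A, B, R1, R2, omega):
    """JWL equation of state: pressure of the detonation products as a
    function of relative volume V and internal energy per unit volume E."""
    return (A * (1 - omega / (R1 * V)) * math.exp(-R1 * V)
            + B * (1 - omega / (R2 * V)) * math.exp(-R2 * V)
            + omega * E / V)

# Placeholder constants (A, B, E in GPa), chosen only to illustrate the shape.
params = dict(A=214.4, B=0.182, R1=4.2, R2=0.9, omega=0.15)
for V in (1.0, 2.0, 4.0):
    print(V, jwl_pressure(V, E=4.2, **params))  # pressure falls with expansion
```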
Surrounding rock parameters
In this paper, the Johnson-Cook constitutive relationship model is used to describe the stress-strain relationship of the surrounding rock, which is suitable for the strength performance of materials
with large strain and high strain rate, and is suitable for high-speed collision and strong impact load caused by explosive detonation.
$\sigma =\left[A+B{\epsilon }_{p}^{n}\right]\left[1+C\mathrm{ln}{\epsilon }_{p}^{\ast }\right]\left[1-{T}_{H}^{m}\right]$(2)
Among them, σ is the dynamic yield stress of the material; A is its quasi-static yield stress; B is its strain-hardening modulus; ε_p is the effective plastic strain;
ε_p* is the normalized effective plastic strain rate, ε_p* = ε̇/ε̇_0, where ε̇ is the plastic strain rate and ε̇_0 is the reference (critical) strain rate; T_H is the homologous
temperature, T_H = (T − T_0)/(T_m − T_0), where T, T_0, and T_m are the temperature during deformation of the material, the reference temperature, and the melting-point temperature; C, n,
and m are material constants.
In order to express the plastic yield of rock under huge shear stress, a volume failure model is used to limit the maximum principal stress tensile failure strain and the maximum shear strain
(tensile strength) when the material exceeds the limit (tensile strength).
3. Explosion Simulation Analysis
3.1. Simulation Analysis of Mudstone-Mudstone Intercalated Sandstone Thin Layer Excitation
Stress cloud
After the explosive package is excited, a high pulse pressure is generated on the medium around the package, and the detonation wave energy spreads in a spherical shape to all sides. Under the
ultra-high pressure of the shock wave, the structure of the medium is severely damaged and energy is consumed. Beyond a certain distance from the explosion center, the wave enters the rock's elastic
region, where the observed pressure is relatively stable and the attenuation is relatively slow. When excited in a thin layer of sandstone, the energy diffuses in a spherical shape and is rapidly
released along the sandstone interlayer. The seismic wave field changes from a spherical shape to a flat spindle shape, and the fracture shape in the rock fragmentation area also changes to a certain
extent (Figure 2 and Figure 3).
Seismic wavelet
The particle vibration velocity and acceleration time history curve of each observation point corresponds to the seismic waveform collected by the geophone (velocity/acceleration geophone) in the
geophysical prospecting construction, and the mudstone excited seismic wavelet is obtained (Figures 4-6). It can be
Figure 2. Explosion cloud diagram in mudstone (left-stress cloud diagram, middle-part of erosion model, right-observed pressure curve).
Figure 3. Explosion cloud diagram of a thin layer of sandstone in mudstone (left-stress cloud diagram, middle-part of erosion model, right-observed pressure curve). Note: The interval between
observation points is about 10 m, the distance between No. 1 and the center of burst is 50 m, and the distance between No. 20 and the center of burst is 10 m).
Figure 4. Vibration velocity-time history curve of the observation point (left-mudstone, right-thin layer of sandstone in mudstone).
Figure 5. Vibration acceleration-time history curve of observation point (left-mudstone, right-thin layer of sandstone in mudstone).
Figure 6. Original seismic wavelet record (left-mudstone, right-thin layer of sandstone in mudstone).
seen that the excitation seismic waves in the thin sandstone layer have changed, and some high-frequency noise has appeared.
Seismic wavelet spectrum analysis
Spectrum analysis shows that the main frequency of mudstone excitation is 95 Hz; when excited in thin sandstone interlayers, the high frequency components are enhanced (Figure 7).
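The kind of spectrum analysis used here can be sketched as follows: transform a sampled wavelet and read off the peak of the amplitude spectrum. The trace below is a synthetic Ricker wavelet with a 95 Hz dominant frequency, standing in for the simulated mudstone record; a naive DFT is used so the sketch needs no external libraries.

```python
import cmath
import math

def ricker(t, f):
    """Ricker (Mexican-hat) wavelet with dominant frequency f (Hz)."""
    a = (math.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * math.exp(-a)

def dominant_frequency(signal, dt):
    """Peak of the amplitude spectrum via a naive DFT (fine for short traces)."""
    n = len(signal)
    best_k, best_amp = 0, -1.0
    for k in range(1, n // 2):          # skip DC, stop at Nyquist
        s = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(signal))
        if abs(s) > best_amp:
            best_k, best_amp = k, abs(s)
    return best_k / (n * dt)            # bin index -> Hz

dt = 0.002                                             # 2 ms sampling
trace = [ricker((i - 250) * dt, 95.0) for i in range(500)]
print(dominant_frequency(trace, dt))                   # close to 95 Hz
```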
3.2. Simulation Analysis of Excitation of Thin Sandstone-Sandstone Intercalated Mudstone
Numerical simulations are carried out for the excitation of thin sandstone and mudstone layers.
Stress cloud
Excited in homogeneous sandstone, the seismic wave field spreads in a spherical shape; Excited in thin mudstone interlayer, the seismic wave energy converges into an ellipsoid along the interface of
mudstone and sandstone toward the burst center, and the destruction circle caused by the explosion extends to both sides along the interface, The pressure of the shock wave is observed to weaken, and
there is obvious shock phenomenon (Figure 8 and Figure 9).
Figure 7. Frequency-amplitude curve of seismic wavelet (left-mudstone, right-thin layer of sandstone in mudstone).
Figure 8. Explosion stress cloud diagram in sandstone (left-stress cloud diagram, middle-part of erosion model, right-observed pressure curve).
Figure 9. Induced stress cloud diagram of a thin layer of mudstone in sandstone (left-stress cloud diagram, middle-part of erosion model, right-observed pressure curve). Note: The interval between
observation points is about 10 m, the distance between No. 1 and the center of burst is 50 m, and the distance between No. 20 and the center of burst is 10 m).
Seismic wavelet
According to the particle vibration velocity and acceleration time history curve of the observation point below the explosion point, the seismic wavelet is analyzed. It can be seen that when excited
in the mudstone interlayer, the amplitude of the seismic wavelet is obviously weakened, and the high-frequency oscillation phenomenon is strengthened (Figures 10-12).
Seismic wavelet spectrum analysis
Spectrum analysis shows that sandstone excites the main frequency of the seismic wavelet at 150 Hz; after excitation in the mudstone interlayer, the energy of the observation point decreases, but the
energy of the seismic wave in the frequency range of 50 - 100 Hz increases (Figure 13).
Figure 10. Vibration velocity-time history curve of the observation point (left-sandstone, right-thin layer of mudstone in sandstone).
Figure 11. Vibration acceleration-time history curve of observation point (left-sandstone, right-thin layer of mudstone in sandstone).
Figure 12. Original seismic wavelet record (left-sandstone, right-thin layer of mudstone in sandstone).
Figure 13. Frequency-amplitude curve of seismic wavelet (left-sandstone, right-thin layer of mudstone in sandstone).
4. Conclusion
The above numerical simulation of seismic excitation shows that the excitation data in mudstone is better than sandstone excitation in the main frequency band, and the excitation in the thin interbed
of mudstone and sandstone plays a role in transforming the frequency and amplitude of seismic wavelets. In particular, excitation in the mudstone interlayer widens the seismic frequency band
but also weakens the energy of the seismic wave. Therefore, in seismic exploration data acquisition, when optimizing the excitation lithology, the matching of seismic
excitation energy should also be considered to effectively ensure the quality of the data.
Existing problems: Although the transmission boundary was set in the modeling to simulate the seismic excitation in an infinite space in this study, a certain boundary reflection wave interference
phenomenon still occurred.
This paper is a demonstration of some achievements of nonlinear finite element simulation application research project. Thanks to all colleagues who participated in the project!
Mathcad 14 Help required [closed]
I need an integral evaluated in Mathcad 14 but cannot get Mathcad to give me an answer. Below are the definitions (dots denote multiplication):

D := 4
v := 20
lamda := 25
E(x) := 6.256*10 · x^3 · v^2 · D
AE := 4·PI·(6371000^2)
RD(x) := lamda · (cube root of E(x))
AD(x) := PI·(RD(x)^2)
FE(x) := 1/(10^(2·log(x) − 1))
FK(x) := integral (50 to 1500) FE(x)·AD(x)/AE dx

If anyone can explain why Mathcad returns the expression without evaluating the integral, and can show me how to obtain the correct answer, it would be much appreciated. Thank you
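For reference, the integral can be evaluated numerically outside Mathcad. The reading of the definitions below (dots taken as multiplication, "log" as base-10, "PI" as pi) is an interpretation of the post, not a verified transcription.

```python
import math

# One reading of the posted definitions.
D, v, lamda = 4, 20, 25
AE = 4 * math.pi * 6371000 ** 2

def E(x):  return 6.256 * 10 * x ** 3 * v ** 2 * D
def RD(x): return lamda * E(x) ** (1 / 3)
def AD(x): return math.pi * RD(x) ** 2
def FE(x): return 1 / (10 ** (2 * math.log10(x) - 1))

def integrand(x):
    return FE(x) * AD(x) / AE

def trapezoid(f, a, b, n=10000):
    """Simple composite trapezoidal rule."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

FK = trapezoid(integrand, 50, 1500)
print(FK)
```

Interestingly, under this reading FE(x)·AD(x)/AE is constant in x (the x² factors cancel), so the integral is simply 1450 times that constant.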
Closed for the following reason question is off-topic or not relevant by kcrisman
close date 2013-04-22 10:17:10
Determine Whether the Following Sets Form Subspaces

A subset W of a vector space V is a subspace if and only if it contains the zero vector and is closed under the operations of addition and scalar multiplication defined on V. In particular, if W = span{v1, v2, ..., vn} for vectors vi in V, then W is automatically a subspace.

A typical exercise: determine whether the set of vectors of the form (a, 0) is a subspace of R². It is: (0, 0) belongs to it, (a, 0) + (c, 0) = (a + c, 0) stays in the set, and α(a, 0) = (αa, 0) does too. (If a set fails one of these conditions, it is not a subspace.)

Two further standard examples. First, define addition on C by (a + bi) + (c + di) = (a + c) + (b + d)i and scalar multiplication by α(a + bi) = αa + αbi for all real numbers α; one can then show that C is a vector space over the reals with these operations. Second, the column space and null space of a matrix are subspaces, and it is useful to learn to write a given subspace as a column space or a null space.
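The closure conditions can be spot-checked numerically. The sketch below tests the set of vectors of the form (a, 0) in R² on random samples; a passing check is evidence, not a proof.

```python
import random

def in_set(v):
    """Membership test for W = {(a, 0) : a real}."""
    return abs(v[1]) < 1e-12

def spot_check_subspace(member, trials=1000):
    """Randomly test the three subspace conditions for a subset of R^2."""
    if not member((0.0, 0.0)):                      # zero vector
        return False
    for _ in range(trials):
        a, c, s = (random.uniform(-10, 10) for _ in range(3))
        u, w = (a, 0.0), (c, 0.0)                   # arbitrary members of W
        total = (u[0] + w[0], u[1] + w[1])          # closure under addition
        scaled = (s * u[0], s * u[1])               # closure under scaling
        if not (member(total) and member(scaled)):
            return False
    return True

print(spot_check_subspace(in_set))  # True
```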
Unscramble ALBIZZIA
How Many Words are in ALBIZZIA Unscramble?
By unscrambling the letters albizzia, our Word Unscrambler (aka Scrabble Word Finder) easily found 30 playable words in virtually every word scramble game!
Letter / Tile Values for ALBIZZIA
Below are the values for each of the letters/tiles in Scrabble. The letters in albizzia combine for a total of 28 points (not including bonus squares)
• A = 1
• L = 1
• B = 3
• I = 1
• Z = 10
• Z = 10
• I = 1
• A = 1
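The 28-point total follows directly from the tile values; the small table below covers only the letters needed here, not the full Scrabble alphabet.

```python
# Scrabble tile values for the letters in "albizzia" only.
TILE_VALUES = {"a": 1, "l": 1, "b": 3, "i": 1, "z": 10}

def raw_score(word: str) -> int:
    """Sum of tile values, ignoring bonus squares and blanks."""
    return sum(TILE_VALUES[ch] for ch in word.lower())

print(raw_score("albizzia"))  # 28
```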
What do the Letters albizzia Unscrambled Mean?
The unscrambled words with the most letters from ALBIZZIA are listed below along with their definitions.
• albizzia () - Sorry, we do not have a definition for this word
Two pipes A and B are attached to an empty water tank. Pipe A fills the tank while pipe B drains it. If pipe A is opened at 2 pm and pipe B is opened at 3 pm, then the tank becomes full at 10 pm.
Instead, if pipe A is opened at 2 pm and pipe B is opened at 4 pm, then the tank becomes full at 6 pm. If pipe B is not opened at all, then the time, in minutes, taken to fill the tank is
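One way to set this up: let a and b be the fill and drain rates in tanks per hour. The first scenario gives 8a − 7b = 1 (A runs 8 hours, B runs 7); the second gives 4a − 2b = 1. The sketch below solves this system in exact arithmetic and converts the fill time 1/a to minutes.

```python
from fractions import Fraction

# Scenario 1: A runs 8 h (2 pm-10 pm), B runs 7 h (3 pm-10 pm): 8a - 7b = 1
# Scenario 2: A runs 4 h (2 pm-6 pm),  B runs 2 h (4 pm-6 pm):  4a - 2b = 1
# Eliminate a: 2*(4a - 2b) - (8a - 7b) = 3b = 2 - 1  ->  b = 1/3
b = Fraction(2 - 1, 3)
a = (1 + 7 * b) / 8           # back-substitute into 8a - 7b = 1
minutes = 60 / a              # time for A alone to fill one tank
print(a, b, minutes)          # 5/12 1/3 144
```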
Benchmarking Scenarios
1. Random Configurations:
We used a benchmark setup similar to that of the Bullet collision library, in which a set of AABBs is uniformly distributed in space and moving randomly. As we changed the number of AABBs from 16K to 960K,
we measured the performance of our algorithm and that of the three broad-phase CD algorithms provided in Bullet: BoxPruning, ArraySaP, and an AABB dynamic tree. The first two algorithms are
based on SaP (sweep and prune), and the last uses a dynamic bounding volume hierarchy. All of these algorithms run on the CPU. The size of the AABBs also varies from 0.5% to 8% of the size of the bounding box of the
workspace. As shown in the figure below, our algorithm outperforms the fastest of the Bullet implementations (i.e. the AABB dynamic tree) by a factor of 71.
Comparison of Collision Detection performance with the difference methods in Bullet CPU algorithm
We also investigated the performance of our algorithm when only some of the objects are moving. The objects are one million AABBs of varying sizes, and we changed the percentage of moving objects
from 5% to 25%. The figure below shows that the number of new collisions generated by moving objects, as a proportion of the total number of interferences is almost linear with computation time,
which implies that as more collision pairs are introduced by moving objects, our algorithm requires more time to process them. This means that our algorithm efficiently utilizes the collision results
introduced by static objects, which are cached from the previous time step.
Collision detection when only some objects are moving
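For readers unfamiliar with the CPU baselines, the core idea of a sweep-and-prune (SaP) broad phase can be sketched in a few lines: sort interval endpoints along one axis and keep an active list while sweeping. This is a 1D serial illustration only, not the paper's GPU algorithm.

```python
def sweep_and_prune(boxes):
    """boxes: list of (lo, hi) intervals on one axis.
    Returns index pairs whose intervals overlap (candidate collisions)."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
    active, pairs = [], []
    for i in order:
        lo_i, hi_i = boxes[i]
        # Drop intervals that ended before this one starts.
        active = [j for j in active if boxes[j][1] >= lo_i]
        pairs.extend((min(i, j), max(i, j)) for j in active)
        active.append(i)
    return pairs

print(sweep_and_prune([(0, 2), (1, 3), (5, 6), (2.5, 5.5)]))
# [(0, 1), (1, 3), (2, 3)]
```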
2. Particle Simulation:
We benchmarked on large sets of particles of varying sizes. We did this by modifying an open particle simulation demo, originally from NVIDIA (Particles sample code in CUDA SDK). As shown in Figure
below, we introduced 100K and 0.3M spheres of the size varying from 0.3% to 20% of the dimension of the workspace and simulated their motions under gravity. We then measure the performance of our
algorithm and that of a uniform subdivision algorithm that also runs on GPUs. While CD takes up most of the computation, it is hard to decouple the collision times from the simulation times using
NVIDIA's uniform subdivision method. However, for 100K and 0.3M particles, our algorithm takes 56 ms and 252 ms on average for both collision detection and particle simulation, while uniform
subdivision takes 4452 ms and 53464 ms; thus our algorithm outperforms uniform subdivision by a factor of up to 212.
3. Approximate Rigid-Body Dynamics:
We approximated a rigid model with a set of uniform spheres, and used a penalty-based approach running in parallel on GPUs. This avoids narrow-phase Collision Detection. We simulated 16K torus
models approximated by six spheres of varying size moving under gravity. We were able to simulate the approximate rigid-body dynamics entirely running on GPUs in 18 ms, including collision detection.
Approximate Rigid-Body Dynamics for torus
Related links: the Bullet collision library; "Real-Time Rigid Body Simulation on GPUs" (GPU Gems 3).
What Exactly Is Pi, and Is It an Integral Component of Your Own Life? - 97.5 WQBE
What Exactly Is Pi, and Is It an Integral Component of Your Own Life?
Are you aware of the answers to the following questions: what is Pi, and is it an integral component of your life?
Is it something that influences you, or is it simply a curiosity? The answer is both: it is an essential part of your life.
Pi is a ratio found throughout nature, and it can be described simply: it is the ratio of a circle's circumference to its diameter. The same figure can be seen on a sphere or in the shapes of spiral galaxies. So the answer to the question "what is Pi" depends on whether you are familiar with it or not.
There are several formulas in which Pi appears. One of these is known as Pythagoras' formula. Another involves the triangle and square forms and the sine function, and is known through the Pythagorean equation.
The formulas used to define Pi can be modified to suit a particular problem. For example, the sine function can be used to produce the value of Pi, and a problem involving the relationship between Pi and a number can be solved.
As an integral component of your life, Pi is part of processes that affect you. By starting with what it is, what it is not, and why it is important, it can be understood.
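One concrete way to see where the value of Pi comes from is to compute it. The sketch below uses the Leibniz series π/4 = 1 − 1/3 + 1/5 − ..., which converges slowly but is easy to follow.

```python
import math

def leibniz_pi(terms: int) -> float:
    """Approximate pi with the alternating Leibniz series."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4.0 * total

approx = leibniz_pi(100_000)
print(approx, abs(approx - math.pi))  # the error is on the order of 1e-5
```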
Calculate distance between point and plane (3-matic 14)
This script calculates the distance between a point and a plane.
# This script will find the distance between a point and plane
# Input: a project where there are two objects called 'point' and 'plane'.
# Output: distance in mm
# Note: the distance is not given in absolute values, so you can determine if
# the point is in front or behind the plane
# Author: Kristof Godelaine (Materialise)
# Version: 1.0 (28 June 2017)
import trimatic  # 3-matic scripting API; this script runs inside 3-matic

def distance_point_to_plane(pt, pl):
    # Info of the plane: its normal and a point on it (the origin)
    normal = pl.z_axis
    pt_org = trimatic.create_point(pl.origin)
    a = normal[0]
    b = normal[1]
    c = normal[2]
    x0 = pt_org.x
    y0 = pt_org.y
    z0 = pt_org.z
    # Info of the point
    tx = pt.x
    ty = pt.y
    tz = pt.z
    # Signed distance to the plane a*x + b*y + c*z + d = 0,
    # with d = -(a*x0 + b*y0 + c*z0)
    dist = (a*tx + b*ty + c*tz + (-a*x0 - b*y0 - c*z0)) / ((a**2 + b**2 + c**2)**0.5)
    return dist
#point = trimatic.create_point((12,6,3))
#plane = trimatic.create_plane_3_points((0,0,0),(1,0,0),(0,1,0))
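For reference, the same signed-distance formula can be checked outside 3-matic with plain Python, with the plane given as a point and a normal vector rather than a 3-matic object (a standalone sketch, not dependent on the trimatic API):

```python
def signed_distance(point, plane_origin, plane_normal):
    """Signed distance from point to the plane through plane_origin with
    normal plane_normal; the sign tells which side the point is on."""
    a, b, c = plane_normal
    x0, y0, z0 = plane_origin
    tx, ty, tz = point
    d = -(a * x0 + b * y0 + c * z0)
    return (a * tx + b * ty + c * tz + d) / (a**2 + b**2 + c**2) ** 0.5

# For the z = 0 plane through the origin, the distance is just the z coordinate.
print(signed_distance((12, 6, 3), (0, 0, 0), (0, 0, 1)))  # 3.0
```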
|
{"url":"https://community.materialise.com/t/calculate-distance-between-point-and-plane-3-matic-14/94","timestamp":"2024-11-10T08:37:05Z","content_type":"text/html","content_length":"24940","record_id":"<urn:uuid:6d4f2732-cd5c-4b5f-a6fb-bed7c3e65a13>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00308.warc.gz"}
|
Physics - Online Tutor, Practice Problems & Exam Prep
Hey, guys. So in this video, I want to talk about a very important conservation law that you're going to see in electricity called the Conservation of Charge. There's not a whole lot of problem solving
that you're going to do, but it's definitely an important concept that you need to know. So let's check it out. Basically, what it says is that charge, being a property of matter, can't be created or
destroyed. And that's known as charge conservation or the law of Conservation of Charge, whatever you'll see expressed in multiple different ways. But it's basically like we studied energy. We said
that energy can't be created or destroyed. It only moves from one thing to the other. So this means if you have a system of objects, if you have a lot of them, and one object is gaining one Coulomb
means that something else has lost one. Cool, um, alright, and we saw how a couple of ways of how that could happen through induction or conduction or polarization, things like that. There's another
thing that you need to know about conservation of charge and how they move from one object to another, and that's when you bring conductors together. So whenever you bring conductors together, and
usually it'll be like two metal spheres or something like that, the charges will move until they reach something called equilibrium at a later time. And all that reaching equilibrium means is that if
you have imbalanced charges on spheres like so, if you have two objects A and B, and they have different amounts of charge, when you bring them together and you allow them to touch and reach
equilibrium, the charges transfer until they finally are equal to each other. It's basically the way that they achieve balance. So let's go ahead and use this conservation law and these conducting
spheres that we just talked about in order to answer some questions about these scenarios. So these following scenarios, each pair of these conducting spheres is brought into contact and allowed to
reach equilibrium, we have to figure out the amount of charge that's transferred and the direction of transfer in each one of these three cases. So in case A, we have two conducting spheres, and
the charges are given for both of them. Now, if they're brought together and allowed to reach equilibrium, what happens is we have to figure out the total amount of charge in each one of these cases. In this first case, the total amount of charge is just the sum of three and negative one, which is just two Coulombs. So that means when they reach equilibrium, both of them are going to have the exact same amount. So you just take this number and you just cut it in half. So that means that at equilibrium each one of these things is going to have one Coulomb; one Coulomb and one Coulomb means a total of two. So that means what has to happen is that this guy over here has to give up two Coulombs of charge. So you have to give two Coulombs to
the other one. Don't bother with the direction of like, which way the electrons are going. All you have to know really is which way the charges are moving.
So don't concern yourself a whole lot with the way that the electrons are moving anyway. So that's basically the first example. Now we've got some negative numbers here. We've got negative five
Coulombs and negative three Coulombs., But we still approach it the same way. The total amount of charge in the before case is going to be a negative eight. So that means when they're both at
equilibrium, they're both gonna have the exact same amount of charge. You cut it in half and you get negative four Coulombs. So, in other words, this negative five here has to give up one Coulomb of negative charge in this direction. So this guy has to give up one, and then this one has to gain one Coulomb, in order to become negative four. Right? So here we lost one and gained one.
Alright, So this for this final example here, we've got three Coulombs and negative two Coulombs in charge. So you add these things up together, and we get the total amount of charge as one Coulomb
between both spheres. So now what has to happen is the equilibrium is gonna be 0.5 of a Coulomb each. I want you to be very careful because I know you guys were looking at me like I'm crazy right now
because I said a couple of videos ago that you can't have half charges. Here's the difference. This is half of a Coulomb, and you can have half of a Coulomb because a Coulomb is an enormous amount of
charges, so that's fine. And that's okay. But what you cannot have is, you cannot have half of an electron. That's different. A Coulomb is billions and billions of charges. You cannot have half of an
electron. You can have half a Coulomb. Anyway, so you've got this one Coulomb here. Each one has to have 0.5 Coulombs at the end. So now what has to happen is the sphere with three Coulombs has to give up 2.5 Coulombs to this, uh, this charge right here. So this one is gonna lose 2.5, this one is gonna gain 2.5, and then your equilibrium is gonna be 0.5. All right. Pretty straightforward. So now
let's look at this second example, where we're actually gonna use that conservation of charge. So we're told that two charged metal balls, and metal means conductors, are moving around an insulated box. What that means is that the walls of the box itself can't really pick up charges. And they're colliding, and they're randomly exchanging these charges. But they're
not necessarily reaching equilibrium. So at first we're told the charges of each one of these metal spheres, and then at some later time we're told that this metal sphere has negative two Coulombs, and we're supposed to figure out how much this one has. So we have an isolated system here, an insulated box. So we're going to have to use conservation of charge. And that means
that the Q here before has to equal the Q here afterwards. So the total charge here, the total amount of charge, is just one plus three, which is four Coulombs. And then we write that out: Q1 plus Q2 is equal to one plus three, which equals four, right? So pretty straightforward. Well, what we're saying here is that the Q total in the after case also has to be four because we have to
conserve that charge. So we write that equation out. Right? So we've got negative two plus, what is going to give me four Coulombs? Go ahead and pause if you haven't figured it out yet, but in order
for this thing to equal four Coulombs, this guy has to be six Coulombs over here. And that's the answer. So this is six Coulombs that we have that conservation of charge. Alright, guys, that's
basically it. Let me know if you guys have any questions.
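The bookkeeping in these examples is simple enough to script. Here is a small sketch (my own, not from the lesson) that computes the equilibrium charge and the transfer for two identical conducting spheres:

```python
def touch_and_separate(q1, q2):
    """Two identical conducting spheres touched together share the total
    charge equally. Returns (final charge of each sphere, amount of
    charge sphere 1 gives up)."""
    total = q1 + q2
    final = total / 2          # charge is conserved and split evenly
    transfer = q1 - final      # how much sphere 1 gives to sphere 2
    return final, transfer

print(touch_and_separate(3, -1))   # (1.0, 2.0): each ends at +1 C
print(touch_and_separate(-5, -3))  # (-4.0, -1.0): 1 C of negative charge moves
print(touch_and_separate(3, -2))   # (0.5, 2.5): half a Coulomb each
```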
|
{"url":"https://www.pearson.com/channels/physics/learn/patrick/electric-force-field-gauss-law/conservation-of-charge?chapterId=8fc5c6a5","timestamp":"2024-11-12T04:07:45Z","content_type":"text/html","content_length":"463430","record_id":"<urn:uuid:5501144d-c0e6-4589-9847-e68d68abda9f>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00405.warc.gz"}
|
Problem A
Pero has negotiated a Very Good data plan with his internet provider. The provider will let Pero use up $X$ megabytes to surf the internet per month. Each megabyte that he doesn’t spend in that month
gets transferred to the next month and can still be spent. Of course, Pero can only spend the megabytes he actually has.
If we know how many megabytes Pero has spent in each of the first $N$ months of using the plan, determine how many megabytes Pero will have available in the $(N + 1)$-st month of using the plan.
The first line of input contains the integer $X$ ($1 \leq X \leq 100$). The second line of input contains the integer $N$ ($1 \leq N \leq 100$). Each of the following $N$ lines contains an integer
$P_ i$ ($0 \leq P_ i \leq 10\, 000$), the number of megabytes spent in each of the first $N$ months of using the plan. Numbers $P_ i$ will be such that Pero will never use more megabytes than he
actually has.
The first and only line of output must contain the required value from the task.
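A direct solution sketch (not part of the problem statement): by the start of month $N + 1$ Pero has received $N + 1$ monthly allotments of $X$ megabytes and has spent $\sum P_i$, so the balance is the difference:

```python
def available(x, spent):
    """Megabytes available in month N+1: (N+1) allotments of x megabytes,
    minus everything spent in the first N months."""
    n = len(spent)
    return (n + 1) * x - sum(spent)

# With a 10 MB/month plan and 4, 6 and 2 MB spent in the first three
# months, month four starts with 4*10 - 12 = 28 MB.
print(available(10, [4, 6, 2]))  # 28
```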
|
{"url":"https://open.kattis.com/contests/cpbh4m/problems/tarifa","timestamp":"2024-11-05T07:27:20Z","content_type":"text/html","content_length":"30460","record_id":"<urn:uuid:213f88c2-5713-4c86-8e92-34115477ad9e>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00311.warc.gz"}
|
Vulkan triangle winding
Trying to reproduce the vkguide using the Haskell engine keid, I hit a weird bug related to triangle winding. This post shows how a 3D model gets rendered on screen using Vulkan.
In the vkguide’s third chapter, “drawing meshes”, we learn about:
• Loading the vertices of a monkey head model.
• Setting up the model-view-projection matrix using a push constant.
The result looks like this:
Loading the meshes
The guide uses a model in the OBJ format, but keid comes with a GLTF loader, so I converted the file using blender. Then here is the code I wrote to load the vertices:
import RIO
import Data.Vector qualified as Vector
-- https://hackage.haskell.org/package/keid-resource-gltf
import Resource.Gltf.Load qualified as GltfLoader
import Resource.Gltf.Model qualified as GltfModel
-- Returns the list of vertex (positions, attributes, indices)
type Model = ([Vec3.Packed], [GltfModel.VertexAttrs], [Word32])
loadMonkey :: HasLogFunc env => RIO env Model
loadMonkey = do
let fp = "assets/monkey.glb"
(_, meshes) <- GltfLoader.loadMeshPrimitives False False fp
pure $ case Vector.toList meshes of
  [meshPrim] -> case Vector.toList meshPrim of
    [(_, stuff)] ->
      ( Vector.toList stuff.sPositions
      , Vector.toList stuff.sAttrs
      , Vector.toList stuff.sIndices
      )
GLB is the binary form of the GLTF model format.
This is the first time I have ever loaded such a model. If I understand correctly:
• The positions are vertex 3d coordinates relative to the origin of the model.
• The attributes contains vertex info such as the normal or the texture coordinate.
• The indices describe the vertex order.
The graphics card expects a list of triangles, for example two triangles sharing the edge b-c:
a x-----x b
   \   / \
    \ /   \
   c x-----x d
Instead of sending the six vertices [a, b, c, b, d, c], using indices we can send the four vertices [a, b, c, d] along with the six indices [0, 1, 2, 1, 3, 2].
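This vertex/index split can be sketched in a few lines of Python (illustrative only; keid's GLTF loader already returns the data in this form):

```python
def index_mesh(triangle_vertices):
    """Deduplicate a flat triangle list into (unique vertices, indices),
    the layout an indexed draw call expects."""
    vertices, indices, seen = [], [], {}
    for v in triangle_vertices:
        if v not in seen:
            seen[v] = len(vertices)   # first sighting: assign next index
            vertices.append(v)
        indices.append(seen[v])
    return vertices, indices

# Two triangles sharing an edge: six vertices collapse to four.
verts, idx = index_mesh(["a", "b", "c", "b", "d", "c"])
print(verts)  # ['a', 'b', 'c', 'd']
print(idx)    # [0, 1, 2, 1, 3, 2]
```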
In this example, the model doesn’t have a texture, instead, the pixel colors are defined in the vertex normals attribute. Thus, here is the code to load the model in the GPU:
-- https://hackage.haskell.org/package/keid
import Resource.CommandBuffer qualified as CommandBuffer
import Resource.Model qualified as Model
initialRunState = do
context <- ask
CommandBuffer.withPools \pools -> do
logInfo "Loading model"
(meshPos, meshAttrs, meshIndices) <- loadMonkey
let meshAttrs2Vertices (pos, attrs) =
Model.Vertex pos attrs.vaNormal
rsModel <- Model.createStagedL
(meshAttrs2Vertices <$> (zip meshPos meshAttrs))
(Just meshIndices)
Setup the model-view-projection matrix
The vkguide uses a push constant buffer to load the model-view-project matrix, and this is not how keid usually handles that part. So I had to implement the render matrix manually, like this:
import Engine.Camera qualified as Camera
import Geomancy.Transform qualified as Transform
renderMatrix :: Transform
renderMatrix = projection <> view <> model
camera =
  { projectionNear = Camera.PROJECTION_NEAR
  , projectionFar = Camera.PROJECTION_FAR
  , projectionParams = pi / 2
  }
projection =
Camera.mkTransformPerspective (Vk.Extent2D 800 600) camera
view = mempty
model =
  Transform.translate 0 0 0
    <> Transform.rotateX (-2)
    <> Transform.rotateY (realToFrac time)
And here is what I got:
Notice how the right ear is clipped. I first thought that the model was too close to the camera, but when using: Transform.translate 0 0 1 to move it away, the issue became worse:
It was time to check with renderdoc, but I couldn’t find anything suspicious, here is what the mesh panel looked like:
Thanks to the help of the keid developer dpwiz I got to understand the issues.
Curveball from the OpenGL world
Vulkan departed from the OpenGL standards, in particular, the Normalized Device Coordinates (NDC) are upside down:
(-1, -1) (1, -1)
| |
| |
| |
| |
(-1, 1) (1, 1)
This matters for the triangle winding test, where the GPU checks whether a triangle is facing the camera or is turned away from it. The monkey head model indices needed to be adjusted, by enabling the reverseIndices argument when loading the mesh:
let reverseIndices = True
(_, meshes) <- GltfLoader.loadMeshPrimitives reverseIndices False fp
That fixed the model, but that also revealed an issue with my render matrix implementation. The transform monoid provided by the geomancy package works from the local towards the root of the scene.
That means the composition needs to be reversed:
renderMatrix = model <> view <> projection
model = mconcat
  [ Transform.rotateX (-1.2)
  , Transform.rotateY (-1 * realToFrac time)
  , Transform.translate 0 0 (-5)
  ]
And voila, the MonkeyMesh now renders correctly!
|
{"url":"http://midirus.com/blog/vulkan-triangle-winding","timestamp":"2024-11-08T21:17:17Z","content_type":"text/html","content_length":"34529","record_id":"<urn:uuid:898fb7e0-e99e-4a17-bbe9-9ae7857d288a>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00844.warc.gz"}
|
Convert Cubic Miles To Cubic Feet | Bear Grylls Gear
6) The Print option will be available when the table is created. The result will appear in the box next to “cubic foot “. This online converter and web apps are created to be the universal assistant
for all your project needs. Our tools include unit converters, calculators, an image analyzer, a word counter, number tools, password strength checkers and other fun web apps. We cannot 100% guarantee the accuracy of the information presented on this web site.
Please visit all volume units conversion to convert all volume units. Bookmark cubic mile to cubic foot Conversion Calculator – you will probably need it in the future. For quick reference purposes,
below is a conversion table that you can use to convert from ft3 to mi3. We assume you are converting between cubic mile and cubic foot. For quick reference purposes, below is a conversion table that
you can use to convert from mi3 to ft3.
Convert Cubic Feet to Cubic Miles
The following tables provide a summary of the Volume units within their respective measurement systems. The cubic foot can be used to describe a volume of a given material, or the capacity of a
container to hold such a material. Enter the number of Cubic Miles(mi³) to convert into Cubic Feet(ft³).
S/mi to min/ft conversion table, s/mi to min/ft unit converter or convert between all units of speed measurement. What is the formula to convert from cubic miles to cubic feet? A cubic foot is a unit
of volume in both US Customary Units as well as the Imperial System. There are 147,197,952,000 cubic feet in a cubic mile. Use this page to learn how to convert between cubic miles and thousand cubic metres.
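The conversion factor follows from the definition of the mile: a cubic mile is a cube 5280 feet on a side, so the constant is 5280³. A quick sketch:

```python
FEET_PER_MILE = 5280
CUBIC_FEET_PER_CUBIC_MILE = FEET_PER_MILE ** 3  # 147,197,952,000

def cubic_miles_to_cubic_feet(mi3):
    """Multiply by 5280^3 to go from cubic miles to cubic feet."""
    return mi3 * CUBIC_FEET_PER_CUBIC_MILE

def cubic_feet_to_cubic_miles(ft3):
    """Divide by 5280^3 to go the other way."""
    return ft3 / CUBIC_FEET_PER_CUBIC_MILE

print(cubic_miles_to_cubic_feet(5))  # 735989760000
```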
Enter two units to convert
While every effort is made to ensure the accuracy of the information provided on this website, neither this website nor its authors are responsible for any errors or omissions. Therefore, the
contents of this site are not suitable for any use involving risk to health, finances or property. Please, choose a physical quantity, two units, then type a value in any of the boxes above. Enter
your value in the conversion calculator below.
Cubic mile to cubic foot conversion allow you make a conversion between cubic mile and cubic foot easily. The table below contains pairs of values from cubic miles to cubic feet ranging from one to
one hundred thousand. Use this page to learn how to convert between cubic miles and cubic feet. Next, let’s look at an example showing the work and calculations that are involved in converting from
cubic miles to cubic feet. Traditional unit of piled firewood sold by volume: it typically measures a 4 x 1 x 4 foot stack and equals one-eighth of a cord, 16 cubic feet, or 0.45 cubic meters.
Want other units?
Cord-foot to cubic mile conversion allow you make a conversion between cord-foot and cubic mile easily. We assume you are converting between cubic mile and thousand cubic metre. The volume equivelent
to a cube of one millimeter by one millimeter by one millimeter. More often referred to as a microliter as it is a millionth of a liter.
Select an “Increment” value (0.01, 5 etc) and select “Accuracy” to round the result. For example, here’s how to convert 5 Cubic Miles to Cubic Feet using the formula above.
Unit converter
Online calculator to convert cubic miles to cubic feet with formulas, examples, and tables. Our conversions provide a quick and easy way to convert between Volume units. Online calculator to convert
cubic feet to cubic miles with formulas, examples, and tables. Use the following calculator to convert between cubic miles and cubic feet.
Use current calculator to convert Volume from Cubic Mile to Cubic Feet. Simply enter Volume quantity and click ‘Convert’. Both Cubic Mile and Cubic Feet are Volume measurement units.
What is a cubic mile (mi³)?
A cubic mile is an Imperial / U.S. customary (non-SI non-metric) unit of volume, used in the United States. It is defined as the volume of a cube with sides of 1 mile (5280 feet, 1760 yards, ≈1.609
kilometres) in length. The following is a list of definitions relating to conversions between cubic miles and cubic feet. The cubic foot is a unit of volume used in the imperial and U.S. customary
measurement systems. Next, let’s look at an example showing the work and calculations that are involved in converting from cubic feet to cubic miles . The following is a list of definitions relating
to conversions between cubic feet and cubic miles.
We do everything possible to have the most accurate conversions and exchange rates possible. However, we cannot be held responsible for any errors found and do not give any guarantees regarding
conversions. If you find an error, please contact us via our contact details displayed in the corresponding section. Always check the results; rounding errors may occur. Note that rounding errors may
occur, so always check the results.
Convert cubic mile to cubic foot
If you need to convert cubic miles to other units, please try our universal Capacity and Volume Unit Converter. The cubic foot (symbols ft³, cu. ft.) is a nonmetric unit of volume, used in U.S.
customary units and Imperial units. It is defined as the volume of a cube with edges one foot in length.
|
{"url":"https://beargryllsgear.org/convert-cubic-miles-to-cubic-feet/","timestamp":"2024-11-12T14:06:27Z","content_type":"text/html","content_length":"164077","record_id":"<urn:uuid:a47231d6-9249-4db8-8153-8f7db16907d9>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00073.warc.gz"}
|
How Many Diamonds For A Hexagon?
How many diamonds for a hexagon?
Welcome to Warren Institute! In this article, we will explore the fascinating world of Mathematics education and tackle the intriguing question: "How many diamonds do you use to make a hexagon?" Join
us as we dive into the geometric wonders and discover the mathematical principles behind the construction of a hexagon using diamonds. Prepare to be amazed as we apply our problem-solving skills and
explore the relationship between diamonds and hexagons. So, grab your thinking cap and let's embark on this exciting mathematical journey together!
Introduction to Hexagons in Mathematics Education
In this section, we will explore the concept of hexagons and their relevance in mathematics education.
Hexagons are six-sided polygons that have several unique properties. They can be found in various natural and man-made objects, such as honeycombs, snowflakes, and soccer balls. Understanding the
characteristics of hexagons is essential for building a strong foundation in geometry.
By examining the number of diamonds required to make a hexagon, we can delve into the relationship between the sides and angles of this polygon. This exploration allows students to develop critical
thinking skills and apply mathematical concepts to real-world scenarios.
The Relationship between Diamonds and Hexagons
When considering how many diamonds are needed to construct a hexagon, one must first understand the construction process.
A diamond refers to a specific configuration of two congruent equilateral triangles joined together at their bases. By connecting six diamonds together, point-to-point, in a particular arrangement, a
hexagon can be formed.
The intricate relationship between diamonds and hexagons demonstrates the fundamental connection between triangles and polygons. Exploring this relationship can help students grasp concepts such as
congruence, symmetry, and the sum of interior angles in polygons.
Applying Hexagons in Problem Solving
Hexagons provide an excellent opportunity for problem-solving exercises in mathematics education.
Problem 1: Given the length of one side of a diamond, calculate the perimeter of the hexagon formed by six connected diamonds.
Problem 2: If each diamond has a side length of 2 cm, find the area of the hexagon.
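Whatever side length a particular construction yields, the standard regular-hexagon formulas give the perimeter and area. A small sketch (the side length 2 cm here simply reuses Problem 2's diamond side directly, which is one possible interpretation of the exercise):

```python
import math

def hexagon_perimeter(s):
    """Perimeter of a regular hexagon with side length s."""
    return 6 * s

def hexagon_area(s):
    """Area of a regular hexagon with side length s: (3*sqrt(3)/2) * s^2."""
    return 3 * math.sqrt(3) / 2 * s ** 2

print(hexagon_perimeter(2))       # 12
print(round(hexagon_area(2), 2))  # 10.39
```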
By presenting such problems, educators can engage students in hands-on activities that promote critical thinking and analytical skills. These exercises encourage students to apply their knowledge of
hexagons and related concepts to solve real-world problems.
Hexagons: A Bridge to Advanced Mathematics
The study of hexagons serves as a bridge to more advanced mathematical topics.
Hexagonal tessellations, for example, involve the arrangement of hexagons in a way that fills a plane without any gaps or overlaps. This concept introduces students to the field of tessellation and
lays the groundwork for understanding symmetry and transformations.
Moreover, the exploration of hexagons can lead to further investigations into the properties of regular polygons, trigonometry, and even three-dimensional geometry. By building a solid understanding
of hexagons, students are prepared for more complex mathematical concepts in higher education and beyond.
frequently asked questions
How many diamonds are needed to create a hexagon shape in a tessellation?
Six diamonds are needed to create a hexagon shape in a tessellation.
What is the relationship between the number of sides in a polygon and the number of diagonals it has?
The relationship between the number of sides in a polygon and the number of diagonals it has is given by the formula n(n-3)/2, where n represents the number of sides.
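That diagonal-count formula is easy to verify with a short script (illustrative only):

```python
def diagonals(n):
    """Number of diagonals in a convex polygon with n sides: n(n-3)/2.
    Each of the n vertices connects to n-3 non-adjacent vertices, and
    each diagonal is counted twice."""
    return n * (n - 3) // 2

# A hexagon has 9 diagonals; a square has 2.
print(diagonals(6), diagonals(4))  # 9 2
```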
How can the concept of symmetry be applied to determine the number of diamonds required to form a regular hexagon?
The concept of symmetry can be applied to determine the number of diamonds required to form a regular hexagon. A regular hexagon has six equal sides and six equal angles. Each side of a diamond is
half the length of a side of the hexagon, and each angle of a diamond is half the size of an angle of the hexagon. Therefore, we can use the symmetry of the hexagon to determine that six diamonds are
required to form a regular hexagon, with each diamond sharing a side and an angle with its neighboring diamonds.
Are there any mathematical patterns or formulas that can be used to calculate the number of diamonds needed to construct a hexagon?
There is no single standard formula for this; the count depends on how the diamonds are arranged. For the point-to-point construction described above, six diamonds are used regardless of the hexagon's size. (Formulas such as n(n-3)/2 count a polygon's diagonals, not the diamonds in a tiling.)
In a mathematical art activity, how can students determine the minimum number of diamonds required to create a hexagon pattern without overlapping?
Students can determine the minimum number of diamonds required to create a hexagon pattern without overlapping by using a systematic approach. They can start by placing one diamond at the center of
the hexagon and then add diamonds around it, ensuring that each new diamond is touching the previous ones without overlapping. By continuing this process, they can gradually build the hexagon pattern
while minimizing the number of diamonds used.
In conclusion, understanding the relationship between the number of diamonds used and the formation of a hexagon is a fundamental concept in Mathematics education. By employing critical thinking
skills and applying geometric principles, students can explore the intricate patterns and structures that arise from this mathematical puzzle. Through hands-on activities and engaging discussions,
educators can foster a deeper understanding of the subject matter. Moreover, this knowledge enables students to develop problem-solving abilities and enhance their spatial awareness. By embracing the
beauty and intricacy of Mathematics, students can unlock a world of endless possibilities and enrich their educational journey.
If you want to know other articles similar to How many diamonds for a hexagon? you can visit the category General Education.
|
{"url":"https://warreninstitute.org/how-many-diamonds-do-you-use-to-make-a-hexagon/","timestamp":"2024-11-06T01:43:43Z","content_type":"text/html","content_length":"104832","record_id":"<urn:uuid:f82883ff-13ba-4f87-b8f3-6b9ddbe57ab8>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00878.warc.gz"}
|
Generate a 10 item multiple choice, 10 item true or false, 5 item identification, and 5 item problem solving about the topics 'linear equations in two variables, linear inequalities, relation and function, dependent and independent variables, and graphing of functions' for mathematics grade 8 students.
Understand the Problem
The question is asking for the creation of various types of assessment items (multiple choice, true/false, identification, and problem-solving) focused on specific mathematical topics tailored for
grade 8 students. These topics include linear equations, linear inequalities, relations and functions, dependent and independent variables, and graphing of functions. The high-level approach involves
generating questions that assess understanding and application of these concepts.
Assessment items for grade 8 math include multiple choice, true/false, identification, and problem-solving questions based on specified topics.
Answer for screen readers
The assessment items tailored for grade 8 students would include:
• Multiple Choice: "What is the solution to $2x + 3 = 11$? a) 2 b) 3 c) 4 d) 5"
• True/False: "A linear function can be represented as $y = mx + b$. True or false?"
• Identification: "Define what a dependent variable is."
• Problem-Solving: "If a car travels at a speed of 60 km/h for 2.5 hours, write a linear equation to represent the situation and find the distance."
Steps to Solve
1. Identify the Topics for Assessment Items
List the specific mathematical topics that need to be covered in the assessment items. The topics are:
□ Linear equations
□ Linear inequalities
□ Relations and functions
□ Dependent and independent variables
□ Graphing of functions
2. Create Multiple Choice Questions
Formulate questions with several answer options and include the correct answer among them. For example:
"What is the solution to the equation $2x + 3 = 11$?"
a) 2
b) 3
c) 4
d) 5
3. Generate True/False Questions
Design statements that assess the student's understanding of the concepts. For example:
"A linear function can be represented as $y = mx + b$. True or false?"
4. Formulate Identification Questions
Ask students to define or identify key terms. For example:
"Define what a dependent variable is."
5. Create Problem-Solving Questions
Present real-world problems that require students to apply their knowledge. For example:
"If a car travels at a speed of 60 km/h, how far will it travel in 2.5 hours? Write a linear equation to represent the situation."
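The answer keys for the sample items above can be checked programmatically. A quick sketch (for verification only, not part of the assessment):

```python
# Multiple choice: solve 2x + 3 = 11  ->  x = (11 - 3) / 2
x = (11 - 3) / 2
print(x)  # 4.0

# Problem solving: distance = speed * time, i.e. the linear equation
# d = 60t evaluated at t = 2.5 hours
d = 60 * 2.5
print(d)  # 150.0
```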
More Information
These types of questions assess not only the factual knowledge of students but also their ability to apply concepts in different contexts. Creating varied types of questions helps cater to different
learning styles.
• Focusing on only one type of question and not providing a balanced assessment. Ensure to include various question formats.
• Misunderstanding the concepts when creating true/false statements. Make sure to double-check the accuracy of the statements.
|
{"url":"https://quizgecko.com/q/generate-a-10-item-multiple-choice-10-item-true-or-false-5-item-identification-pkbl8","timestamp":"2024-11-04T10:24:45Z","content_type":"text/html","content_length":"175725","record_id":"<urn:uuid:5368acbf-5092-4a92-ad95-9670c66f141d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00816.warc.gz"}
|
Ordering Fractions Worksheet Year 5
Ordering Fractions Worksheet Year 5
Below are six versions of our grade 4 fractions worksheet on ordering 3 fractions proper and improper and mixed numbers. Explore all of our fractions worksheets from dividing shapes into equal parts
to multiplying and dividing improper fractions and mixed numbers.
Greater Than Less Than Worksheets Fractions Worksheets Math Fractions Worksheets Comparing Fractions
We have a great selection of pages on thousandths including writing thousandths as fractions and decimals and putting decimals in order.
Ordering fractions worksheet year 5. Put two thirds, three sixths and five ninths in order by converting all three fractions to eighteenths. Worksheets math grade 4 fractions ordering 3 fractions. Some of
the worksheets displayed are fractions packet fractions grade 5 fractions work decimals work math mammoth grade 5 b adding or subtracting fractions with different denominators fraction word problems
grade 5 math adding fractions like denominators.
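The comparison trick behind these worksheets is just a common denominator, and Python's fractions module can generate or check answer keys (an illustrative sketch, not part of the worksheets):

```python
from fractions import Fraction

fracs = [Fraction(2, 3), Fraction(3, 6), Fraction(5, 9)]

# With common denominator 18 these are 12/18, 9/18 and 10/18,
# so the ascending order is 3/6 < 5/9 < 2/3.
ordered = sorted(fracs)
print(ordered)  # [Fraction(1, 2), Fraction(5, 9), Fraction(2, 3)]
```

Note that Fraction(3, 6) is automatically reduced to Fraction(1, 2).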
K5 learning offers reading and math worksheets workbooks and an online reading and math program for kids in kindergarten to grade 5. You can also include visual models fraction pies which will make.
This fantastic set of ordering fractions worksheets will help year 5 children order fractions where the denominators are multiples.
Powerpoint for target your maths year 5 page 55 comparing fractions: compare fractions by finding equivalent fractions (different denominators, numerator). Students must compare the 3 numbers and write
them in order using the greater than and less than symbols. A further development in year 5 is that children will be expected to compare and order fractions whose denominators are all multiples of
the same number e g.
With this worksheet generator you can make worksheets for comparing two fractions or for ordering 3 8 fractions. Children are given a set of ordering fraction sums and an accompanying answer sheet to
make marking a breeze. Starter has equivalnce and lcm main is ordering fractions.
Use them to introduce the topic of ordering fractions or as a handy homework task. Ordering fractions year 5 displaying top 8 worksheets found for this concept. The worksheet can include problems
where you compare fractions with the same denominator fractions with the same numerator comparisons to 1 2 or to 1 and so on.
Apply this and order the fractions in these pdf worksheets for 5th grade and 6th grade students. Ordering negative fractions are quite opposite to arranging positive fractions. Some of the worksheets
for this concept are math 67 notes unit 2 preview name comparing ordering fractions comparing fractions year 6 booster booklet fractions grade 3 fractions work grade 5 fractions work fractions
decimals and percentages ks2 sats standard mega fun fractions.
The greatest positive fraction becomes least if we take negative for all the positive fractions on the list. Finding equivalent fraction three challenges for year 4 mild medium spicy linked to white
rose. Ordering fractions mixed numbers.
Fractions year 5, showing the top 8 worksheets in the category fractions year 5. The extension is converting fractions to decimals, which may be tricky for some groups as not all fractions convert to exact decimals.
Equivalent Fractions Worksheet Fractions Worksheets Equivalent Fractions Fractions Worksheets Grade 4
Ordering Fractions Worksheets Arrange The Fractions In Either Increasing Or Decreasin Fractions Worksheets Math Fractions Worksheets 4th Grade Math Worksheets
Comparing Fractions Worksheets Fractions Worksheets Math Worksheets Math Fact Worksheets
Ordering Fractions Worksheets Fractions Worksheets Math Fractions Worksheets 4th Grade Math Worksheets
Pin By Aish Ch On Maths Fractions Worksheets Fractions Simple Fractions Worksheets
Mastery In Maths Year 5 Converting And Ordering Fractions Fluency Reasoning And Problem Solving Ordering Fractions Math Problem Solving Mastery Maths
Free Fraction Worksheet Generator Great For Comparing Fractions Ordering Fractions Etc Fractions Worksheets Math Fractions Worksheets Comparing Fractions
Equivalent Fractions Worksheet Fractions Worksheets Math Fractions Worksheets Math Fractions
The Ordering Sets Of 5 Positive Fractions With Like Denominators Or Like Numerators A Math Worksheet From T Fractions Worksheets Improper Fractions Fractions
Free Ordering Fractions On A Number Line Printable Classroom Freebies 3rd Grade Fractions Ordering Fractions Fractions
Comparing Fractions 4 Worksheets Math Fractions Worksheets 2nd Grade Math Worksheets Fractions Worksheets
Free Worksheets For Comparing Or Ordering Fractions Fractions Worksheets Ordering Fractions Fractions
Free Worksheets For Comparing Or Ordering Fractions Fractions Worksheets Comparing Fractions Ordering Fractions
2nd Grade Math Worksheets Best Coloring Pages For Kids Math Fractions Worksheets 2nd Grade Math Worksheets Fractions Worksheets
Ordering Fractions Worksheet 4th Grade 5 Free Fraction Worksheets Frugal Family Educat In 2020 Math Fractions Worksheets Fractions Worksheets 2nd Grade Math Worksheets
Free Ordering Fractions On A Number Line Printable Fractions Math Fractions 3rd Grade Math
Pin By Stephanie Lawrence On Math Fractions Worksheets Math Fractions Worksheets Comparing Fractions
Compare Order Fractions Comparing And Ordering Fractions Ks2 Year 5 6 Worksheet Only Self Esteem Worksheets Ordering Fractions Self Esteem Activities
Let S Get Some Order In Here 5th Grade Fractions Worksheet Jumpstart Fractions Worksheets Fractions Kids Math Worksheets
|
{"url":"https://thekidsworksheet.com/ordering-fractions-worksheet-year-5/","timestamp":"2024-11-14T07:28:09Z","content_type":"text/html","content_length":"136235","record_id":"<urn:uuid:96889d67-e029-4f34-b1eb-eb1fbab1b827>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00142.warc.gz"}
|
Theory of Combinatorial Algorithms
Mittagsseminar (in cooperation with J. Lengler, A. Steger, and D. Steurer)
Mittagsseminar Talk Information
Date and Time: Tuesday, March 27, 2018, 12:15 pm
Duration: 30 minutes
Location: OAT S15/S16/S17
Speaker: Wojciech Samotij (Tel Aviv University)
Subsets of posets minimising the number of chains
A well-known theorem of Sperner describes the largest collections of subsets of an n-element set none of which contains another set from the collection. Generalising this result, Erdős characterised
the largest families of subsets that do not contain a chain of sets of an arbitrary length k. The extremal families contain all subsets whose cardinalities belong to an interval of length k–1 centred
around n/2. In a far-reaching extension of Sperner's theorem, Kleitman determined the smallest number of chains of length two that have to appear in a collection of a given number a of subsets. For
every a, this minimum is achieved by the collection comprising a sets whose cardinalities are as close to n/2+1/4 as possible. Kleitman conjectured that the same is true about chains of an arbitrary
length k, for all a and n. We will sketch a proof of this conjecture.
Upcoming talks | All previous talks | Talks by speaker | Upcoming talks in iCal format (beta version!)
Previous talks by year: 2024 2023 2022 2021 2020 2019 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2004 2003 2002 2001 2000 1999 1998 1997 1996
Information for students and suggested topics for student talks
Automatic MiSe System Software Version 1.4803M | admin login
|
{"url":"https://ti.inf.ethz.ch/ew/mise/mittagssem.html?action=show&what=abstract&id=b1bee7c0a3837f706981e44ef1e2d406c4232e38","timestamp":"2024-11-04T08:07:55Z","content_type":"text/html","content_length":"13757","record_id":"<urn:uuid:bba41bc3-2666-4f62-abb9-1fc24c7d30ff>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00156.warc.gz"}
|
Motion of a body suspended from a spring
Simple Harmonic Motion
Also Read
7. Total energy in SHM
• When a system at rest is displaced from its equilibrium position by doing work on it, it gains potential energy, and when it is released, it begins to move with a velocity and acquires kinetic energy.
• If m is the mass of system executing SHM then kinetic energy of system at any instant of time is
K=(1/2)mv^2 (13)
putting equation 8 in 13 we get,
K=(1/2)mω^2A^2 sin^2(ωt+φ) (14)
• From equation (14) we see that Kinetic Energy of system varies periodically i.e., it is maximum (= (1/2)mω^2A^2) at the maximum value of velocity ( ±ωA) and at this time displacement is zero.
• When displacement is maximum (±A), velocity of SHM is zero and hence kinetic energy is also zero and at these extreme points where kinetic energy K=0, all the energy is potential.
• At intermediate positions of lying between 0 and ±A, the energy is partly kinetic and partly potential.
• To calculate potential energy at instant of time consider that x is the displacement of the system from its equilibrium at any time t.
• We know that potential energy of a system is given by the amount of work required to move system from position 0 to x under the action of applied force.
• Here force applied on the system must be just enough to oppose the restoring force -kx i.e., it should be equal to kx.
• Now the work dW required to produce an infinitesimal displacement dx is dW = kx dx.
Thus, the total work required to displace the system from 0 to x is
U = ∫ kx dx (from 0 to x) = (1/2)kx^2 = (1/2)mω^2A^2 cos^2(ωt+φ) (15)
where, from equation 5, ω=√(k/m) and the displacement x=A cos(ωt+φ).
• From equations 14 and 15 we can calculate the total energy of SHM, which is given by
E = K + U = (1/2)mω^2A^2 (16)
• Thus total energy of the oscillator remains constant as displacement is regained after every half cycle.
• If no energy is dissipated then all the potential energy becomes kinetic and vice versa.
• Figure below shows the variation of kinetic energy and potential energy of harmonic oscillator with time where phase φ is set to zero for simplicity.
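This periodic exchange of energy can be checked numerically. The following is an illustrative sketch (the values of m, k and A are made up for the example) that evaluates the kinetic and potential energies at a few instants and confirms that their sum stays constant:

```python
import math

def shm_energies(m, k, A, t, phi=0.0):
    """Kinetic, potential and total energy of x = A cos(wt + phi)."""
    w = math.sqrt(k / m)                # angular frequency (equation 5)
    x = A * math.cos(w * t + phi)       # displacement
    v = -A * w * math.sin(w * t + phi)  # velocity (equation 8)
    K = 0.5 * m * v ** 2                # kinetic energy (equation 13)
    U = 0.5 * k * x ** 2                # potential energy (equation 15)
    return K, U, K + U

# With made-up values m = 1 kg, k = 4 N/m, A = 0.5 m, the total
# energy equals (1/2) k A^2 = 0.5 J at every instant.
for t in (0.0, 0.3, 1.1):
    K, U, E = shm_energies(1.0, 4.0, 0.5, t)
    print(f"t={t:.1f}s  K={K:.4f}J  U={U:.4f}J  E={E:.4f}J")
```

At t = 0 (with phi = 0) the displacement is maximal, so all of the energy is potential, as stated above.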
8. Some simple systems executing SHM
(A) Motion of a body suspended from a spring
• Figure (6a) below shows a spring of negligible mass, spring constant k and length l suspended from a rigid support.
• When a body of mass m is attached to this spring as shown in figure 6(b), the spring elongates and it would then rest in equilibrium position such that upward force F[up] exerted by spring is
equal to the weight mg of the body.
• If the spring is extended by an amount Δl after attachment of the block of mass m, then in its equilibrium position the upward force equals
F[up] = kΔl
also, in this equilibrium position,
F[up] = mg
or, kΔl=mg
• Again the body is displaced in the upward direction such that it is at a distance x above the equilibrium position as shown in figure 6(c).
• Now the extension of the spring would be (Δl−x), thus the upward force now exerted on the body is
F[up] = k(Δl−x)
• The weight of the body tends to pull the spring downwards with a force equal to mg. Thus, using kΔl=mg, the resultant force on the body is
F=-kx (17)
• From equation 17 we see that resultant force on the body is proportional to the displacement of the body from its equilibrium position.
• If such a body is set into vertical oscillations it oscillates with an angular frequency ω=√(k/m) (18)
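Equation 18 translates directly into a small helper. This is an illustrative sketch with invented values (m = 0.5 kg, k = 200 N/m), giving the angular frequency, the frequency and the period of the vertical oscillation:

```python
import math

def spring_oscillation(m, k):
    """Angular frequency, frequency and period of a mass on a spring
    (equation 18: w = sqrt(k/m))."""
    w = math.sqrt(k / m)      # angular frequency in rad/s
    f = w / (2 * math.pi)     # frequency in Hz
    T = 1 / f                 # period in s
    return w, f, T

w, f, T = spring_oscillation(0.5, 200.0)
print(f"w = {w:.2f} rad/s, f = {f:.2f} Hz, T = {T:.3f} s")
```

Note that g does not appear: the oscillation frequency of the hanging mass depends only on k and m, since gravity merely shifts the equilibrium position.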
|
{"url":"https://physicscatalyst.com/wave/shm_1.php","timestamp":"2024-11-04T05:19:08Z","content_type":"text/html","content_length":"69716","record_id":"<urn:uuid:d0fb6a25-89f8-455e-b390-41b1d3b35072>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00730.warc.gz"}
|
Sorting refers to arranging a list of elements into a particular order, which may be ascending or descending.
Many programming solutions rely on sorting algorithms. They are widely used in:
• Making searching algorithms efficient.
• Making raw data simpler.
• Processing data in defined order.
In-Place Sorting Vs Out-of-Place Sorting.
In-Place Sorting.
An in-place sorting algorithm sorts the input without any additional memory; the input is usually overwritten by the output. A small amount of extra memory may be required by an in-place algorithm,
but this memory requirement should not depend on the input size, so the space complexity of the algorithm is O(1). Insertion sort, quick sort, selection sort, bubble sort and heap sort are some of
the in-place sorting algorithms.
Out-Of-Place Sorting
Out-of-Place sorting takes additional space for the sorting. The size of the additional space depends on the size of the input. Merge sort is an Out-Of-Place sorting algorithm.
Stable Vs Not Stable Sorting
Stable Sorting
A stable sorting algorithm maintains the relative order of equal elements after sorting. For example, in the image below the relative order of the two 5s is preserved after sorting.
Merge sort, insertion sort and bubble sort are some of the stable sorting algorithms.
Not Stable Sorting
The relative order of equal elements is not necessarily preserved in non-stable sorting. Selection sort, quick sort and heap sort are some of the non-stable sorting algorithms.
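To illustrate stability (this example is not from the original article), Python's built-in `sorted` implements Timsort, which is stable: records with equal keys keep their original relative order.

```python
# Two records share the key "5"; after a stable sort they stay
# in their original first/second order.
records = [("5", "first"), ("2", "x"), ("5", "second"), ("1", "y")]

stable = sorted(records, key=lambda r: int(r[0]))
print(stable)
# prints: [('1', 'y'), ('2', 'x'), ('5', 'first'), ('5', 'second')]
```

With a non-stable algorithm, the two "5" records could legitimately come out in either order.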
Internal Vs External Sorting
Internal Sorting
If the data sorting process takes place entirely within the main memory (RAM) of the computer, it is called internal sorting. This is possible only when the size of the list is small enough to be stored in
the RAM. Bubble sort, insertion sort, quick sort and selection sort are some of the common internal sorting algorithms.
External Sorting
External sorting is used when we need to sort large datasets, which may not fit as a whole in main memory. External merge sort is an example of an external sorting algorithm.
Adaptive vs Non Adaptive sorting
Adaptive Sorting
If a sorting algorithm takes advantage of existing presortedness in the input list, it is called an adaptive sorting algorithm. Quick sort, insertion sort and bubble sort are some of the adaptive
sorting algorithms.
Non Adaptive sorting
In non-adaptive sorting algorithms the order of the input doesn't matter: the time complexity remains the same for any order of the input.
Selection sort, merge sort and heap sort are some examples of non-adaptive sorting algorithms.
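The adaptive/non-adaptive distinction can be made concrete by counting comparisons. This illustrative sketch (not from the original article) runs insertion sort, which is adaptive, on a presorted and a reversed input of 10 elements:

```python
def insertion_sort(a):
    """Insertion sort; returns (sorted list, number of comparisons).
    On an already-sorted input it performs only n-1 comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]  # shift larger element right
                j -= 1
            else:
                break            # key already in place: stop early
        a[j + 1] = key
    return a, comparisons

_, c_sorted = insertion_sort(range(10))           # presorted input
_, c_reversed = insertion_sort(range(9, -1, -1))  # worst-case input
print(c_sorted, c_reversed)
# prints: 9 45
```

The presorted run costs n−1 = 9 comparisons, while the reversed run costs n(n−1)/2 = 45; a non-adaptive algorithm such as selection sort would cost the same on both.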
|
{"url":"https://ashoksubbiah.in/sorting","timestamp":"2024-11-14T10:21:23Z","content_type":"text/html","content_length":"83278","record_id":"<urn:uuid:41b8f9c2-438c-4a46-8018-0111b02e6bdc>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00424.warc.gz"}
|
Sharpe Ratio - 7 Circles
Sharpe Ratio
Today’s post is about the Sharpe Ratio, which compares the returns from an investment to their volatility / variability.
Sharpe Ratio
The Sharpe Ratio (SR) is a measure of return per unit risk (or rather, volatility), named after the Stanford professor (and Nobel laureate) William F. Sharpe who invented it in 1966.
• It measures the compensation (return) than you get for taking on an additional unit of risk (volatility) above the risk-free asset.
The formula is:
SR = ( Rp – Rf ) / σp
Here, Rp is the return from a portfolio and Rf is the risk-free rate (usually on short-dated government bonds / bills).
• Rp – Rf is known as the risk premium.
• σp (sigma p) is the standard deviation of the portfolio, a measure of the volatility of returns.
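As a minimal sketch of the formula above (the return series here is invented for the example), the ratio can be computed from a series of periodic portfolio returns:

```python
import statistics

def sharpe_ratio(returns, risk_free_rate):
    """SR = (Rp - Rf) / sigma_p, using the mean of the return series
    and its sample standard deviation as the volatility estimate."""
    rp = statistics.mean(returns)
    sigma_p = statistics.stdev(returns)
    return (rp - risk_free_rate) / sigma_p

# Illustrative annual returns and a 2% risk-free rate:
annual_returns = [0.12, 0.07, -0.03, 0.15, 0.09]
print(round(sharpe_ratio(annual_returns, 0.02), 3))
```

Note that in practice the risk-free rate and the return series must cover the same period length (e.g. both annual), and annualising monthly figures requires scaling by the square root of 12.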
If two portfolios have the same expected (or historical) returns, then the one with the higher SR (i.e. the lower volatility) is the better choice.
• The same applies in reverse – of two portfolios with the same SR, the one with the higher returns is (obviously) the better.
Note that this is not the same as saying that investors should maximise their SR.
• The primary purpose of investment is to achieve a required return.
For example, I need to target my SWR plus inflation, which over the long-run is approximately 3.25% + 2.5% = 5.75% pa.
• Portfolios returning less than 5.75% pa (gross / nominal) will not meet my needs.
Investors should instead maximise SR for a given target Rp.
Common values
A positive SR means that returns are above the risk free rate, whereas a negative SR means that returns are below the risk-free rate.
An SR value between 0 and 1 means that the excess returns come with a lot of excess risk (volatility), compared to a short-term bond.
An SR above 1 means that the excess returns come with less excess risk than expected.
The Sharpe ratios of individual asset classes are usually around 0.2 to 0.3 (over the long-run).
Predictive use
Some studies have found predictive value in historic SRs (of hedge funds, for example).
• The implication is that investors should avoid funds with low Sharpe Ratios as these are likely to underperform in the future.
Note that the persistency of Sharpe Ratios is likely to be affected by significant market turmoil (crashes).
Note also that SR persistency is not the same as consistent high returns.
• Returns might fall but the SR could be preserved by a corresponding fall in volatility.
The first SR issue we must deal with is that of risk.
• The Sharpe Ratio was originally known as the reward-to-variability ratio, which I think is a more accurate name (though it’s a bit of a mouthful).
As we have discussed many times, Volatility is not Risk.
Equities provide very volatile returns, but they are not particularly risky over the long-term.
• More importantly, high-return, low-volatility assets are not available.
So to achieve the returns we need, we must accept some volatility.
• Not taking on that volatility is the real risk.
This does not invalidate the usefulness of the Sharpe Ratio, but the SR is easier to understand if you think about it in terms of volatility.
It should be noted that the SR treats all volatility as equal, whereas investors are mostly concerned about downside volatility (losses).
Returns and risk (volatility) are broadly correlated, though the free lunch of diversification using assets with less than perfect correlations can compensate for this.
• Well-diversified portfolios will have higher SRs.
It’s important to check whether apparently high returns are being generated through the use of:
1. high-volatility assets (eg. AIM shares)
2. concentrated portfolios
3. leverage
All of these can improve returns, at the risk of impacting the Sharpe Ratio.
• Leverage doesn’t affect the SR directly (since the numerator and denominator are inflated to the same degree).
• But the consequences of a large drawdown in a leveraged portfolio (margin calls, forced sales etc) will ultimately impact the investor’s SR.
Looking at the opposite end of the spectrum, illiquid assets (which are not priced / traded every day) will appear to have smoother returns than they really do.
• This in turn will lead to misleadingly good Sharpe Ratios. (For the same reason, Ponzi schemes – where the consistent returns are imaginary – will typically have high Sharpe Ratios.)
Smoothed-return assets (like with-profits schemes) will also have misleadingly high SRs.
Note that in some senses this is a philosophical point – if you can’t get out of an illiquid asset, then your personal experience of the returns will in fact show low volatility.
• Cashflows don’t lie, and the use of illiquid assets can be seen as the equivalent of Ulysses lashing himself to the mast to avoid the temptations of the Sirens.
Abnormal distributions
The use of standard deviation in the SR formula implies that the returns from the portfolio follow a normal (Gaussian) distribution.
This is almost never the case, and for some investments the distribution of returns can be very far from normal.
• Such investments can produce misleading SRs.
Things that the SR ignores by assuming a normal distribution include:
• asymmetry of the distribution of returns (skew)
• fat tails / extreme returns (kurtosis)
• non-linear risks (typically from derivatives like options and warrants)
In recent years the risk free rate has been close to zero and so many commentators have ignored Rf in the calculation.
Another common variation is to substitute a benchmark portfolio as a reference point instead of the risk-free rate.
• The ratio is then known as the information ratio.
It’s important to choose an appropriate benchmark, such that the beta of the portfolio relative to the benchmark is close to one.
• The same applies to the use of a peer group as a benchmark – the choice of appropriate peers is crucial.
Alternatives to the Sharpe ratio include:
1. The Treynor ratio
□ This compares the return on a portfolio to its systematic risk by dividing the excess return by the beta of the portfolio.
□ This makes it most useful for well-diversified portfolios (in contrast to the information ratio).
2. The Sortino ratio
□ This uses a minimum acceptable return (MAR) instead of the risk-free rate.
□ And only looks at the standard deviation of returns below that MAR (ie. upside volatility is ignored).
3. VaR (Value at Risk)
□ This is a different approach to evaluating portfolios, which looks at the maximum loss for a given level of confidence.
□ Eg. the portfolio has a 95% chance of not losing more than 10% in a year.
4. Other less-commonly used methods for evaluating portfolios include:
□ non-portfolio benchmarks (especially inflation, but also long duration risk-free assets like index-linked bonds),
□ return to drawdown (sometimes known as RoMaD – return over maximum drawdown), and
□ the Omega ratio (the probability weighted ratio of gains versus losses for some threshold return target).
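Of these alternatives, the Sortino ratio is the easiest to compute by hand. The sketch below (not from the original article; the return series is invented, and it uses one common convention of dividing the squared shortfalls by the full sample size) penalises only returns below the minimum acceptable return:

```python
import math

def sortino_ratio(returns, mar):
    """Excess of the mean return over the minimum acceptable return
    (MAR), divided by downside deviation. Returns above the MAR
    contribute nothing to the risk term (upside volatility ignored)."""
    mean_r = sum(returns) / len(returns)
    shortfalls_sq = [min(0.0, r - mar) ** 2 for r in returns]
    downside_dev = math.sqrt(sum(shortfalls_sq) / len(returns))
    return (mean_r - mar) / downside_dev

returns = [0.12, 0.07, -0.03, 0.15, 0.09]
print(round(sortino_ratio(returns, 0.02), 3))
```

Because only one return in the series falls below the 2% MAR, the downside deviation is much smaller than the full standard deviation, so the Sortino ratio comes out well above the Sharpe ratio for the same data.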
The Sharpe Ratio is easy to calculate, and is a useful way to compare portfolios with similar returns, or to illustrate the trade-off between returns and volatility.
Just remember not to maximise for the SR, but instead find the portfolio that produces your target return with the lowest volatility.
• Which I guess means that I think that the Sortino ratio is better.
The Omega function looks good, too, but it is little-used and not easy to calculate.
Until next time.
|
{"url":"https://the7circles.uk/sharpe-ratio/","timestamp":"2024-11-13T19:17:07Z","content_type":"text/html","content_length":"802378","record_id":"<urn:uuid:abff28d0-0d09-43f9-9ede-fb1739d41749>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00058.warc.gz"}
|
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.APPROX-RANDOM.2017.32
URN: urn:nbn:de:0030-drops-75816
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2017/7581/
Blasiok, Jaroslaw ; Ding, Jian ; Nelson, Jelani
Continuous Monitoring of l_p Norms in Data Streams
In insertion-only streaming, one sees a sequence of indices a_1, a_2, ..., a_m in [n]. The stream defines a sequence of m frequency vectors x(1), ..., x(m) each in R^n, where x(t) is the frequency
vector of items after seeing the first t indices in the stream. Much work in the streaming literature focuses on estimating some function f(x(m)). Many applications though require obtaining estimates
at time t of f(x(t)), for every t in [m]. Naively this guarantee is obtained by devising an algorithm with failure probability less than 1/m, then performing a union bound over all stream updates to
guarantee that all m estimates are simultaneously accurate with good probability. When f(x) is some l_p norm of x, recent works have shown that this union bound is wasteful and better space
complexity is possible for the continuous monitoring problem, with the strongest known results being for p=2. In this work, we improve the state of the art for all 0<p<2, which we obtain via a novel
analysis of Indyk's p-stable sketch.
BibTeX - Entry
author = {Jaroslaw Blasiok and Jian Ding and Jelani Nelson},
title = {{Continuous Monitoring of l_p Norms in Data Streams}},
booktitle = {Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)},
pages = {32:1--32:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-044-6},
ISSN = {1868-8969},
year = {2017},
volume = {81},
editor = {Klaus Jansen and Jos{\'e} D. P. Rolim and David Williamson and Santosh S. Vempala},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2017/7581},
URN = {urn:nbn:de:0030-drops-75816},
doi = {10.4230/LIPIcs.APPROX-RANDOM.2017.32},
annote = {Keywords: data streams, continuous monitoring, moment estimation}
Keywords: data streams, continuous monitoring, moment estimation
Collection: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2017)
Issue Date: 2017
Date of publication: 11.08.2017
DROPS-Home | Fulltext Search | Imprint | Privacy
|
{"url":"http://dagstuhl.sunsite.rwth-aachen.de/opus/frontdoor.php?source_opus=7581","timestamp":"2024-11-09T14:06:47Z","content_type":"text/html","content_length":"6718","record_id":"<urn:uuid:7866e7e5-47e7-4c3a-93d4-275c037d61ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00383.warc.gz"}
|
Optimal Amount of Active Risk - Breaking Down Finance
Optimal amount of active risk
The optimal amount of risk tells you the level of active risk that maximises the portfolio Sharpe ratio. On this page, we discuss how an investor can easily calculate the optimal amount of risk. To
do so, we need the Sharpe ratio of the passive portfolio, the volatility of the passive portfolio, as well as the information ratio of the active portfolio.
We illustrate the approach using an Excel spreadsheet in the final section of this page. The spreadsheet can be downloaded at the bottom of the page
Optimal amount of active risk formula
For an unconstrained active portfolio, the optimal amount of risk is the level of active risk that maximises the overall Sharpe ratio. By that we mean that, if we allocate that calculated proportion
of the portfolio to the active strategy, the overall Sharpe ratio will be the highest. The formula to calculate the optimal amount of active risk is the following
σA* = (IR / SRB) × σB
where IR is the information ratio of the active portfolio, SRB is the Sharpe ratio of the benchmark and σB is the volatility of the benchmark. Once we have the optimal amount of active risk, we
still need to determine the amount we should invest in the active portfolio and the amount in the benchmark portfolio. To calculate this weight, we use the formula
w = σA* / σA
where σA* and σA are the optimal amount of active risk and the total active risk of the active portfolio. We can also calculate the Sharpe ratio of the combined portfolio as follows
SRP = √(SRB² + IR²)
Optimal amount of active risk example
Finally, let’s turn to an example of how much active risk a portfolio should take given a certain information ratio. The following table implements the above formulae using a numerical example.
The spreadsheet can be downloaded below.
We discussed the calculation of the amount of active risk an investors should take. This is the amount of risk that maximizes the Sharpe ratio of the overall portfolio. This method can help investors
allocate between an active portfolio (e.g. a mutual fund) with a certain information ratio and the benchmark portfolio. The latter is, of course, cheaper since the management fee is considerably lower.
Want to have an implementation in Excel? Download the Excel file: Optimal Amount of Risk calculator
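As an alternative to the spreadsheet, the same unconstrained allocation can be sketched in a few lines. The input numbers below are invented for illustration (IR = 0.30, benchmark Sharpe = 0.40, benchmark volatility = 15%, active risk of the fund = 8%):

```python
import math

def optimal_active_risk(ir, sr_benchmark, sigma_benchmark, sigma_active):
    """Unconstrained allocation between an active portfolio and its
    benchmark. Returns (optimal active risk, weight in the active
    portfolio, Sharpe ratio of the combined portfolio)."""
    sigma_a_star = (ir / sr_benchmark) * sigma_benchmark  # optimal active risk
    w_active = sigma_a_star / sigma_active                # weight in active fund
    sr_combined = math.sqrt(sr_benchmark**2 + ir**2)      # combined Sharpe ratio
    return sigma_a_star, w_active, sr_combined

s, w, sr = optimal_active_risk(0.30, 0.40, 0.15, 0.08)
print(f"optimal active risk = {s:.2%}, active weight = {w:.2f}, combined SR = {sr:.3f}")
```

In this example the active weight exceeds 1, i.e. the unconstrained optimum implies leveraging the active fund; a constrained investor would cap the weight at 100%.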
|
{"url":"https://breakingdownfinance.com/finance-topics/alternative-investments/optimal-amount-of-active-risk/","timestamp":"2024-11-09T12:33:42Z","content_type":"text/html","content_length":"239481","record_id":"<urn:uuid:5b8904d4-3731-4d99-b521-04d68a7250a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00861.warc.gz"}
|
Perimeter of Cube: Learn Concept, Formula, Comparison, Examples
How to convert the perimeter of a cube into a side?
The Perimeter of a cube is given by the formula, Perimeter (P) = 12 x a, where “a” is the length of each side. Hence, rearranging for a given perimeter P, the side length is a = P / 12.
What is the perimeter of a cuboid?
The perimeter of a cuboid is given by the formula, 4 x (L+B+H), where L = length of the cube, B = breadth of the cube, and H = height of the cube.
What is the perimeter of one face of a cube?
One face of a cube is basically a square and a square has four edges, hence the perimeter of one face of a cube is 4 x a, where “a” is the side length.
How do you find the base of a cube?
The base of a cube is a square, hence its base area will be a², where “a” is the length of each side.
How do you find the perimeter of a cube?
The Perimeter of a cube is given by the formula, Perimeter (P) = 12 x a, where “a” is the length of each side.
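The formulas in the FAQ above translate directly into code. This is an illustrative sketch (the function names are our own):

```python
def cube_perimeter(a):
    """Sum of the lengths of all 12 edges of a cube of side a: 12 x a."""
    return 12 * a

def cuboid_perimeter(l, b, h):
    """Sum of the lengths of all 12 edges of a cuboid: 4 x (L + B + H)."""
    return 4 * (l + b + h)

def side_from_perimeter(p):
    """Invert the cube formula: a = P / 12."""
    return p / 12

print(cube_perimeter(3))          # prints: 36
print(cuboid_perimeter(2, 3, 4))  # prints: 36
print(side_from_perimeter(60))    # prints: 5.0
```

As a sanity check, a cube of side a is the cuboid with L = B = H = a, and indeed 4 x (a + a + a) = 12 x a.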
|
{"url":"https://testbook.com/maths/perimeter-of-cube","timestamp":"2024-11-14T01:43:13Z","content_type":"text/html","content_length":"856643","record_id":"<urn:uuid:f289c0ca-786b-4fb3-a6bd-62a37d80701b>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00810.warc.gz"}
|
Percentage change of a number
The change in one number relative to another can be expressed as a percentage. This calculator is designed to quickly and correctly calculate the percentage increase or decrease.
It is also suitable for calculating the percentage increase in prices for any product.
Calculation formula
% increase = 100 x (B — A) / A
% decrease = 100 x (A — B) / A
In this formula: A is the old number, B is the new one.
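The two formulas above differ only in sign, so a single signed function covers both cases; a positive result is an increase and a negative result a decrease. This is an illustrative sketch:

```python
def percent_change(old, new):
    """Signed percentage change from `old` to `new`:
    100 x (B - A) / A, where A is the old number and B the new one."""
    return 100 * (new - old) / old

print(percent_change(50, 65))  # prints: 30.0  (a 30% increase)
print(percent_change(80, 60))  # prints: -25.0 (a 25% decrease)
```

Note that the old value A must be non-zero, since it appears in the denominator.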
|
{"url":"https://calculators.vip/en/percentage-change-of-a-number/","timestamp":"2024-11-12T17:10:19Z","content_type":"text/html","content_length":"24446","record_id":"<urn:uuid:5a5bb13e-ad54-4903-a37a-b087913db5c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00825.warc.gz"}
|
Members: 3658
Articles: 2'599'751
Articles rated: 2609
03 November 2024
Article overview
Theory of Photoluminescence from a Magnetic Field Induced Two-dimensional Quantum Wigner Crystal
Dong-Zi Liu ; H.A. Fertig ; S. Das Sarma ;
Date: 11 May 1993
Subject: cond-mat
Abstract: We develop a theory of photoluminescence using a time-dependent Hartree-Fock approximation that is appropriate for the two-dimensional Wigner crystal in a strong magnetic field. The
cases of localized and itinerant holes are both studied. It is found that the photoluminescence spectrum is a weighted measure of the single particle density of states of the electron
system, which for an undisturbed electron lattice has the intricate structure of the Hofstadter butterfly. It is shown that for the case of a localized hole, a strong interaction of the
hole with the electron lattice tends to wipe out this structure. In such cases, a single final state is strongly favored in the recombination process, producing a single line in the
spectrum. For the case of an itinerant hole, which could be generated in a wide quantum well system, we find that electron-hole interactions do not significantly alter the density of
states of the Wigner crystal, opening the possibility of observing the Hofstadter gap spectrum in the electron density of states directly. At experimentally relevant filling fractions,
these gaps are found to be extremely small, due to exchange effects. However, it is found that the hole, which interacts with the periodic potential of the electron crystal, has a
Hofstadter spectrum with much larger gaps. It is shown that a finite temperature experiment would allow direct probing of this gap structure through photoluminescence.
Source: arXiv, cond-mat/9305006
Services: Forum | Review | PDF | Favorites
No review found.
Connectivity interplays with age in shaping contagion over networks with vital dynamics
Carlo Piccardi, Alessandro Colombo, and Renato Casagrandi*
Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milano, Italy
(Received 16 June 2014; published 17 February 2015)
The effects of network topology on the emergence and persistence of infectious diseases have been broadly explored in recent years. However, the influence of the vital dynamics of the hosts (i.e.,
birth-death processes) on the network structure, and their effects on the pattern of epidemics, have received less attention in the scientific community. Here, we study Susceptible-Infected-Recovered
(-Susceptible) [SIR(S)] contact processes in standard networks (of Erdős–Rényi and Barabási–Albert type) that are subject to host demography. Accounting for the vital dynamics of hosts is far from
trivial, and it causes the scale-free networks to lose their characteristic fat-tailed degree distribution. We introduce a broad class of models that integrate the birth and death of individuals
(nodes) with the simplest mechanisms of infection and recovery, thus generating age-degree structured networks of hosts that interact in a complex manner. In our models, the epidemiological state of
each individual may depend both on the number of contacts (which changes through time because of the birth-death process) and on its age, paving the way for a possible age-dependent description of
contagion and recovery processes. We study how the proportion of infected individuals scales with the number of contacts among them. Rather unexpectedly, we discover that the result of highly
connected individuals at the highest risk of infection is not as general as commonly believed. In infections that confer permanent immunity to individuals of vital populations (SIR processes), the
nodes that are most likely to be infected are those with intermediate degrees. Our age-degree structured models allow such findings to be deeply analyzed and interpreted, and they may aid in the
development of effective prevention policies.
DOI:10.1103/PhysRevE.91.022809 PACS number(s): 89.75.Fb, 87.10.Mn
Epidemiology is no doubt one of the most successful fields of application of complex network science. It is in fact quite natural to recognize that individuals cannot be treated “as average” in terms
of pathogen transmissions. For example, in sexually transmitted or in childhood diseases, the social behavior of each human host varies her risk of infection, thus it can enhance or reduce her role
as a potential spreader in the population. A similar heterogeneity emerges in animal populations, where some nodes—such as the older individuals in rodents [1] or the nursery swine farms in Ontario
[2]—can play key roles in the spread of infections. This is why the extensive use of ordinary differential equation (ODE) approaches to epidemics, rooted in the pioneering work by Kermack and
McKendrick [3], is currently facing a deep revision in light of the complex networks paradigm.
Particularly studied in this context is the problem of disease persistence [4,5], because of its crucial role in public health policies. Being able to identify which individuals are at the highest
risk of infection is in fact a priority for health systems, and it can benefit much from insight into the dynamics of contagions [6,7]. While the spreading of diseases over temporal networks [8] or
adaptive networks [9] has been studied in recent years, the mechanisms by which birth and death processes can alter the expected outcomes of simple diseases spreading over an otherwise static but
heterogeneous network have been mostly ignored. This contrasts with the fact that many of the results on SIR-like epidemic ODE models with varying total population size [10] focus instead on diseases, such as tuberculosis, where the latent period of exposed individuals is so long that the hosts' demography cannot be ignored.

*[Author to whom all correspondence should be addressed:]
The issue of whether one should consider a system open, i.e., subject to demographic variations, is strictly related to the time scales of disease transmission, the time spent by individuals in the
infected compartment (the so called
infectious period), and the temporal window over which the
disease dynamics is observed. For instance, analyzing the spreading of one epidemic wave of plague on the island of Bombay [3] or a single wave of influenza [13] requires different models than those
needed to study the long-term patterns. Even for diseases characterized by fast cycles (such as influenza), the introduction of demographic dynamics into the epidemiological model is crucial to
investigate the temporal characteristics of the persistent disease in the long run [14].
The recruitment of susceptible individuals that keep dis-eases at endemic equilibrium is in many cases due to births (think of measles as a paradigmatic example [15]). However, since the SIS model
offers a simple way of simultaneously (i) replenishing the susceptible compartment, and (ii) keeping the network size and structure constant throughout time, it has been taken as the core mechanism
in the great majority of published studies. As brilliantly synthesized by Nåsell [16], "it turns out that SIR without demography lead to epidemic infections, while both SIR models with demography
and SIS models, with or without demography, are associated with endemic infections.” Despite the similarity of their dynamical outputs in compartmental models, here we show that SIR and SI(R)S models
subject to demography over networks mainly affect individuals with different degrees, thus providing quantitatively and qualitatively different results.
Understanding the effects of demography on the existence of epidemic thresholds in epidemiological models is surely important. However, except for a few notable examples [17–19], research on this
topic has been scarce. In contrast with commonly used epidemiological models on networks, we therefore account for the hosts’ demography. This allows us to study the long-term characteristics of
epidemics when the vital dynamics of hosts cannot be considered as frozen during the period of interest. Also, to disentangle the underlying causes that generate the degree distributions of infecteds
as found below, we investigate the relationships between the age of individuals, their degree, and their epidemiological state via an ad hoc model. This is described and analyzed in the next
sections, prior to some concluding remarks.
We model the population as a time-varying network with N(t) nodes (i.e., individuals). We use k_i(t) ≥ 0 for the current node degree, i.e., the number of links connecting the ith node. The network is characterized by its degree distribution, and we name 0 ≤ p_k(t) ≤ 1 the fraction of nodes having degree k = 0, 1, …, k̄ at time t (k̄ < ∞ denotes the maximum degree value); therefore Σ_k p_k(t) = 1 for all t. The possibly time-varying average degree of the network is denoted by ⟨k⟩ = Σ_k k p_k(t). We assume that, during a short
time interval Δ, an existing node can die, together with all links departing from it, with probability μΔ, irrespective of its degree and its current epidemic state (i.e., no virulence). Such a death process can in principle be more elaborate, but we want to keep demography as simple as possible to minimize the potential sources of dynamical complexity. We therefore assume that each node gives birth to another node with probability μΔ, independently of its epidemic state. Newborns attach to existing nodes according to the topology-dependent rules detailed below. Since the natality and mortality rates are identical, the birth-death process is neither biased toward population growth nor extinction, and the network size N(t) is expected to stochastically fluctuate around a constant value. Although the demographic process does not change the average number of nodes in the network, it is interesting to monitor the temporal evolution of the degree distribution. As proposed by Moore, Ghoshal, and Newman (see [20]), the number of nodes with degree k at time t + Δ can be written as
N(t) p_k(t+Δ) = N(t) p_k(t) + N(t) μΔ [ −p_k(t) − k p_k(t) + (k+1) p_{k+1}(t) + ϑ π_{k−1} p_{k−1}(t) − ϑ π_k p_k(t) + φ_k ],   (1)
where φ_k is the probability that a newborn node has degree k, ϑ = Σ_k k φ_k is the average degree of newborn nodes, and π_k is proportional to the probability that a newborn node links to an existing degree-k node. The specific functional forms of the natality profile φ_k and of the attachment profile π_k must obey the constraints Σ_k φ_k = 1 and Σ_k π_k p_k = 1, and
they are detailed below for the cases of interest. Each term in (1) describes one of the possible mechanisms that alter the degree of a generic node, thus changing the entire degree distribution of
the network (see [20] for details): the removal due to death [−p_k], the passage of a node from degree k+1 to k [(k+1) p_{k+1}] and from k to k−1 [−k p_k] when a neighbor dies, the passage of a node from degree k−1 to k [ϑ π_{k−1} p_{k−1}] and from k to k+1 [−ϑ π_k p_k] when a newborn node attaches to it, and the insertion of new nodes with degree k [φ_k]. The equations for k = 0 and k = k̄ are slightly different, in an almost obvious way, and are therefore omitted. Dividing both sides of (1) by N(t) and taking the limit for Δ → 0, we obtain
ṗ_k(t) = −μ p_k(t) − μ k p_k(t) + μ (k+1) p_{k+1}(t) + μ ϑ π_{k−1} p_{k−1}(t) − μ ϑ π_k p_k(t) + μ φ_k.   (2)
The resulting degree distribution of the network can be obtained by integrating Eq. (2) once the functions φ_k and π_k and the initial conditions p_k(0), k = 0, 1, …, k̄, have been specified. In other words, the long-term network topology is the attractor reached by the dynamical system (2) starting from a particularly relevant configuration. In the epidemiological context, the prototypical network structures used are the Erdős–Rényi network and the scale-free network.
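Before specializing the natality and attachment profiles, the birth-death mechanism itself can be checked with a direct stochastic simulation. The Python sketch below is our own illustration (the function name, parameters, and the flat-attachment choice π_k = 1 are ours, not the authors' code): every node dies with probability μΔ per step, every node reproduces at the same per-capita rate, and each newborn attaches ϑ links to uniformly chosen existing nodes. The population size should only fluctuate around its initial value, while the mean degree hovers near ϑ.

```python
import random

def simulate_birth_death(n0=500, theta=10, mu=1.0, dt=0.01, steps=1000, seed=42):
    """Stochastic sketch of the birth-death network process (flat attachment).

    Per step of length dt: each node dies with probability mu*dt (all its
    links disappear with it) and gives birth with the same probability;
    each newborn attaches theta links to uniformly random existing nodes.
    """
    rng = random.Random(seed)
    # initial Erdos-Renyi-like graph with mean degree ~ theta
    neighbors = {i: set() for i in range(n0)}
    edges = 0
    while edges < n0 * theta // 2:
        a, b = rng.sample(range(n0), 2)
        if b not in neighbors[a]:
            neighbors[a].add(b)
            neighbors[b].add(a)
            edges += 1
    next_id = n0
    for _ in range(steps):
        dying = [v for v in neighbors if rng.random() < mu * dt]
        births = sum(rng.random() < mu * dt for _ in neighbors)
        for v in dying:
            for u in neighbors.pop(v):
                if u in neighbors:      # u may be dying in the same step
                    neighbors[u].discard(v)
        for _ in range(births):
            targets = rng.sample(list(neighbors), min(theta, len(neighbors)))
            neighbors[next_id] = set(targets)
            for u in targets:
                neighbors[u].add(next_id)
            next_id += 1
    return neighbors

net = simulate_birth_death()
mean_degree = sum(len(s) for s in net.values()) / len(net)
```

Degree-dependent mortality or non-uniform attachment can be obtained by making the death probability or the target choice depend on the node; this is exactly the flexibility that Eq. (1) formalizes through φ_k and π_k.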
A. Erdős–Rényi networks
An Erdős–Rényi network (ERN) [21] is obtained by randomly connecting N nodes with a prescribed number of links. The degree distribution of an ERN with large N and average degree ⟨k⟩ is given by a Poisson distribution (e.g., [22]),

p_k = e^{−⟨k⟩} ⟨k⟩^k / k!.   (3)
To analyze the effects of the birth-death process on the network structure, consider the case in which the newborn nodes are Poisson-distributed too, with mean ϑ, i.e.,

φ_k = e^{−ϑ} ϑ^k / k!.   (4)
Also, assume that each newborn individual links to existing nodes at random, independently of their degrees, i.e., π_k = 1 for all k. In such a case, it can be proved [20] that the equilibrium of system (2) is a Poisson distribution. If the initial network is an ERN with distribution (3) and ⟨k⟩ = ϑ, the distribution remains unchanged through time. In addition to the solution provided by [20], we prove that such a fixed point of system (2) is globally asymptotically stable, and thus it is reached regardless of the initial degree distribution p_k(0). As a matter of fact, introducing the column vectors p = [p_0, p_1, …, p_k, …]^T and Φ = [φ_0, φ_1, …, φ_k, …]^T, Eq. (2) can be written as

ṗ(t) = A(p) p(t) + μ Φ,   (5)

where the matrix

A(p) = μ ⎛ −1−ϑπ_0    1          0          0         0   ⋯ ⎞
         ⎜  ϑπ_0     −2−ϑπ_1     2          0         0   ⋯ ⎟
         ⎜  0         ϑπ_1      −3−ϑπ_2     3         0   ⋯ ⎟   (6)
         ⎜  0         0          ϑπ_2      −4−ϑπ_3    4   ⋯ ⎟
         ⎝  ⋮         ⋮          ⋮          ⋮         ⋱     ⎠
depends on p, in general, because the π_k's might do so. In this specific case, however, π_k = 1 for all k, so that A(p) = A is constant (i.e., independent of the current network topology) and Metzler (i.e., all off-diagonal entries are non-negative). Since all column sums of A are negative, its dominant (Frobenius) eigenvalue is guaranteed to be negative (e.g., [23]). Therefore, Eq. (5) is a time-invariant, linear, asymptotically stable system. The unique steady-state degree distribution of (2) is reached from any initial condition.
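This global stability claim is easy to cross-check numerically. The sketch below (our own construction; the forward-Euler scheme, the truncation at k̄ = 60, and all names are ours) integrates the truncated system (2) with Poisson natality and flat attachment from a deliberately non-Poisson initial condition and verifies that it relaxes to the Poisson fixed point.

```python
import math

def evolve_degree_distribution(phi, pi, theta, p0, mu=1.0, dt=0.001, t_end=20.0):
    """Forward-Euler integration of the truncated degree-distribution ODE (2).

    phi[k]: natality profile; pi[k]: attachment profile; theta: mean newborn
    degree. The grid is truncated at kmax = len(p0) - 1.
    """
    p = list(p0)
    kmax = len(p) - 1
    for _ in range(int(t_end / dt)):
        dp = [0.0] * (kmax + 1)
        for k in range(kmax + 1):
            # own death, degree loss from neighbor deaths, newborn insertion
            d = -p[k] - k * p[k] + phi[k]
            if k < kmax:
                # a neighbor of a (k+1)-node dies; a newborn attaches to a k-node
                d += (k + 1) * p[k + 1] - theta * pi[k] * p[k]
            if k > 0:
                d += theta * pi[k - 1] * p[k - 1]   # newborn attaches to a (k-1)-node
            dp[k] = mu * d
        p = [pk + dt * dpk for pk, dpk in zip(p, dp)]
    return p

kmax, theta = 60, 10.0
poisson = [math.exp(-theta) * theta**k / math.factorial(k) for k in range(kmax + 1)]
pi_flat = [1.0] * (kmax + 1)
spike = [1.0 if k == 20 else 0.0 for k in range(kmax + 1)]  # far from Poisson
p_final = evolve_degree_distribution(poisson, pi_flat, theta, spike)
err = max(abs(a - b) for a, b in zip(p_final, poisson))     # distance to fixed point
```

With these profiles the operator A is linear with negative column sums, so the decay toward the fixed point is at least as fast as e^{−μt}; after t_end = 20 mean lifetimes the residual err is essentially at numerical-noise level.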
B. Scale-free networks
Scale-free networks (SFNs) are highly heterogeneous structures, in which no node can be defined as "typical" since very few nodes (the hubs) are connected with many others, while the great majority of the nodes have only a few connections. The degree distribution of a SFN, at least for large k, is a power law of the form

p_k ∼ k^{−c},   (7)

where c > 0. In the past decade, SFNs have received great attention because they emerged in a variety of social and technological contexts [24–26], including epidemiology (e.g., [27]). A peculiarity of (7) is that its second moment ⟨k²⟩ = Σ_k k² p_k diverges when N → ∞ if 2 < c ≤ 3, a range of values that often appears in data [28].
FIG. 1. (Color online) Examples of the degree distributions of an Erdős–Rényi network (ERN), a scale-free network [SFN, Eq. (7), with c = 3], and an evolved Barabási–Albert network (EBAN). All distributions have the same average degree ⟨k⟩ = 10.

Consistently with the preferential attachment paradigm proposed by Barabási–Albert and used to generate SFNs [24],
we assume that all newborn nodes in the network evolving on the basis of Eq. (2) have the same prescribed degree ϑ (thus φ_k = 1 for k = ϑ, and 0 otherwise). Also, we imagine that newborns preferentially attach their links to nodes with high degrees (π_k = k/⟨k⟩). In this case, as proved by [20], the fixed point of (2) turns out to have a functional form that is not scale-free but "stretched exponential," namely

p_k ∼ k^{−3/4} e^{−2√(k/ϑ)}.   (8)

We call a network resulting from the process described above an evolved Barabási–Albert network (EBAN), since the insertion of new nodes follows the standard rules proposed by [24], yet the
network's evolution is driven by deaths, and not only births as in the standard Barabási–Albert model. Note that the matrix A(p) of Eq. (5) does depend on the p_k's because, in this case, π_k = k/⟨k⟩ = k/(Σ_h h p_h). System (5) is therefore nonlinear and, although the existence and uniqueness of the fixed point (8) have been formally proved (see [20]), the analysis of its stability is far from trivial. Nonetheless, all numerical simulations we performed indicate that the fixed-point distribution (8) is reached from all initial degree distributions, including when p_k(0) ∼ k^{−c}. In other words, even if the network topology we start from is power-law and the birth (attachment) mechanism is fully compliant with the Barabási–Albert rule for creating SFNs [24], the existence of a death (detachment) process destroys the attractiveness of the scale-free distribution. Scale-free networks thus become transient states rather than attractors of system (2). Notably, in contrast to what happens with the power-law distribution, the second moment ⟨k²⟩ of (8) remains bounded even in the theoretical limit N → ∞. This means that the degree distribution of an EBAN loses its "fat tail" [20], a feature that makes SFNs so peculiar.
For a direct and qualitative comparison, some exemplificative degree distributions of an ERN, a SFN, and an EBAN with the same average degree are depicted in Fig. 1.
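The drift away from the power law can be reproduced with the same kind of truncated integration (again a sketch of our own, with our parameter and truncation choices): singleton natality at ϑ = 10, preferential attachment π_k = k/⟨k⟩, and a power-law initial condition. The mean degree must relax to ϑ, since every newborn injects ϑ link ends while every death removes ⟨k⟩ of them on average.

```python
def evolve_preferential(theta=10, kmax=120, mu=1.0, dt=0.002, t_end=30.0):
    """Forward-Euler sketch of Eq. (2) with singleton natality (phi_k = 1 at
    k = theta) and preferential attachment (pi_k = k/<k>), which makes the
    system nonlinear; truncation at kmax is ours."""
    phi = [1.0 if k == theta else 0.0 for k in range(kmax + 1)]
    p = [k ** -3.0 if k >= 5 else 0.0 for k in range(kmax + 1)]  # power-law start
    z = sum(p)
    p = [pk / z for pk in p]
    for _ in range(int(t_end / dt)):
        mean_k = sum(k * pk for k, pk in enumerate(p))   # pi_k depends on p
        dp = [0.0] * (kmax + 1)
        for k in range(kmax + 1):
            d = -p[k] - k * p[k] + phi[k]
            if k < kmax:
                d += (k + 1) * p[k + 1] - theta * (k / mean_k) * p[k]
            if k > 0:
                d += theta * ((k - 1) / mean_k) * p[k - 1]
            dp[k] = mu * d
        p = [pk + dt * dpk for pk, dpk in zip(p, dp)]
    return p

p_final = evolve_preferential()
mean_final = sum(k * pk for k, pk in enumerate(p_final))
```

Comparing `p_final` against the initial power law on a log-log plot reproduces the qualitative picture of Fig. 1: the tail of the evolved distribution bends downward instead of staying straight.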
III. EPIDEMIC DYNAMICS ON VITAL NETWORKS

We study a SIRS contact process with demography [29], which, under the standard "homogeneous mixing" hypothesis [30], is described by the following ODE system:

ṡ(t) = μ − μ s(t) − β s(t) y(t) + α r(t),
ẏ(t) = β s(t) y(t) − (μ + γ) y(t),   (9)
ṙ(t) = γ y(t) − (μ + α) r(t).
Equation (9) is easily derived by normalization of the classical endemic model presented in [31]. The variables s(t), y(t), and r(t) represent the fraction of susceptible (or infectable), infected (thus infective), and recovered individuals in the population. It must be noticed that the birth and death rates are per capita and independent of epidemiological states. Also, no vertical transmission takes place (i.e., all newborns are susceptible). Notice that the first equation of system (9) is a simplification of the full equation for the susceptibles,

ṡ(t) = μ [s(t) + y(t) + r(t)] − μ s(t) − β s(t) y(t) + α r(t),   (10)

obtained by using the equality s(t) + y(t) + r(t) = 1, and that the equation for r(t) is redundant since r(t) = 1 − s(t) − y(t). Besides the birth-death rate μ, the parameters appearing in (9) are the loss of immunity rate α, the recovery rate γ, and the contact rate β, which can be interpreted as β = ρ n̂, namely as the product of the disease-specific transmission rate ρ and of the effective number of contacts per unit time n̂, which is a characteristic of the population [30]. Provided that β is not too small (β > μ + γ), it can easily be verified that system (9) has a unique endemic equilibrium with

y = (α + μ)/(α + μ + γ) · [1 − (μ + γ)/β].   (11)
Such an equilibrium is globally asymptotically stable [32], in other words the long-term behavior of the epidemiological process is constant. In the following, we will be mostly interested in the SIR
case, obtained assuming permanent immunity (α= 0), which is customarily used to describe a number of diseases, including childhood epidemics such as measles, rubella, or chicken pox [33].
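The closed form (11) is straightforward to verify numerically. The sketch below is our own illustration (the parameter values echo the μ = 0.02, γ = 100 "measles-like" choice used in the figures, while β = 400 and the initial condition are ours): it integrates system (9) by forward Euler in the SIR case and compares the long-run prevalence with the formula.

```python
def sirs_ode(beta, gamma, mu, alpha, s0, y0, dt=0.002, t_end=400.0):
    """Forward-Euler sketch of the homogeneous-mixing SIRS model, Eq. (9);
    r(t) is eliminated through r = 1 - s - y."""
    s, y = s0, y0
    for _ in range(int(t_end / dt)):
        r = 1.0 - s - y
        ds = mu - mu * s - beta * s * y + alpha * r
        dy = beta * s * y - (mu + gamma) * y
        s, y = s + dt * ds, y + dt * dy
    return s, y

mu, gamma, alpha, beta = 0.02, 100.0, 0.0, 400.0    # SIR case, measles-like rates
# closed-form endemic prevalence, Eq. (11)
y_star = (alpha + mu) / (alpha + mu + gamma) * (1.0 - (mu + gamma) / beta)
s_end, y_end = sirs_ode(beta, gamma, mu, alpha,
                        s0=1.1 * (mu + gamma) / beta, y0=0.5 * y_star)
rel_err = abs(y_end - y_star) / y_star
```

Because the endemic equilibrium is globally asymptotically stable [32], any feasible initial condition spirals into y_star; if instead β ≤ μ + γ, the infection is eradicated.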
To study the dynamics of the birth-death-infection-recovery process over a network, we follow an approach similar, yet not identical, to the one that is often used to model epidemics on networks, based on the assumption that all nodes with the same degree are statistically equivalent [4,5,28]. Our 3d state variables, with d = k̄ + 1, are the fraction, over the total population size N(t), of individuals that have degree k and are susceptible [s_k(t)], infected [y_k(t)], or recovered [r_k(t)] at time t. Note that here s_k(t) + y_k(t) + r_k(t) = p_k(t), and not unity. The equations governing the temporal evolution of the state variables incorporate the birth and death mechanisms that affect all nodes independently of their epidemiological state, as described in the preceding section. Peculiar to the epidemics are, instead, the infection of susceptibles, the recovery of the infected, and the loss of immunity of the recovered. The last two mechanisms are simple to model, since during a short time interval Δ an infected node can recover with probability γΔ, while a recovered node can rejoin susceptibles with probability αΔ. The infection mechanism is instead more complex, because a susceptible node, say j, can become infected with probability ρ n_j Δ, where n_j is the number of infected nodes among its neighbors. Neglecting degree correlations, the probability that the neighbor of a node has degree h is q_h = h p_h/⟨k⟩ (e.g., [26]). Since the probability that a degree-h node be infected is y_h/p_h, the expected number of infected neighbors of a degree-k node is

e_k = k Σ_h (h p_h/⟨k⟩)(y_h/p_h).   (12)

Therefore, the number of susceptible nodes of degree k that become infected during Δ is ρ e_k N s_k Δ. By combining
demography (Sec. II) and epidemiological dynamics, we finally obtain the following system of 3d equations:
ṡ_k(t) = −μ s_k(t) − μ k s_k(t) + μ (k+1) s_{k+1}(t) + μ ϑ π_{k−1} s_{k−1}(t) − μ ϑ π_k s_k(t) + μ φ_k − ρ k s_k(t) Σ_h (h/⟨k⟩) y_h(t) + α r_k(t),

ẏ_k(t) = −μ y_k(t) − μ k y_k(t) + μ (k+1) y_{k+1}(t) + μ ϑ π_{k−1} y_{k−1}(t) − μ ϑ π_k y_k(t) + ρ k s_k(t) Σ_h (h/⟨k⟩) y_h(t) − γ y_k(t),   (13)

ṙ_k(t) = −μ r_k(t) − μ k r_k(t) + μ (k+1) r_{k+1}(t) + μ ϑ π_{k−1} r_{k−1}(t) − μ ϑ π_k r_k(t) + γ y_k(t) − α r_k(t).
Note that the contact rate β = ρ n̂ of the homogeneous mixing SIRS model (9) is now replaced by a degree-dependent contact rate β_k = ρk. The global disease prevalence y(t), namely the total fraction of infected in the population, is given by

y(t) = Σ_k y_k(t).   (14)
It is straightforward to check that the fundamental Eq. (2) is easily obtained by summing up all 3d equations (13) for a fixed k. Since s_k + y_k + r_k is not constant over time, we cannot simplify the study of (13) by eliminating the variables r_k and their equations. However, the birth-death process being uncoupled from the infection-recovery process, we can ask if and how the disease spreads in a population that has already reached its demographic equilibrium. Assuming that Eq. (2) has reached its steady state, we reduce system (13) by focusing only on the dynamics of s_k and y_k, since at any time r_k = p_k − s_k − y_k. We note that, although it would be interesting to obtain the SIRS model (9) as a particular case of our model (13), this is not possible mathematically. Such impossibility can easily be understood from the epidemiological perspective, because the random nature of contacts in the homogeneous mixing approach cannot be reduced to the permanency of the links in the network model [30].
Let us now focus our attention on the SIR case (α = 0), for which infected individuals get permanent immunity when recovered. The steady-state behavior of system (13) is summarized in Fig. 2. The first remarkable consequence of accounting for vital dynamics in the epidemic process is the existence of a finite epidemic threshold even in the EBAN case, an effect of the loss of the fat tail in the degree distribution. As already evidenced by Piccardi and Casagrandi for SIS processes [17], this outcome sharply contrasts with the well-known findings of Pastor-Satorras and Vespignani [4,5] (see also [34] on this subject). The prevalence y in Fig. 2, which becomes nonzero above suitable thresholds of ρ and/or ⟨k⟩, monotonically increases with both quantities, hence with ρ⟨k⟩. This is qualitatively consistent with the homogeneous mixing model (9), as made clear from Eq. (11). However, a closer analysis reveals that the two models display non-negligible quantitative differences at low transmission
FIG. 2. (Color online) The equilibrium value of the disease prevalence y on the network model (13) for an SIR model (α = 0) in the case of EBAN, as a function of the transmission rate ρ and of the average degree ⟨k⟩ (similar results, not shown, are obtained with ERN). In the bottom panel, the contour lines of y = y(ρ,⟨k⟩) (solid red lines) obtained by model (13) are contrasted to the iso-β lines ρ⟨k⟩ = const (dashed black lines), which are the iso-y lines of the homogeneous mixing model (9). Parameter values (μ = 0.02, γ = 100) fall in the range that [33] considers consistent with measles dynamics.
rates ρ, where the network model systematically predicts that the threshold values for ⟨k⟩ under which the disease is eradicated be lower than those for the homogeneous mixing model (9). Also, above such thresholds, the prevalence y is always larger in the network model. Overall, compared to the homogeneous mixing model, the network model predicts a markedly stronger infection, with a better ability to sustain and propagate diseases characterized by small contact rates β.
The most interesting results are obtained from network model (13) when, instead of computing global quantities such as the prevalence y, we analyze how the disease is distributed among nodes with different characteristics. Figure 3 shows that, as one may expect, the distribution of the infected y_k through the nodes with different degree k essentially replicates the degree distribution p_k. However, if we compute the proportion of infected y_k/p_k, that is, the probability that a degree-k node will be infected, the result is quite unexpected. For ERNs, y_k/p_k monotonically increases up to a plateau level, meaning that the nodes with large degree are the main carriers of the disease and thus the main reason for its propagation. This result is fully consistent with the well-known findings on SIS epidemics propagating in networks with no demography [4,5].
The epidemiological scenario of EBANs is, however, completely different from that of ERNs, despite the qualitative similarity of their degree distributions (see again Fig. 1): in EBANs, not only the infected y_k, but also their proportion y_k/p_k, follow a distribution that qualitatively replicates the degree distribution p_k (see Fig. 3). In other words, if we group the individuals by their number of contacts k, the probability of finding infected individuals is largest in the most represented set. On the contrary, such a probability tends to zero for larger and larger values of k. This means that, contrary to a widespread belief regarding both human (e.g., [6,7,35,36]) and animal diseases (e.g., [37]), the most connected nodes are not necessarily at highest risk of infection when demography is stretching the network topology, even if the original contacts followed a scale-free distribution. This may obviously have implications in the design of effective prevention policies.
It is interesting to discuss whether, and to what extent, the above result depends on the specific assumptions used to describe the demographic evolution of the network. In Fig. 3, we used two prototypical models: ERN, where the natality profile φ_k is Poisson and the attachment profile π_k is uniform (or "flat," i.e., π_k = 1 for all k), and EBAN, where φ_k is a singleton (i.e., a spike distribution at k = ϑ) and π_k is "preferential" (in the manner of Barabási–Albert). We can easily swap the assumptions and generate, for example, two types of "hybrid" networks, one with Poisson natality and preferential attachment and another with singleton natality and flat attachment. The two degree distributions are obtained by letting the demographic system (2) evolve, from feasible initial conditions, until the unique equilibrium distribution is reached. The dynamics of the epidemics on such networks is then obtained by model (13). As displayed in Fig. 4 [panels (a) and (b)], the results are consistent with those of the EBAN above, with the largest proportion of infected y_k/p_k at intermediate degrees, where the degree distribution of the individuals also peaks, independent of their epidemiological state. Since Eq. (2) allows for highly flexible definitions of φ_k and π_k, we can also test less idealized assumptions. For example, we can consider a Poisson natality with a truncated tail (i.e., we let φ_k = 0 for all k larger than a prescribed value, renormalizing φ_k to have unit sum) to avoid unrealistically large degrees of newborns. Again, the result is qualitatively the same [panel (c) of Fig. 4]. Therefore, our result that the largest probability of infection is for nodes at intermediate degrees seems to be quite general for the SIR epidemic process, with the ERN case being the exception.
A different conclusion is obtained if, instead of considering diseases that confer lifelong immunity, we analyze SIRS processes (α > 0). This mechanism of yielding susceptible individuals, qualitatively different from birth, produces, in a sense, effects that dominate those induced by the previously discussed underlying demographic dynamics. Such epidemics become, in many respects, similar to those that would be obtained with an SIS process without demography; in fact, we recover in that case the well-known behavior evidenced by [4,5],
FIG. 3. (Color online) The infected y_k (left column) and the proportion of infected y_k/p_k (right column) in the SIR model (α = 0), as a function of the node degree k, for ERN (upper panels) and EBAN (lower panels). Parameter values are ρ = 20, ⟨k⟩ = 20, μ = 0.02, γ = 100.

where the larger the degree of a node is, the higher its probability of being infected [Fig. 4(d)]. The role of vital demography, however, makes our case more involved. SIR and SIRS processes fundamentally differ in the relationship between the infected
state of individuals and their age. Indeed, in the SIR process, the eldest individuals tend to accumulate in the compartment of the recovered, whereas this has no reason to occur if immunity is lost
during life as in the SIRS process. To explore in detail the implications of such subtle but crucial differences, and to investigate why the ERN case is so peculiar, we explicitly account below for
the age of individuals in our mathematical model.
We generalize the description of the disease dynamics on vital networks so that at any time t each individual (node) is characterized by (i) its degree k, (ii) its epidemiological state (susceptible,
infected, or recovered), and (iii) its age a, defined as the time passed since the node was added to the network. We denote by S_k(a,t), Y_k(a,t), and R_k(a,t) the distributions of, respectively, susceptibles, infected, and recovered that at time t have age a and degree k. For any k, t, and a, we obtain the age distribution of degree-k nodes as P_k(a,t) = S_k(a,t) + Y_k(a,t) + R_k(a,t).

FIG. 4. (Color online) The proportion of infected y_k/p_k obtained with model (13) as a function of the node degree k, for a few networks with mixed features. (a) SIR process (α = 0): Poisson natality, preferential attachment. (b) SIR process: singleton natality, flat attachment. (c) SIR process: Poisson natality truncated at k = 30, flat attachment. (d) SIRS process (α = 10): EBAN (singleton natality, preferential attachment). Other parameter values are ρ = 20, ⟨k⟩ = 20, μ = 0.02, γ = 100.

These newly introduced quantities relate to
the (age-independent) ones formerly used in Eqs. (2) and (13) as follows:

s_k(t) = ∫₀^∞ S_k(a,t) da,   y_k(t) = ∫₀^∞ Y_k(a,t) da,   (15)
r_k(t) = ∫₀^∞ R_k(a,t) da,   p_k(t) = ∫₀^∞ P_k(a,t) da.
The equations governing the dynamics of the newly introduced variables can be obtained with the standard procedure used to derive, in population dynamics, age-structured models in continuous time [38]. Since t and a grow at the same rate, the basic balance law for, e.g., degree-k susceptibles takes the form

S_k(a+Δ, t+Δ) = S_k(a,t) + (inflow − outflow) Δ,   (16)

where the inflow and outflow terms account for all the mechanisms that alter either the degree or the epidemic state of a node, as evidenced in Eqs. (2) and (13). Letting Δ → 0, we obtain the following
system of partial differential equations:
∂S_k(a,t)/∂a + ∂S_k(a,t)/∂t = −μ S_k(a,t) − μ k S_k(a,t) + μ (k+1) S_{k+1}(a,t) + μ ϑ π_{k−1} S_{k−1}(a,t) − μ ϑ π_k S_k(a,t) − ρ k S_k(a,t) Σ_h (h/⟨k⟩) y_h(t) + α R_k(a,t),

∂Y_k(a,t)/∂a + ∂Y_k(a,t)/∂t = −μ Y_k(a,t) − μ k Y_k(a,t) + μ (k+1) Y_{k+1}(a,t) + μ ϑ π_{k−1} Y_{k−1}(a,t) − μ ϑ π_k Y_k(a,t) + ρ k S_k(a,t) Σ_h (h/⟨k⟩) y_h(t) − γ Y_k(a,t),   (17)

∂R_k(a,t)/∂a + ∂R_k(a,t)/∂t = −μ R_k(a,t) − μ k R_k(a,t) + μ (k+1) R_{k+1}(a,t) + μ ϑ π_{k−1} R_{k−1}(a,t) − μ ϑ π_k R_k(a,t) + γ Y_k(a,t) − α R_k(a,t).
Note that the flow of new infecteds ρ k S_k(a,t) Σ_h (h/⟨k⟩) y_h(t) depends on the y_h, i.e., the age-independent distribution of infecteds over the node degrees. Indeed, the probability that a susceptible with age a will become infected is obviously independent of the age of the infected with whom she comes into contact. Because of this specific term, the above model is actually an integrodifferential model, with y_h depending on Y_h(a,t) as specified in (15). In the case of no vertical transmission, i.e., when all newborns are susceptibles, a plausible set of boundary conditions is

S_k(0,t) = μ φ_k,   Y_k(0,t) = 0,   R_k(0,t) = 0,   ∀ t ≥ 0.   (18)

Model (17) describes the spread of a SIRS process over a network with vital demography, and it accounts both for the
number of contacts of individuals and for their age. The model could be refined along many directions. The biological parameters α, γ, and ρ could, for example, be made age-dependent. The same holds for the mortality μ, with the only caveat being to constrain its mean value to equal the natality rate in order to keep the population constant in demographic terms.
Here our main goal, however, is to gain insight into the solutions of the age-independent model (13) discussed in the preceding section. For that, we will restrict our attention to steady-state solutions. In such conditions, we first derive the distribution of infected y_k from (13) and plug it into (17).
At equilibrium, the time derivatives vanish. We obtain the following system of ordinary differential equations:
\[
\frac{dS_k(a)}{da} = -\mu S_k(a) - \mu k\,S_k(a) + \mu(k+1)S_{k+1}(a) + \mu\vartheta\pi_{k-1}S_{k-1}(a) - \mu\vartheta\pi_k S_k(a) - \rho k\,S_k(a)\,\frac{\sum_h h\,y_h}{\langle k\rangle} + \alpha R_k(a),
\]
\[
\frac{dY_k(a)}{da} = -\mu Y_k(a) - \mu k\,Y_k(a) + \mu(k+1)Y_{k+1}(a) + \mu\vartheta\pi_{k-1}Y_{k-1}(a) - \mu\vartheta\pi_k Y_k(a) + \rho k\,S_k(a)\,\frac{\sum_h h\,y_h}{\langle k\rangle} - \gamma Y_k(a), \tag{19}
\]
\[
\frac{dR_k(a)}{da} = -\mu R_k(a) - \mu k\,R_k(a) + \mu(k+1)R_{k+1}(a) + \mu\vartheta\pi_{k-1}R_{k-1}(a) - \mu\vartheta\pi_k R_k(a) + \gamma Y_k(a) - \alpha R_k(a).
\]
Solving Eq. (19) with initial conditions Sk(0)= μφk, Yk(0)=
0, Rk(0)= 0 gives the steady-state distribution profiles Sk(a),
Yk(a), and Rk(a) of degree-k susceptibles, infected, and
recovered, as a function of age a. A representative visualization of a prototypical solution for the SIR process (α = 0) is displayed in Fig. 5, where the age distribution Y_k(a) of infecteds is plotted over a range of degrees k. For all degrees, the marginal distribution of infecteds with respect to age is monotonically decreasing. The maximal risk of infection is at birth.

[FIG. 5. (Color online) The steady-state distribution of infected Y_k(a) as a function of age a and degree k of individuals, for the SIR process on ERN. Parameter values: ρ = 20, ⟨k⟩ = 20, μ = 0.02,]

[FIG. 6. (Color online) The mean age (a) of the entire population τ_k and (b) of the infected τ_k^Y as a function of the node degree k. All curves refer to the SIR process (α = 0) except EBAN SIRS, for which α = 10. The horizontal dashed line is the average lifetime 1/μ. Parameter values: ρ = 20, ⟨k⟩ = 20, μ = 0.02, γ = 100.]
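To make the steady-state computation concrete, here is a minimal numerical sketch (ours, not the authors' code) that truncates the degree hierarchy at an assumed k_max and integrates system (19) in age a with SciPy. The natality profile φ_k, the flat attachment kernel π_k, the value of ϑ, and the placeholder distribution y_h are all illustrative assumptions; in the paper y_h comes from the steady state of the age-independent model (13), and the ⟨k⟩-normalized infection term is an assumed standard mean-field form.

```python
# Hedged sketch: integrate the truncated system (19) in age a.
# phi_k, pi_k, vartheta, and y_h below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import poisson

k_max = 60
n = k_max + 1
mu = 0.02                               # mortality (= natality) rate
vartheta = 20.0                         # link-acquisition parameter (assumed value)
rho, gamma, alpha = 20.0, 100.0, 0.0    # SIR case: alpha = 0
ks = np.arange(n)
phi = poisson.pmf(ks, 20.0)             # Poisson natality profile (ERN case)
pi = np.full(n, 1.0 / n)                # flat, degree-independent attachment
y = 1e-3 * phi                          # placeholder for y_h from Eq. (13)
force = rho * (ks * y).sum() / (ks * phi).sum()   # rho * sum_h h y_h / <k> (assumed form)

def rhs(a, u):
    S, Y, R = u[:n], u[n:2 * n], u[2 * n:]
    def drift(X):
        up = np.zeros(n); up[:-1] = X[1:]        # X_{k+1}
        dn = np.zeros(n); dn[1:] = X[:-1]        # X_{k-1}
        pidn = np.zeros(n); pidn[1:] = pi[:-1]   # pi_{k-1}
        return (-mu * X - mu * ks * X + mu * (ks + 1) * up
                + mu * vartheta * pidn * dn - mu * vartheta * pi * X)
    infection = force * ks * S
    return np.concatenate([drift(S) - infection + alpha * R,
                           drift(Y) + infection - gamma * Y,
                           drift(R) + gamma * Y - alpha * R])

u0 = np.concatenate([mu * phi, np.zeros(n), np.zeros(n)])   # boundary condition (18)
sol = solve_ivp(rhs, (0.0, 150.0), u0, rtol=1e-8, atol=1e-12)
```

Summing the three compartments over all degrees shows the total density decaying essentially as μe^{−μa}, i.e., the demographic equilibrium age profile.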
Furthermore, a few measures complement our understanding and lend themselves to an insightful interpretation of the results. In Fig. 6, panel (a) shows the mean age distribution τ_k of the population, that is, the mean age of all individuals at the demographic equilibrium as a function of their degree k:
\[
\tau_k = \frac{1}{p_k}\int_0^{+\infty} a\,P_k(a)\,da, \tag{20}
\]
where Pk(a)= Sk(a)+ Yk(a)+ Rk(a). Notice that τk only
depends on the vital demography (i.e., on the natality and attachment profiles), and not on the peculiarities of the epidemic process. It turns out that, in all cases but one, the highly connected
individuals (i.e., nodes with large
k) are old individuals too, namely their mean age is largely above average. ERN is the only exception, with a flat τ_k at the
level of the average lifetime 1/μ. Thus, only the combination of the assumptions of Poisson natality and flat attachment yields such a singular case. Such a peculiarity becomes even more pronounced
if we consider the mean age distribution of infected, τ_k^Y:
\[
\tau_k^Y = \frac{1}{y_k}\int_0^{+\infty} a\,Y_k(a)\,da. \tag{21}
\]
In Fig. 6, panel (b) reveals that, in the ERN case, infected nodes with high connectivity (i.e., large k) are even younger than average, whereas in all other cases they are older [similarly to the
entire population, as shown in panel (a)].
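A hedged numerical sketch of Eqs. (20)-(21): the mean ages per degree class can be computed by simple trapezoidal quadrature from age profiles sampled on a grid. Variable names and the example profile are ours, not the paper's.

```python
# Mean age per degree class via trapezoidal quadrature.
# `profiles` has shape (number_of_degrees, number_of_age_points).
import numpy as np

def mean_age(profiles, a):
    """Return tau_k = (first moment) / (zeroth moment) of each row of `profiles`."""
    da = np.diff(a)
    zeroth = ((profiles[:, 1:] + profiles[:, :-1]) * 0.5 * da).sum(axis=1)    # p_k or y_k
    first = ((profiles[:, 1:] * a[1:] + profiles[:, :-1] * a[:-1]) * 0.5 * da).sum(axis=1)
    return first / zeroth

# Sanity check: an exponential age profile P_k(a) ~ exp(-mu*a) for every k
# should give a mean age close to the average lifetime 1/mu.
mu = 0.02
a = np.linspace(0.0, 600.0, 4001)
P = np.tile(np.exp(-mu * a), (5, 1))
tau = mean_age(P, a)
```

This is the flat-τ_k behaviour the ERN case exhibits in panel (a): every degree class shares the same mean age 1/μ.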
Another set of quantities that can be derived from (19) concerns the distribution of age profiles for the fractions of susceptibles, infected, and recovered:
\[
S(a) = \frac{\sum_k S_k(a)}{\sum_k P_k(a)}, \qquad Y(a) = \frac{\sum_k Y_k(a)}{\sum_k P_k(a)}, \qquad R(a) = \frac{\sum_k R_k(a)}{\sum_k P_k(a)}. \tag{22}
\]
Note that S(a) + Y(a) + R(a) = 1 for all a. Figure 7(a) shows that, for the SIR process, the fraction Y(a) of infecteds decays to zero as age a increases. Therefore, there are practically no infecteds
among the oldest part of the population, which, on the other hand, contains the most connected individuals, as discussed above. This explains the result highlighted in the preceding section, i.e.,
the vanishing proportion of infected
yk/pk among the most connected nodes. As already pointed
out, the only exception is the ERN case in which all age classes are represented among the set of nodes with large degree k, which thus [see again Fig. 7(a)] contains a non-negligible fraction of
infected. Finally, if we consider the SIRS process (α > 0), we immediately note from the EBAN SIRS curve in Fig. 7(b) that the loss of immunity has the effect that the infecteds spread over all age classes and, consequently, over all degree classes as well. For that reason, we find a nonvanishing fraction y_k/p_k of infected at large degrees k in this case too, as evidenced in the preceding section.

[FIG. 7. (Color online) The fraction Y(a) of infected as a function of age a. Panel (a) refers to the SIR process (α = 0) (the four curves are visually indistinguishable because practically coincident); panel (b) refers to the SIRS process (α = 10) on EBAN. The vertical dashed line is the average lifetime 1/μ. Parameter values: ρ = 20, ⟨k⟩ = 20, μ = 0.02, γ = 100.]

[FIG. 8. (Color online) A sample of the results of the simulations of the SIR process with individual-based probabilistic cellular automata, on ERN (top panels) and EBAN (bottom panels). Left panels: the mean age of the entire population τ_k (blue circles) and of the infected τ_k^Y (red triangles) as a function of the node degree k. Right panels: the proportion of infected y_k/p_k as a function of the node degree k. In all panels, the solid black lines are the outcome of the differential equation models (13) and (19). In the left panels, the horizontal dashed line is the average lifetime 1/μ.]
We finally mention that, in order to rule out possible artifacts induced by approximating the time evolution of networks and epidemic processes by differential equations (13) and (19), the above findings have been validated by means of individual-based simulations on probabilistic cellular automata. All the above-defined network configurations (i.e., natality and attachment kernels) have been considered, and SIR and SIRS epidemic processes have been simulated on networks of different size. A sample of the results, which highlights some of the most peculiar findings discussed in the paper, is reported in Fig. 8 [39]. In all instances, we found very good agreement, both qualitative and quantitative, between the results of the simulations on probabilistic cellular
automata and the output of the differential equation models.
Incorporating birth and death processes into simple infection mechanisms (of both SIR and SIRS types) over homogeneous and heterogeneous networks can qualitatively change the epidemiological outcomes. As Moore and coauthors [20] have shown, demographic dynamics alone destroys the degree distribution structure in scale-free networks, even if the newborn nodes are added as in the Barabási-Albert algorithm (i.e., preferential attachment of links) and the dying nodes are detached at random. The emerging distribution [here called the evolved Barabási-Albert network (EBAN)] is not fat-tailed anymore, so not surprisingly we find that SIR and SIRS processes can persist if and only if their contact rates are above a finite threshold. Less evident is the fact that the fraction of infected individuals in SIR processes over vital networks does not necessarily grow with the node degree, as is the case for SIS processes with or without demography on networks. For SIR processes, that peculiar result occurs only in the case of ERNs, i.e., Erdős-Rényi networks subject to Poisson births and homogeneous (i.e., degree-independent) attachment. In all other cases that we analyzed, such a proportion of infected nodes peaks at intermediate node degrees, exactly where the degree distribution of the total population does. The mechanism of immunity loss (SIRS instead of SIR process) changes the picture completely because, independent of the network structure, the fraction of infecteds increases monotonically with the node degree.
To understand why qualitatively different results are obtained by two apparently similar mechanisms of feeding the network with susceptible individuals (i.e., birth or immunity loss), we developed a
model that also accounts for the age of the nodes (i.e., the time since their first appearance in the network). The age distributions of network individuals in different epidemiological states reveal
that susceptibles in SIR models can only be the youngest, i.e., those who have never been in contact with the disease. In contrast, if a loss of immunity is accounted for (as in SIRS models),
susceptible individuals can have entered the population either via birth (for younger nodes) or via a complete loss of immunity (for recovered nodes that were infected even long before). These two
different kinds of susceptible individuals are characterized by different degrees and are not interchangeable in terms of epidemic spread. As a consequence, control strategies for SIR epidemics that
are mainly based on degree distributions can fail, despite their proven efficacy when applied to SIRS processes over static networks. Identifying which individuals are at maximum risk of infection is
therefore dependent in an articulate manner on (i) the network structure, (ii) the epidemiological state of the individual, and (iii) the path followed by the individual to join the susceptibles
compartment of the population (either by birth or by immunity loss).
In this paper, the proposed age-degree model has been studied only to disentangle the underlying causes of the different outcomes obtained for SIR and SIRS processes. There are of course many
diseases (ranging from pertussis to tuberculosis,
just to restrict our attention to humans) for which the age of the hosts influences both the demographic and the epidemiological parameters. In all these cases, our age-degree model can become a very
useful tool for researchers and health managers.
[1] J. Deter, K. Berthier, Y. Chaval, J. F. Cosson, S. Morand, and N. Charbonnel, Parasitology 132, 595 (2006).
[2] S. Dorjee, C. Revie, Z. Poljak, W. McNab, and J. Sanchez, Prev. Vet. Med. 112, 118 (2013).
[3] W. O. Kermack and A. G. McKendrick, Proc. R. Soc. A 115, 700 (1927).
[4] R. Pastor-Satorras and A. Vespignani, Phys. Rev. E 63, 066117 (2001).
[5] R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86, 3200 (2001).
[6] R. Cohen, S. Havlin, and D. ben-Avraham, Phys. Rev. Lett. 91, 247901 (2003).
[7] R. Christley, G. Pinchbeck, R. Bowers, D. Clancy, N. French, R. Bennett, and J. Turner, Am. J. Epidemiol. 162, 1024 (2005).
[8] P. Holme and J. Saramäki, Phys. Rep. 519, 97 (2012).
[9] Adaptive Networks: Theory, Models and Applications, edited by T. Gross and H. Sayama, Understanding Complex Systems (Springer, New York, 2009).
[10] M. Li, J. Graef, L. Wang, and J. Karsai, Math. Biosci. 160, 191 (1999).
[11] B. Murphy, B. Singer, S. Anderson, and D. Kirschner, Math. Biosci. 180, 161 (2002).
[12] J. Sanz, L. Mario Floria, and Y. Moreno, Int. J. Bifurcation Chaos 22, 1250164 (2012).
[13] V. Andreasen, C. Viboud, and L. Simonsen, J. Infect. Dis. 197, 270 (2008).
[14] R. Casagrandi, L. Bolzoni, S. A. Levin, and V. Andreasen, Math. Biosci. 200, 152 (2006).
[15] F. Brauer, in Mathematical Epidemiology, edited by F. Brauer, P. van den Driessche, and J. Wu (Springer, 2008), pp. 19–80.
[16] I. Nasell, in Mathematical Approaches for Emerging and Reemerging Infectious Diseases: An Introduction, edited by C. Castillo-Chavez, S. Blower, P. Van den Driessche, D. Kirschner, and A.-A. Yakubu (Springer, 2002), pp. 199–228.
[17] C. Piccardi and R. Casagrandi, in Modelling, Estimation and Control of Networked Complex Systems, Understanding Complex Systems—Springer Complexity, edited by A. Chiuso, L. Fortuna, M. Frasca, A. Rizzo, L. Schenato, and S. Zampieri (Springer-Verlag, Berlin, 2009), pp. 77–89.
[18] J. Sanz, L. M. Floria, and Y. Moreno, Phys. Rev. E 81, 056108 (2010).
[19] G. Demirel and T. Gross, arXiv:1209.2541 [physics.soc-ph].
[20] C. Moore, G. Ghoshal, and M. E. J. Newman, Phys. Rev. E 74,
[21] P. Erdős and A. Rényi, Publ. Math. 6, 290 (1959).
[22] M. E. J. Newman, Networks: An Introduction (Oxford University Press, Oxford, 2010).
[23] L. Farina and S. Rinaldi, Positive Linear Systems, Theory and Applications (Wiley, 2000).
[24] A. L. Barabási and R. Albert, Science 286, 509 (1999).
[25] M. E. J. Newman, SIAM Rev. 45, 167 (2003).
[26] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D. H. Hwang, Phys. Rep. 424, 175 (2006).
[27] F. Liljeros, C. R. Edling, L. A. N. Amaral, H. E. Stanley, and Y. Aberg, Nature (London) 411, 907 (2001).
[28] A. Barrat, M. Barthélemy, and A. Vespignani, Dynamical Processes on Complex Networks (Cambridge University Press, Cambridge, 2008).
[29] R. M. Anderson and R. M. May, Infectious Diseases of Humans, Dynamics and Control (Oxford University Press, Oxford, UK,
[30] M. J. Keeling and K. T. D. Eames, J. R. Soc. Int. 2, 295 (2005).
[31] H. Hethcote, SIAM Rev. 42, 599 (2000).
[32] M. Y. Li and J. S. Muldowney, Math. Biosci. 125, 155 (1995).
[33] L. F. Olsen, G. L. Truty, and W. M. Schaffer, Theor. Popul. Biol.
[34] C. Piccardi and R. Casagrandi, Phys. Rev. E 77, 026113 (2008).
[35] R. Pastor-Satorras and A. Vespignani, Phys. Rev. E 65, 036104
[36] N. A. Christakis and J. H. Fowler, PLoS One 5, e12948 (2010).
[37] G. Fournié, J. Guitian, S. Desvaux, V. C. Cuong, D. H. Dung, D. U. Pfeiffer, P. Mangtani, and A. C. Ghani, Proc. Natl. Acad. Sci. (USA) 110, 9177 (2013).
[38] F. Hoppensteadt, Mathematical Theories of Populations, Demographics, Genetics and Epidemics (SIAM, Philadelphia, 1975).
[39] See Supplemental Material at http://link.aps.org/supplemental/10.1103/PhysRevE.91.022809 for the full set of experiments, with a detailed description of the simulation procedure.
|
{"url":"https://123dok.org/document/nq7n3ddz-connectivity-interplays-age-shaping-contagion-networks-vital-dynamics.html","timestamp":"2024-11-11T04:59:24Z","content_type":"text/html","content_length":"186315","record_id":"<urn:uuid:8b2ffbc1-1ce5-47fa-9816-9ee9f1cf7103>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00597.warc.gz"}
|
(PDF) Evolutionary topology optimization of periodic composites for extremal magnetic permeability and electrical permittivity
... To address these issues, a hybrid optimization approach combining GA and Kriging has been introduced (Woo et al. 2012). The development of bidirectional evolutionary structure optimization (BESO)
has also enabled the microstructure design of two-phase composites with remarkable electromagnetic permeability and permittivity (Huang et al. 2012). However, most of the studies mentioned above are
limited to using isotropic materials. ...
|
{"url":"https://www.researchgate.net/publication/234027704_Evolutionary_topology_optimization_of_periodic_composites_for_extremal_magnetic_permeability_and_electrical_permittivity?_iepl%5BgeneralViewId%5D=155PsT4rv0jrOjy0MTmTIDyMsYBlk0icgElA&_iepl%5Bcontexts%5D%5B0%5D=searchReact&_iepl%5BviewId%5D=3vsM4zuY6Qot7J5UStiottDCP2QmyxuN1m0C&_iepl%5BsearchType%5D=publication&_iepl%5Bdata%5D%5BcountLessEqual20%5D=1&_iepl%5Bdata%5D%5BinteractedWithPosition1%5D=1&_iepl%5Bdata%5D%5BwithoutEnrichment%5D=1&_iepl%5Bposition%5D=1&_iepl%5BrgKey%5D=PB%3A234027704&_iepl%5BtargetEntityId%5D=PB%3A234027704&_iepl%5BinteractionType%5D=publicationTitle","timestamp":"2024-11-02T06:21:40Z","content_type":"text/html","content_length":"947549","record_id":"<urn:uuid:881e796e-a383-4b27-906d-1d6a72a993c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00858.warc.gz"}
|
Parallelograms, Squares, Rhombuses, and Rectangles Flashcards | Knowt
opposite sides are congruent, consecutive angles are supplementary, opposite angles are congruent, opposite sides are parallel, diagonals bisect each other
all parallelogram facts, diagonals are perpendicular, congruent sides, diagonals bisect the opposite angles
all parallelogram facts, made up of right angles, diagonals are congruent
|
{"url":"https://knowt.com/flashcards/1040883e-d55e-4659-8cbe-2493d169cd38","timestamp":"2024-11-11T17:06:05Z","content_type":"text/html","content_length":"335425","record_id":"<urn:uuid:3c85aaf7-cf39-4b54-a1e8-805df27b5f9c>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00055.warc.gz"}
|
Show that \[\frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2}\] with $\theta_1$ and $\theta_2$ defined as in the previous answer. Some of you may recognise this as Snell’s law, which describes the
path that light takes when travelling from air to glass or water, for example (refraction). This is because light travels more slowly in denser materials and likes to take the quickest path (Fermat’s
principle). Show hint
We want to choose $x_1$ so that the time is minimised, so write an equation for the time, and differentiate it with respect to $x_1$, or $\theta_1$ if you prefer. Note that $y_1$, $y_2$, $v_1$,
$v_2$, and $x_1 + x_2$ are constants.
The time taken to reach the boat is $t = s_1/v_1 + s_2/v_2$. In order to minimise this, we must set its derivative to zero, so we need to know the derivatives of $s_1$ and $s_2$. \begin{align*} s_1^2
&= x_1^2 + y_1^2\\ 2s_1 \frac{\dif s_1}{\dif x_1} &= 2x_1 + 0\\ \frac{\dif s_1}{\dif x_1} &= \frac{x_1}{s_1}\\ &= \sin\theta_1 \end{align*} To find the derivative of $s_2$ we need to know $\frac{\dif
x_2}{\dif x_1}$. Since $x_1 + x_2$ is the east-west distance from the start to the boat, it is constant, so: \begin{align*} \frac{\dif }{\dif x_1}(x_1 + x_2) &= 0\\ 1 + \frac{\dif x_2}{\dif x_1} &
= 0\\ \frac{\dif x_2}{\dif x_1} &= -1. \end{align*} \begin{align*} s_2^2 &= x_2^2 + y_2^2\\ 2s_2 \frac{\dif s_2}{\dif x_1} &= 2x_2\frac{\dif x_2}{\dif x_1} + 0\\ &= -2x_2\\ \frac{\dif s_2}{\dif x_1}
&= -\frac{x_2}{s_2}\\ &= -\sin\theta_2 \end{align*} \begin{align*} \frac{\dif t}{\dif x_1} &= \frac{1}{v_1} \frac{\dif s_1}{\dif x_1} + \frac{1}{v_2}\frac{\dif s_2}{\dif x_1}\\ 0 &= \frac{\sin\
theta_1}{v_1} - \frac{\sin\theta_2}{v_2}\\ \frac{\sin\theta_1}{v_1} &= \frac{\sin\theta_2}{v_2} \end{align*}
You may have chosen to differentiate with respect to $\theta_1$ rather than $x_1$ (or $x_2$ or $\theta_2$ – all valid choices). In this case the constant east-west distance from start to boat is $y_1
\tan\theta_1 + y_2\tan\theta_2$, giving: \begin{align*} \frac{\dif }{\dif \theta_1}(y_1\tan\theta_1 + y_2\tan\theta_2) &= 0\\ y_1\sec^2\theta_1 + y_2\sec^2\theta_2 \frac{\dif \theta_2}{\dif \theta_1}
&= 0\\ \frac{\dif \theta_2}{\dif \theta_1} &= -\frac{y_1\sec^2\theta_1}{y_2\sec^2\theta_2}\\ &= -\frac{y_1\cos^2\theta_2}{y_2\cos^2\theta_1} \end{align*} The time taken to reach the boat is: \begin
{align*} t &= \frac{y_1\sec\theta_1}{v_1} + \frac{y_2\sec\theta_2}{v_2}.\\ \frac{\dif t}{\dif \theta_1} &= \frac{y_1\sin\theta_1}{v_1\cos^2\theta_1} + \frac{y_2\sin\theta_2}{v_2\cos^2\theta_2} \frac
{\dif \theta_2}{\dif \theta_1}\\ 0 &= \frac{y_1\sin\theta_1}{v_1\cos^2\theta_1} + \frac{y_2\sin\theta_2}{v_2\cos^2\theta_2} \left(-\frac{y_1\cos^2\theta_2}{y_2\cos^2\theta_1}\right)\\ &= \frac{y_1\
sin\theta_1}{v_1\cos^2\theta_1} - \frac{y_1\sin\theta_2}{v_2\cos^2\theta_1}\\ &= \frac{y_1}{\cos^2\theta_1}\left(\frac{\sin\theta_1}{v_1} - \frac{\sin\theta_2}{v_2}\right)\\ \frac{\sin\theta_1}{v_1}
&= \frac{\sin\theta_2}{v_2} \end{align*}
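As a numerical cross-check (ours, not part of the original solution), one can minimise the travel time directly and verify that the Snell ratio agrees on both sides at the optimum; $y_1$, $y_2$, $v_1$, $v_2$ and the total east-west distance $D$ are arbitrary example values.

```python
# Hedged sketch: minimise t(x1) = s1/v1 + s2/v2 numerically and check Snell's law.
import numpy as np
from scipy.optimize import minimize_scalar

y1, y2, v1, v2, D = 3.0, 2.0, 4.0, 1.5, 5.0   # arbitrary positive constants

def travel_time(x1):
    s1 = np.hypot(x1, y1)          # path length in the first medium
    s2 = np.hypot(D - x1, y2)      # path length in the second medium
    return s1 / v1 + s2 / v2

res = minimize_scalar(travel_time, bounds=(0.0, D), method="bounded")
x1 = res.x
sin1 = x1 / np.hypot(x1, y1)             # sin(theta1)
sin2 = (D - x1) / np.hypot(D - x1, y2)   # sin(theta2)
# at the optimum, sin1/v1 and sin2/v2 agree to solver tolerance
```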
|
{"url":"http://thawom.com/q-snell.html","timestamp":"2024-11-07T12:11:18Z","content_type":"text/html","content_length":"8154","record_id":"<urn:uuid:256370c6-3810-4aa7-847c-83eb4c28ac2d>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00818.warc.gz"}
|
rules for adding and subtracting integers
Addition of Integers | ChiliMath
Adding & Subtracting Integers Notes - MATH IN DEMAND
Adding and Subtracting Integers Rules | Pre-Algebra Add Subtract ...
Rebecca S. Lindsays Tutoring Service
Adding Integers - Rules for Addition of Integers | How to Add ...
Adding and Subtracting Integers Rules | Pre-Algebra Add Subtract ...
Subtracting Integers Rules: Definition and Rules with Examples
Rules for adding integers | TPT
Adding Integers using Rules (solutions, examples, videos ...
Addition and Subtraction of Integers (Rules and Examples)
MATH: Adding & Subtracting Integer Rules - Mini Student Visual | TPT
Integers adding and subtracting | PPT
Adding and Subtracting Integers Rules | Pre-Algebra Add Subtract ...
Scaffolded Math and Science: Integer Rules Visual References for ...
Integer Rules Visual References for Addition and Subtraction
How to Teach Integers: Interactive Ideas & Approach - GeeksforGeeks
Integer Rules Review
Integer Rules by Leopard Land of Math | TPT
Adding and Subtracting Mixed Integers from -10 to 10 (75 Questions ...
My Math Resources - Adding & Subtracting Integers Posters
Subtracting Integers (Directed Numbers) | CK-12 Foundation
Integer Operations | Rules & Examples - Lesson | Study.com
Add & Subtract Integers | Notes & Worksheet - Kraus Math
Adding and Subtracting Integers
Adding & Subtracting Integers Visual Math Lesson {FREE} | Math ...
Adding and Subtracting Integers Posters
Adding and Subtracting Integers: A Step-By-Step Review | How to Add and Subtract Integers
Rules for Adding and Subtracting Integers | Grade 8 Math - Unit 1 - Lesson 1
Integer Rules Poster (teacher made) - Twinkl
|
{"url":"https://worksheets.clipart-library.com/rules-for-adding-and-subtracting-integers.html","timestamp":"2024-11-01T23:49:09Z","content_type":"text/html","content_length":"24677","record_id":"<urn:uuid:f5ba8647-644a-48f3-80dd-118f7d68b01e>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00709.warc.gz"}
|
Colloquium: "On KPZ universality and statistics of Stochastic Flows"
Professor Konstantin Khanin
Department of Mathematics
University of Toronto, Canada
20 May 2024, 12:15
Hall 001, Checkpoint Building
Professor Konstantin Khanin is a Raymond and Beverly Sackler Distinguished Lecturer in Pure Mathematics for the academic year 2023/2024.
We will start by introducing the phenomenon of KPZ (Kardar-Parisi-Zhang) universality. The KPZ problem has been a very active research area over the last 20 years. The field of KPZ is essentially
interdisciplinary. It is related to such areas as probability theory, statistical mechanics, mathematical physics, PDE, SPDE, random dynamics, random matrices and random geometry, to name a few. In
its most general form, the problem can be formulated in the following way. Consider a random geometry on the two-dimensional plane. The main aim is to understand the asymptotic statistical properties of the length of the geodesic connecting two points that are far away from each other, in the limit as the distance between the endpoints tends to infinity. One also wants to study the geometry of random geodesics, in particular how much they deviate from a straight line. It turns out that the limiting statistics for both the length and the deviation are universal, that is, they do not depend on the
details of the random geometry. Moreover, many limiting probability distributions can be found explicitly.
In the second part of the talk we will proceed with a discussion of the geometrical approach to the problem of KPZ universality, which provides an even broader point of view on the problem of
universal statistical behavior.
No previous knowledge of the subject will be assumed.
|
{"url":"https://ias.tau.ac.il/node/3599?gid=19","timestamp":"2024-11-12T10:37:41Z","content_type":"text/html","content_length":"61851","record_id":"<urn:uuid:c32c3a4c-64b2-4264-b76a-e3bb301f7b06>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00244.warc.gz"}
|
Journal of the Korean Institute of Illuminating
The AC two-bus system can be understood as the aggregation of the entire power system, in which the bus voltage and the load P,Q can be interpreted as the current operating voltage and the total
consumers’ demand of the system. And the sending-end and receiving-end voltage of two-bus system can be the starting-point for step-by-step voltage calculation of multiple load points in a
distribution line. Calculating the voltages in a two-bus system is therefore meaningful. The voltage-power equation composed of bus voltages and load power P,Q for an AC two-bus system has been presented in publications. In addition, an explicit formula for calculating the receiving-end voltage of an AC two-bus system with line admittance G-jB has recently been published. This formula can yield a unique and practical solution; however, the conductor parameters used in power system computation and provided by manufacturers have mostly been expressed as impedance R+jX, not as admittance G-jB. In this
paper, the formulae for sending-end and receiving-end voltage of an AC two-bus system with line impedance R+jX are derived using ohmic calculation. And an explicit formula for calculating the
receiving-end voltage of AC two-bus system represented by impedance R+jX is also introduced, from which a unique and feasible solution can be obtained. The line impedance data R+jX can be substituted
directly into the proposed formula without any data conversion. Example calculations show that the results by the proposed formulae are exact and the same as those by existing methods.
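For illustration only, here is a sketch in the spirit of the abstract, using the standard two-bus quadratic voltage relation (a common textbook form, not necessarily the paper's explicit formula); all quantities are per-unit example values.

```python
# Hedged sketch: receiving-end voltage of a two-bus system with line R + jX
# feeding a load P + jQ, via the textbook quadratic in Vr^2 (not the paper's formula).
import math

def receiving_end_voltage(Vs, R, X, P, Q):
    """Feasible root of Vr^4 + (2(PR + QX) - Vs^2) Vr^2 + (P^2 + Q^2)(R^2 + X^2) = 0."""
    b = 2.0 * (P * R + Q * X) - Vs ** 2
    c = (P ** 2 + Q ** 2) * (R ** 2 + X ** 2)
    disc = b * b - 4.0 * c
    if disc < 0.0:
        raise ValueError("no real solution: load exceeds the deliverable power")
    Vr2 = (-b + math.sqrt(disc)) / 2.0   # upper root = the operationally stable solution
    return math.sqrt(Vr2)

# e.g. Vs = 1.0 pu, line 0.01 + j0.05 pu, load 0.8 + j0.3 pu
Vr = receiving_end_voltage(1.0, 0.01, 0.05, 0.8, 0.3)
```

Plugging Vr back into the complex power balance reproduces |Vs| exactly, which is a convenient self-check on the root selection.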
|
{"url":"http://journal.auric.kr/jieie/XmlViewer/f431357","timestamp":"2024-11-12T00:49:23Z","content_type":"application/xhtml+xml","content_length":"213380","record_id":"<urn:uuid:f190317a-18da-4138-bf43-1fbbaf652641>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00028.warc.gz"}
|
How do you simplify (10 - 4 * 13+ 19) div 23 using order of operations? | HIX Tutor
How do you simplify #(10 - 4 * 13+ 19) div 23# using order of operations?
Answer 1
$\left(10 - 4 \cdot 13 + 19\right) \div 23 = -1$
$= (10 - 52 + 19) \div 23$
$= (29 - 52) \div 23$
$= (-23) \div 23 = -1$
Answer 3
To simplify the expression $(10 - 4 \times 13 + 19) \div 23$ using the order of operations (PEMDAS/BODMAS), follow these steps:
1. First, perform the multiplication inside the parentheses: $4 \times 13 = 52$.
2. Next, perform the addition and subtraction inside the parentheses: $10 - 52 + 19 = -23$.
3. Now, perform the division: $-23 \div 23 = -1$.
Therefore, the simplified form of the expression $(10 - 4 \times 13 + 19) \div 23$ is $-1$.
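The same evaluation order can be checked mechanically; Python applies the identical precedence (parentheses first, then multiplication before addition and subtraction):

```python
# Order-of-operations check: multiplication binds tighter than +/-.
result = (10 - 4 * 13 + 19) / 23
print(result)  # -1.0
```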
|
{"url":"https://tutor.hix.ai/question/how-do-you-simplify-10-4-13-19-div-23-using-order-of-operations-8f9af8d4b2","timestamp":"2024-11-03T18:30:20Z","content_type":"text/html","content_length":"572600","record_id":"<urn:uuid:6faaadc2-5966-42ed-b347-90c165925b2c>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00056.warc.gz"}
|
Root Sum Squares Explained Graphically, continued (Part 9 / 13)
The RSS equation for the output mean contains a term involving 2^nd derivatives that is initially non-intuitive:
Why is that second term there?
Examine the two curves in Figure 9-1. The first is a line and the second an exponential. The 2^nd derivative of a curve captures how its slope changes. The slope of the line (its 1^st derivative) remains constant, so its 2^nd derivative is zero. But the slope of the exponential does change across the input value range. The slope starts as a large value (steep) and then gradually levels out (not so steep). Its 2^nd derivative is a negative value because the slope goes down (as opposed to going up) across the value range.
Applying the same normal input variation to both curves results in different output distributions. The exponential distorts the normal curve when projecting output variation (see Figure 9-2): the two halves of the input variation pass through regions of different slope.
When we project and transfer the normal input variation through the exponential curve, something different happens. Since the slope changes around the point of interest (it is a greater slope with
the lower half of the normal input variation and a lesser slope with the upper half), it has the effect of distorting the "normal" output curve. The lower half of input variation drives a wider range of output variation than the upper half does; it has a longer tail. This has the effect of skewing the distribution and "shifting" the mean output response downwards. By placing the 2^nd derivative value (which is negative) in the RSS output mean equation, the same effect is captured mathematically. A negative 2^nd derivative value "shifts" the mean downward just as the exponential curve does to our output variation.
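The mean-shift effect described above can be sketched numerically (our notation, not the article's): a second-order Taylor expansion of y = f(x) around the input mean gives E[y] ≈ f(μ) + ½ f″(μ)σ², so a negative second derivative pulls the output mean below f(μ). A quick Monte Carlo run with an assumed leveling-out exponential confirms the direction of the shift:

```python
# Hedged sketch of the 2nd-derivative mean correction vs. Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.2
f = lambda x: 1.0 - np.exp(-2.0 * x)     # steep at first, then levels out
f2 = lambda x: -4.0 * np.exp(-2.0 * x)   # second derivative: negative everywhere

taylor_mean = f(mu) + 0.5 * f2(mu) * sigma ** 2
mc_mean = f(rng.normal(mu, sigma, 1_000_000)).mean()
# both estimates fall below the naive value f(mu): the mean is "shifted" down
```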
Now with visual concepts and understanding around RSS planted in our minds, let us turn the spotlight on the topic of Monte Carlo Analysis.
|
{"url":"https://www.crystalballservices.com/Research/Articles-on-Analytics-Risk/root-sum-squares-explained-graphically-continued-part-9-13","timestamp":"2024-11-04T10:58:15Z","content_type":"application/xhtml+xml","content_length":"63874","record_id":"<urn:uuid:4f4208b0-4584-4c03-af88-8afbe5d3514c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00583.warc.gz"}
|
patterns and sequences worksheet pdf
Grade 10 Math Module 1 searching for patterns, sequence and series 1. There are two different ways you will be expected to work out a sequence: A term-to-term rule – each term in the sequence is
calculated by performing a fixed set of operations (such as “multiply by 2 and add 3 ”) to the term(s) before it. Number sequences in a grid, patterns and explanations, odd and even, extend number
sequences. About this resource. Updated: Jan 12, 2015. pptx, 265 KB. This extensive
collection of series and sequence worksheets is recommended for high school students. This is a math PDF printable activity sheet with several exercises. Sequences. Pattern recognition and prediction
skills lay an important foundation for subjects like math, poetry, and more. In maths, a sequence is a list of numbers, algebraic terms, shapes, or other mathematical objects that follow a pattern or
rule. Learn the Rest of the 6s; This worksheet will help your students practice their six times table and number sequences. These number pattern worksheets deal with addition rules, and you'll find
patterns and rules involve smaller numbers as well as larger addends. You can do the exercises online or download the worksheet as pdf. Explore various types of sequences and series topics like
arithmetic series, arithmetic sequence, geometric sequence, finite and infinite geometric series, special series, general sequence and series, recursive sequence and partial sum of the series.
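The two kinds of rules mentioned in these materials, a term-to-term (recursive) rule and a position (function) rule, can be illustrated with a short snippet (names and example values are ours):

```python
# Term-to-term rule: apply an operation repeatedly to the previous term.
def term_to_term(first, rule, n_terms):
    seq = [first]
    for _ in range(n_terms - 1):
        seq.append(rule(seq[-1]))
    return seq

# "multiply by 2 and add 3", starting from 1
tt = term_to_term(1, lambda t: 2 * t + 3, 5)      # [1, 5, 13, 29, 61]

# Position (function) rule: nth term computed directly, e.g. arithmetic a + (n-1)d
arith = [2 + (n - 1) * 3 for n in range(1, 6)]    # [2, 5, 8, 11, 14]
```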
Showing top 8 worksheets in the category - Sequence And Pattern. • To apply the knowledge of arithmetic sequences in a variety of contexts (Prior Knowledge). These worksheets are appropriate for 4th
and 5th grade, but might be introduced later if the topic wasn't covered earlier in the curriculum. The topic of Number Patterns and Sequences from the Year 7 book of the Mathematics Enhancement
Program. They are designed as input/output boxes. Must Practice 11 Plus (11+) Patterns and Sequences Past Paper Questions. Free. There are 2 different types of
rules that a number pattern can be based upon: 1 A recursive rule – used to continue the sequence by doing something to the number before it. 1. We have crafted many worksheets covering various
aspects of this topic, number patterns, geometric pattern, and many more. Math Pattern Worksheets. Patterns and Sequence Worksheets. We say geometric sequences have a common ratio. This worksheet is
a supplementary fourth grade resource to help teachers, parents and children at home and in school. Number patterns Online Math Games Number pattern … Picture pattern worksheets contain repeating
pattern, growing pattern, size, shapes and color pattern, equivalent pattern, cut-paste activities and more. Click on the images to view, download, or print them. Our number patterns worksheet is a
great way to challenge children to think about items in the sequence which are not just ‘next’. We hope you find them very useful and interesting. Teaching children to solve number patterns is a
great way to help develop their pattern awareness and learn to recognise patterns, sequences and more. A function rule is a rule based on the position of a number. What is the
Difference Between a Pattern and a Sequence? What you are expected to … Below, you will find a wide range of our printable worksheets in chapter Patterns of section Geometry
and Patterns. We start out with basic concepts and slowly progress to more difficult outlets of understanding. Determine which pictures come next in each pattern shown. Skip Counting Worksheets.
Working with mathematical patterns and sequences can be difficult for students. Use this patterns, sequences and series worksheet to practice questions on quadratic patterns to arithmetic sequences
as well as series to geometric sequences and series. This chapter covers investigating number patterns that involve a common difference and the general term is
linear. Here is a collection of our printable worksheets for topic Number Patterns of chapter Factors and Patterns in section Multiplication and Division. Along with Detailed Answers, Timing, pdf
download. Picture Patterns. These worksheets are appropriate for Third Grade Math. Next. Describing sequences. Introduction to Patterns Aims • Recognise a repeating pattern • Represent patterns with
tables, diagrams and graphs • Generate arithmetic expressions from repeating patterns Prior Knowledge Students should have some prior knowledge of drawing basic linear graphs on the Cartesian plane.
Go over the lessons and have fun in working with the exercises. Maths worksheets and activities. These past paper questions help you to master the 11+ Exam Maths Questions. Patterns worksheet for 4th
grade children. Patterns online worksheet for 5. First Grade Number Pattern and Sequence Worksheets Recognizing patterns is an essential topic for students because it increases their visual-spatial
awareness, and they also reinforce the mathematical concept of predictable relationships between numbers. Before look at the worksheet, if you would like to know the stuff related arithmetic
sequences and series, Please click here. Subjects: Math, Basic Operations, Numbers. Worksheets for teaching students to … Read more. A geometric sequence is a sequence that has a pattern of
multiplying by a constant to determine consecutive terms. Visit now! Preview and details Files included (1) pptx, 265 KB. Arithmetic Sequences and Series Worksheet - Problems. Created: Oct 28, 2011.
Number Patterns Worksheets can help you teach relationships of numbers to create and extend number patterns and connect the idea that addition and subtraction have a relationship to counting. This is
a math PDF printable activity sheet with several exercises. Learn the Rest of the 7s; Test your students knowledge of the seven times tables by … Two terms that we come across very frequently in our
lives is pattern and sequence. Determine the nth term of the sequence and find the sum of the sequence on Math-Exercises.com - Collection of math exercises. Number pattern worksheets contain reading
patterns on number lines, showing the rule, increasing and decreasing pattern, writing the rules, geometric pattern, pattern with two-rules and more. A brief description of the worksheets is on each
of the worksheet widgets. With Number Pattern Worksheets, students will be adding and subtracting 1s, 2s, 5s, 10s, skip counting numbe . These worksheets are similar to number patterns in that
students must find the correct rule. With our patterns worksheets and printables, students of all ages and levels can explore patterns, use their reasoning skills to complete them, and even create
their own! Find more similar flip PDFs like Maths-F1-2.Number Patterns and Sequences. Sequences - patterns. Module 1 Searching for Patterns in Sequences, Arithmetic, Geometric and Others What this
module is all about This module will teach you how to deal with a lot of number patterns. Must Practice 11 Plus (11+) Number Patterns and Sequences Past Paper Questions. growing pattern, arithmetic
sequence, geometric sequence (6.17) Student/Teacher Actions (what students and teachers should be doing to facilitate learning) Note: This lesson will take more than one day to complete. Whether its
bioscience, computer science, mathematics, or daily life, we very commonly use these terms, mostly interchangeably. You'll find patterns of fives, patterns of tens, patterns of fifteens and patterns
of 25 here for practice. December 29, 2015 Mark Weddell Patterns and Sequences. Number patterns higher worksheet
for 7th grade children. A series of activities, all of which test pupils’ ability to place shapes in a particular order. It has an answer key attached on the second page. Part A: Arithmetic Sequence
1. 7, 10, 13, 16, 19. (a) Find the 10th term in this number sequence. Some of the worksheets displayed are Introduction to sequences, Number sequences, Number patterns,
Number patternsmep pupil text 12, Number patterns 10 18 26 34 42, Growing patterns and sequences, Arithmetic sequences date period, Concept 16 arithmetic geometric sequences. Math exercises on
sequences. Sequences - patterns. Maths-F1-2.Number Patterns and Sequences was published by mano2264 on 2014-09-18. Many of the number pattern worksheets on this page deal with increments that are
frequently seen in real-life sequences of numbers, and being able to identify these patterns quickly is a useful skill. "Here are the first five terms in a number sequence. Number patterns including
decimals worksheet pdf; Patterns mixed with whole numbers and decimals worksheet; Mixed number pattern printable with decimals involved; Sequence of numbers in a mixed pattern worksheet; Larger
numbers in a number pattern, find the progression worksheet. KS1, KS2 Lower The Building Game – Ngfl Cymru. 2 A function rule – used to predict any number by applying the rule to the position of the
number. Along with Detailed Answers, Timing, pdf download. The patterns are formed by adding or subtracting whole numbers from
mixed numbers or decimals. (2) "(b) Write an expression, in terms of n, for the nth term of this number sequence… Download the fully worked out memorandum. Students have prior knowledge of: •
Patterns • Basic number systems • Sequences • Ability to complete tables • Basic graphs in the co-ordinate plane • Simultaneous equations with 2 unknowns. Number sequences worksheets pdf
downloads for grade 7. You may want to spend a day on each type of sequence and a third day comparing the two types of sequences. Download Maths-F1-2.Number Patterns and Sequences PDF for free. This
worksheet is a supplementary seventh grade resource to help teachers, parents and children at home and in school. This KS3 activity introduces sequences by looking at shape patterns and how to extend
them and define rules. n term (T n) of an arithmetic sequence Learning Outcomes. sequences and series worksheet pdf, Worksheet given in this section is much useful to the students who would like to
practice problems on arithmetic sequences and series. End of chapter exercises. However, the question that arises is whether these two terms are the same or not. Info. Check Pages 1 - 5 of
Maths-F1-2.Number Patterns and Sequences in the flip PDF version. K Worksheet by Kuta Software LLC Kuta Software - Infinite Algebra 2 Name_____ Introduction to Sequences Date_____ Period____ Find the
next three terms in each sequence. We look at various strategies students can use to solve these in our lesson and worksheet series. Challenge students to identify and continue each number pattern in
this math worksheet. Chapter 3: Number patterns. Pupils will also learn to differentiate between colours. These past paper questions help you to master the 11+ Exam Maths Questions. Patterns Workbook
(all teacher worksheets - large PDF) Pattern and Number Sequence Challenge Workbook More Difficult Pattern and Number Sequence Challenge Workbook Included in these questions are exam level questions
which could be used for revision of the number patterns section. It has an answer key attached on the second page. These number patterns are called sequences. The formula is a_n = a_(n-1) · r. Geometric
Sequences (examples, solutions, worksheets ... With inputs from experts, These printable worksheets are tailor-made for 7th grade, 8th grade, and high school students. All worksheets are free for
individual and non-commercial use.
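The two kinds of rules this collection keeps returning to — a recursive rule such as "multiply by 2 and add 3", and a function rule that gives a term directly from its position (e.g. the geometric sequence a_n = a_(n-1) · r, whose function-rule form is a_n = a_1 · r^(n-1)) — can be sketched in a few lines. This is an illustration, not code from any of the worksheets:

```python
def recursive_sequence(first, step, n):
    """Recursive rule: each term is built from the term before it.
    'Multiply by 2 and add 3' is step = lambda x: 2 * x + 3."""
    terms = [first]
    for _ in range(n - 1):
        terms.append(step(terms[-1]))
    return terms

def geometric_term(a1, r, n):
    """Function rule: the nth term of a geometric sequence straight from
    its position, a_n = a1 * r**(n - 1)."""
    return a1 * r ** (n - 1)

print(recursive_sequence(1, lambda x: 2 * x + 3, 5))  # [1, 5, 13, 29, 61]
print(geometric_term(3, 2, 4))                        # 24
```

The recursive helper also covers geometric sequences directly, via `step = lambda x: x * r`.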
|
{"url":"https://www.wheelofwellbeing.org/king-of-paqau/251509-patterns-and-sequences-worksheet-pdf","timestamp":"2024-11-14T18:52:16Z","content_type":"text/html","content_length":"25295","record_id":"<urn:uuid:4c9092fb-7f6f-416f-bcac-fc362f724db4>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00555.warc.gz"}
|
What do you call a shape that is based on a circle, except that not all points are at a distance r from the center? Instead, the distance varies within a set range. I call it a circloid.
My goal with this shape was to create a circular form that feels more organic than a perfect circle, as a base to generate more complex shapes. There are so many circular shapes in nature, but they
are rarely perfectly round, as surrounding conditions influence their final shape. Think of the growth rings of trees, the shape of mushroom caps, or the petals of flowers. A perfect circle doesn't
catch the eye because it's expected. A circloid — a circle that's slightly off-balance — will attract a viewer’s interest.
In the current version, 2/3 of the circloid's circumference is defined by a sine function, while the remaining 1/3 is an arc that closes the shape. Each time a circloid is generated, the starting
point’s angle varies, so the distortion doesn’t always occur in the same direction.
How far a circloid deviates from a perfect circle of the same base radius depends on the relationship between frequency, amplitude, and base radius.
High amplitude and lower frequency distort the circle in one direction.
Low amplitude and higher frequency distort it in multiple directions.
For a smooth circloid, consider the following ratios:
radius / amplitude should be below 7.0.
frequency / amplitude should be below 0.5.
In a future version, I can see adding more parameters or predefined options for users to customize the shape exactly as needed.
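Since the post doesn't include code, here is a hedged sketch of how such a shape might be generated: the first 2/3 of the circumference follows a sine-modulated radius, and the final 1/3 blends linearly back to the starting radius so the curve closes. The exact closing-arc construction in the original isn't specified, so that part is an assumption:

```python
import math

def circloid_points(radius, amplitude, frequency, n=300, phase=0.0):
    """Sample n (x, y) points of a circloid.

    First 2/3 of the circumference: r(theta) = radius + amplitude * sin(frequency * theta).
    Last 1/3: a linear blend from the sine value at the 2/3 mark back to the
    starting radius, so the first and last points coincide.
    """
    pts = []
    sine_end = 2.0 * math.pi * (2.0 / 3.0)                    # end of the sine section
    r_start = radius + amplitude * math.sin(frequency * phase)
    r_end = radius + amplitude * math.sin(frequency * (phase + sine_end))
    for i in range(n):
        t = 2.0 * math.pi * i / (n - 1)                       # 0 .. 2*pi around the shape
        theta = phase + t                                     # varying the phase shifts the distortion
        if t <= sine_end:
            r = radius + amplitude * math.sin(frequency * theta)
        else:
            u = (t - sine_end) / (2.0 * math.pi - sine_end)   # 0 -> 1 over the closing arc
            r = (1.0 - u) * r_end + u * r_start
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

A call respecting the ratios above would be, for instance, `circloid_points(10.0, 2.0, 0.9)` (radius/amplitude = 5.0 < 7.0, frequency/amplitude = 0.45 < 0.5).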
|
{"url":"https://www.mirabellensaftpres.se/circloids/","timestamp":"2024-11-05T03:31:23Z","content_type":"text/html","content_length":"8527","record_id":"<urn:uuid:da778964-4889-4c8d-a559-98bdff985248>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00222.warc.gz"}
|
Lesson 10
Comparing Situations by Examining Ratios
Lesson Narrative
In previous lessons, students learned that if two situations involve equivalent ratios, we can say that the situations are described by the same rate. In this lesson, students compare ratios to see
if two situations in familiar contexts involve the same rate. The contexts and questions are:
• Two people run different distances in the same amount of time. Do they run at the same speed?
• Two people pay different amounts for different numbers of concert tickets. Do they pay the same cost per ticket?
• Two recipes for a drink are given. Do they taste the same?
In each case, the numbers are purposely chosen so that reasoning directly with equivalent ratios is a more appealing method than calculating how-many-per-one and then scaling. The reason for this is
to reinforce the concept that equivalent ratios describe the same rate, before formally introducing the notion of unit rate and methods for calculating it. However, students can use any method.
Regardless of their chosen approach, students need to be able to explain their reasoning (MP3) in the context of the problem.
Learning Goals
Teacher Facing
• Choose and create diagrams to help compare two situations and explain whether they happen at the same rate.
• Justify that two situations do not happen at the same rate by finding a ratio to describe each situation where the two ratios share one value but not the other, i.e., $a:b$ and $a:c$, or $x:z$
and $y:z$.
• Recognize that a question asking whether two situations happen “at the same rate” is asking whether the ratios are equivalent.
Student Facing
Let’s use ratios to compare situations.
Student Facing
• I can decide whether or not two situations are happening at the same rate.
• I can explain what it means when two situations happen at the same rate.
• I know some examples of situations where things can happen at the same rate.
Glossary Entries
• same rate
We use the words same rate to describe two situations that have equivalent ratios.
For example, a sink is filling with water at a rate of 2 gallons per minute. If a tub is also filling with water at a rate of 2 gallons per minute, then the sink and the tub are filling at the
same rate.
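The cross-multiplication check behind "same rate" questions can be sketched in a few lines. This is an illustration, not part of the lesson materials, and the runner and ticket numbers below are made up:

```python
def same_rate(a, b, c, d):
    """True when the ratio a:b is equivalent to the ratio c:d.

    Two ratios are equivalent exactly when their cross products match:
    a:b == c:d  <=>  a*d == b*c.
    """
    return a * d == b * c

# Hypothetical runners: 6 miles in 1 hour vs. 12 miles in 2 hours -> same speed.
print(same_rate(6, 1, 12, 2))   # True
# Hypothetical tickets: 4 for $50 vs. 6 for $78 -> not the same cost per ticket.
print(same_rate(4, 50, 6, 78))  # False
```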
|
{"url":"https://im.kendallhunt.com/MS/teachers/1/2/10/preparation.html","timestamp":"2024-11-01T23:35:26Z","content_type":"text/html","content_length":"80227","record_id":"<urn:uuid:9db44d7e-fa15-4da0-9823-e6e3628d9fa6>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00220.warc.gz"}
|
Introductory Chemical Engineering Thermodynamics, 2nd ed.
IE users: Enable compatibility view if voting does not display or work properly.
Rankine Cycle Introduction (LearnChemE.com, 4min) The Carnot cycle becomes impractical for common large scale application, primarily because H2O is the most convenient working fluid for such a
process. When working with H2O, an isentropic turbine could easily take you from a superheated region to a low quality steam condition, essentially forming large rain drops. To understand how this
might be undesirable, imagine yourself riding through a heavy rain storm at 60 mph with your head outside the window. Now imagine doing it 24/7/365 for 10 years; that's how long a high-precision,
maximally efficient turbine should operate to recover its price of investment. Next you might ask why not use a different working fluid that does not condense, like air or CO2. The main problem is
that the heat transfer coefficients of gases like these are about 40 times smaller than those for boiling and condensing H2O. That means that the heat exchangers would need to be roughly 40 times
larger. As it is now, the cooling tower of a nuclear power plant is the main thing that you see on the horizon when approaching from far away. If that heat exchanger was 40 times larger... that would
be large. And then we would need a similar one for the nuclear core. Power cycles based on heating gases do exist, but they are for relatively small power generators.
With this background, it may be helpful to review the relation between the Carnot and Rankine cycles. (LearnChemE.com, 6min) The Carnot cycle is an idealized conceptual process in the sense that
it provides the maximum possible fractional conversion of heat into work (aka. thermal efficiency, ηθ).
Comprehension Questions:
1. Why is the Carnot cycle impractical when it comes to running steam through a turbine? How does the Rankine cycle solve this problem?
2. Why is the Carnot cycle impractical when it comes to running steam through a pump? How does the Rankine cycle solve this problem?
3. It is obvious which temperatures are the "high" and "low" temperatures in the Carnot cycle, but not so much in the Rankine cycle. The "boiler" in a Rankine cycle actually consists of "simple
boiling" where the saturated liquid is converted to saturated vapor, and superheating where the saturated vapor is raised to the temperature entering the turbine. When comparing the thermal
efficiency of a Rankine cycle to the Carnot efficiency, should we substitute the temperature during "simple" boiling, or the temperature entering the turbine into the formula for the Carnot
efficiency? Explain.
Using XSteam Excel (4:46) (msu.edu)
This utility is helpful once you have learned how to interpolate reliably. It saves the tedium.
Using XSteam Matlab (4:20) (msu.edu)
This utility is helpful once you have learned how to interpolate reliably. It saves the tedium.
Thermal Efficiency with a 1-Stage Rankine Cycle. (uakron.edu, 12min) Steam from a boiler enters a turbine at 350C and 1.2MPa and exits at 0.01MPa and saturated vapor; compute the thermal efficiency (
η[θ]) of the Rankine cycle based on this turbine. (Note that this is something quite different from the turbine's "expander" efficiency, η[E].) This kind of calculation is one of the elementary
skills that should come out of any thermodynamics course. Try to pause the video often and work out the answer on your own whenever you think you can. You will learn much more about the kinds of
mistakes you might make if you take your best shot, then use the video to check yourself. Then practice some more by picking out other boiler and condenser conditions and turbine efficiencies. FYI:
the conditions of this problem should look familiar because they are the same as the turbine efficiency example in Chapter 4. That should make it easy for you to take your best shot.
Comprehension Questions:
1. The entropy balance is cited in this video, but never comes into play. Why not?
2. Steam from a boiler enters a turbine at 400C and 2.5 MPa and exits a 100% efficient turbine at 0.025MPa; compute the Rankine efficiency. Comment on the practicality of this process. (Hint: review
Chapter 4 if you need help with turbine efficiency.)
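To make the comparisons in these exercises concrete, here is a hedged sketch (not from the textbook) of the two efficiency formulas involved: the Carnot limit η = 1 − T_C/T_H in absolute temperatures, and the Rankine thermal efficiency η_θ = (W_turbine − W_pump)/Q_boiler written in terms of per-unit-mass enthalpies, taking the boiler-inlet state as the pump outlet:

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Maximum fraction of heat convertible to work between two reservoirs.
    Temperatures must be absolute (Kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

def rankine_thermal_efficiency(h_turbine_in, h_turbine_out, h_pump_in, w_pump):
    """eta_theta = net work out / heat added in the boiler.

    h_turbine_in:  enthalpy of steam entering the turbine (boiler outlet)
    h_turbine_out: enthalpy leaving the turbine
    h_pump_in:     enthalpy of the condensate entering the pump
    w_pump:        pump work per unit mass (raises the boiler-inlet enthalpy)
    """
    w_turbine = h_turbine_in - h_turbine_out
    q_boiler = h_turbine_in - (h_pump_in + w_pump)  # boiler inlet = pump outlet
    return (w_turbine - w_pump) / q_boiler
```

The enthalpy values themselves still have to come from the steam tables or the XSteam utilities mentioned above.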
Rankine Example Using Steam.xls (uakron.edu, 15min) High pressure steam (254C,4.2MPa, Saturated vapor) is being considered for application in a Rankine cycle dropping the pressure to 0.1MPa; compute
the Rankine efficiency. This demonstration applies the Steam.xls spreadsheet to get as many properties as possible.
Comprehension Questions:
1. Why does the proposed process turn out to be impractical?
2. What would you need to change in the process to make it work? Assume the high and low temperature limits are the same. Be quantitative.
3. What would be the thermal efficiency of your modified process?
|
{"url":"https://chethermo.net/comment/186","timestamp":"2024-11-07T10:06:31Z","content_type":"text/html","content_length":"34556","record_id":"<urn:uuid:5fd81155-a7cf-4c5f-8221-0cb870e399d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00722.warc.gz"}
|
Logistic Regression in Python: Beginner's Step by Step Guide
• What Is Logistic Regression in Python
• Mathematics Involved in Logistic Regression
• Performance Measuring via Confusion Matrices
• Demonstration of Logistic Regression with Python Code
Logistic Regression is one of the most popular Machine Learning algorithms, used for predicting various categorical datasets. Categorical datasets have only two outcomes, either 0/1 or Yes/No.
This article was published as a part of the Data Science Blogathon.
What Is Logistic Regression?
It is a regression-type Machine Learning algorithm deployed to solve classification (categorical) problems.
Problems having binary outcomes, such as Yes/No, 0/1, or True/False, are called classification problems.
Why Apply Logistic Regression?
Linear regression doesn't give a good fit line for problems having only two target values (as shown in the figure). It gives lower accuracy in prediction because a straight line fails to cover such
datasets.
For the best fit of categorical datasets, a curve is required, which is possible with the help of Logistic Regression, as it uses a sigmoid function to make predictions.
Mathematics Involved in Logistic Regression
The main reason behind the bending of the Logistic Regression curve is that it is calculated using the Sigmoid Function (also known as the Logistic Function because it is used in logistic
regression), given below:

σ(x) = 1 / (1 + e^(−x))

This is the mathematical function that has the "S-shaped" curve. The value of the Sigmoid Function always lies between 0 and 1, which is why it is deployed to solve categorical problems
having two possible values.
Implementation Of Logistic Regression In Making Predictions
Logistic Regression deploys the sigmoid function to make predictions in the case of categorical values.
It sets a cut-off point, usually 0.5; when the predicted output of the logistic curve exceeds the cut-off, the dataset is assigned to one category, otherwise to the other.
For Example,
In the case of the Diabetes prediction Model, if the output exceeds the cutoff point, prediction output will be given as Yes for Diabetes otherwise No, if the value is below the cutoff point
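As a minimal sketch of the sigmoid-plus-cutoff mechanism described above (an illustration, not code from the article):

```python
import math

def sigmoid(z):
    """Logistic (sigmoid) function: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def classify(z, cutoff=0.5):
    """Apply the cut-off: probability at or above the cutoff -> class 1
    ("Yes" in the diabetes example), otherwise class 0 ("No")."""
    return 1 if sigmoid(z) >= cutoff else 0

print(sigmoid(0.0))    # 0.5 -- right at the default cut-off
print(classify(2.0))   # 1
print(classify(-2.0))  # 0
```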
Measuring Performance
For measuring the performance of a model solving classification problems, the confusion matrix is used. Its entries are interpreted below.
Key terms:
1. – TN Stands for True Negatives(The predicted(negative) value matches the actual(negative) value)
2. – FP stands for False Positives (The actual value, was negative, but the model predicted a positive value)
3. – FN stands for False Negatives(The actual value, was positive, but the model predicted a negative value)
4. – TP stands for True Positives(The predicted(positive) value matched the actual value(positive))
For a good model, one should not have a high number of False Positives or False Negatives.
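The four counts can be turned into the usual summary metrics. This helper is an illustration (not from the article), using the standard definitions:

```python
def confusion_metrics(tp, fp, fn, tn):
    """Accuracy, precision and recall from the four confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total   # fraction of all predictions that were correct
    precision = tp / (tp + fp)     # of predicted positives, fraction that were real
    recall = tp / (tp + fn)        # of real positives, fraction that were caught
    return accuracy, precision, recall
```

For example, reading the matrix printed at the end of this article in scikit-learn's [[TN, FP], [FN, TP]] layout gives `confusion_metrics(tp=70, fp=22, fn=30, tn=145)`, whose accuracy of about 0.805 lines up with the 0.80 in the classification report.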
Key Features Of Logistic Regression
1. Logistic regression is one of the most popular Machine Learning algorithms, used in the Supervised Machine Learning technique. It is used for predicting the categorical dependent variable, using a
given set of independent variables.
2. It predicts the output of a categorical variable, which is discrete in nature. It can be either Yes or No, 0 or 1, True or False, etc., but instead of giving the exact value 0 or 1, it gives
the output as a probability that lies between 0 and 1.
3. It is similar to Linear Regression. The only difference is that Linear Regression is used for solving Regression problems, whereas Logistic regression is used for solving the classification
problems/Categorical problems.
4. In Logistic regression, the "S"-shaped logistic (sigmoid) function is used as the fitting curve, which gives output lying between 0 and 1.
Types of Logistic Regression
Binomial Logistic Regression deals with problems whose target variable has only two possible values, 0 or 1,
which can signify Yes/No, True/False, Dead/Alive, and other categorical values.
Multinomial Logistic Regression deals with problems whose target variable can have 3 or more values that are unordered in nature. Those values don't have any quantitative significance.
For example: Type 1 House, Type 2 House, Type 3 House, etc.
Ordinal Logistic Regression, like Multinomial Logistic Regression, deals with problems having 3 or more target values. The main difference is that, unlike Multinomial, those values
are well ordered and hold quantitative significance.
For example: evaluation of skill as Low, Average, Expert.
Python Code Implementation
[ Note: The Datasets Being Taken is The Titanic Dataset]
Importing Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
Importing the Data set
Python Code:
import pandas as pd
import numpy as np
# import matplotlib.pyplot as plt
# import seaborn as sns
titanic_data = pd.read_csv('titanic_train.csv')
Performing Exploratory data analysis:
1. Checking various null entries in the dataset, with the help of heatmap
2.Visualization of various relationships between variables
3. Using Box Plot to Get details about the distribution
sns.heatmap(titanic_data.isnull(), cbar=False)
sns.countplot(x='Survived', data=titanic_data)
sns.countplot(x='Survived', hue='Sex', data=titanic_data)
sns.countplot(x='Survived', hue='Pclass', data=titanic_data)
Age and Cabin Have Null Entries
sns.boxplot(x=titanic_data['Pclass'], y=titanic_data['Age'])
Using function to replace null entries
def input_missing_age(columns):
age = columns[0]
passenger_class = columns[1]
if pd.isnull(age):
if(passenger_class == 1):
return titanic_data[titanic_data['Pclass'] == 1]['Age'].mean()
elif(passenger_class == 2):
return titanic_data[titanic_data['Pclass'] == 2]['Age'].mean()
elif(passenger_class == 3):
return titanic_data[titanic_data['Pclass'] == 3]['Age'].mean()
return age
Filling the missing Age data
titanic_data['Age'] = titanic_data[['Age', 'Pclass']].apply(input_missing_age, axis = 1)
Drop null data
titanic_data.drop('Cabin', axis=1, inplace = True)
titanic_data.dropna(inplace = True)
Create dummy variables for Sex and Embarked columns
sex_data = pd.get_dummies(titanic_data['Sex'], drop_first = True)
embarked_data = pd.get_dummies(titanic_data['Embarked'], drop_first = True)
Add dummy variables to the DataFrame and drop non-numeric data
titanic_data = pd.concat([titanic_data, sex_data, embarked_data], axis = 1)
titanic_data.drop(['Name', 'PassengerId', 'Ticket', 'Sex', 'Embarked'], axis = 1, inplace = True)
Print the finalized data set
print(titanic_data.head())
Split the data set into x and y data
y_data = titanic_data['Survived']
x_data = titanic_data.drop('Survived', axis = 1)
Split the data set into training data and test data
from sklearn.model_selection import train_test_split
x_training_data, x_test_data, y_training_data, y_test_data = train_test_split(x_data, y_data, test_size = 0.3)
Create the model
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
Train the model and create predictions
model.fit(x_training_data, y_training_data)
predictions = model.predict(x_test_data)
Calculate performance metrics
from sklearn.metrics import classification_report
print(classification_report(y_test_data, predictions))
precision recall f1-score support
0 0.83 0.87 0.85 169
1 0.75 0.68 0.72 98
accuracy 0.80 267
macro avg 0.79 0.78 0.78 267
weighted avg 0.80 0.80 0.80 267
Generate a confusion matrix
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test_data, predictions))
[[145 22]
 [ 30 70]]
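As a sanity check, the headline metrics can be recomputed by hand from a confusion matrix of the form [[TN, FP], [FN, TP]]. (The matrix here comes from a separate run of the split, so the counts need not match the classification report above exactly.)

```python
# Recompute metrics from the confusion matrix [[TN, FP], [FN, TP]]
tn, fp, fn, tp = 145, 22, 30, 70

accuracy  = (tn + tp) / (tn + fp + fn + tp)   # fraction of correct predictions
precision = tp / (tp + fp)                    # of predicted survivors, how many survived
recall    = tp / (tp + fn)                    # of actual survivors, how many were found
f1        = 2 * precision * recall / (precision + recall)

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f1, 3))
```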
Alex C.
What do you want to work on?
About Alex C.
Algebra, Calculus, Algebra 2, Pre-Calculus, Calculus BC
Bachelors in Mathematics, General from University of California-Berkeley
PhD in Physics, General from Utah State University
Career Experience
In addition to passionately tutoring for Tutor.com, I currently work as a research specialist in hidden conformal representations of black holes, quantum information, and particle interactions
(2020-current). Before switching to research full time, I taught a physics lab and worked as a university tutor for one year (2019-2020). Before that, I worked as a chef and a private tutor for a few
years (2013-2019).
I Love Tutoring Because
students are often the best teachers of teachers, and I love learning! The satisfaction of helping someone overcome a problem is immediate and personally wonderful, but I feel especially lucky to be
able to see how students (mis)-understand a problem they've never seen before because it lets me share "first time" vision with my students. Everyone thinks differently, and truly listening to
someone else's math is a great way to see the universe in a different way (that I would never be able to find on my own); it is incredible how often deep insights into the universe can be found by
(mis-)understanding something "simple".
Other Interests
Math - Calculus
One word: AMAZING.
Math - Algebra
she was awesome thank you so much
Math - Algebra II
Alex was so sweet, so amazing and really supportive! I would 100% recommend them :)
Math - Algebra II
She went above and beyond with patience to help me! Thank you very much!
HP Forums
I'm interested in any program listing concerning PID stuff. It can be for the HP-41C/42S, 71B, 75C, 67, 48xx, 32SII, 28S, or 25; I don't care which, since I have all of them, as long as it relates to PID.
This describes the classic equation for feedback loop control, normally used in industrial process control. For example, a heating system (like a house) would have temperature as the stimulus and
heat energy as the response. The controller calculates the PID equation using temperature as the variable and the result is the amount of heat to produce at the moment. When started, the heater will
output at 100%, and as the temperature approaches the set point it will then cut back to some small constant output level. The chosen computer must calculate the equation in a loop fast enough to
keep up with the changes in the stimulus. Thus an HP 41 is probably fine for home heating but not for anti lock brakes or a rocket engine.
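To illustrate the feedback loop described above, here is a minimal discrete PID controller in Python rather than calculator RPN. The plant model, gains, and constants are all invented for illustration; real tuning depends on the process:

```python
# Minimal discrete PID loop driving a toy first-order "room heating" plant.
# All constants (gains, heat-loss rate, setpoint) are invented for illustration.
def pid_step(error, state, kp=2.0, ki=0.1, kd=0.0, dt=1.0):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

setpoint, temp, ambient = 20.0, 10.0, 10.0
state = (0.0, 0.0)
for _ in range(500):
    error = setpoint - temp
    u, state = pid_step(error, state)
    u = min(max(u, 0.0), 100.0)            # heater output limited to 0-100%
    # toy plant: heat input minus loss proportional to excess temperature
    temp += 0.1 * u - 0.05 * (temp - ambient)

print(round(temp, 2))
```

With these gains the integral term settles at the steady heater output needed to hold the setpoint against the heat loss, which is the behavior the post describes (full output at start, then a small constant level).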
It's related to industrial automation process. If you want to learn more about this let me know.
You will find a program to compute PID, PI, and Pb parameters for the HP-67/97 calculator in a book called "Practical Process Instrumentation & Control", Volume II. The article, on page 58, is titled "Calculator program for new controller-tuning method" and takes 224 steps on an HP-67. This means it should run with minimal changes on the following "modern HPs": the HP-11 (a tight fit), HP-15, HP-32S (?), HP-42S, and the HP-41 (almost forgot). If you list a fax number, I can photocopy and fax the program listing sometime next week.
I would REALLY appreciate it if you could do that for me. Email me privately and I'll give you my fax number.
What is the role of the ideal solution model in thermodynamics? | Do My Chemistry Online Exam
What is the role of the ideal solution model in thermodynamics? Is it the equilibrium state of a state that has a finite weight? (Specifically, whether a protein would form when the problem becomes
nonsensitive, to change the conformation of the protein, etc.) Or is it the equilibrium state that would decrease if the binding of molecules increases? (Is protein structure the main force in
thermodynamics? If not, perhaps one could either ask for the global or the average of local thermodynamic parameters?) If so, what is the current role of the ideal solution model in thermodynamics?
If the ideal solution is stable and has no weight a state having a finite weight has a finite density followed by a population change. What if an attractive force of the ideal solution is increased
after the binding of molecules, for instance, but before a population shift is imposed by another attractive protein upon which the binding is initiated? The authors discuss some possible aspects of
this, as well as some other possible roles of the ideal solution model for thermodynamics: are these effects general or specific? For instance, a change in the binding of a protein with the
other proteins leads to a state that has a low binding energy, so that they make fewer contacts with the other proteins, and so the protein becomes more accessible to the other proteins. In a
thermodynamic system there is such a model. If the system is not S of the ideal solution then one should not anticipate this behavior if one adopts into the E/S model something is not as favorable as
that of the theory. I gave the only connection I can think of to this issue: is the "mechanism" a parameter? This parameter is the ideal solution of the program, and the "mechanism"
that the equation is solving for is a set of individual parameters. It raises the possibility that such theoretical problems, with associated constraints, might lead to some solutions that are also
thermodynamically inefficient. Are the ideal solution models inversely as effective to the thermodynamics as the set of...
What is the role of the ideal solution model in thermodynamics? Can more
intelligent thermostatists replace the ideal one merely by predicting thermodynamics? Many technologies have already found their uses in recent years or are becoming quite popular among
thermodynamics critics – something that is not easy to predict. Since there has been a widespread interest for thermodynamics in recent years (e.g., in different domains, different fields of science
and energy in the name of the universe), research has recently started to focus on this topic. But there is more and more evidence in the literature that thermodynamics cannot describe this phenomenon.
Especially the concepts introduced by Newton and Heisenberg – if thermodynamics can be defined as a unit of physical knowledge – should not be applied to thermodynamics too. There are great
differences in the physical laws the theory of thermodynamics can predict, and in each of these definitions of good thermodynamics it is necessary that some form of realism is proved. In this sense it is
worth making the point that it is not enough to try to predict the thermodynamics only and use the basic concepts in physics, mathematics and chemistry — and the whole concept can be improved and
modified even more so. What is a good, accurate and simple way to predict change in temperature, density or any other parameter? Good thermodynamics books (page 91), after explaining that there are some simple generalizations of thermodynamics that are quite mathematical and good, call them the thermodynamic law of attraction. Now for the thermodynamics: if we say we predict change in an individual parameter of heating and lowering temperature (from pressure and chemical) that is in fact proportional to a
change in temperature or temperature per unit mass of matter and material in a reaction, thermodynamics cannot be calculated because in thermodynamics processes and events affect each other
continuously. No one can ever predict thermodynamics the way one predicts it if nothing else is done. This is of course an unrealistic view and it has serious practical side effects for many kinds of problems. What is the role of the ideal solution model in thermodynamics? It might reflect a thermodynamic distribution. As we will see in the course of our current work, we have demonstrated that any ideal solution model that is designed to determine the distribution of a probability variable cannot be found by any such method.
We think that we are missing something fundamental in this topic. This may not be how it should be. We had concluded that the ideal solution method used in the literature for its analysis takes
account of the requirement that the distribution of probability variables is a perfect normal, that the distribution of variables is independent of those variables, if any, then it is in fact normal.
The ideal solution method for this problem has no such requirement. It can just as well be used with the same distribution that is specified. Our goal is to look for a way to obtain the
behavior of the distribution of probability variables as a function of the probability variable that is assumed to be identical in all possible ways. In the remainder of this paper we consider a
"temporary" one-variable ideal solution model that does not need the assumption that the distribution of variables is a perfect normal. It is easy to see that all the models can be found by standard methods (appendix). Whenever the distribution of variables is a perfect normal distribution, in general, but each $z$ is assumed independent of the other $z'$'s, each function is also a solution to the process approximated by the ideal solution model. In the literature studying a $2D$-dimensional ideal solution model we have shown that (i) the ideal
solution can not be present in the statistics literature. We have looked to modern techniques that can be applied to show that the ideal solution is present above the corresponding noise [@Dong2007],
(ii) It is not limited to the data model, but can be either specific to the particular situation or certain cases in which the distribution of variables is a perfect normal. (iii) It is impossible to
Lecture 07: Matrix Algebra – Jihong Z. - Play Harder and Learn Harder
(Table: the first and last rows of dataSAT, with columns id, SATV, and SATM)
When you installed R, R also comes with required matrix algorithm library for you. Two popular are BLAS and LAPACK
LAPACK is written in Fortran 90 and provides routines for solving systems of simultaneous linear equations, least-squares solutions of linear systems of equations, eigenvalue problems, and singular
value problems. LAPACK routines are written so that as much as possible of the computation is performed by calls to the Basic Linear Algebra Subprograms (BLAS).
A matrix (denoted by a capital letter, e.g., X) is composed of a set of elements
X[2, 1]  # element in row 2, column 1 -> [1] 3
X[3]     # no comma: elements are indexed in column-wise order -> [1] 5
X[2, ]   # 2nd row vector -> [1] 3 4
X[, 1]   # 1st column vector -> [1] 1 3 5
In statistics, we use x_{ij} to represent the element at the position of the ith row and jth column. For an example matrix \mathbf{X} with the size of 1000 rows and 2 columns:
\mathbf{X} = \begin{bmatrix} x_{11} & x_{12}\\ x_{21} & x_{22}\\ \dots & \dots \\ x_{1000, 1} & x_{1000,2} \end{bmatrix}
The name scalar is important: the number “scales” a vector – it can make a vector “longer” or “shorter”.
Matrices can be multiplied by scalar so that each elements are multiplied by this scalar
The transpose of a matrix is a reorganization of the matrix by switching the indices for the rows and columns
\mathbf{X} = \begin{bmatrix} 520 & 580\\ 520 & 550\\ \vdots & \vdots\\ 540 & 660\\ \end{bmatrix}
\mathbf{X}^T = \begin{bmatrix} 520 & 520 & \cdots & 540\\ 580 & 550 & \cdots & 660 \end{bmatrix}
An element x_{ij} in the original matrix \mathbf{X} is x_{ji} in the transposed matrix \mathbf{X}^T
Transposes are used to align matrices for operations where the sizes of matrices matter (such as matrix multiplication)
Square Matrix: A square matrix has the same number of rows and columns
Diagonal Matrix: A diagonal matrix is a square matrix with non-zero diagonal elements (x_{ij}\neq0 for i=j) and zeros on the off-diagonal elements (x_{ij} =0 for i\neq j):
\mathbf{A} = \begin{bmatrix} 2.758 & 0 & 0 \\ 0 & 1.643 & 0 \\ 0 & 0 & 0.879\\ \end{bmatrix}
Symmetric Matrix: A symmetric matrix is a square matrix where all elements are reflected across the diagonal (x_{ij} = x_{ji})
Addition of a set of vectors (all multiplied by scalars) is called a linear combination:
For all k vectors, the set of all possible linear combinations is called their span
The span is typically not thought of in most analyses, but it becomes somewhat important when working with things that don't exist in the data (latent variables)
Question: Does a generalized linear model contain linear combinations? True: a link function applied to a linear combination.
An important concept in vector geometry is that of the inner product of two vectors
x = matrix(c(1, 2), ncol = 1)
y = matrix(c(2, 3), ncol = 1)
crossprod(x, y)  # R function for the dot product of x and y
t(x) %*% y
crossprod(x, y) is formally equivalent to (but usually slightly faster than) t(x) %*% y; similarly, tcrossprod(x, y) computes x %*% t(y).
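For readers more comfortable in Python, here is a NumPy sketch of the same inner product, using the same vectors as the R example:

```python
import numpy as np

a = np.array([1, 2])
b = np.array([2, 3])

# Inner (dot) product: sum of element-wise products, 1*2 + 2*3
ip = a @ b          # equivalent to np.dot(a, b)
print(ip)           # prints 8
```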
A matrix can be thought of as a collection of vectors
Matrix algebra defines a set of operations and entities on matrices
Matrix addition and subtraction are much like vector addition / subtraction
Rules: Matrices must be the same size (rows and columns)
Be careful!! R may not pop up an error message when adding a matrix and a vector!
Method: the new matrix is constructed of element-by-element addition/subtraction of the previous matrices
Order: the order of the matrices (pre- and post-) does not matter
Rules: Pre-multiplying matrix must have number of columns equaling to the number of rows of the post-multiplying matrix
Method: the elements of the new matrix consist of the inner (dot) product of the row vectors of the pre-multiplying matrix and the column vectors of the post-multiplying matrix
R: use %*% operator or crossprod to perform matrix multiplication
A = matrix(c(1, 2, 3, 4, 5, 6), nrow = 2, byrow = T)
B = matrix(c(5, 6, 7, 8, 9, 10), nrow = 3, byrow = T)
A
     [,1] [,2] [,3]
[1,]    1    2    3
[2,]    4    5    6
B
     [,1] [,2]
[1,]    5    6
[2,]    7    8
[3,]    9   10
B %*% A
     [,1] [,2] [,3]
[1,]   29   40   51
[2,]   39   54   69
[3,]   49   68   87
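The same multiplication in NumPy (a sketch mirroring the R matrices above) shows both how the shapes combine and that the order of the matrices matters:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2 x 3
B = np.array([[5, 6],
              [7, 8],
              [9, 10]])            # 3 x 2

AB = A @ B   # (2 x 3)(3 x 2) -> 2 x 2
BA = B @ A   # (3 x 2)(2 x 3) -> 3 x 3

print(AB)
print(BA)
```

AB and BA are different sizes entirely, which is why pre- and post-multiplication must be distinguished.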
The identity matrix (denoted as \mathbf{I}) is a matrix that pre- and post- multiplied by another matrix results in the original matrix:
The zero and one vector is a column vector of zeros and ones:
When pre- or post- multiplied the matrix (\mathbf{A}) is the zero vector:
For square symmetric matrices, an inverse matrix is a matrix that when pre- or post- multiplied with another matrix produces the identity matrix:
[,1] [,2] [,3] [1,] 1 0 0 [2,] 0 1 0 [3,] 0 0 1
Our data matrix was size (1000\times 2), which is not invertible
However, \mathbf{X^TX} was size (2\times 2) – square and symmetric
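A NumPy sketch of the inverse; the 2x2 symmetric matrix here is invented, standing in for the square, symmetric X^T X:

```python
import numpy as np

# Invented square symmetric matrix standing in for X'X
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

A_inv = np.linalg.inv(A)

# Pre- and post-multiplying by the inverse recovers the identity
print(np.round(A_inv @ A, 10))
```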
To help us throughout, let’s consider the correlation matrix of our SAT data:
For a square matrix \mathbf{A} with p rows/columns, the matrix trace is the sum of the diagonal elements:
In R, we can use tr() from the psych package to calculate the matrix trace
For our data, the trace of the correlation matrix is 2
For all correlation matrices, the trace is equal to the number of variables
The trace is considered as the total variance in multivariate statistics
A square matrix can be characterized by a scalar value called a determinant:
Manual calculation of the determinant is tedious. In R, we use det() to calculate matrix determinant
If the determinant is positive, the matrix is called positive definite, and the matrix has an inverse
If the determinant is not positive, the matrix is called non-positive definite; a matrix whose determinant is zero (a singular matrix) does not have an inverse
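Both quantities can be checked numerically for the SAT correlation matrix used in this lecture (the slides report a correlation of 0.78 between SATV and SATM); this NumPy sketch verifies that the trace equals the number of variables and that the determinant is positive:

```python
import numpy as np

# SAT correlation matrix from the lecture (r = 0.78 between SATV and SATM)
R = np.array([[1.00, 0.78],
              [0.78, 1.00]])

trace = np.trace(R)        # sum of diagonal elements
det = np.linalg.det(R)     # for a 2x2 correlation matrix: 1 - r^2

print(trace, round(det, 4))
```

A determinant of 1 - 0.78^2 = 0.3916 > 0 means this correlation matrix is positive definite and invertible.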
Matrices show up nearly anytime multivariate statistics are used, often in the help/manual pages of the package you intend to use for analysis
You don’t have to do matrix algebra, but please do try to understand the concepts underlying matrices
Your work with multivariate statistics will be better off because of even a small amount of understanding
X = as.matrix(dataSAT[, c("SATV", "SATM")])
N = nrow(X)
XBAR = matrix(colMeans(X), ncol = 1)
ONES = matrix(1, nrow = N)
S = 1 / (N - 1) * t(X - ONES %*% t(XBAR)) %*% (X - ONES %*% t(XBAR))
S
Reflecting how much overlapping area (covariance) across variables relative to the total variances occurs in the sample
# If no correlation: zero out the off-diagonal elements
S_noCorr = S
S_noCorr[upper.tri(S_noCorr)] = S_noCorr[lower.tri(S_noCorr)] = 0
S_noCorr

# If correlation = 1: the covariance equals the square root of the product of the variances
S_PerfCorr = S
S_PerfCorr[upper.tri(S_PerfCorr)] = S_PerfCorr[lower.tri(S_PerfCorr)] = sqrt(prod(diag(S)))
S_PerfCorr
The total sample variance is the sum of the variances of each variable in the sample
The total sample variance does not take into consideration the covariances among the variables
where V represents the number of variables and the highlighted term is the squared Mahalanobis distance.
We use MVN(\mathbf{\mu, \Sigma}) to represent a multivariate normal distribution with mean vector as \mathbf{\mu} and covariance matrix as \mathbf{\Sigma}
Similar to squared mean error in univariate distribution, we can calculate squared Mahalanobis Distance for each observable individual in the context of Multivariate Distribution
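A NumPy sketch of the squared Mahalanobis distance; the mean vector and covariance matrix below are invented for illustration:

```python
import numpy as np

# Invented mean vector and covariance matrix (positively correlated variables)
mu = np.array([500.0, 500.0])
Sigma = np.array([[2500.0, 1500.0],
                  [1500.0, 2500.0]])

def sq_mahalanobis(x, mu, Sigma):
    # D^2 = (x - mu)' Sigma^{-1} (x - mu)
    d = x - mu
    return float(d @ np.linalg.inv(Sigma) @ d)

x = np.array([550.0, 550.0])
print(round(sq_mahalanobis(x, mu, Sigma), 4))
```

Because the two variables are positively correlated, an observation displaced equally on both is less surprising than the per-variable z-scores alone would suggest: here D^2 = 1.25, versus 2 if the covariance were zero.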
The multivariate normal distribution has some useful properties that show up in statistical methods
Similar to other distribution functions, we use dmvnorm to get the density given the observations and the parameters (mean vector and covariance matrix). rmvnorm can generate multiple samples given
the distribution
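For reference, SciPy offers equivalents of dmvnorm and rmvnorm; this sketch uses an invented mean vector and covariance matrix (a standard bivariate normal):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Invented parameters: standard bivariate normal
mu = np.zeros(2)
Sigma = np.eye(2)

mvn = multivariate_normal(mean=mu, cov=Sigma)

density_at_mean = mvn.pdf(mu)                  # analogue of dmvnorm
samples = mvn.rvs(size=1000, random_state=0)   # analogue of rmvnorm

print(round(density_at_mean, 5), samples.shape)
```

The density at the mean of a standard bivariate normal is 1/(2*pi), about 0.15915.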
SATV SATM
448.6690 346.5356
547.5522 623.7793
462.0201 405.1241
512.0536 500.8779
569.1587 504.6520
486.1675 474.9578
483.9587 490.9760
583.8711 677.7026
553.1567 628.5565
492.1799 501.6640
522.4085 580.9986
504.5034 524.3015
592.2830 643.1622
519.5650 556.0304
454.2103 498.6606
596.5938 690.5468
543.7172 605.6825
493.2891 530.0512
493.6388 476.8900
479.7672 495.8584
We are now ready to discuss multivariate models and the art/science of multivariate modeling
Matrix algebra was necessary so as to concisely talk about our distributions (which will soon be models)
The multivariate normal distribution will be necessary to understand as it is the most commonly used distribution for estimation of multivariate models
Next class we will get back into data analysis – but for multivariate observations…using R’s lavaan package for path analysis
characterized by a scalar value called a determinant: $$ \text{det}\mathbf{A} =|\mathbf{A}| $$ - Manual calculation of the determinant is tedious. In R, we use `det()` to calculate matrix determinant
```{r} det(R) ``` - The determinant is useful in statistics: - Shows up in multivariate statistical distributions - Is a measure of "generalized" variance of multiple variables - If the determinant
is positive, the matrix is called **positive definite** $\rightarrow$ the matrix has an inverse - If the determinant is not positive, the matrix is called **non-positive definite** $\rightarrow$ the
matrix does not have an inverse ## Wrap Up 1. Matrices show up nearly anytime multivariate statistics are used, often in the help/manual pages of the package you intend to use for analysis 2. You
don't have to do matrix algebra, but please do try to understand the concepts underlying matrices 3. Your working with multivariate statistics will be better off because of even a small amount of
understanding # Multivariate Normal Distribution ## Covariance and Correlation in Matrices - The covariance matrix $\mathbf{S}$ is found by: $$ \mathbf{S}=\frac{1}{N-1} \mathbf{(X-1\cdot\bar x^T)^T
(X-1\cdot\bar x^T)} $$ ```{r} X = as.matrix(dataSAT[,c("SATV", "SATM")]) N = nrow(X) XBAR = matrix(colMeans(X), ncol = 1) ONES = matrix(1, nrow = nrow(X)) S = 1/(N-1) * t(X - ONES%*% t(XBAR)) %*% (X
- ONES%*% t(XBAR)) S cov(X) ``` ## From Covariance to Correlation - If we take the SDs (the square root of the diagonal of the covariance matrix) and put them into diagonal matrix $\mathbf{D}$, the
correlation matrix is found by: $$ \mathbf{R = D^{-1}SD^{-1}} $$ $$ \mathbf{S = DRD} $$ ```{r} #| output-location: default S D = sqrt(diag(diag(S))) D R = solve(D) %*% S %*% solve(D) R cor(X) ``` ##
Generalized Variance - The determinant of the covariance matrix is called **generalized variance** $$ \text{Generalized Sample Variance} = |\mathbf{S}| $$ - It is a measure of spread across all
variables - Reflecting how much overlapping area (covariance) across variables relative to the total variances occurs in the sample - Amount of overlap reduces the generalized sample variance ```{r}
#| output-location: default gsv = det(S) gsv # If no correlation S_noCorr = S S_noCorr[upper.tri(S_noCorr)] = S_noCorr[lower.tri(S_noCorr)] = 0 S_noCorr gsv_noCorr <- det(S_noCorr) gsv_noCorr gsv /
gsv_noCorr # If correlation = 1 S_PerfCorr = S S_PerfCorr[upper.tri(S_PerfCorr)] = S_PerfCorr[lower.tri(S_PerfCorr)] = prod(diag(S)) S_PerfCorr gsv_PefCorr <- det(S_PerfCorr) gsv_PefCorr ``` - The
generalized sample variance is: - Largest when variables are uncorrelated - Zero when variables from a linear dependency ## Total Sample Variance - The total sample variance is the sum of the
variances of each variable in the sample - The sum of the diagonal elements of the sample covariance matrix - The trace of the sample covariance matrix $$ \text{Total Sample Variance} = \sum_{v=1}^
{V} s^2_{x_i} = \text{tr}\mathbf{S} $$ Total sample variance for our SAT example: ```{r} sum(diag(S)) ``` - The total sample variance does not take into consideration the covariances among the
variables - Will not equal zero if linearly dependency exists ## Mutlivariate Normal Distribution and Mahalanobis Distance - The PDF of Multivariate Normal Distribution is very similar to univariate
normal distribution $$ f(\mathbf{x}_p) = \frac{1}{(2\pi)^{\frac{V}2}|\mathbf{\Sigma}|^{\frac12}}\exp[-\frac{\color{tomato}{(x_p^T - \mu)^T \mathbf{\Sigma}^{-1}(x_p^T-\mu)}}{2}] $$ Where $V$
represents number of variables and the highlighed is [Mahalanobis Distance]{style="color: tomato"}. - We use $MVN(\mathbf{\mu, \Sigma})$ to represent a multivariate normal distribution with mean
vector as $\mathbf{\mu}$ and covariance matrix as $\mathbf{\Sigma}$ - Similar to squared mean error in univariate distribution, we can calculate squared Mahalanobis Distance for each observable
individual in the context of Multivariate Distribution $$ d^2(x_p) = (x_p^T - \mu)^T \Sigma^{-1}(x_p^T-\mu) $$ - In R, we can use `mahalanobis` followed by data vector (`x`), mean vector (`center`),
and covariance matrix (`cov`) to calculate the **squared Mahalanobis Distance** for one individual ```{r} #| output-location: default x_p <- X[1, ] x_p mahalanobis(x = x_p, center = XBAR, cov = S)
mahalanobis(x = X[2, ], center = XBAR, cov = S) mahalanobis(x = X[3, ], center = XBAR, cov = S) # Alternatively, t(x_p - XBAR) %*% solve(S) %*% (x_p - XBAR) ``` ```{r} mh_dist_all <- apply(X, 1, \(x)
mahalanobis(x, center = XBAR, cov = S)) plot(density(mh_dist_all)) ``` ## Multivariate Normal Properties - The multivariate normal distribution has some useful properties that show up in statistical
methods - If $\mathbf{X}$ is distributed multivariate normally: 1. Linear combinations of $\mathbf{X}$ are normally distributed 2. All subsets of $\mathbf{X}$ are multivariate normally distributed 3.
A zero covariance between a pair of variables of $\mathbf{X}$ implies that the variables are independent 4. Conditional distributions of $\mathbf{X}$ are multivariate normal ## How to use
Multivariate Normal Distribution in R Similar to other distribution functions, we use `dmvnorm` to get the density given the observations and the parameters (mean vector and covariance matrix).
`rmvnorm` can generate multiple samples given the distribution ```{r} #| output-location: default library(mvtnorm) (mu <- colMeans(dataSAT[, 2:3])) S dmvnorm(X[1, ], mean = mu, sigma = S) dmvnorm(X
[2, ], mean = mu, sigma = S) ## Total Log Likelihood LL <- sum(log(apply(X, 1, \(x) dmvnorm(x, mean = mu, sigma = S)))) LL ## Generate samples from MVN rmvnorm(20, mean = mu, sigma = S) |> show_table
() ``` ## Wrapping Up 1. We are now ready to discuss multivariate models and the art/science of multivariate modeling 2. Many of the concepts of univariate models carry over - Maximum likelihood -
Model building via nested models - All of the concepts involve multivariate distributions 3. Matrix algebra was necessary so as to concisely talk about our distributions (which will soon be models)
4. The multivariate normal distribution will be necessary to understand as it is the most commonly used distribution for estimation of multivariate models 5. Next class we will get back into data
analysis – but for multivariate observations…using R’s lavaan package for path analysis
The Derivative of cot^2x - DerivativeIt
The Derivative of cot^2x
The derivative of cot^2x is -2.csc^2(x).cot(x)
How to calculate the derivative of cot^2x
There are two methods that can be used for calculating the derivative of cot^2x.
The first method is by using the product rule for derivatives (since cot^2(x) can be written as cot(x).cot(x)).
The second method is by using the chain rule for differentiation.
Finding the derivative of cot^2x using the product rule
The product rule for differentiation states that the derivative of f(x).g(x) is f’(x)g(x) + f(x).g’(x)
The Product Rule:
For two differentiable functions f(x) and g(x)
If F(x) = f(x).g(x)
Then the derivative of F(x) is F'(x) = f’(x)g(x) + f(x)g'(x)
First, let F(x) = cot^2(x)
Then remember that cot^2(x) is equal to cot(x).cot(x)
So F(x) = cot(x)cot(x)
By setting f(x) and g(x) as cot(x) means that F(x) = f(x).g(x) and we can apply the product rule to find F'(x)
F'(x) = f'(x)g(x) + f(x)g'(x) Product Rule Definition
= f'(x)cot(x) + cot(x)g'(x) f(x) = g(x) = cot(x)
= -csc^2(x)cot(x) + cot(x)(-csc^2(x)) f'(x) = g'(x) = -csc^2(x)
= -2csc^2(x)cot(x)
Using the product rule, the derivative of cot^2x is -2csc^2(x)cot(x)
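Since a result like this is easy to get wrong by a sign, here is a quick numeric sanity check (a Python sketch; the test point 0.7 is arbitrary, chosen away from multiples of pi) comparing the formula against a central finite difference of cot^2x itself:

```python
import math

def cot(t): return math.cos(t) / math.sin(t)
def csc(t): return 1.0 / math.sin(t)

def d_cot2_analytic(t):
    # The result derived above: d/dx cot^2(x) = -2 csc^2(x) cot(x)
    return -2.0 * csc(t)**2 * cot(t)

def d_cot2_numeric(t, h=1e-6):
    # Central finite difference of cot^2(x)
    return (cot(t + h)**2 - cot(t - h)**2) / (2.0 * h)

x = 0.7  # arbitrary test point
print(abs(d_cot2_analytic(x) - d_cot2_numeric(x)) < 1e-4)  # -> True
```

If the sign or a factor were wrong, the two values would disagree by far more than the finite-difference error.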
Finding the derivative of cot^2x using the chain rule
The chain rule is useful for finding the derivative of a function which could have been differentiated had it been in x, but it is in the form of another expression which could also be differentiated
if it stood on its own.
In this case:
• We know how to differentiate cot(x) (the answer is -csc^2(x))
• We know how to differentiate x^2 (the answer is 2x)
This means the chain rule will allow us to perform the differentiation of the expression cot^2x.
Using the chain rule to find the derivative of cot^2x
Although the expression cot^2x contains no parenthesis, we can still view it as a composite function (a function of a function).
We can write cot^2x as (cot(x))^2.
Now the function is in the form of x^2, except it does not have x as the base, instead it has another function of x (cot(x)) as the base.
Let’s call the function of the base g(x), which means:
g(x) = cot(x)
From this it follows that:
(cot(x))^2 = g(x)^2
So if the function f(x) = x^2 and the function g(x) = cot(x), then the function (cot(x))^2 can be written as a composite function.
f(x) = x^2
f(g(x)) = g(x)^2 (but g(x) = cot(x))
f(g(x)) = (cot(x))^2
Let’s define this composite function as F(x):
F(x) = f(g(x)) = (cot(x))^2
We can find the derivative of cot^2x (F'(x)) by making use of the chain rule.
The Chain Rule:
For two differentiable functions f(x) and g(x)
If F(x) = f(g(x))
Then the derivative of F(x) is F'(x) = f’(g(x)).g’(x)
Now we can just plug f(x) and g(x) into the chain rule.
How to find the derivative of cot^2x using the Chain Rule:
F'(x) = f'(g(x)).g'(x) Chain Rule Definition
= f'(g(x))(-csc^2(x)) g(x) = cot(x) ⇒ g'(x) = -csc^2(x)
= (2.cot(x)).(-csc^2(x)) f(g(x)) = (cot(x))^2 ⇒ f'(g(x)) = 2.cot(x)
= -2csc^2(x)cot(x)
Using the chain rule, the derivative of cot^2x is -2.csc^2(x)cot(x)
Finally, just a note on syntax and notation:cot^2x is sometimes written in the forms below (with the derivative as per the calculations above). Just be aware that not all of the forms below are
mathematically correct.
cot^2x ► Derivative of cot^2x = -2.csc^2(x).cot(x)
cot^2(x) ► Derivative of cot^2(x) = -2.csc^2(x).cot(x)
cot 2 x ► Derivative of cot 2 x = -2.csc^2(x).cot(x)
(cotx)^2 ► Derivative of (cotx)^2 = -2.csc^2(x).cot(x)
cot squared x ► Derivative of cot squared x = -2.csc^2(x).cot(x)
cotx2 ► Derivative of cotx2 = -2.csc^2(x).cot(x)
cot^2 ► Derivative of cot^2 = -2.csc^2(x).cot(x)
The Second Derivative Of cot^2x
To calculate the second derivative of a function, differentiate the first derivative.
From above, we found that the first derivative of cot^2x = -2csc^2(x)cot(x). So to find the second derivative of cot^2x, we need to differentiate -2csc^2(x)cot(x).
We can use the product and chain rules, and then simplify to find the derivative of -2csc^2(x)cot(x) is 4csc^2(x)cot^2(x) + 2csc^4(x)
► The second derivative of cot^2x is 4csc^2(x)cot^2(x) + 2csc^4(x)
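The second-derivative claim can be checked numerically the same way, here against a central second difference (a Python sketch; the test point 0.7 is arbitrary):

```python
import math

def cot(t): return math.cos(t) / math.sin(t)
def csc(t): return 1.0 / math.sin(t)

def second_analytic(t):
    # Claimed second derivative: 4 csc^2(x) cot^2(x) + 2 csc^4(x)
    return 4.0 * csc(t)**2 * cot(t)**2 + 2.0 * csc(t)**4

def second_numeric(t, h=1e-4):
    # Standard central second-difference of f(x) = cot^2(x)
    f = lambda u: cot(u)**2
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

x = 0.7  # arbitrary test point
print(abs(second_analytic(x) - second_numeric(x)) < 1e-3)  # -> True
```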
Interesting property of the derivative of cot^2x
It is interesting to note that the derivative of cot^2x is equal to the derivative of csc^2x. This follows from the Pythagorean identity csc^2x = 1 + cot^2x: the two functions differ only by a constant, so their derivatives must be identical.
The derivative of:
> cot^2x = -2.csc^2(x).cot(x)
> csc^2x = -2.csc^2(x).cot(x)
Effect of Frictional Pressure on ECD while Forward Circulation
In this article, we will describe the effect of friction pressure on bottom hole pressure and equivalent circulating density while performing forward circulation.
What is forward circulation?
It is the typical circulating path, from the mud pump into the drill pipe. Mud is pumped down the drill string / BHA and comes out of the bit. The mud then flows up the annulus and returns to surface, as shown in the diagram below (Figure 1).
Under a static condition:
The bottom hole pressure is equal to hydrostatic pressure from the drilling fluid.
Bottom Hole Pressure (BHP) = Hydrostatic Pressure (HP)
Under a dynamic condition:
Stand pipe pressure equates to summation of pressure loss of whole system.
SPP = FrPds + FrPbha + FrPbit + FrPann
SPP = Stand Pipe Pressure
FrPds = Pressure loss in drill string
FrPbha = Pressure loss in BHA
FrPbit= Pressure loss across the bit
FrPann = Pressure loss in Annulus
The friction pressure acts opposite way while fluid is being moved; therefore, if you look at the annulus side, you will be able to determine the bottom hole pressure at the dynamic condition as per
the equation below.
Bottom Hole Pressure (BHP) = Hydrostatic Pressure (HP) + Pressure Loss in Annulus (FrPann)
Under the dynamic condition, the only additional effect on the bottom hole pressure is the pressure loss in the annulus. This is why the ECD while forward circulating is expressed as:

ECD = MW + FrPann ÷ (0.052 × TVD)

• ECD = Equivalent Circulating Density, ppg
• MW = Mud Weight in the well, ppg
• FrPann = Pressure loss in Annulus, psi
• TVD = True Vertical Depth, ft
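In these field units (ppg, psi, ft) the standard relation is ECD = MW + FrPann / (0.052 × TVD), where 0.052 is the usual conversion constant between ppg·ft and psi. A quick sketch in Python, with made-up illustrative values:

```python
# 0.052 converts a ppg density times a ft depth into psi.
def ecd(mw_ppg, fr_p_ann_psi, tvd_ft):
    """Equivalent circulating density, ppg."""
    return mw_ppg + fr_p_ann_psi / (0.052 * tvd_ft)

# Hypothetical example: 10 ppg mud, 300 psi annular loss, 10,000 ft TVD
print(round(ecd(10.0, 300.0, 10_000.0), 2))  # -> 10.58
```

Note that a higher annular friction loss or a shallower TVD both raise the ECD relative to the static mud weight.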
14 Responses to Effect of Frictional Pressure on ECD while Forward Circulation
1. I have a question, why the pressure loss across the bit does not affect the bottomhole pressure?
If I reference the BHP to tubing site, shouldn’t be like this
BHP=pump pressure+hydrostatic pressure-pressure loss across the bit- pressure loss in the drillstring+pressure loss in BHA?
□ It depends on which side you refer to. For example if you refer to drill pipe side, your equation is
BHP=pump pressure+hydrostatic pressure-pressure loss across the bit- pressure loss in the drillstring-pressure loss in BHA
or if you reference the annulus side, your equation will be
BHP = hydrostatic pressure + annular pressure loss.
Both of them are equal in terms of mathematics.
2. sorry i meant, minus pressure loss in BHA
3. Why does the annular pressure increase during the influx when gas moves upwards due to pressure release while the height of mud decreases?
□ The annular pressure increases due to the loss of hydrostatic pressure.
BHP = Hydrostatic Pressure + Surface Pressure. Once the gas moves up, the gas volume expands, so the well loses hydrostatic pressure. In order to balance the formation pressure
as stated in the equation, the surface pressure will increase.
4. In UBD Operation
BHP on the annulus side will be:
Pbh = Mud hydrostatic + Annular frictional pressure + Choke pressure
is that correct?
Thank you
□ Yes. This is correct.
7. Thanks for sharing this information.
6. Considering the hole size and OD of the pipe is same throughtout the hole, would ECD in annulus be the same at TD and at say, in the middle?
□ It is the same ECD.
7. Your ECD equation does not account for any fluid rheological effects. Does fluid rheology not affect the pressure loss within the annulus (and well)?
□ James,
Yes. Rheology is one of the contributing factors to the ECD. In this article, rheology is hidden inside the annular pressure loss term. We don’t have the detailed calculation in this article; you can find the detailed calculations in several posts on our website.
Best Regards,
8. Hi,
If we increase the mud weight, will SPP increase or decrease?
It should increase, right?
But someone told me it will decrease, without explaining why.
Please advise.
□ Hi AKHI,
Yes, it will increase.
Aufbau principle - Mono Mole
Aufbau principle
The Aufbau principle (building up principle) states that an atom in the ground state has electrons filling its orbitals in the order of increasing energy. It was proposed by Niels Bohr and Wolfgang
Pauli in the 1920s and is based on the observation that the lower the energy of a system is, the more stable it is.
Specifically, the principle adopts the $n+l$ rule, which was first suggested by Charles Janet in 1928, in his attempt to construct a version of the periodic table. It was later adopted by Erwin
Madelung in 1936, as a rule on how atomic sub-shells are filled.
The empirical rule states that electrons fill sub-shells in the order of increasing value of $n+l$ where $n$ is the principal quantum number and $l$ is the angular quantum number. It further mentions
that electrons fill sub-shells in the order of increasing value of $n$ for sub-shells with identical values of $n+l$. For example,
Subshell  n  l  n+l  Order
1s 1 0 1 1
2s 2 0 2 2
2p 2 1 3 3
3s 3 0 3 4
3p 3 1 4 5
3d 3 2 5 7
4s 4 0 4 6
The order of fill is represented by the diagram above. So, the ground state electron configuration (distribution of electrons) for calcium is 1s^2 2s^2 2p^6 3s^2 3p^6 4s^2 or [Ar]4s^2, where [Ar] is
the electron configuration of argon. The Aufbau principle works well for elements with atomic number $Z\leq 20$ but must be applied with a better understanding of orbital energy and electron
repulsion for $Z>20$.
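The $n+l$ rule in the table above is simple enough to generate mechanically. A short Python sketch of the Madelung ordering (restricted to $n \leq 4$ here; extending the range reproduces the familiar longer sequence):

```python
# Generate the subshell fill order from the n + l rule, up to n = 4.
labels = {0: 's', 1: 'p', 2: 'd', 3: 'f'}
subshells = [(n, l) for n in range(1, 5) for l in range(n)]  # l < n
# Primary key: n + l; tie-break: smaller n fills first
order = sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))
seq = [f"{n}{labels[l]}" for n, l in order]
print(seq)  # -> ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '4d', '4f']
```

The first seven entries match the Order column of the table, with 4s filling before 3d.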
For elements with Z between 1 and 6, calculations show that the energy of the 4s sub-shell is higher than the 3d sub-shell (see diagram above). For Z between 7 and 20, the reverse is true, as a
result of the interplay between increasing nuclear charge and increasing electron repulsion. The relative energy of the s and d sub-shells again changes for $Z>20$, where the 3d sub-shell has a lower
energy than the 4s sub-shell. This is because electrons in 3d orbitals do not shield each other well from nuclear forces, leading to the lowering of their energies.
With that in mind, one may conclude that the electron configurations of scandium and titanium are [Ar]3d^3 and [Ar]3d^4 respectively. However, they are [Ar]3d^14s^2 and [Ar]3d^24s^2. Numerical
solutions of the Schrodinger equation for scandium and titanium not only show that the 3d orbitals have lower energies than the 4s orbitals, but also reveal that the 3d orbitals are smaller in size
compared to the 4s orbitals. Electrons occupying 3d orbitals therefore experience greater repulsions than electrons residing in 4s orbitals, with the order of increasing repulsion being:
where V is the potential energy due to repulsion.
To determine the stability of an atom in the ground state, we need to consider the net effect of the relative energies of 4s/3d orbitals and the repulsion of electrons. In fact, calculations for the
overall energies of scandium are as follows:
Consequently, when a transition metal undergoes ionisation, the electron is removed from the 4s orbital rather than the 3d sub-shell. Despite 3d being lower in energy than 4s for the first row of
transition metals, the $n+l$ rule applies. However, the rule breaks down for chromium and copper, where the ground state electronic configuration of chromium is [Ar]3d^54s^1 instead of [Ar]3d^44s^2
and that of copper is [Ar]3d^104s^1 instead of [Ar]3d^94s^2. This is attributed to Hund’s rule.
How the SHA2() function works in Mariadb?
The SHA2() function in MariaDB is a versatile function that allows you to compute various cryptographic hash values belonging to the SHA-2 family of hash functions.
Posted on
The SHA2() function in MariaDB is a versatile function that allows you to compute various cryptographic hash values belonging to the SHA-2 family of hash functions. The SHA-2 family includes four
different hash functions: SHA-224, SHA-256, SHA-384, and SHA-512, each producing a hash value of different lengths (224, 256, 384, and 512 bits, respectively). These hash functions are widely used
for data integrity checking, digital signatures, and secure password storage.
The syntax for the MariaDB SHA2() function is as follows:

SHA2(str, hash_length)
• str: The string for which the SHA-2 hash value is to be computed.
• hash_length: The desired length of the hash value in bits. It can be 224, 256, 384, or 512.
The function returns a hexadecimal string representation of the computed SHA-2 hash value, with a length that corresponds to the specified hash_length parameter.
Example 1: Computing the SHA-256 hash value of a string
In this example, we’ll demonstrate how to use the SHA2() function to compute the SHA-256 hash value of a given string.
SELECT SHA2('Hello, World!', 256);
The following is the output:
The SHA2() function computes the SHA-256 hash value of the string 'Hello, World!' and returns the 64-character hexadecimal representation.
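The same digests can be cross-checked outside the database. A sketch using Python's `hashlib`, which exposes the same SHA-2 family: each hex digit encodes 4 bits, so an N-bit digest prints as N/4 hex characters.

```python
import hashlib

# Map each MariaDB hash_length value to the matching hashlib constructor.
algos = {224: hashlib.sha224, 256: hashlib.sha256,
         384: hashlib.sha384, 512: hashlib.sha512}

digests = {bits: fn(b"Hello, World!").hexdigest()
           for bits, fn in algos.items()}
for bits, digest in digests.items():
    print(bits, len(digest))  # hex length is always bits / 4
```

For example, the 256-bit digest is 64 hex characters long, matching the SHA2() call above.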
Related Functions
The following are some functions related to the MariaDB SHA2() function:
The SHA2() function in MariaDB provides a flexible way to compute various cryptographic hash values from the SHA-2 family. By specifying the desired hash length, you can choose the appropriate hash
function (SHA-224, SHA-256, SHA-384, or SHA-512) based on your security requirements and performance considerations. The SHA-2 family of hash functions is generally considered more secure than older
hash functions like SHA-1 and MD5, making the SHA2() function a preferred choice for applications that require stronger cryptographic hash capabilities.
About the Aperiodical
The Aperiodical is a meeting-place for people who already know they like maths and would like to know more.
It was begun by Katie Steckles, Christian Perfect and Peter Rowlett as a shared blogging outlet and grew out of our desire to have a place on the web where we could keep up to date with what’s going
on elsewhere, and to share the mathematical things we do.
L-R: Peter Rowlett, Katie Steckles, Christian Lawson-Perfect
Some basic principles:
• Content is aimed at the mathematically literate. This doesn’t mean knowledge of advanced mathematics is required, but we aren’t going to rewrite articles to avoid mentioning mathematical terms
and we won’t go too far out of our way to explain things which are easily googled. We know that nobody is expert in all aspects of mathematics but we aren’t going to be afraid to mention stuff
that doesn’t come up until university.
• We’re not keen on unhelpful attention-grabbing headlines: if the press release says “Alex the Parrot Was A Mathematical Genius” when the story is “Alex the parrot could count to eight”, we’ll use
the latter.
• That said, this is just a place to enjoy maths. Let’s not be grumps.
If you’ve got some maths you’d like to share, you can do it here. To get in contact with us about something else, please email us.
About the Editors
We are:
4 Responses to “About the Aperiodical”
1. Chris
Thank you for doing this!
As a child, I was told I was poor at math(s), and so came to loathe the subject. As an adult, I have come to love it, and bitterly regret not pursuing it more during my education. Blogs like this
enrich my life and expand my mind, so again, thank you!
2. Bill Richardson
Came across you after hearing last night of Zeeman’s death. I’m a retired maths teacher and came to know him during his time as President of The Mathematical Association. He was a wonderful man
and the world of mathematics is much poorer for his passing.
3. Ellie Kesselman
I just learned of Nick Berry’s death; he passed away in October 2022. Nick was a brilliant, kindly man. I have mathematics degrees although I’ve mostly worked in applied probability theory. I
read the Data Genetics blog since 2011, and finally met Nick online for the first time in 2021.
Nick was the host of Episode 125 of Carnival of Mathematics in 2015. Thank you for recognizing his talent.
Differential Equations with Discontinuities
Jul 16, 2021 01:30 PM
I am solving a system of ODEs with imposed discontinuities on the right-hand side. I can do this by calling a solver up to the time of the discontinuity, imposing the jump condition, and then restarting the calculation (as in the attached worksheet). But I want to do this for a series of discontinuities. Since all the jumps have the same magnitude and occur at regular time intervals, it seemed like there should be a way to program it rather than doing it manually, but I could not see how to do it. Can anyone help?
The conditions for the solution of the problem:
to exist and to be unique are that $F \in C^0$ in $(t, x)$ and Lipschitz continuous for $t > t_0$. Then the solution of the problem is given by:
Note that for the function $F$ to be integrable in $t > t_0$, it must contain finitely many discontinuities, so $x(t)$ remains continuous in $t > t_0$ whilst $x'(t)$, in general, may not.
As an example, consider the problem:
Its solution is then given by:
which is continuous but not differentiable at $t = 1$.
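Since the jumps are identical and regularly spaced, the manual "solve to the discontinuity, apply the jump, restart" procedure can be programmed as a loop. A minimal Python sketch (not Mathcad, which the thread is about; the ODE dx/dt = -x, the jump size, and the jump times are invented for illustration), using a hand-rolled RK4 step in place of a library solver:

```python
# Integrate one smooth segment of the ODE with classical RK4.
def rk4_segment(f, t0, x0, t1, steps=1000):
    h = (t1 - t0) / steps
    t, x = t0, x0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

f = lambda t, x: -x             # right-hand side between jumps
jump = 0.5                      # same magnitude at every discontinuity
jump_times = [1.0, 2.0, 3.0]    # regularly spaced, so easy to generate

t, x = 0.0, 1.0
for tj in jump_times:
    x = rk4_segment(f, t, x, tj)  # integrate up to the discontinuity
    x += jump                     # impose the jump condition
    t = tj
print(round(x, 4))  # -> 0.8014
```

The same loop structure works with any solver that accepts an initial condition and an end time: the key point is that each restart uses the post-jump state as the new initial condition.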
I got the numbers to make this chart from CbChui, one of the dragons.
Basically, you always use 2.4 power.
If that's your power bar, the slash is where you're shooting.
So at that power, 75 degrees should hit the center of the screen, and my character should use an angle of 76 to hit Yayeska.
Now as for the wind chart, I'm not totally sure on this, but I believe this is how it works: you multiply the wind strength by the number on the chart, then add or subtract that result from your angle depending on whether you're shooting with the wind or against it.
confusing? yeah, I thought so.
Maybe an example would help,
in my picture the wind is blowing in between the .2 and .4 marks, so let's say it's .3. You multiply 17 (the wind strength) by .3, which rounds to 5, so since my shot is going with the wind, I'd subtract 5 from the angle the enemy is at (76 − 5), and then 71 should hit my target.
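The rule above boils down to a one-line correction. A tiny Python sketch (the chart factor 0.3 and wind strength 17 are just the values from the example):

```python
# Wind-correction rule: correction = wind strength * chart factor,
# subtracted when shooting with the wind, added when shooting against it.
def adjusted_angle(base_angle, wind_strength, chart_factor, with_wind=True):
    correction = round(wind_strength * chart_factor)  # 17 * 0.3 = 5.1 -> 5
    return base_angle - correction if with_wind else base_angle + correction

print(adjusted_angle(76, 17, 0.3))  # -> 71
```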
Are you really going to remember all this in a battle? Probably not, unless you're super smart and can do math like a freakin wizard.
But you can play like me, and I take my games too far, so I don't recommend it. But anyways, here's what I would do if I had a printer: print out the picture at your exact monitor size, fold the ruler over so you can measure how far away your opponent is from you, check the wind chart, and try to estimate in your head, or get a calculator. Although I wouldn't recommend using a calculator, cause mental math is FUN!
Ok, obviously I've been playing this game for far too long and I am losing my fricken mind.
So thanks to cbchui for the information.
Originally posted by Caleb on dotC Forum
|
{"url":"http://creedo.gbgl-hq.com/cbchui.htm","timestamp":"2024-11-11T14:55:13Z","content_type":"text/html","content_length":"5813","record_id":"<urn:uuid:836f563f-e53e-41a0-b4b9-8df560d0dba5>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00688.warc.gz"}
|
What is Kurtosis?
Kurtosis is a statistical measure that describes the shape of a distribution's tails in relation to its overall shape. Specifically, kurtosis quantifies whether the tails of a given distribution
contain extreme values (outliers) that are either more or less common than those of a normal distribution. It is a crucial tool for understanding the probability and likelihood of extreme deviations
in datasets.
Understanding Kurtosis
The concept of kurtosis is often associated with the 'peakedness' or 'flatness' of a distribution; however, this is a common misconception. Kurtosis is actually more about the tails of the
distribution than its peak. A distribution with high kurtosis tends to have heavy tails, or outliers, which means there is a higher chance of extreme positive or negative events occurring.
Conversely, a distribution with low kurtosis has light tails, indicating a lower chance of extreme events.
Types of Kurtosis
There are three types of kurtosis that statisticians typically refer to:
• Mesokurtic: This is the kurtosis of a normal distribution, and it has a kurtosis value of 3. Mesokurtic distributions have tails that are similar to the normal distribution.
• Leptokurtic: A leptokurtic distribution has kurtosis greater than 3. These distributions are characterized by fatter tails, which suggests a higher likelihood of extreme values. Financial returns
often exhibit leptokurtic behavior, indicating a higher risk of investment.
• Platykurtic: A platykurtic distribution has kurtosis less than 3. These distributions have thinner tails, which implies a lower likelihood of extreme values occurring.
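The three categories above amount to comparing a distribution's kurtosis against 3, the kurtosis of the normal distribution. As a minimal illustration (the helper name and tolerance are ours, not from any standard library):

```python
def classify_kurtosis(kurtosis, tol=1e-9):
    """Label a distribution by its (non-excess) kurtosis value,
    using 3 -- the normal distribution's kurtosis -- as the dividing line."""
    if abs(kurtosis - 3) <= tol:
        return "mesokurtic"
    return "leptokurtic" if kurtosis > 3 else "platykurtic"

print(classify_kurtosis(3.0))  # mesokurtic
print(classify_kurtosis(6.2))  # leptokurtic, e.g. heavy-tailed financial returns
print(classify_kurtosis(1.8))  # platykurtic, thin tails
```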
Calculating Kurtosis
Kurtosis is calculated using the fourth moment about the mean. In its population form, the (excess) kurtosis is:
Kurtosis = (Σ(xi − x̄)⁴ / N) / σ⁴ − 3
where:
• N is the number of observations,
• xi is each individual observation,
• x̄ is the mean of all observations,
• σ is the standard deviation.
The "−3" at the end of the formula adjusts the value so that the normal distribution has a kurtosis of 0 (without the subtraction, the normal distribution's kurtosis is 3). This makes it easier to compare the kurtosis of other distributions against the normal distribution.
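As a rough illustration, the moment-based calculation described above can be sketched in a few lines of Python. This is a minimal, uncorrected population version; statistical libraries apply bias corrections when estimating from samples:

```python
import math

def excess_kurtosis(data):
    """Population excess kurtosis: the fourth standardized moment minus 3."""
    n = len(data)
    mean = sum(data) / n
    # Population standard deviation (divides by N, matching the moment definition)
    sigma = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
    fourth_moment = sum((x - mean) ** 4 for x in data) / n
    return fourth_moment / sigma ** 4 - 3

# A sample with one extreme outlier has heavy tails: positive excess kurtosis
print(excess_kurtosis([1, 2, 3, 4, 5, 100]) > 0)  # True
# A flat, uniform-looking sample has thin tails: negative excess kurtosis
print(excess_kurtosis([1, 2, 3, 4, 5]) < 0)       # True
```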
Excess Kurtosis
Excess kurtosis is the kurtosis of a distribution minus the kurtosis of a normal distribution (3). It provides a reference to the normal distribution and is often used in statistical tests to
determine if a dataset has a normal distribution. Positive excess kurtosis indicates a leptokurtic distribution, while negative excess kurtosis indicates a platykurtic distribution.
Applications of Kurtosis
Kurtosis is widely used in various fields, including finance, meteorology, and quality control. In finance, kurtosis is used to assess the risk of investments, as high kurtosis can indicate a higher
probability of extreme returns. In meteorology, kurtosis can help in understanding the likelihood of extreme weather events. In quality control, kurtosis can be used to detect anomalies in
manufacturing processes.
Importance of Kurtosis in Data Analysis
Understanding kurtosis is essential for data analysis because it provides insights into the likelihood of outliers, which can significantly affect statistical models and predictions. It is also
important for hypothesis testing and in the development of strategies to mitigate risk in various scenarios.
Kurtosis is a statistical measure that provides valuable information about the distribution of data, particularly in the tails. By analyzing kurtosis, one can gain insights into the probability of
extreme values occurring within a dataset. This can be crucial for risk assessment, decision-making, and understanding the underlying characteristics of a data set.
|
{"url":"https://deepai.org/machine-learning-glossary-and-terms/kurtosis","timestamp":"2024-11-13T17:36:07Z","content_type":"text/html","content_length":"163125","record_id":"<urn:uuid:1f3a7135-fddc-4456-a430-07b05f236b37>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00739.warc.gz"}
|
ball mills crushing glass
of new glass GRIS (2006). Glass can be recycled a million times over to produce bottles and jars of the same high quality every time, which translates to converting waste to wealth, leading to zero waste Ogunro et al. (2018). Recycling of glass may involve the crushing of the glass into desired particle sizes with the use of hammer mills and ...
WhatsApp: +86 18838072829
To make use of the very extensive experimental results of Bergstrom (1963), Bergstrom et al., 1961, Bergstrom et al., 1963, and Bergstrom and Sollenberger (1962) on the breakage of soda-lime glass, material of the same composition and same source was dry-ground in our instrumented laboratory ball mill (Yang et al., 1967).
Ball mill, Raw material mill, Ball mills Crusher, Jaw .. Ball mill (dry or wet) Usage: Ball . fertilizer, ferrous metal, nonferrous metal and glass ceramics and can be used for the dry and wet
grinding for all kinds of ores and other ..
Ball Mill Liming Heavy Industry Crusher Mill, Jaw . The ball mill is a key equipment to grind the crushed materials, and the ball mill . ferrous metal and nonferrous metal, glass ceramics, etc,
and the ball mill can grind .
The basic equipment required for glass art projects is uniform no matter the type of crushed glass art project you prefer. Crush Orange Soda 24 Bottles /12 Oz.: : Grocery .. Crush Orange Soda 24
Bottles /12 Oz.: .. Strawberry Crush 12oz Glass Bottles (Pack of 12) Crush Orange Soda, 12 oz Can (Pack of 24)
Ball mill, Ball Machine, Grinder mill, crusher, Grinding Mill .. The ball mill is a key equipment to grind the crushed materials, and the ball mill is widely used in . glass ceramics, etc, and
the ball mill can grind ..
The ability to simulate the Bond work index test also allows examination of truncated ball mill feed size distributions on the work index. For grinding circuits where the feed to a ball mill is
sent directly to the classifier and the cyclone underflow feeds the ball mill (see Figure ), a question arises as to whether this practice will alter the ball mill work index (BW i) of the
material ...
Pulverizer /ball Mill, You Can Buy Various High Quality Pulverizer /ball Mill Products from Global Pulverizer /ball Mill Suppliers and Pulverizer /ball Mill . glass mill and pulverizer_Sand
Making Plant
Ekko Glass Crush and Collect Service : Glass Crushers and .. Our glass crushing machine is small, safe, reliable and fun to use! It reduces glass waste volume by up to 80% and can be installed
into almost any pub, ..
Ball mills are also able to accommodate batch or continuous processing, while grind size can be adjusted by altering the balls' diameter. Industrial ball mills can coarsely crush relatively large
material, while labgrade ball mills are suitable for finely milling glass to the micron level and further.
CERAMIC LINED BALL MILL. Ball Mills can be supplied with either ceramic or rubber linings for wet or dry grinding, for continuous or batch type operation, in sizes from 15″ x 21″ to 8′ x 12′. High density ceramic linings of uniform hardness make possible thinner linings and greater and more effective grinding volume.
The Standard and NonClog Industrial Hammermills are designed to reduce the material to a nominal 3" to 5" (75mm to 25mm) output. These are primary stage crushers, commonly followed with
Centerfeed Mills or other types of secondary stage crushers. The HammerMaster is also a secondary stage crusher in that the maximum feed size is 6" (150mm).
Our hammer mill crushers are designed to crush any brittle material for size reduction/particulate liberation. Glass is a great example of one of the variou...
Pebble mills, a type of ball mill, rely on natural pebbles instead of spherical balls that roll about in the medium, crushing materials put inside. They are used to grind hard materials such as
minerals, glass, advanced ceramics, and semiconductor materials down to 1 micron or less in size.
A ball mill is a form of grinder that is used to blend or grind materials for use. It is a cylindrical device mainly for grinding material such as iron ores, ceramic raw materials, glass, steel,
etc. The ball mill works on impact and attrition principle. Its impact is the size reduction given that the balls drop from almost the top of the shell.
MF Impact grinding head. USD 2, USD 2, Ident. No. . Mills. IKA introduces the world's first disposable grinding system for safe, instant and precise milling results. Its unique and compact design
makes the unit space saving and ultraportable. The disposable grinding chamber eliminates the possibility of cross ...
This Table of Ball Mill Bond Work Index of Minerals is a summary as tested on 'around the world sample'. You can find the SG of each mineral samples on the other table. Source 1. Source 2. Source
3. Source 3. Source 4.
The hammer mill or ball mill takes the <3/4″ discharge from the jaw crusher and pulverizes it to liberate the values in the ore (usually gold), and one of them is a component of our TurnKey Ore
Processor. The size of the powder from a hammer mill is controlled by the size of the openings in the screen, and the discharge is processed on the ...
The load exerted by grinding balls on lignocellulosic biomass in a vibratory ball mill is significantly larger than the other mill types, resulting in the highest enzymatic hydrolysis yield. The
glucose yields achieved were,,, and %, for vibratory ball mill, tumbler ball mill, jet mill, and centrifugal mill, respectively ...
purpose, a glass crusher machine based on the ball mill concept is designed to transform glass waste into powder of 2mm particle size. The main enhanced features of this machine with respect to state-of-the-art designs are the continuous feed aspect and the powder discharge technique. The design methodology consisted of mathematical modeling ...
As a bonus, this ball mill can also be used as a rock tumbler, or a glass tumbler to make your own "sea glass" at home. To use the mill as a rock tumbler, just leave out the steel balls, add
rocks, tumbling grit and water, and let it spin. ... The ball mill is powered by a fairly robust 12V DC motor salvaged from a junked printer. It had a ...
Similarly, in ball milling soda-lime glass, using the calculated surface areas from the single-particle experiments of Bergstrom as a reference, the energy efficiency of the ball mill again is in the range of 15%. ... Davis, 1925. Ball-mill crushing in closed circuit with screens, Minnesota School of Mines Experiment Station, Bulletin 10 ...
Ball Mill Crushing by tube mills was first introduced into the crushing departments of cyanide plants when it was found that for crushing finer than 30-mesh other types of crushing machinery were not efficient. In order to crush with one pass, these mills were made 18 to 22 ft. ( to m.) in length. Pebbles were used as a grinding medium ...
Introduction: The ball mill is a tumbling mill that uses steel balls as grinding media. Ball mills can be used in wet or dry systems for bulk and continuous milling, and are most widely used in
small or largescale ore beneficiation plant. Dry grinding: suitable for materials that react with water, such as building stones such as cement and marble.
|
{"url":"https://agencja-afisz.pl/9024/ball-mills-crushing-glass.html","timestamp":"2024-11-12T07:18:19Z","content_type":"application/xhtml+xml","content_length":"22962","record_id":"<urn:uuid:01eab35f-576c-497c-8296-e2bf704e2c6b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00671.warc.gz"}
|
45 research outputs found
Anomalous transport in one dimensional translation invariant Hamiltonian systems with short range interactions, is shown to belong in general to the KPZ universality class. Exact asymptotic forms for
density-density and current-current time correlation functions and their Fourier transforms are given in terms of the Pr\"ahofer-Spohn scaling functions, obtained from their exact solution for the
Polynuclear growth model. The exponents of corrections to scaling are found as well, but not so the coefficients. Mode coupling theories developed previously are found to be adequate for weakly
nonlinear chains, but in need of corrections for strongly anharmonic interparticle potentials. Comment: Further corrections to equations have been made. A few comments have been added, e.g. on the non-applicability to exactly solved model.
The short time behavior of nucleation probabilities is studied by representing nucleation as diffusion in a potential well with escape over a barrier. If initially all growing nuclei start at the
bottom of the well, the first nucleation time on average is larger than the inverse nucleation frequency. Explicit expressions are obtained for the short time probability of first nucleation. For
very short times these become independent of the shape of the potential well. They agree well with numerical results from an exact enumeration scheme. For a large number N of growing nuclei the
average first nucleation time scales as 1/\log N in contrast to the long-time nucleation frequency, which scales as 1/N. For linear potential wells closed form expressions are obtained for all
times. Comment: 8 pages, submitted to J. Stat. Phys.
The topological pressure is evaluated for a dilute random Lorentz gas, in the approximation that takes into account only uncorrelated collisions between the moving particle and fixed, hard sphere
scatterers. The pressure is obtained analytically as a function of a temperature-like parameter, beta, and of the density of scatterers. The effects of correlated collisions on the topological
pressure can be described qualitatively, at least, and they significantly modify the results obtained by considering only uncorrelated collision sequences. As a consequence, for large systems, the
range of beta-values over which our expressions for the topological pressure are valid becomes very small, approaching zero, in most cases, as the inverse of the logarithm of system size. Comment: 15 pages RevTeX with 2 figures. Final version with some typos corrected.
The connection between the rate of entropy production and the rate of phase space contraction for thermostatted systems in nonequilibrium steady states is discussed for a simple model of heat flow in
a Lorentz gas, previously described by Spohn and Lebowitz. It is easy to show that for the model discussed here the two rates are not connected, since the rate of entropy production is non-zero and
positive, while the overall rate of phase space contraction is zero. This is consistent with conclusions reached by other workers. Fractal structures appear in the phase space for this model and
their properties are discussed. We conclude with a discussion of the implications of this and related work for understanding the role of chaotic dynamics and special initial conditions for an
explanation of the Second Law of Thermodynamics. Comment: 14 pages, 1 figure.
The phase diagram of the staggered six vertex, or body centered solid on solid model, is investigated by transfer matrix and finite size scaling techniques. The phase diagram contains a critical
region, bounded by a Kosterlitz-Thouless line, and a second order line describing a deconstruction transition. In part of the phase diagram the deconstruction line and the Kosterlitz-Thouless line
approach each other without merging, while the deconstruction changes its critical behaviour from Ising-like to a different universality class. Our model has the same type of symmetries as some other
two-dimensional models, such as the fully frustrated XY model, and may be important for understanding their phase behaviour. The thermal behaviour for weak staggering is intricate. It may be relevant
for the description of surfaces of ionic crystals of CsCl structure.Comment: 13 pages, RevTex, 1 Postscript file with all figures, to be published in Phys. Rev.
We study the Lyapunov exponents for a moving, charged particle in a two-dimensional Lorentz gas with randomly placed, non-overlapping hard disk scatterers placed in a thermostatted electric field, $\vec{E}$. The low density values of the Lyapunov exponents have been calculated with the use of an extended Lorentz-Boltzmann equation. In this paper we develop a method to extend these results to
higher density, using the BBGKY hierarchy equations and extending them to include the additional variables needed for calculation of Lyapunov exponents. We then consider the effects of correlated
collision sequences, due to the so-called ring events, on the Lyapunov exponents. For small values of the applied electric field, the ring terms lead to non-analytic, field dependent, contributions
to both the positive and negative Lyapunov exponents which are of the form ${\tilde{\epsilon}}^{2} \ln\tilde{\epsilon}$, where $\tilde{\epsilon}$ is a dimensionless parameter proportional to the
strength of the applied field. We show that these non-analytic terms can be understood as resulting from the change in the collision frequency from its equilibrium value, due to the presence of the
thermostatted field, and that the collision frequency also contains such non-analytic terms. Comment: 45 pages, 4 figures, to appear in J. Stat. Phys.
We consider a general method for computing the sum of positive Lyapunov exponents for moderately dense gases. This method is based upon hierarchy techniques used previously to derive the generalized
Boltzmann equation for the time dependent spatial and velocity distribution functions for such systems. We extend the variables in the generalized Boltzmann equation to include a new set of
quantities that describe the separation of trajectories in phase space needed for a calculation of the Lyapunov exponents. The method described here is especially suitable for calculating the sum of
all of the positive Lyapunov exponents for the system, and may be applied to equilibrium as well as non-equilibrium situations. For low densities we obtain an extended Boltzmann equation, from which,
under a simplifying approximation, we recover the sum of positive Lyapunov exponents for hard disk and hard sphere systems, obtained before by a simpler method. In addition we indicate how to improve
these results by avoiding the simplifying approximation. The restriction to hard sphere systems in $d$-dimensions is made to keep the somewhat complicated formalism as clear as possible, but the
method can be easily generalized to apply to gases of particles that interact with strong short range forces. Comment: submitted to CHAOS, special issue, T. Tel, P. Gaspard, and G. Nicolis, eds.
We study the Lyapunov exponents of a two-dimensional, random Lorentz gas at low density. The positive Lyapunov exponent may be obtained either by a direct analysis of the dynamics, or by the use of
kinetic theory methods. To leading orders in the density of scatterers it is of the form $A_{0}\tilde{n}\ln\tilde{n}+B_{0}\tilde{n}$, where $A_{0}$ and $B_{0}$ are known constants and $\tilde{n}$ is
the number density of scatterers expressed in dimensionless units. In this paper, we find that through order $(\tilde{n}^{2})$, the positive Lyapunov exponent is of the form $A_{0}\tilde{n}\ln\tilde{n}+B_{0}\tilde{n}+A_{1}\tilde{n}^{2}\ln\tilde{n}+B_{1}\tilde{n}^{2}$. Explicit numerical values of the new constants $A_{1}$ and $B_{1}$ are obtained by means of a systematic analysis. This takes
into account, up to $O(\tilde{n}^{2})$, the effects of {\it all\/} possible trajectories in two versions of the model; in one version overlapping scatterer configurations are allowed and in the other
they are not. Comment: 12 pages, 9 figures, minor changes in this version, to appear in J. Stat. Phys.
A large class of technically non-chaotic systems, involving scatterings of light particles by flat surfaces with sharp boundaries, is nonetheless characterized by complex random looking motion in
phase space. For these systems one may define a generalized, Tsallis type dynamical entropy that increases linearly with time. It characterizes a maximal gain of information about the system that
increases as a power of time. However, this entropy cannot be chosen independently from the choice of coarse graining lengths and it assigns positive dynamical entropies also to fully integrable
systems. By considering these dependencies in detail one usually will be able to distinguish weakly chaotic from fully integrable systems. Comment: Submitted to Physica D for the proceedings of the Santa Fe workshop of November 6-9, 2002 on Anomalous Distributions, Nonlinear Dynamics and Nonextensivity. 8 pages and two figures.
|
{"url":"https://core.ac.uk/search/?q=author%3A(van%20Beijeren%2C%20Henk)","timestamp":"2024-11-03T22:51:28Z","content_type":"text/html","content_length":"156467","record_id":"<urn:uuid:b68fbe3b-4418-43bf-91a8-33563c8be6ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00656.warc.gz"}
|
Bestiary - Light Bearman (Ahman) - SWARFARM
Increase the Defense of ally monsters with Light attribute by 30%
Strikes and provokes the enemy for 1 turn with a 50% chance.
Slams the enemy to deal damage and decreases the Attack Power for 2 turns. This attack will deal more damage according to your MAX HP. (Reusable in 4 turns).
Recovers all allies by 20% of my MAX HP if you get a critical hit when you attack on your turn. [Automatic Effect]
|
{"url":"https://swarfarm.com/bestiary/13704-light-bearman-ahman/","timestamp":"2024-11-06T23:13:46Z","content_type":"text/html","content_length":"24949","record_id":"<urn:uuid:9c12689c-5d85-4207-9695-b2039afb9c1e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00813.warc.gz"}
|
Printable Calendars AT A GLANCE
Linear Or Nonlinear Functions Worksheet Answer Key
Linear Or Nonlinear Functions Worksheet Answer Key - Graphing proportional relationships from a table. X + y = 20. Either the data can be plotted as a line, or it can not. Identify linear and nonlinear functions from tables. 5.0 (2 reviews) get a hint. Try this set of linear vs nonlinear functions worksheet pdfs to determine whether a function is linear or not. 8th grade > unit 3. Interpreting graphs of functions worksheet. Find the slope of the line through each pair of points. Determine whether each equation represents a linear or nonlinear function.
Students encounter some nonlinear functions (such as the inverse proportions that they studied in grade 7 as well as basic quadratic and exponential functions) whose. Just as with systems of linear equations, a solution. Linear and nonlinear functions handout. Graphing proportional relationships from an equation. Some of the worksheets for this concept are linear or nonlinear functions 1, notes linear nonlinear functions, comparing linear and nonlinear functions, c linear and nonlinear. Prove your claim by graphing the equation using a table of values.
½ x + ¼ y = ¾. After reviewing examples of linear and. Graphing proportional relationships from an equation. X + y = 20. Use this reference worksheet to help students learn about linear and nonlinear functions!
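The "constant rate of change" test these worksheets practice can be sketched programmatically: a function given as a table of values is linear exactly when every point lies on one straight line. This is a minimal illustration with a hypothetical helper, not code from any worksheet; it assumes the x-values are distinct:

```python
def is_linear(points, tol=1e-9):
    """Return True if all (x, y) pairs lie on one straight line.

    Uses the slope through the first two points and checks that
    every other point satisfies the same linear relationship.
    """
    pts = sorted(points)
    (x0, y0), (x1, y1) = pts[0], pts[1]
    slope = (y1 - y0) / (x1 - x0)
    return all(abs((y - y0) - slope * (x - x0)) <= tol for x, y in pts)

# y = 3x + 5 is linear; y = x**2 is not
print(is_linear([(x, 3 * x + 5) for x in range(5)]))      # True
print(is_linear([(x, x ** 2) for x in [1, 2, 3, 4, 5]]))  # False
```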
Linear Or Nonlinear Functions Worksheet Answer Key - Try this set of linear vs nonlinear functions worksheet pdfs to determine whether a function is linear or not. Prove your claim by graphing the equation using a table of values. Find the slope of each line. 8th grade > unit 3. Students encounter some nonlinear functions (such as the inverse proportions that they studied in grade 7 as well as basic quadratic and exponential functions) whose. 5.0 (2 reviews) get a hint. Interpreting graphs of functions worksheet.
Some of the worksheets for this concept are linear or nonlinear functions 1, notes linear nonlinear functions, comparing linear and nonlinear functions, c linear and nonlinear. Use 1, 4, 9, 16, 25 for x. Find the slope of the line through each pair of points. Either the data can be plotted as a line, or it can not.
A system of nonlinear equations is a system where at least one of the equations is not linear. Just as with systems of linear equations, a solution. There are only two possibilities there. It can not be both. They are mutually exclusive definitions. Determine whether each equation represents a linear or nonlinear function. Find the slope of each line.
|
{"url":"https://ataglance.randstad.com/viewer/linear-or-nonlinear-functions-worksheet-answer-key.html","timestamp":"2024-11-02T11:56:47Z","content_type":"text/html","content_length":"36185","record_id":"<urn:uuid:8f985c08-a36b-43f7-8ed5-4107fa8c5d81>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00269.warc.gz"}
|
Relativistic effects of laser doppler velocimeters using one light beam with two frequencies
Relativistic effects of laser Doppler velocimeters (LDV's) are discussed and novel LDV systems are proposed. If the direction of the scattered light makes a right angle with the flow direction,
relativistic effects completely disappear no matter how high the velocity of a moving particle becomes. With the proposed LDV's, the velocity can be measured from one scattered light beam with
two different single frequencies. It is also predicted that the usual optical heterodyne-detection techniques can be made applicable to measure even ultra-high velocities up to the region where
relativistic effects should be taken into account.
Optics Communications
Pub Date:
February 1975
• Beat Frequencies;
• Flow Measurement;
• Laser Doppler Velocimeters;
• Light Beams;
• Relativistic Effects;
• Frequency Response;
• Light Scattering;
• Optical Heterodyning;
• Velocity Measurement;
• Instrumentation and Photography
|
{"url":"https://ui.adsabs.harvard.edu/abs/1975OptCo..13..194O/abstract","timestamp":"2024-11-05T17:34:31Z","content_type":"text/html","content_length":"36798","record_id":"<urn:uuid:2ed35df1-1c87-4865-8379-0d4190f4d612>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00687.warc.gz"}
|
Adaptive spline interpolation for Hamilton–Jacobi–Bellman equations
Title data
Bauer, Florian ; Grüne, Lars ; Semmler, Willi:
Adaptive spline interpolation for Hamilton–Jacobi–Bellman equations.
In: Applied Numerical Mathematics. Vol. 56 (2006), Issue 9, pp. 1196-1210.
ISSN 1873-5460
DOI: https://doi.org/10.1016/j.apnum.2006.03.011
Abstract in another language
We study the performance of adaptive spline interpolation in semi-Lagrangian discretization schemes for Hamilton–Jacobi–Bellman equations. We investigate the local approximation properties of cubic splines on locally refined grids by a theoretical analysis. Numerical examples show how this method performs in practice. Using those examples we also illustrate numerical stability issues.
Further data
|
{"url":"https://eref.uni-bayreuth.de/id/eprint/63586/","timestamp":"2024-11-09T14:23:53Z","content_type":"application/xhtml+xml","content_length":"21880","record_id":"<urn:uuid:2f9c3e05-fcf1-4cad-b451-92fd0648551f>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00085.warc.gz"}
|